Cognitive Science Conference – A personal summary

Talks I liked:

Jessica Hamrick – Mental Simulation and uncertainty of outcomes:

A nice and clean demonstration of how the number of noisy mental simulations people run can track their uncertainty about whether a ball bouncing around a box will go through a hole.

Link here
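A minimal sketch of the idea as I understood it (my own toy version, not the authors' model): each mental simulation is a noisy sample of where the ball ends up, and the simulator keeps sampling until it is reasonably sure whether the ball goes through the hole, so ambiguous outcomes require more simulations.

```r
simulate_landing <- function(true_x, noise_sd = 0.3) {
  rnorm(1, mean = true_x, sd = noise_sd)              # one noisy mental simulation
}

n_sims_until_confident <- function(true_x, hole = c(-0.5, 0.5),
                                   threshold = 0.9, max_sims = 50) {
  hits <- logical(0)
  for (k in 1:max_sims) {
    x <- simulate_landing(true_x)
    hits <- c(hits, x > hole[1] && x < hole[2])
    p_hit <- mean(hits)
    # stop once the estimated probability of going through is clearly high or low
    if (k >= 3 && (p_hit > threshold || p_hit < 1 - threshold)) break
  }
  k
}

# Trajectories ending near the hole's edge are ambiguous and need more simulations:
mean(replicate(200, n_sims_until_confident(true_x = 0.5)))  # near the edge
mean(replicate(200, n_sims_until_confident(true_x = 0)))    # clearly through
```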

Neil Bramley – Staying afloat on Neurath’s boat:

A cool approach to how people might update their current causal model in a simple active learning task by replacing one small part at a time. This could be a first step towards modelling what people actually do when they learn actively.

Link here
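A rough toy version of the “local edits” idea (my own illustration, not the paper's model): the learner holds a single causal hypothesis, encoded as an adjacency matrix, and only ever changes one edge at a time, keeping a change if it scores at least as well against the data seen so far. The score function and “true” graph below are made up for demonstration.

```r
propose_local_edit <- function(graph) {
  candidate <- graph
  i <- sample(nrow(graph), 1)
  j <- sample(ncol(graph), 1)
  if (i != j) candidate[i, j] <- 1 - candidate[i, j]   # toggle a single edge
  candidate
}

neurath_update <- function(graph, score_fn, n_steps = 100) {
  for (step in 1:n_steps) {
    candidate <- propose_local_edit(graph)
    if (score_fn(candidate) >= score_fn(graph)) graph <- candidate
  }
  graph
}

# Dummy usage with a score that simply counts edges matching a "true" graph:
true_graph <- matrix(c(0, 1, 0,  0, 0, 1,  0, 0, 0), nrow = 3, byrow = TRUE)
score_fn   <- function(g) sum(g == true_graph)
neurath_update(matrix(0, nrow = 3, ncol = 3), score_fn)
```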

Jonathan Nelson – Sharma-Mittal entropy:

Sharma-Mittal entropy unifies different measures of entropy in one overall framework, so it could be used to map out participants’ behaviour against many different measures of uncertainty reduction. Using this approach, we could test different measures of active learning directly, as well as check for individual differences.

Link here
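A quick sketch of the Sharma-Mittal family in its standard textbook form (order q, degree r); the talk’s exact notation may differ. Rényi, Tsallis and Shannon entropies fall out as special or limiting cases.

```r
sharma_mittal <- function(p, q, r) {
  stopifnot(abs(sum(p) - 1) < 1e-8, q != 1, r != 1)    # the limits need separate handling
  (sum(p^q)^((1 - r) / (1 - q)) - 1) / (1 - r)
}

p <- c(0.7, 0.2, 0.1)
sharma_mittal(p, q = 2, r = 2)            # r = q: Tsallis entropy of order 2
sharma_mittal(p, q = 2, r = 0.9999)       # r -> 1: approaches the Rényi entropy
sharma_mittal(p, q = 1.0001, r = 1.0001)  # q, r -> 1: approaches Shannon entropy (in nats)
```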

Julian Jara-Ettinger – The naive utility calculus:

A very nice demonstration of how participants might assess the cost and reward functions of agents moving across grids with different landscapes.

Link here
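A toy sketch of the core idea as I understand it (utility = rewards minus costs), not the paper’s actual model: the observer sees which goal the agent chose and what path cost it paid, and keeps only the reward functions under which that choice would have been rational. The costs and reward grid below are made up.

```r
costs       <- c(A = 8, B = 3)                  # visible path costs to goals A and B
reward_grid <- expand.grid(A = 0:10, B = 0:10)  # candidate reward functions

# The agent chose the costlier goal A, so reward_A - cost_A >= reward_B - cost_B:
consistent <- with(reward_grid, A - costs["A"] >= B - costs["B"])
head(reward_grid[consistent, ])                 # rewards compatible with the choice
mean(consistent)                                # fraction of candidates still viable
```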

Minor issues: 

High group-level R-squared values:

Many studies still aim for high R-squared values. A common approach is to make a bar plot of participants’ aggregated behaviour, match those bars with a model’s predictions, and then calculate an R-squared. There are several problems with this. First, a high R-squared does not mean the bar plots match: R-squared only measures variance explained, so the heights can be off entirely and the score can still be perfect. If matching the bars is the goal, a distance measure such as the root mean squared error would be more appropriate, although that would not be great either. Secondly, values aggregated over all participants can be misleading as long as we do not know each participant’s individual strategy. Last but not least, other criteria for judging model quality, such as information criteria or cross-validation estimates, exist and would add more information to our model comparisons.
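A small demonstration of the first point, with made-up numbers: predictions that are shifted and rescaled relative to the data still get a “perfect” R-squared, even though the bars would not match at all.

```r
observed  <- c(0.10, 0.35, 0.55, 0.80)   # e.g., aggregated choice proportions
predicted <- 0.5 * observed + 0.20       # wrong scale and offset, same ordering

cor(observed, predicted)^2               # = 1, despite the mismatch in height
sqrt(mean((observed - predicted)^2))     # the RMSE makes the mismatch visible

# For real model comparison, information criteria (e.g., AIC/BIC on a fitted
# model object) or cross-validated predictive accuracy on individual
# participants' data tell us considerably more.
```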

ggplots:

I appreciate that more people are using R for their plots. However, I have seen some minor issues with ggplot figures throughout the conference. They are not the end of the world, but they could easily be fixed (so why not?).

1. Make sure to name your axes correctly; “x_variable” shows that you did not change the labels of your plots.

2. Adjust the axis ticks if they overlap.

3. Save your plots as PDFs if possible and insert them without distorting the width-to-height ratio.

4. Use facet_wrap() instead of grid.arrange() (see the sketch below).

I know that I’m being picky here, but these things can make a difference and are easy to fix.
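For points 1, 3 and 4, a minimal ggplot2 sketch with made-up data (the variable and file names are just placeholders):

```r
library(ggplot2)

d <- data.frame(
  condition = rep(c("active", "passive"), each = 50),
  trial     = rep(1:50, times = 2),
  accuracy  = runif(100)
)

p <- ggplot(d, aes(x = trial, y = accuracy)) +
  geom_point() +
  facet_wrap(~ condition) +                 # point 4: instead of grid.arrange()
  labs(x = "Trial number", y = "Accuracy")  # point 1: no default "x_variable" labels

# Point 3: saving as a PDF with explicit dimensions keeps the aspect ratio intact.
ggsave("accuracy_by_condition.pdf", plot = p, width = 6, height = 4)
```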

General themes I liked:

More pluralistic Bayesian approaches:

This year there were a lot of good Bayesian models of many diverse psychological tasks. What is more, the focus on mostly hierarchical models seems to be gone, and people are using many different approaches drawn from Bayesian statistics, such as information gain-inspired methods and direct uncertainty quantification.

Rumelhart prize:

I really enjoyed Michael Jordan’s talk (and everyone presenting in the symposium prior to it).

Summary:

All in all, a pretty inspiring conference. See you next year in Philadelphia, everyone!
