Abstract
The ability to learn abstract concepts is a powerful component of human cognition.
It has been argued that variable binding is the key element enabling this ability,
but the computational aspects of variable binding remain poorly understood. Here,
we address this shortcoming by formalizing the Hierarchical Language of Thought (HLOT)
model of rule learning. Given a set of data items, the model uses Bayesian inference
to infer a probability distribution over stochastic programs that implement variable
binding. Because the model makes use of symbolic variables as well as Bayesian inference
and programs with stochastic primitives, it combines many of the advantages of both
symbolic and statistical approaches to cognitive modeling. To evaluate the model,
we conducted an experiment in which human subjects viewed training items and then
judged which test items belong to the same concept as the training items. We found
that the HLOT model provides a close match to human generalization patterns, significantly
outperforming two variants of the Generalized Context Model, one based on
string similarity and the other on visual similarity using features from a deep
convolutional neural network. Additional results suggest that variable binding happens
automatically, implying that binding operations do not add complexity to people's
hypothesized rules. Overall, this work demonstrates that a cognitive model combining
symbolic variables with Bayesian inference and stochastic program primitives provides
a new perspective for understanding people's patterns of generalization.
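The Bayesian rule-learning framework the abstract describes can be illustrated with a minimal sketch. This is not the authors' HLOT implementation; it is a generic Bayesian concept-learning example with a hypothetical two-hypothesis space and toy string data, using the standard size-principle likelihood. Generalization to a test item is the posterior-weighted average of each hypothesis's membership judgment.

```python
# Schematic illustration (NOT the HLOT model): Bayesian rule learning over a
# small hypothesis space, with generalization computed by posterior averaging.

def posterior(hypotheses, priors, data):
    # Size-principle likelihood: a hypothesis consistent with the data assigns
    # each observed item probability 1/|extension of the hypothesis|.
    weights = []
    for h, p in zip(hypotheses, priors):
        if all(x in h for x in data):
            weights.append(p * (1.0 / len(h)) ** len(data))
        else:
            weights.append(0.0)  # hypothesis ruled out by the data
    total = sum(weights)
    return [w / total for w in weights]

def p_generalize(item, hypotheses, post):
    # Probability a test item belongs to the concept: sum posterior mass of
    # every hypothesis whose extension contains the item.
    return sum(p for h, p in zip(hypotheses, post) if item in h)

# Hypothetical hypothesis space over strings (toy example data).
hypotheses = [
    {"ab", "abab", "ababab"},        # a strict rule: repetitions of "ab"
    {"ab", "abab", "ababab", "ba"},  # a looser rule that also admits "ba"
]
priors = [0.5, 0.5]
post = posterior(hypotheses, priors, data=["ab", "abab"])
print(p_generalize("ba", hypotheses, post))  # → 0.36
```

The size principle makes the smaller (stricter) hypothesis more probable after observing consistent data, so the test item "ba", licensed only by the looser rule, receives a generalization probability below one half.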