Abstract
To achieve its impressive performance in tasks such as speech perception or object
recognition, the brain extracts multiple levels of representation from the sensory
input. Backpropagation was the first computationally efficient model of how neural
networks could learn multiple layers of representation, but it required labeled training
data and it did not work well in deep networks. The limitations of backpropagation
learning can now be overcome by using multilayer neural networks that contain top-down
connections and training them to generate sensory data rather than to classify it.
Learning multilayer generative models might seem difficult, but a recent discovery
makes it easy to learn nonlinear distributed representations one layer at a time.
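The layer-at-a-time procedure alluded to here (greedy training of restricted Boltzmann machines, where each layer's learned features become the input to the next) can be sketched roughly as follows. This is an illustrative NumPy sketch using single-step contrastive divergence (CD-1); the toy data, function names, and all hyperparameters are assumptions for demonstration, not the article's actual implementation.

```python
import numpy as np

def train_rbm(data, n_hidden, epochs=50, lr=0.1, rng=None):
    """Train one RBM on binary data with single-step contrastive divergence."""
    rng = rng or np.random.default_rng(0)
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)   # visible biases
    b_h = np.zeros(n_hidden)    # hidden biases
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(epochs):
        # Positive phase: hidden probabilities given the data.
        h_prob = sigmoid(data @ W + b_h)
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        # Negative phase: one top-down reconstruction, then bottom-up again.
        v_recon = sigmoid(h_sample @ W.T + b_v)
        h_recon = sigmoid(v_recon @ W + b_h)
        # CD-1 update: data statistics minus reconstruction statistics.
        W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
        b_v += lr * (data - v_recon).mean(axis=0)
        b_h += lr * (h_prob - h_recon).mean(axis=0)
    return W, b_v, b_h, sigmoid(data @ W + b_h)

# Greedy stacking: the first RBM's hidden activities serve as
# training data for the second, giving a deeper representation
# without any labels or backpropagation.
rng = np.random.default_rng(0)
data = (rng.random((100, 8)) < 0.5).astype(float)  # toy binary "sensory" data
W1, _, _, h1 = train_rbm(data, 6, rng=rng)
W2, _, _, h2 = train_rbm(h1, 4, rng=rng)
print(h2.shape)  # second-layer representation of all 100 examples
```

Because each RBM is trained generatively on its own input, no labels are needed at any stage; a discriminative layer can be added afterwards if classification is the goal.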