
      Learning multiple layers of representation.

Trends in Cognitive Sciences (Elsevier BV)


          Abstract

          To achieve its impressive performance in tasks such as speech perception or object recognition, the brain extracts multiple levels of representation from the sensory input. Backpropagation was the first computationally efficient model of how neural networks could learn multiple layers of representation, but it required labeled training data and it did not work well in deep networks. The limitations of backpropagation learning can now be overcome by using multilayer neural networks that contain top-down connections and training them to generate sensory data rather than to classify it. Learning multilayer generative models might seem difficult, but a recent discovery makes it easy to learn nonlinear distributed representations one layer at a time.
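The "recent discovery" the abstract alludes to is greedy layer-wise learning: train a generative module on the data, then train the next module on the first one's hidden activations, and so on up the stack. As a minimal sketch (the network sizes, learning rate, and the use of binary restricted Boltzmann machines with one-step contrastive divergence are illustrative assumptions, not the paper's exact recipe):

```python
import numpy as np

class RBM:
    """Minimal binary restricted Boltzmann machine trained with CD-1.

    Illustrative sketch only: hyperparameters are assumptions
    chosen for a toy demonstration."""

    def __init__(self, n_visible, n_hidden, rng):
        self.rng = rng
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_vis = np.zeros(n_visible)   # visible biases
        self.b_hid = np.zeros(n_hidden)    # hidden biases

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hidden_probs(self, v):
        return self._sigmoid(v @ self.W + self.b_hid)

    def visible_probs(self, h):
        return self._sigmoid(h @ self.W.T + self.b_vis)

    def cd1_update(self, v0, lr=0.1):
        # Positive phase: sample hidden units given the data.
        ph0 = self.hidden_probs(v0)
        h0 = (self.rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step back down and up again.
        pv1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(pv1)
        # Contrastive-divergence estimate of the gradient.
        batch = v0.shape[0]
        self.W += lr * (v0.T @ ph0 - pv1.T @ ph1) / batch
        self.b_vis += lr * (v0 - pv1).mean(axis=0)
        self.b_hid += lr * (ph0 - ph1).mean(axis=0)

def pretrain_stack(data, layer_sizes, epochs=50, rng=None):
    """Greedy layer-wise pretraining: each RBM is trained on the
    hidden activations of the layer below it."""
    if rng is None:
        rng = np.random.default_rng(0)
    rbms, x = [], data
    for n_hid in layer_sizes:
        rbm = RBM(x.shape[1], n_hid, rng)
        for _ in range(epochs):
            rbm.cd1_update(x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)   # becomes the "data" for the next layer
    return rbms

# Toy binary data with simple pairwise structure (correlated halves).
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(200, 4)).astype(float)
data = np.hstack([bits, bits])          # 8 visible units
stack = pretrain_stack(data, [6, 3], rng=rng)
```

Each module is trained without labels, purely to model its input, which is what lets the limitations of purely supervised backpropagation be sidestepped in deep networks.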


          Author and article information

Journal: Trends in Cognitive Sciences (Trends Cogn Sci)
Publisher: Elsevier BV
ISSN: 1364-6613
Date: Oct 2007
Volume: 11
Issue: 10

Affiliations
[1] Department of Computer Science, University of Toronto, 10 King's College Road, Toronto, M5S 3G4, Canada. hinton@cs.toronto.edu

Article
PII: S1364-6613(07)00217-3
DOI: 10.1016/j.tics.2007.09.004
PMID: 17921042
Record ID: dee218e0-03d2-4564-ae1a-05659440f31f
