
      Random synaptic feedback weights support error backpropagation for deep learning

      research-article


          Abstract

          The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
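
          For concreteness, the two blame-assignment rules contrasted above can be written side by side. The notation below (forward weights W, output error e, hidden pre-activations a, nonlinearity f, fixed random feedback matrix B, elementwise product ⊙) is chosen here for illustration and is not quoted from the article:

          % Backpropagation: blame reaches a hidden layer through the transpose of the forward weights
          \delta_{\mathrm{hidden}} = (W^{\top} e) \odot f'(a)

          % The mechanism described above (often called feedback alignment): the transpose is replaced by a fixed random matrix B
          \delta_{\mathrm{hidden}} = (B e) \odot f'(a)

          Because B is fixed, random and need not mirror W, no symmetric backward wiring is required; the article's observation is that the forward weights adapt during training so that this random feedback still delivers useful teaching signals.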

          Abstract

          Multi-layered neural architectures that implement learning require elaborate mechanisms for symmetric backpropagation of errors that are biologically implausible. Here the authors propose a simple resolution to this problem of blame assignment that works even with feedback using random synaptic weights.
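
          A minimal numerical sketch of this resolution, assuming a single-hidden-layer network trained on a toy linear regression task with NumPy. The layer sizes, tanh nonlinearity, learning rate and the random "teacher" matrix used to generate targets are illustrative assumptions, not details taken from the article:

          import numpy as np

          rng = np.random.default_rng(0)

          n_in, n_hid, n_out = 30, 20, 10
          W1 = rng.normal(0.0, 0.1, (n_hid, n_in))   # forward weights: input -> hidden
          W2 = rng.normal(0.0, 0.1, (n_out, n_hid))  # forward weights: hidden -> output
          B = rng.normal(0.0, 0.1, (n_hid, n_out))   # fixed random feedback weights (never updated)

          T = rng.normal(0.0, 1.0, (n_out, n_in))    # hypothetical linear "teacher" defining the targets
          lr = 0.01

          for step in range(10001):
              x = rng.normal(0.0, 1.0, (n_in, 1))
              y_target = T @ x

              # Forward pass
              h = np.tanh(W1 @ x)
              y = W2 @ h

              # Output error
              e = y - y_target

              # Backpropagation would use W2.T here; instead the error is carried backward by the random matrix B
              delta_hidden = (B @ e) * (1.0 - h ** 2)   # (1 - h^2) is the derivative of tanh

              # Simple gradient-descent-style weight updates
              W2 -= lr * e @ h.T
              W1 -= lr * delta_hidden @ x.T

              if step % 2000 == 0:
                  print(f"step {step}: squared error {float(np.sum(e ** 2)):.4f}")

          Swapping B for W2.T in the marked line recovers ordinary backpropagation, which makes it straightforward to compare the two learning curves on the same task, the kind of comparison the summary above refers to.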


          Most cited references (31)


          Top-down influences on visual processing.

          Re-entrant or feedback pathways between cortical areas carry rich and varied information about behavioural context, including attention, expectation, perceptual tasks, working memory and motor commands. Neurons receiving such inputs effectively function as adaptive processors that are able to assume different functional states according to the task being executed. Recent data suggest that the selection of particular inputs, representing different components of an association field, enable neurons to take on different functional roles. In this Review, we discuss the various top-down influences exerted on the visual cortical pathways and highlight the dynamic nature of the receptive field, which allows neurons to carry information that is relevant to the current perceptual demands.

            Selective attention. Long-range and local circuits for top-down modulation of visual cortex processing.

            Top-down modulation of sensory processing allows the animal to select inputs most relevant to current tasks. We found that the cingulate (Cg) region of the mouse frontal cortex powerfully influences sensory processing in the primary visual cortex (V1) through long-range projections that activate local γ-aminobutyric acid-ergic (GABAergic) circuits. Optogenetic activation of Cg neurons enhanced V1 neuron responses and improved visual discrimination. Focal activation of Cg axons in V1 caused a response increase at the activation site but a decrease at nearby locations (center-surround modulation). Whereas somatostatin-positive GABAergic interneurons contributed preferentially to surround suppression, vasoactive intestinal peptide-positive interneurons were crucial for center facilitation. Long-range corticocortical projections thus act through local microcircuits to exert spatially specific top-down modulation of sensory processing. Copyright © 2014, American Association for the Advancement of Science.

              Sensorimotor mismatch signals in primary visual cortex of the behaving mouse.

              Studies in anesthetized animals have suggested that activity in early visual cortex is mainly driven by visual input and is well described by a feedforward processing hierarchy. However, evidence from experiments on awake animals has shown that both eye movements and behavioral state can strongly modulate responses of neurons in visual cortex; the functional significance of this modulation, however, remains elusive. Using visual-flow feedback manipulations during locomotion in a virtual reality environment, we found that responses in layer 2/3 of mouse primary visual cortex are strongly driven by locomotion and by mismatch between actual and expected visual feedback. These data suggest that processing in visual cortex may be based on predictive coding strategies that use motor-related and visual input to detect mismatches between predicted and actual visual feedback. Copyright © 2012 Elsevier Inc. All rights reserved.

                Author and article information

                Journal
                Nat Commun (Nature Communications)
                Nature Publishing Group
                ISSN: 2041-1723
                Published: 08 November 2016
                Volume: 7
                Article number: 13276
                Affiliations
                [1] Department of Pharmacology, University of Oxford, Oxford OX1 3QT, UK
                [2] Google DeepMind, 5 New Street Square, London EC4A 3TW, UK
                [3] School of Biology, University of St Andrews, Harold Mitchell Building, St Andrews, Fife KY16 9TH, UK
                [4] Departments of Physiology and Medicine, University of Toronto, Toronto, Ontario M5S 1A8, Canada
                [5] Centre for Vision Research, York University, Toronto, Ontario M3J 1P3, Canada
                Article
                Article ID: ncomms13276
                DOI: 10.1038/ncomms13276
                PMC: PMC5105169
                PMID: 27824044
                Copyright © 2016, The Author(s)

                This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/

                History
                Received: 07 January 2016
                Accepted: 16 September 2016
                Categories
                Article

