Open Access

      Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines

Research article


          Abstract

Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate-and-fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware.
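The blank-out mechanism summarized above lends itself to a compact illustration. The following Python sketch is an illustrative toy, not the paper's exact formulation: the function name, the blank-out probability, and the threshold dynamics are all assumptions. It shows how an independent random mask over the connections turns a deterministic Hopfield-style update into a stochastic, DropConnect-like one:

    import numpy as np

    rng = np.random.default_rng(0)

    def stochastic_synapse_step(s, W, b, p_blank=0.5):
        # Pick one unit at random (asynchronous update, as in Hopfield networks).
        i = rng.integers(len(s))
        # Blank out each incoming connection independently with probability
        # p_blank: the random mask over connections is the only noise source.
        mask = rng.random(len(s)) >= p_blank
        u = (W[i] * mask) @ s + b[i]
        # Deterministic threshold unit; stochasticity comes from the synapses.
        s[i] = 1 if u > 0 else -1
        return s

    # Toy usage: symmetric random weights with zero self-connections.
    n = 8
    W = rng.standard_normal((n, n))
    W = (W + W.T) / 2
    np.fill_diagonal(W, 0)
    s = rng.choice([-1, 1], size=n)
    for _ in range(100):
        s = stochastic_synapse_step(s, W, np.zeros(n))

Repeated sweeps of this update visit network states stochastically, which is what lets synaptic noise double as a Monte Carlo sampling mechanism.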

Most cited references: 87


Gradient-based learning applied to document recognition

Y. LeCun, L. Bottou, Y. Bengio, P. Haffner (1998)


            Neural networks and physical systems with emergent collective computational abilities.

J. J. Hopfield (1982)
Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices.
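As a concrete illustration of the content-addressable recall this abstract describes, here is a minimal Hopfield-network sketch in Python, using the outer-product (Hebbian) storage rule and asynchronous threshold updates; the pattern sizes, seed, and iteration count are arbitrary choices for the example:

    import numpy as np

    rng = np.random.default_rng(1)

    # Store two random +/-1 patterns with the Hebbian outer-product rule.
    patterns = rng.choice([-1, 1], size=(2, 32))
    W = (patterns.T @ patterns) / patterns.shape[1]
    np.fill_diagonal(W, 0)

    # Corrupt a subpart of one stored memory, then recall it by
    # asynchronous updates: one randomly chosen unit at a time.
    s = patterns[0].copy()
    s[:8] *= -1
    for _ in range(500):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1

    # If the corrupted subpart is small enough, the full memory is
    # typically recovered.
    print(np.array_equal(s, patterns[0]))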

Reducing the dimensionality of data with neural networks.

G. E. Hinton, R. R. Salakhutdinov (2006)

              High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
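To spell out the mechanics, here is a minimal toy version in Python: a linear autoencoder with a small central code layer trained by plain gradient descent on the reconstruction error. The layer sizes, learning rate, and linear activations are illustrative assumptions; the paper's networks are deep and nonlinear and rely on a pretraining step this sketch omits:

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy data: 200 samples in 20 dimensions with an underlying 5-d structure.
    X = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 20))

    # Encoder and decoder around a small central layer (code size 5).
    W_enc = 0.1 * rng.standard_normal((20, 5))
    W_dec = 0.1 * rng.standard_normal((5, 20))

    lr = 0.01
    for _ in range(2000):
        H = X @ W_enc          # encode: high-dimensional input -> low-d code
        err = H @ W_dec - X    # decode and measure the reconstruction error
        # Gradient descent on the mean squared reconstruction error.
        W_dec -= lr * (H.T @ err) / len(X)
        W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

    # Mean squared reconstruction error; small if training converged.
    print(np.mean((X @ W_enc @ W_dec - X) ** 2))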

                Author and article information

Contributors
Emre O. Neftci, Bruno U. Pedroni, Siddharth Joshi, Maruan Al-Shedivat, and Gert Cauwenberghs
Journal
Frontiers in Neuroscience (Front. Neurosci.)
Publisher: Frontiers Media S.A.
ISSN: 1662-4548 (print); 1662-453X (electronic)
Published: 29 June 2016
Volume: 10, Article: 241
                Affiliations
1. Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA
2. Department of Bioengineering, University of California San Diego, La Jolla, CA, USA
3. Electrical and Computer Engineering Department, University of California San Diego, La Jolla, CA, USA
4. Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
                Author notes

                Edited by: Themis Prodromakis, University of Southampton, UK

                Reviewed by: Damien Querlioz, University Paris-Sud, France; Doo Seok Jeong, Korea Institute of Science and Technology, South Korea

*Correspondence: Emre O. Neftci, eneftci@uci.edu

                This article was submitted to Neuromorphic Engineering, a section of the journal Frontiers in Neuroscience

Article
DOI: 10.3389/fnins.2016.00241
PMCID: PMC4925698
PMID: 27445650
                Copyright © 2016 Neftci, Pedroni, Joshi, Al-Shedivat and Cauwenberghs.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

History
Received: 26 January 2016
Accepted: 17 May 2016
                Page count
                Figures: 10, Tables: 2, Equations: 23, References: 78, Pages: 16, Words: 12780
                Funding
Funded by: National Science Foundation (10.13039/100000001), Award ID: CCF-1317373
Funded by: Office of Naval Research (10.13039/100000006), Award ID: MURI 14-13-1-0205
Funded by: Intel Corporation (10.13039/100002418)
                Categories
                Neuroscience
                Original Research

Neurosciences
Keywords: stochastic processes, spiking neural networks, synaptic plasticity, unsupervised learning, Hopfield networks, regularization, synaptic transmission
