      Is Open Access

      Learning recurrent dynamics in spiking networks

      research-article
      eLife
      eLife Sciences Publications, Ltd
      Keywords: spiking network, recurrent dynamics, learning, universal dynamics


          Abstract

          Spiking activity of neurons engaged in learning and performing a task shows complex spatiotemporal dynamics. While the output of recurrent network models can learn to perform various tasks, the possible range of recurrent dynamics that emerge after learning remains unknown. Here we show that modifying the recurrent connectivity with a recursive least squares algorithm provides sufficient flexibility for synaptic and spiking rate dynamics of spiking networks to produce a wide range of spatiotemporal activity. We apply the training method to learn arbitrary firing patterns, stabilize irregular spiking activity in a network of excitatory and inhibitory neurons respecting Dale’s law, and reproduce the heterogeneous spiking rate patterns of cortical neurons engaged in motor planning and movement. We identify sufficient conditions for successful learning, characterize two types of learning errors, and assess the network capacity. Our findings show that synaptically-coupled recurrent spiking networks possess a vast computational capability that can support the diverse activity patterns in the brain.
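The training scheme summarized in the abstract can be sketched in a simplified rate-network form. This is an illustration, not the paper's exact spiking implementation: all parameters, the tanh rate model, and the sinusoidal targets are assumptions. Recursive least squares (RLS) adjusts the recurrent weights so that each unit's recurrent input tracks a target trajectory.

```python
import numpy as np

# Simplified rate-network sketch of RLS training of recurrent weights
# (the paper trains spiking networks; parameters here are illustrative).
rng = np.random.default_rng(0)
N, T, dt = 100, 400, 0.1
phases = rng.uniform(0, 2 * np.pi, N)
# Target trajectories: sine waves with random phases, one per unit.
f_target = np.sin(np.linspace(0, 4 * np.pi, T)[None, :] + phases[:, None])

W = rng.normal(0, 1.5 / np.sqrt(N), (N, N))  # initial random recurrent weights
P = np.eye(N)  # inverse correlation matrix, shared since all units see the same rates
x = rng.normal(0, 1, N)                      # random initial state
errs = []
for epoch in range(5):
    total = 0.0
    for t in range(T):
        r = np.tanh(x)                       # firing rates
        x += dt * (-x + W @ r)               # leaky recurrent dynamics
        err = W @ r - f_target[:, t]         # error of each unit's recurrent input
        total += np.mean(err ** 2)
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)              # RLS gain
        W -= np.outer(err, k)                # rank-one weight correction
        P -= np.outer(k, Pr)
    errs.append(total / T)
print(errs[0], errs[-1])
```

Because every unit receives input from the same population, a single inverse correlation matrix `P` serves all rows of `W`, which is what keeps the per-step cost manageable.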

          Related collections

          Most cited references (50)


          Simple model of spiking neurons.

          A model is presented that reproduces spiking and bursting behavior of known types of cortical neurons. The model combines the biological plausibility of Hodgkin-Huxley-type dynamics and the computational efficiency of integrate-and-fire neurons. Using this model, one can simulate tens of thousands of spiking cortical neurons in real time (1 ms resolution) using a desktop PC.
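The model in this reference fits in a few lines. A minimal sketch with the regular-spiking parameter set from that paper, using plain Euler integration at 1 ms resolution (the step current and simulation length here are assumptions):

```python
# Izhikevich simple model, regular-spiking parameters:
#   v' = 0.04 v^2 + 5 v + 140 - u + I
#   u' = a (b v - u);  on spike (v >= 30 mV): v <- c, u <- u + d
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt = 1.0                       # ms, Euler step
v, u = -65.0, b * -65.0        # initial membrane potential and recovery variable
spikes = []
for t in range(1000):          # 1 s of simulated time
    I = 10.0 if t >= 200 else 0.0   # step current switched on at 200 ms (assumed)
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:              # spike: record time and reset
        spikes.append(t)
        v, u = c, u + d
print(len(spikes))
```

With no input the neuron settles near rest; once the step current arrives it fires repeatedly, reproducing the regular-spiking pattern the reference describes.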

            Real-time computing without stable states: a new framework for neural computation based on perturbations.

            A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous computational model, the liquid state machine, that, unlike Turing machines, does not require sequential transitions between well-defined discrete internal states. It is supported, as the Turing machine is, by rigorous mathematical results that predict universal computational power under idealized conditions, but for the biologically more realistic scenario of real-time processing of time-varying inputs. Our approach provides new perspectives for the interpretation of neural coding, the design of experiments and data analysis in neurophysiology, and the solution of problems in robotics and neurotechnology.
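The fading-memory idea above can be demonstrated in a small sketch (all parameters assumed for illustration): a fixed random reservoir of leaky integrate-and-fire neurons is driven by a slowly varying input, and a ridge-regression readout on low-pass-filtered spike trains reconstructs the input as it was 50 ms earlier — memory carried by transient dynamics, not by stable states.

```python
import numpy as np

# Liquid-state-machine sketch: fixed spiking reservoir + trained linear readout.
rng = np.random.default_rng(2)
N, T, dt = 200, 2000, 1.0                 # neurons, duration (ms), step (ms)
tau_m, tau_s, v_th, delay = 20.0, 20.0, 1.0, 50

# Slowly varying input: Gaussian noise smoothed with a 100 ms boxcar.
u = 5.0 * np.convolve(rng.normal(0, 1, T), np.ones(100) / 100.0, "same")
w_in = rng.uniform(-1, 1, N)              # random input weights
W = rng.normal(0, 0.05 / np.sqrt(N), (N, N))   # weak random recurrence

v = np.zeros(N)
s = np.zeros(N)                           # filtered spike trains = reservoir state
spk = np.zeros(N)
X = np.zeros((T, N))
for t in range(T):
    I = 1.2 + w_in * u[t] + W @ spk       # bias + input + recurrent drive
    v += dt / tau_m * (-v + I)            # leaky integration
    spk = (v >= v_th).astype(float)
    v[spk > 0] = 0.0                      # reset after spike
    s += -dt / tau_s * s + spk            # exponential low-pass filter
    X[t] = s

wash = 200                                # discard initial transient
A = np.hstack([X[wash:], np.ones((T - wash, 1))])  # features + bias column
y = u[wash - delay : T - delay]           # target: input 50 ms in the past
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N + 1), A.T @ y)
mse = np.mean((A @ w_out - y) ** 2)
print(mse, np.var(y))
```

Only the readout weights `w_out` are trained; the reservoir itself is generic and task-independent, which is the point of the liquid-state-machine framework.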

              Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication.

              We present a method for learning nonlinear systems, echo state networks (ESNs). ESNs employ artificial recurrent neural networks in a way that has recently been proposed independently as a learning mechanism in biological brains. The learning method is computationally efficient and easy to use. On a benchmark task of predicting a chaotic time series, accuracy is improved by a factor of 2400 over previous techniques. The potential for engineering applications is illustrated by equalizing a communication channel, where the signal error rate is improved by two orders of magnitude.
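A minimal echo state network along these lines (parameters assumed for illustration): a fixed random reservoir scaled to spectral radius below one, with only the linear readout trained — by ridge regression — to predict the next value of a sine wave one step ahead.

```python
import numpy as np

# Echo state network sketch: fixed random reservoir, trained linear readout.
rng = np.random.default_rng(1)
N, T = 200, 1000
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9 (echo state property)
w_in = rng.uniform(-0.5, 0.5, N)                 # random input weights

u = np.sin(0.1 * np.arange(T + 1))               # driving signal
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])             # reservoir state update
    X[t] = x

wash = 100                                        # discard initial transient
A, y = X[wash:], u[wash + 1 : T + 1]              # predict u one step ahead
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ y)  # ridge regression
mse = np.mean((A @ w_out - y) ** 2)
print(mse)
```

Training reduces to a single linear solve, which is what makes the method "computationally efficient and easy to use" compared with backpropagating through the recurrence.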

                Author and article information

                Contributors
                Journal
                eLife (eLife Sciences Publications, Ltd)
                ISSN: 2050-084X
                Published: 20 September 2018
                Volume: 7, e37124
                Affiliations
                [1] Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, United States
                Author information
                http://orcid.org/0000-0002-1322-6207
                http://orcid.org/0000-0003-1463-9553
                Article
                Article ID: 37124
                DOI: 10.7554/eLife.37124
                PMC: 6195349
                PubMed: 30234488
                ScienceOpen record: 4d7fcbdd-dcd0-42a4-ba03-8a29e1769a01

                This is an open-access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.

                History
                Received: 29 March 2018
                Accepted: 14 September 2018
                Funding
                Funded by: National Institute of Diabetes and Digestive and Kidney Diseases (FundRef: http://dx.doi.org/10.13039/100000062)
                Award ID: Intramural Research Program
                The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
                Categories
                Research Article
                Computational and Systems Biology
                Neuroscience
                Custom metadata
                Modifying the recurrent connectivity of spiking networks provides sufficient flexibility to generate arbitrarily complex recurrent dynamics, suggesting that individual neurons in a recurrent network have the capability to support near universal dynamics.

                Life sciences
                Keywords: spiking network, recurrent dynamics, learning, universal dynamics
