Minimal approach to neuro-inspired information processing


      Abstract

      Learning and mimicking how the brain processes information has been a major research challenge for decades. Despite these efforts, little is known about how we encode, maintain, and retrieve information. One hypothesis assumes that transient states are generated in our intricate network of neurons when the brain is stimulated by sensory input. Based on this idea, powerful computational schemes have been developed. These machine-learning techniques include artificial neural networks, support vector machines, and reservoir computing, among others. In this paper, we concentrate on the reservoir computing (RC) technique using delay-coupled systems. Unlike traditional RC, where information is processed in large recurrent networks of interconnected artificial neurons, we choose a minimal design, implemented via a simple nonlinear dynamical system subject to a delayed self-feedback loop. This design is not intended to represent an actual brain circuit, but aims at identifying the minimal ingredients needed to build an efficient information processor. This simple scheme not only allows us to address fundamental questions but also permits simple hardware implementations. By reducing the neuro-inspired reservoir computing approach to its bare essentials, we find that the nonlinear transient responses of the simple dynamical system enable information processing with excellent performance and at unprecedented speed. We specifically explore different hardware implementations and, in doing so, learn about the role of nonlinearity, noise, system responses, connectivity structure, and the quality of the projection onto the required high-dimensional state space. Beyond its relevance for understanding basic mechanisms, this scheme opens direct technological opportunities that could not be addressed with previous approaches.
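
      To make the delay-based scheme concrete, below is a minimal sketch in Python/NumPy of the kind of reservoir the abstract describes: a single nonlinearity with delayed self-feedback, whose transient responses sampled at N "virtual nodes" along the delay line form the high-dimensional state, read out by ridge regression. All parameter values, the tanh nonlinearity, the local-coupling approximation, and the memory task are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters, not the authors' exact values.
N = 50        # virtual nodes along the delay line
eta = 0.5     # feedback strength
gamma = 0.3   # input scaling
eps = 0.5     # local coupling (mimics the node's finite response time)
T = 1000      # number of input samples

u = rng.uniform(-1, 1, T)               # scalar input stream
mask = rng.choice([-1.0, 1.0], N)       # fixed mask for time multiplexing

states = np.zeros((T, N))
prev = np.zeros(N)                      # node states one delay earlier
for t in range(T):
    cur = np.empty(N)
    for i in range(N):
        drive = np.tanh(eta * prev[i] + gamma * mask[i] * u[t])
        left = cur[i - 1] if i > 0 else prev[-1]
        cur[i] = (1 - eps) * left + eps * drive
    states[t] = cur
    prev = cur

# Train a linear readout (ridge regression) to recall the input
# from 3 steps in the past, using only the transient states.
k = 3
X, y = states[k:], u[:-k]
W = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
print("memory-task MSE:", np.mean((X @ W - y) ** 2))
```

      Note that the only trained element is the linear readout; the dynamical system itself stays fixed, which is what makes simple hardware implementations of this scheme attractive.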


      Most cited references (42)


      Real-time computing without stable states: a new framework for neural computation based on perturbations.

      W. Maass, T. Natschläger, H. Markram (2002)
      A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous computational model, the liquid state machine, that, unlike Turing machines, does not require sequential transitions between well-defined discrete internal states. It is supported, as the Turing machine is, by rigorous mathematical results that predict universal computational power under idealized conditions, but for the biologically more realistic scenario of real-time processing of time-varying inputs. Our approach provides new perspectives for the interpretation of neural coding, the design of experiments and data analysis in neurophysiology, and the solution of problems in robotics and neurotechnology.
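
        The core liquid-state-machine idea, that a generic, untrained recurrent circuit provides fading memory which simple readouts can tap, can be sketched in a few lines. The network below is a rate-based tanh stand-in for the spiking integrate-and-fire liquid, and the recall task is an illustrative assumption, chosen only to show a readout recovering past inputs from transient states.

```python
import numpy as np

rng = np.random.default_rng(1)

N, T = 200, 2000
# Generic random recurrent circuit: fixed, not task-specific.
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
w_in = rng.uniform(-1, 1, N)

u = rng.uniform(-1, 1, T)              # time-varying input stream
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])   # transient, input-driven dynamics
    states[t] = x

# A "readout neuron": a linear map trained to recover the input
# k steps in the past from the current transient state alone.
k = 5
X, y = states[k:], u[:-k]
w_out = np.linalg.lstsq(X, y, rcond=None)[0]
corr = np.corrcoef(X @ w_out, y)[0, 1]
print(f"recall of u(t-{k}) from current state: r = {corr:.2f}")
```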

        Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication.

         H. Jaeger, H. Haas (2004)
        We present a method for learning nonlinear systems, echo state networks (ESNs). ESNs employ artificial recurrent neural networks in a way that has recently been proposed independently as a learning mechanism in biological brains. The learning method is computationally efficient and easy to use. On a benchmark task of predicting a chaotic time series, accuracy is improved by a factor of 2400 over previous techniques. The potential for engineering applications is illustrated by equalizing a communication channel, where the signal error rate is improved by two orders of magnitude.
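
          The echo state network recipe summarized above can be sketched as follows: a fixed random reservoir scaled to spectral radius below one, with only the linear output weights trained. The logistic map used here is an illustrative stand-in for the Mackey-Glass benchmark of the original paper, and all parameter values are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Chaotic series: logistic map (stand-in for the Mackey-Glass benchmark).
T = 3000
s = np.empty(T); s[0] = 0.3
for t in range(T - 1):
    s[t + 1] = 3.9 * s[t] * (1 - s[t])

# Echo state network: fixed random reservoir, spectral radius < 1.
N = 300
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale spectral radius to 0.9
w_in = rng.uniform(-0.5, 0.5, N)

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * s[t])
    states[t] = x

# Train only the linear readout (ridge regression) for one-step
# prediction, discarding an initial washout transient.
washout = 100
X, y = states[washout:-1], s[washout + 1:]
w_out = np.linalg.solve(X.T @ X + 1e-8 * np.eye(N), X.T @ y)
print("1-step NMSE:", np.mean((X @ w_out - y) ** 2) / np.var(y))
```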

          State-dependent computations: spatiotemporal processing in cortical networks.

          D. V. Buonomano, W. Maass (2009)
          A conspicuous ability of the brain is to seamlessly assimilate and process spatial and temporal features of sensory stimuli. This ability is indispensable for the recognition of natural stimuli. Yet, a general computational framework for processing spatiotemporal stimuli remains elusive. Recent theoretical and experimental work suggests that spatiotemporal processing emerges from the interaction between incoming stimuli and the internal dynamic state of neural networks, including not only their ongoing spiking activity but also their 'hidden' neuronal states, such as short-term synaptic plasticity.
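
            One concrete example of such a 'hidden' neuronal state is the classic Tsodyks-Markram model of short-term synaptic plasticity, sketched below: a facilitation variable u and a resource variable x relax between spikes and jump at each spike, so the same synapse responds differently depending on recent spike history. Parameter values and the spike train are illustrative assumptions.

```python
import numpy as np

# Tsodyks-Markram short-term plasticity: between spikes the hidden
# variables relax toward baseline; at each presynaptic spike they jump.
tau_f, tau_d, U = 0.6, 0.2, 0.2   # facilitation/depression time constants (s), baseline release

def psc_amplitudes(spike_times):
    """Relative synaptic efficacy u*x at each spike in `spike_times` (s)."""
    u, x, last = U, 1.0, None
    out = []
    for t in spike_times:
        if last is not None:
            dt = t - last
            u = U + (u - U) * np.exp(-dt / tau_f)      # facilitation decays to U
            x = 1.0 + (x - 1.0) * np.exp(-dt / tau_d)  # resources recover to 1
        u = u + U * (1 - u)   # spike-triggered facilitation jump
        out.append(u * x)     # postsynaptic response amplitude ~ u*x
        x = x * (1 - u)       # the spike depletes a fraction u of resources
        last = t
    return out

# The same synapse responds differently to identical spikes depending on
# recent history: a hidden memory trace of temporal context.
print(psc_amplitudes([0.0, 0.02, 0.04, 0.5]))   # a burst, then a late spike
```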

            Author and article information

            Affiliations
            Instituto de Física Interdisciplinar y Sistemas Complejos (UIB-CSIC), Palma de Mallorca, Spain
            Author notes

            Edited by: Javier M. Buldú, Centro de Tecnología Biomédica, Spain

            Reviewed by: Guillaume Lajoie, Max Planck Institute for Dynamics and Self-Organization, Germany; Danko Nikolic, Max Planck Institute for Brain Research, Germany

            *Correspondence: Ingo Fischer, Instituto de Física Interdisciplinar y Sistemas Complejos (UIB-CSIC), Campus Universitat Illes Balears, E-07122 Palma de Mallorca, Spain. ingo@ifisc.uib-csic.es
            Journal
            Frontiers in Computational Neuroscience (Front. Comput. Neurosci.)
            Frontiers Media S.A.
            ISSN: 1662-5188
            Published: 02 June 2015
            Volume: 9
            Article ID: 4451339
            DOI: 10.3389/fncom.2015.00068
            Copyright © 2015 Soriano, Brunner, Escalona-Morán, Mirasso and Fischer.

            This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

            Counts
            Figures: 6, Tables: 0, Equations: 5, References: 51, Pages: 11, Words: 8175
            Categories
            Neuroscience
            Review
