
      Self-organized criticality as a fundamental property of neural systems


          Abstract

          The neural criticality hypothesis states that the brain may be poised in a critical state at a boundary between different types of dynamics. Theoretical and experimental studies show that critical systems often exhibit optimal computational properties, suggesting the possibility that criticality has been evolutionarily selected as a useful trait for our nervous system. Evidence for criticality has been found in cell cultures, brain slices, and anesthetized animals. Yet, inconsistent results were reported for recordings in awake animals and humans, and current results point to open questions about the exact nature and mechanism of criticality, as well as its functional role. Therefore, the criticality hypothesis has remained a controversial proposition. Here, we provide an account of the mathematical and physical foundations of criticality. In the light of this conceptual framework, we then review and discuss recent experimental studies with the aim of identifying important next steps to be taken and connections to other fields that should be explored.
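
To make the "boundary between different types of dynamics" concrete, a branching process is the standard toy model in this literature (the sketch below is mine, not taken from the article): each active neuron triggers on average sigma others. For sigma < 1 activity dies out quickly, for sigma > 1 it blows up, and only at the critical point sigma = 1 do avalanche sizes become broadly, power-law distributed.

import random

def avalanche_size(sigma, max_size=10_000):
    # One avalanche of a binary branching process: each active unit has
    # two potential descendants, each activated with probability sigma/2,
    # so the expected branching ratio is sigma.
    active, size = 1, 1
    while active and size < max_size:
        offspring = sum(random.random() < sigma / 2 for _ in range(2 * active))
        active = offspring
        size += offspring
    return size

for sigma in (0.8, 1.0, 1.2):  # subcritical, critical, supercritical
    sizes = [avalanche_size(sigma) for _ in range(2000)]
    print(f"sigma={sigma}: mean size {sum(sizes) / len(sizes):.1f}, max {max(sizes)}")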

Most cited references (90)


Emergence of scaling in random networks (Barabási and Albert, Science, 1999)

Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature is found to be a consequence of two generic mechanisms: networks expand continuously by the addition of new vertices, and new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, indicating that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.
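
The growth rule this abstract describes is short enough to sketch (the function name and parameters below are mine, not from the paper): new vertices link to m existing vertices chosen with probability proportional to their current degree, implemented here by sampling uniformly from a list that contains every vertex once per incident edge.

import random

def barabasi_albert(n, m):
    # Grow a graph by preferential attachment: each new vertex links to m
    # existing vertices chosen with probability proportional to degree.
    # Sampling uniformly from 'targets', which lists every vertex once per
    # incident edge, implements the degree-proportional choice.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]  # seed clique
    targets = [v for edge in edges for v in edge]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend((new, t))
    return edges

edges = barabasi_albert(n=10_000, m=2)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
# For this model the degree distribution approaches P(k) ~ k^(-3):
# a few heavily connected hubs, many low-degree vertices.
print("max degree:", max(degree.values()), "median:", sorted(degree.values())[len(degree) // 2])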

Power-law distributions in empirical data (Clauset, Shalizi, and Newman, SIAM Review, 2009)

Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the detection and characterization of power laws is complicated by the large fluctuations that occur in the tail of the distribution (the part of the distribution representing large but rare events) and by the difficulty of identifying the range over which power-law behavior holds. Commonly used methods for analyzing power-law data, such as least-squares fitting, can produce substantially inaccurate estimates of parameters for power-law distributions, and even in cases where such methods return accurate answers they are still unsatisfactory because they give no indication of whether the data obey a power law at all. Here we present a principled statistical framework for discerning and quantifying power-law behavior in empirical data. Our approach combines maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov statistic and likelihood ratios. We evaluate the effectiveness of the approach with tests on synthetic data and give critical comparisons to previous approaches. We also apply the proposed methods to twenty-four real-world data sets from a range of different disciplines, each of which has been conjectured to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data while in others the power law is ruled out.
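
The core of the recipe is compact enough to sketch. Assuming a continuous power law above a known cutoff xmin (the paper additionally selects xmin by minimizing the KS distance, which is omitted here), the maximum-likelihood exponent and the Kolmogorov-Smirnov statistic are:

import numpy as np

def fit_power_law(x, xmin):
    # Continuous maximum-likelihood estimate of the exponent for the
    # tail x >= xmin:  alpha = 1 + n / sum(log(x_i / xmin)),
    # plus the Kolmogorov-Smirnov distance between data and fitted CDF.
    tail = np.sort(np.asarray(x, dtype=float))
    tail = tail[tail >= xmin]
    n = len(tail)
    alpha = 1.0 + n / np.sum(np.log(tail / xmin))
    empirical = np.arange(1, n + 1) / n
    model = 1.0 - (tail / xmin) ** (1.0 - alpha)  # fitted power-law CDF
    ks = np.max(np.abs(empirical - model))
    return alpha, ks

# Sanity check on synthetic data drawn from a power law with alpha = 2.5:
rng = np.random.default_rng(0)
samples = (1.0 - rng.random(50_000)) ** (-1.0 / 1.5)  # inverse-CDF sampling, xmin = 1
alpha_hat, ks = fit_power_law(samples, xmin=1.0)
print(f"alpha ~ {alpha_hat:.3f}, KS distance = {ks:.4f}")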

Real-time computing without stable states: a new framework for neural computation based on perturbations (Maass, Natschläger, and Markram, Neural Computation, 2002)

              A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous computational model, the liquid state machine, that, unlike Turing machines, does not require sequential transitions between well-defined discrete internal states. It is supported, as the Turing machine is, by rigorous mathematical results that predict universal computational power under idealized conditions, but for the biologically more realistic scenario of real-time processing of time-varying inputs. Our approach provides new perspectives for the interpretation of neural coding, the design of experiments and data analysis in neurophysiology, and the solution of problems in robotics and neurotechnology.
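
The two ingredients of the liquid state machine, an untrained dynamical reservoir and a trained readout, can be sketched with rate units (a simplification; the paper's model uses spiking integrate-and-fire neurons, and all names below are mine): transient states of a fixed random recurrent network serve as fading memory, and a linear readout fitted by least squares recovers a delayed input from them.

import numpy as np

rng = np.random.default_rng(1)

# A fixed random recurrent network stands in for the "liquid": its
# transient states carry a fading memory of the recent input stream.
N, T = 200, 2000
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the dynamics stable
w_in = rng.normal(0.0, 1.0, N)

u = rng.uniform(-1.0, 1.0, T)  # time-varying input stream
states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])  # only the state evolves; W is never trained
    states[t] = x

# Train a linear readout by least squares to recover the input from three
# steps ago: a fading-memory task solved without any stable internal state.
delay = 3
X, y = states[delay:], u[:-delay]
w_out, *_ = np.linalg.lstsq(X, y, rcond=None)
print("readout correlation:", round(float(np.corrcoef(X @ w_out, y)[0, 1]), 3))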

                Author and article information

Journal
Frontiers in Systems Neuroscience (Front. Syst. Neurosci.)
Publisher: Frontiers Media S.A.
ISSN: 1662-5137
Published: 23 September 2014
Volume: 8, Article: 166
Affiliations
1. Computational Neurophysiology Group, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
2. Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
3. École Normale Supérieure, Paris, France
4. Department of Engineering Mathematics, Merchant Venturers School of Engineering, University of Bristol, Bristol, UK
                Author notes

                Edited by: Dietmar Plenz, National Institute of Mental Health, NIH, USA

                Reviewed by: Shan Yu, National Institute of Mental Health, USA; Woodrow Shew, University of Arkansas, USA; Hongdian Yang, Johns Hopkins University School of Medicine, USA

*Correspondence: Janina Hesse, Computational Neurophysiology Group, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Philippstr. 13, Building 4, 10115 Berlin, Germany. E-mail: janina.hesse@bccn-berlin.de

                This article was submitted to the journal Frontiers in Systems Neuroscience.

Article identifiers
DOI: 10.3389/fnsys.2014.00166
PMCID: PMC4171833
PMID: 25294989
                Copyright © 2014 Hesse and Gross.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

History
Received: 23 May 2014
Accepted: 25 August 2014
                Page count
                Figures: 4, Tables: 1, Equations: 2, References: 100, Pages: 14, Words: 13466
Categories
Neuroscience; Review Article

Keywords
self-organized criticality, brain, phase transition, dynamics, neural network
