
      The Non-Random Brain: Efficiency, Economy, and Complex Dynamics

      review-article


          Abstract

          Modern anatomical tracing and imaging techniques are beginning to reveal the structural anatomy of neural circuits at small and large scales in unprecedented detail. When examined with analytic tools from graph theory and network science, neural connectivity exhibits highly non-random features, including high clustering and short path length, as well as modules and highly central hub nodes. These characteristic topological features of neural connections shape non-random dynamic interactions that occur during spontaneous activity or in response to external stimulation. Disturbances of connectivity and thus of neural dynamics are thought to underlie a number of disease states of the brain, and some evidence suggests that degraded functional performance of brain networks may be the outcome of a process of randomization affecting their nodes and edges. This article provides a survey of the non-random structure of neural connectivity, primarily at the large scale of regions and pathways in the mammalian cerebral cortex. In addition, we will discuss how non-random connections can give rise to differentiated and complex patterns of dynamics and information flow. Finally, we will explore the idea that at least some disorders of the nervous system are associated with increased randomness of neural connections.
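The topological measures named in the abstract, high clustering and short path length, can be computed with nothing more than breadth-first search. Below is a minimal pure-Python sketch; the ring-lattice toy graph and all function names are illustrative stand-ins, not code from the article:

```python
from collections import deque

def avg_clustering(adj):
    """Mean local clustering coefficient of an undirected graph (dict of sets)."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        # Count edges among v's neighbours (each pair once).
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs (assumes a connected graph)."""
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:  # breadth-first search from s
            v = queue.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    queue.append(u)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring_lattice(n, k):
    """Ring of n nodes, each connected to its k nearest neighbours on each side."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for d in range(1, k + 1):
            adj[v].add((v + d) % n)
            adj[(v + d) % n].add(v)
    return adj

# A regular lattice is highly clustered but has long paths; small-world
# networks such as cortical graphs keep the clustering while shortening paths.
G = ring_lattice(60, 3)
C = avg_clustering(G)   # 0.6 for this lattice
L = avg_path_length(G)  # roughly 5.4
```

Comparing these two numbers against a degree-matched random graph is the standard way to quantify "small-world" structure.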

          Related collections

Most cited references (67)


          Emergence of scaling in random networks

A.-L. Barabási, R. Albert (1999)
Systems as diverse as genetic networks or the world wide web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature is found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, indicating that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.
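The two growth mechanisms described above, continuous expansion and preferential attachment, can be sketched in a few lines. This is an illustrative reimplementation of the idea, not the authors' code; the function name and parameters are assumptions:

```python
import random

def preferential_attachment(n, m, seed=1):
    """Grow a network vertex by vertex; each newcomer adds m edges to existing
    vertices chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    # Small fully connected seed core of m + 1 vertices.
    core = range(m + 1)
    edges = [(i, j) for i in core for j in core if i < j]
    # 'targets' holds one entry per edge endpoint, so uniform draws from it
    # implement degree-proportional (preferential) selection.
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges

E = preferential_attachment(500, 2)
degree = {}
for u, v in E:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
# Heavy-tailed degrees: a few early hubs collect far more links than the mean.
```

The resulting degree distribution is heavy-tailed, which is the signature scale-free behaviour the abstract refers to.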

            Modularity and community structure in networks

            M. Newman (2006)
            Many networks of interest in the sciences, including a variety of social and biological networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure has attracted considerable recent attention. One of the most sensitive detection methods is optimization of the quality function known as "modularity" over the possible divisions of a network, but direct application of this method using, for instance, simulated annealing is computationally costly. Here we show that the modularity can be reformulated in terms of the eigenvectors of a new characteristic matrix for the network, which we call the modularity matrix, and that this reformulation leads to a spectral algorithm for community detection that returns results of better quality than competing methods in noticeably shorter running times. We demonstrate the algorithm with applications to several network data sets.
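The spectral method summarized above can be sketched without any linear-algebra library: form Newman's modularity matrix B = A - k k^T / 2m, find its leading eigenvector by shifted power iteration, and split nodes by sign. A toy illustration, assuming a six-node graph and function name invented for the example:

```python
def modularity_split(adj):
    """Spectral community bisection: take the leading eigenvector of Newman's
    modularity matrix B = A - k k^T / 2m and split nodes by its sign."""
    nodes = sorted(adj)
    n = len(nodes)
    k = [len(adj[v]) for v in nodes]
    two_m = float(sum(k))
    # Shifting by two_m * I makes every eigenvalue positive, so power
    # iteration converges to the most positive eigenvector of B itself.
    x = [1.0 + 0.01 * i for i in range(n)]  # deterministic start vector
    for _ in range(300):
        y = []
        for i, v in enumerate(nodes):
            s = two_m * x[i]
            for j, u in enumerate(nodes):
                a_ij = 1.0 if u in adj[v] else 0.0
                s += (a_ij - k[i] * k[j] / two_m) * x[j]
            y.append(s)
        norm = max(abs(t) for t in y)
        x = [t / norm for t in y]
    return {v: (1 if x[i] > 0 else -1) for i, v in enumerate(nodes)}

# Two triangles joined by a single bridge edge; the natural modules are
# {0, 1, 2} and {3, 4, 5}.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
groups = modularity_split(adj)
```

For larger networks one would use a sparse eigensolver, and Newman's full method recursively subdivides each group, but the sign-split above is the core step.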

              Real-time computing without stable states: a new framework for neural computation based on perturbations.

W. Maass, T. Natschläger, H. Markram (2002)
              A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous computational model, the liquid state machine, that, unlike Turing machines, does not require sequential transitions between well-defined discrete internal states. It is supported, as the Turing machine is, by rigorous mathematical results that predict universal computational power under idealized conditions, but for the biologically more realistic scenario of real-time processing of time-varying inputs. Our approach provides new perspectives for the interpretation of neural coding, the design of experiments and data analysis in neurophysiology, and the solution of problems in robotics and neurotechnology.
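The idea of reading information out of transient states can be illustrated with a much-simplified rate-based reservoir rather than the paper's spiking liquid state machine; everything below (network size, weight scale, function names) is an assumption made for the sketch:

```python
import math
import random

def reservoir_state(inputs, n=20, scale=0.4, seed=7):
    """Drive a fixed random recurrent tanh network with a scalar input stream
    and return the final state vector (a snapshot of the transient dynamics)."""
    rng = random.Random(seed)
    # Weak recurrent weights keep the dynamics contracting (fading memory).
    W = [[scale * rng.uniform(-1, 1) / math.sqrt(n) for _ in range(n)]
         for _ in range(n)]
    w_in = [rng.uniform(-1, 1) for _ in range(n)]
    x = [0.0] * n
    for u in inputs:
        x = [math.tanh(sum(W[i][j] * x[j] for j in range(n)) + w_in[i] * u)
             for i in range(n)]
    return x

def dist(a, b):
    """Euclidean distance between two state vectors."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

# Separation: different recent input histories leave different transient states,
# which a trained linear readout could then discriminate.
xa = reservoir_state([1, 0, 1, 0, 1])
xb = reservoir_state([0, 1, 0, 1, 0])

# Fading memory: an early difference is washed out by a long identical suffix.
xc = reservoir_state([1] + [0] * 50)
xd = reservoir_state([0] + [0] * 50)
```

The contrast between the two distance measurements is the "fading memory" property the abstract describes: recent inputs are separable in the current state, while remote ones gradually cease to matter.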

                Author and article information

                Journal
                Front Comput Neurosci
                Front. Comput. Neurosci.
                Frontiers in Computational Neuroscience
                Frontiers Research Foundation
                1662-5188
                06 December 2010
                08 February 2011
                2011
                Volume: 5
                Article: 5
                Affiliations
                1Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
                Author notes

                Edited by: Arvind Kumar, Albert-Ludwig University Freiburg, Germany

                Reviewed by: Xiao-Jing Wang, Yale University School of Medicine, USA; Marcus Kaiser, Seoul National University, South Korea

                *Correspondence: Olaf Sporns, Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA. e-mail: osporns@indiana.edu

                Dr. Xiao-Jing Wang was assisted by Rishidev Chaudhuri, Yale University School of Medicine, in the reviewing of this article.

                Article
                DOI: 10.3389/fncom.2011.00005
                PMCID: PMC3037776
                PMID: 21369354
                Copyright © 2011 Sporns.

                This is an open-access article subject to an exclusive license agreement between the authors and Frontiers Media SA, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are credited.

                History
                Received: 31 October 2010
                Accepted: 25 January 2011
                Page count
                Figures: 5, Tables: 0, Equations: 0, References: 131, Pages: 13, Words: 12319
                Categories
                Neuroscience
                Review Article

                Neurosciences
                networks, neuroanatomy, neuroimaging, neural dynamics, connectome, complex systems
