
      Learning modular representations from global sparse coding networks

      Dyer et al. [1]
      BMC Neuroscience
      BioMed Central
      Nineteenth Annual Computational Neuroscience Meeting: CNS*2010
      24–30 July 2010


          Abstract

          The introduction of the efficient coding hypothesis by Barlow in 1961 marked the beginning of a detailed investigation into how neurons could adapt their receptive field (RF) profiles to represent their external sensory environment more efficiently. The efficiency of the neural code was originally studied in the context of a single neuron, primarily with respect to the amount of information that could be sent from a single cell across an axonal communication channel. With the development of techniques to examine the activity of multiple neurons in intact neural circuits, our view of information processing in sensory systems has shifted from a neuron-centric view to a network-centric one. The sparse coding hypothesis attempts to describe how efficient coding can be performed at the network level. In sparse coding models, a network’s representation is deemed efficient if the number of neurons that are active within a population is small relative to the size of the population. Mounting evidence from both experimental [1] and computational modeling [2] studies shows that sensory systems employ sparse population coding. While these studies have focused on testing the sparse coding hypothesis with respect to the development of RF profiles in different sensory environments, predictions of information processing in mature neural sparse coding circuits have yet to be studied in detail, let alone tested experimentally. To determine relevant predictions of sparse coding models for mature neural circuits, we began by studying current neurally plausible models for sparse coding [3,4] that employ a number of processes known to take place within cortical networks, including recurrent inhibition, stimulus-driven excitation, and thresholding of each cell’s membrane potential. These models also predict that, in order to perform sparse approximation across a network of neurons, the strength of inhibition between any pair of neurons is given by the coherence between the RF profiles of the pre- and post-synaptic cells. This implies that as a cell’s RF profile changes over time, the cell would need to alert all interneurons that synapse onto excitatory cells with overlapping receptive fields of the precise changes in its RF profile. Furthermore, predictions made about the spatial arrangement of cells differ from observations of the spatial organization of cells in the cortex. To move towards a more biologically accurate model for sparse coding in sensory systems, we incorporated a simple Hebbian learning rule into the locally competitive sparse coding model described in [3]. We found that as connections are strengthened amongst all the cells that become active in response to a given stimulus, the networks that emerge exhibit modularity (akin to columnar architectures) and small-world topologies (high clustering with small average path length), both of which have been observed in cortical networks. Using this model, we go on to show that under certain assumptions, orientation maps emerge. In addition to suggesting that sparse coding must be refined to incorporate stimulus-dependent plasticity, our results suggest that analyzing the structure of the coherence amongst the RFs of neighboring neurons should enable a more principled investigation of the sparse coding hypothesis in intact mature neural circuits.
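
          The network mechanics described above can be made concrete with a small numerical sketch. The Python fragment below is not the authors' implementation; it is a minimal illustration, under assumed sizes and constants, of an LCA-style sparse coding network in the spirit of [3] (leaky integration, stimulus-driven excitation Phi^T x, thresholding of membrane potentials, and recurrent inhibition whose strength is the coherence between RF profiles), followed by a generic Hebbian update that strengthens lateral connections among co-active cells. The threshold function, learning rate, and random stimuli are all illustrative assumptions.

          import numpy as np

          # Minimal sketch, not the authors' code: LCA-style sparse coding with
          # recurrent inhibition set by RF coherence, plus a Hebbian co-activity rule.
          rng = np.random.default_rng(0)
          n_inputs, n_neurons = 64, 128                 # illustrative sizes (assumed)
          Phi = rng.standard_normal((n_inputs, n_neurons))
          Phi /= np.linalg.norm(Phi, axis=0)            # unit-norm RF profiles

          def soft_threshold(u, lam):
              """Threshold each cell's membrane potential to obtain its activity."""
              return np.where(np.abs(u) > lam, u - lam * np.sign(u), 0.0)

          def lca(x, Phi, lam=0.1, tau=10.0, dt=1.0, n_steps=200):
              """Leaky-integrator dynamics: excitation Phi^T x, inhibition (Phi^T Phi - I) a."""
              drive = Phi.T @ x                         # stimulus-driven excitation
              G = Phi.T @ Phi - np.eye(Phi.shape[1])    # inhibition strength = RF coherence
              u = np.zeros(Phi.shape[1])                # membrane potentials
              for _ in range(n_steps):
                  a = soft_threshold(u, lam)
                  u += (dt / tau) * (-u + drive - G @ a)
              return soft_threshold(u, lam)

          # Hebbian co-activity rule (assumed form): cells active for the same stimulus
          # strengthen their lateral connection, so W accumulates modular structure.
          W = np.zeros((n_neurons, n_neurons))
          for _ in range(50):
              x = rng.standard_normal(n_inputs)         # stand-in for a natural stimulus
              active = (np.abs(lca(x, Phi)) > 0).astype(float)
              W += 0.01 * (np.outer(active, active) - np.diag(active))

          Graph measures such as the clustering coefficient and average path length of a thresholded W would then be the quantities to inspect for the modular, small-world structure reported in the abstract.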

          Related collections

          Most cited references (2)


          A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields.

          Computational models of primary visual cortex have demonstrated that principles of efficient coding and neuronal sparseness can explain the emergence of neurones with localised oriented receptive fields. Yet, existing models have failed to predict the diverse shapes of receptive fields that occur in nature. The existing models used a particular "soft" form of sparseness that limits average neuronal activity. Here we study models of efficient coding in a broader context by comparing soft and "hard" forms of neuronal sparseness. As a result of our analyses, we propose a novel network model for visual cortex. The model forms efficient visual representations in which the number of active neurones, rather than mean neuronal activity, is limited. This form of hard sparseness also economises cortical resources like synaptic memory and metabolic energy. Furthermore, our model accurately predicts the distribution of receptive field shapes found in the primary visual cortex of cat and monkey.
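
          To make the soft/hard distinction concrete, the two forms of sparseness are usually written as the following optimisation problems (a standard sketch, with symbols assumed here rather than taken from the paper: x is the input, \Phi the matrix of receptive-field profiles, a the vector of neuronal activities):

          \begin{align*}
          \text{soft:}\quad & \min_{a} \; \lVert x - \Phi a \rVert_2^2 + \lambda \lVert a \rVert_1 \\
          \text{hard:}\quad & \min_{a} \; \lVert x - \Phi a \rVert_2^2 \quad \text{subject to } \lVert a \rVert_0 \le k
          \end{align*}

          The \ell_1 penalty limits mean activity, whereas the \ell_0 constraint caps the number of active neurones at k, which is the "hard" form adopted by the model above.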

            An unsupervised learning model of neural plasticity: Orientation selectivity in goggle-reared kittens.

            The selectivities of neurons in primary visual cortex are often considered to be adapted to the statistics of natural images. Accordingly, simple cell-like tuning emerges when unsupervised learning models that seek sparse representations of input probabilities are trained on natural scenes. However, orientation tuning develops before structured vision starts, rendering these previous results moot as models of activity-dependent development. A more stringent examination of such models comes from experiments demonstrating altered neural response properties in goggle-reared kittens. We show that an unsupervised learning model of cortical responsivity accounts well for the dramatic effects of stimulus-driven development during goggle-rearing.

              Author and article information

              Conference: Nineteenth Annual Computational Neuroscience Meeting: CNS*2010, San Antonio, TX, USA, 24–30 July 2010
              Journal: BMC Neuroscience (BMC Neurosci), BioMed Central; ISSN 1471-2202
              Published: 20 July 2010
              Citation: BMC Neuroscience 2010, 11(Suppl 1): P131
              Affiliations: [1] Electrical & Computer Engineering Dept., Rice University, Houston, TX, 77005, USA
              Article ID: 1471-2202-11-S1-P131
              DOI: 10.1186/1471-2202-11-S1-P131
              PMC: 3090834
              Copyright ©2010 Dyer et al; licensee BioMed Central Ltd.
              Categories: Poster Presentation; Neurosciences
