A balanced memory network

Preprint (Open Access)

Abstract

A fundamental problem in neuroscience is understanding how working memory -- the ability to store information at intermediate timescales, like tens of seconds -- is implemented in realistic neuronal networks. The most likely candidate mechanism is the attractor network, and a great deal of effort has gone toward investigating it theoretically. Yet, despite almost a quarter century of intense work, attractor networks are not fully understood. In particular, there are still two unanswered questions. First, how is it that attractor networks exhibit irregular firing, as is observed experimentally during working memory tasks? And second, how many memories can be stored under biologically realistic conditions? Here we answer both questions by studying an attractor neural network in which inhibition and excitation balance each other. Using mean field analysis, we derive a three-variable description of attractor networks. From this description it follows that irregular firing can exist only if the number of neurons involved in a memory is large. The same mean field analysis also shows that the number of memories that can be stored in a network scales with the number of excitatory connections, a result that has been suggested for simple models but never shown for realistic ones. Both of these predictions are verified using simulations with large networks of spiking neurons.
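
As a rough illustration of the kind of model the abstract describes, the sketch below (Python with NumPy) builds a sparsely connected excitatory-inhibitory network of leaky integrate-and-fire neurons with weights of order 1/sqrt(K), where K is the number of connections per neuron, embeds one memory by potentiating the excitatory synapses within a subpopulation, and applies a brief cue. All sizes, weights, and the cueing protocol are illustrative assumptions rather than the paper's parameters, and whether elevated, irregular delay activity actually persists depends on tuning.

```python
import numpy as np

# Illustrative parameters only, not the paper's.
N_E, N_I = 800, 200          # excitatory / inhibitory neurons
K = 100                      # average connections per neuron (sparse: K << N_E)
dt, T = 0.1, 1000.0          # time step and duration (ms)
tau, v_th, v_reset = 20.0, 1.0, 0.0

rng = np.random.default_rng(0)

def sparse_block(n_post, n_pre, J):
    """Random sparse connectivity with ~K inputs per neuron and 1/sqrt(K) weights."""
    mask = rng.random((n_post, n_pre)) < K / n_pre
    return (J / np.sqrt(K)) * mask

# Full weight matrix from E/I blocks; E and I weights are comparable in size.
W = np.zeros((N_E + N_I, N_E + N_I))
W[:N_E, :N_E] = sparse_block(N_E, N_E,  1.0)   # E -> E
W[:N_E, N_E:] = sparse_block(N_E, N_I, -2.0)   # I -> E
W[N_E:, :N_E] = sparse_block(N_I, N_E,  1.0)   # E -> I
W[N_E:, N_E:] = sparse_block(N_I, N_I, -1.8)   # I -> I

# Embed one "memory": strengthen E->E synapses within a subpopulation.
mem = np.arange(int(0.1 * N_E))
W[np.ix_(mem, mem)] *= 2.0

v = rng.random(N_E + N_I) * v_th
last_spikes = np.zeros(N_E + N_I)
ext = 1.2 * np.sqrt(K) / tau                   # strong external drive, O(sqrt(K))
rate_mem = []

for step in range(int(T / dt)):
    t = step * dt
    I = np.full(N_E + N_I, ext)
    if 200.0 <= t < 300.0:                     # brief cue to the memory cells
        I[mem] += 0.5
    I += W @ last_spikes / dt                  # pulse input from last step's spikes
    v += (dt / tau) * (-v + tau * I)
    fired = v >= v_th
    v[fired] = v_reset
    last_spikes = fired.astype(float)
    rate_mem.append(fired[mem].mean())

# Mean rate (Hz) of the cued subpopulation before, during, and after the cue.
r = np.array(rate_mem) / (dt / 1000.0)
print("pre-cue %.1f Hz | cue %.1f Hz | delay %.1f Hz"
      % (r[:2000].mean(), r[2000:3000].mean(), r[5000:].mean()))
```

The 1/sqrt(K) weight scaling makes the net input the small difference of large excitatory and inhibitory terms, which is the balance condition the abstract refers to and the source of irregular firing in such networks.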

Most cited references (94)

Activity-dependent scaling of quantal amplitude in neocortical neurons.

Information is stored in neural circuits through long-lasting changes in synaptic strengths. Most studies of information storage have focused on mechanisms such as long-term potentiation and depression (LTP and LTD), in which synaptic strengths change in a synapse-specific manner. In contrast, little attention has been paid to mechanisms that regulate the total synaptic strength of a neuron. Here we describe a new form of synaptic plasticity that increases or decreases the strength of all of a neuron's synaptic inputs as a function of activity. Chronic blockade of cortical culture activity increased the amplitude of miniature excitatory postsynaptic currents (mEPSCs) without changing their kinetics. Conversely, blocking GABA (gamma-aminobutyric acid)-mediated inhibition initially raised firing rates, but over a 48-hour period mEPSC amplitudes decreased and firing rates returned to close to control values. These changes were at least partly due to postsynaptic alterations in the response to glutamate, and apparently affected each synapse in proportion to its initial strength. Such 'synaptic scaling' may help to ensure that firing rates do not become saturated during developmental changes in the number and strength of synaptic inputs, as well as stabilizing synaptic strengths during Hebbian modification and facilitating competition between synapses.
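
For intuition about the multiplicative character of synaptic scaling described above, here is a toy Python sketch (an illustrative construction, not the paper's experimental protocol or data): a single rate neuron scales all of its input weights by a common factor whenever its output rate deviates from a set point, so a simulated activity blockade drives all weights up and a washout drives them back down, while the relative strengths of the synapses are preserved. The rates, gain, and learning rate are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(1)

n_syn = 50
pre_rates = rng.uniform(1.0, 10.0, n_syn)   # fixed presynaptic rates (Hz)
w = rng.uniform(0.1, 1.0, n_syn)            # initial synaptic weights
w0 = w.copy()
r_target = 5.0                              # homeostatic set point (Hz)
eta = 0.01                                  # scaling rate

def output_rate(w, block=1.0):
    """Output rate as a rectified-linear function of total synaptic drive.
    block < 1 mimics activity blockade by attenuating the drive."""
    return max(0.0, 0.1 * block * (w @ pre_rates))

for step in range(2000):
    block = 0.2 if step < 1000 else 1.0     # blockade first, then washout
    r = output_rate(w, block)
    w *= max(0.1, 1.0 + eta * (r_target - r))   # same factor for every synapse

print("final rate: %.2f Hz (target %.1f Hz)" % (output_rate(w), r_target))
print("relative weights preserved:", np.allclose(w / w.sum(), w0 / w0.sum()))
```

Because every synapse is multiplied by the same factor, the rule stabilizes the overall firing rate without erasing synapse-specific differences set up by Hebbian plasticity, which is the feature the abstract emphasizes.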

The glutamate receptor ion channels.

Bayesian inference with probabilistic population codes.

Recent psychophysical experiments indicate that humans perform near-optimal Bayesian inference in a wide variety of tasks, ranging from cue integration to decision making to motor control. This implies that neurons both represent probability distributions and combine those distributions according to a close approximation to Bayes' rule. At first sight, it would seem that the high variability in the responses of cortical neurons would make it difficult to implement such optimal statistical inference in cortical circuits. We argue that, in fact, this variability implies that populations of neurons automatically represent probability distributions over the stimulus, a type of code we call probabilistic population codes. Moreover, we demonstrate that the Poisson-like variability observed in cortex reduces a broad class of Bayesian inference to simple linear combinations of populations of neural activity. These results hold for arbitrary probability distributions over the stimulus, for tuning curves of arbitrary shape and for realistic neuronal variability.
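
The "linear combinations" result can be illustrated with a small Python sketch (an idealized toy assuming independent Poisson neurons with identical Gaussian tuning curves and a flat prior, not the paper's general derivation): summing the spike-count vectors evoked by two cues yields a population whose decoded posterior equals the normalized product of the two single-cue posteriors, i.e. optimal cue combination by adding activities.

```python
import numpy as np

rng = np.random.default_rng(2)

s_grid = np.linspace(-10.0, 10.0, 201)     # candidate stimulus values
centers = np.linspace(-12.0, 12.0, 60)     # preferred stimuli, dense tiling
sigma, gain = 1.5, 8.0

def tuning(s):
    """Mean spike counts of all neurons for stimulus value s (Gaussian tuning)."""
    return gain * np.exp(-(s - centers) ** 2 / (2 * sigma ** 2))

def posterior(counts):
    """Posterior over s_grid for Poisson counts and a flat prior.  With dense,
    identical tuning curves the sum of tuning curves is roughly constant in s,
    so that term is dropped here."""
    logf = np.log(np.array([tuning(s) for s in s_grid]) + 1e-12)  # (n_s, n_neurons)
    L = logf @ counts
    p = np.exp(L - L.max())
    return p / p.sum()

s_true = 2.0
r1 = rng.poisson(tuning(s_true))           # spike counts evoked by cue 1
r2 = rng.poisson(tuning(s_true))           # spike counts evoked by cue 2

post1, post2 = posterior(r1), posterior(r2)
combined = posterior(r1 + r2)              # decode the *summed* activity
product = post1 * post2
product /= product.sum()

print("max |combined - product of posteriors| = %.2e"
      % np.abs(combined - product).max())
print("posterior mean from summed activity: %.2f (true s = %.1f)"
      % (s_grid @ combined, s_true))
```

The equality holds here because, under these assumptions, the log posterior is linear in the spike counts, so adding the two count vectors multiplies the corresponding posteriors.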

Author and article information

Journal
arXiv: 0704.3005
DOI: 10.1371/journal.pcbi.0030141

Subjects: Theoretical physics, Neurosciences
