Neuron's Eye View: Inferring Features of Complex Stimuli from Neural Responses

Preprint

      Abstract

      Experiments that study neural encoding of stimuli at the level of individual neurons typically choose a small set of features present in the world (contrast and luminance for vision, pitch and intensity for sound) and assemble a stimulus set that systematically (and preferably exhaustively) varies along these dimensions. Neuronal responses, in the form of firing rates, are then examined for modulation with respect to these features via some form of regression. This approach requires that experimenters know (or guess) in advance the relevant features coded by a given population of neurons. Unfortunately, for domains as complex as social interaction or natural movement, the relevant feature space is poorly understood, and an arbitrary a priori choice of feature sets may give rise to confirmation bias. Here, we present a Bayesian model for exploratory data analysis that is capable of automatically identifying the features present in unstructured stimuli based solely on neuronal responses. Our approach is unique within the class of latent state space models of neural activity in that it assumes that firing rates of neurons are sensitive to multiple discrete time-varying features tied to the stimulus, each of which has Markov (or semi-Markov) dynamics. That is, we model neural activity as driven by the dynamics of the stimulus rather than by dynamics intrinsic to the neurons themselves. We derive a fast variational Bayesian inference algorithm and show that it correctly recovers hidden features in synthetic data, as well as ground-truth stimulus features in a prototypical neural dataset. To demonstrate the utility of the algorithm, we also apply it to an exploratory analysis of prefrontal cortex recordings performed while monkeys watched naturalistic videos of primate social activity.
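The abstract assumes a specific generative structure: several discrete stimulus features, each evolving as its own Markov chain, jointly modulate the firing rates of the recorded population. As a minimal sketch of that generative assumption only (not the authors' model or inference code, and with all parameter values invented for illustration), one could simulate such data as follows:

# Hedged sketch of the generative assumption described in the abstract:
# K binary stimulus features, each an independent Markov chain, drive the
# log firing rates of N neurons; spike counts are Poisson. Not the paper's
# actual model or its variational inference code.
import numpy as np

rng = np.random.default_rng(0)
T, K, N = 500, 3, 20                # time bins, latent features, neurons
stay = 0.95                         # probability each feature keeps its state

# Markov dynamics for each feature
z = np.zeros((T, K), dtype=int)
z[0] = rng.integers(0, 2, size=K)
for t in range(1, T):
    flip = rng.random(K) > stay
    z[t] = np.where(flip, 1 - z[t - 1], z[t - 1])

# Hypothetical neuron-specific baselines and feature loadings
baseline = rng.normal(np.log(5.0), 0.2, size=N)   # roughly 5 spikes per bin at rest
loadings = rng.normal(0.0, 0.5, size=(K, N))

# Firing rates depend on the time-varying stimulus features; spikes are Poisson
rates = np.exp(baseline + z @ loadings)           # shape (T, N)
spikes = rng.poisson(rates)
print(spikes.shape, spikes.mean())

The paper's contribution is the reverse problem: given only the spikes, infer the feature trajectories and loadings with a fast variational Bayesian algorithm (and with semi-Markov rather than purely Markov dynamics where appropriate); that inference machinery is not reproduced here.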

      Most cited references (18)

      Spatio-temporal correlations and visual signalling in a complete neuronal population.

      Statistical dependencies in the responses of sensory neurons govern both the amount of stimulus information conveyed and the means by which downstream neurons can extract it. Although a variety of measurements indicate the existence of such dependencies, their origin and importance for neural coding are poorly understood. Here we analyse the functional significance of correlated firing in a complete population of macaque parasol retinal ganglion cells using a model of multi-neuron spike responses. The model, with parameters fit directly to physiological data, simultaneously captures both the stimulus dependence and detailed spatio-temporal correlations in population responses, and provides two insights into the structure of the neural code. First, neural encoding at the population level is less noisy than one would expect from the variability of individual neurons: spike times are more precise, and can be predicted more accurately when the spiking of neighbouring neurons is taken into account. Second, correlations provide additional sensory information: optimal, model-based decoding that exploits the response correlation structure extracts 20% more information about the visual scene than decoding under the assumption of independence, and preserves 40% more visual information than optimal linear decoding. This model-based approach reveals the role of correlated activity in the retinal coding of visual stimuli, and provides a general framework for understanding the importance of correlated activity in populations of neurons.
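The comparison at the heart of this abstract (a decoder that exploits the measured correlation structure versus one that assumes independent neurons) can be illustrated with a deliberately simple toy example. The sketch below is not the coupled spike-response model of the paper; it uses invented Gaussian responses from two hypothetical neurons purely to show how ignoring noise correlations can discard stimulus information:

# Toy illustration only: two "neurons" with correlated trial-to-trial noise,
# decoded either with the full covariance or under an independence assumption.
import numpy as np

rng = np.random.default_rng(1)
n = 20000
means = np.array([[0.0, 0.0], [1.0, 0.0]])      # mean response for stimulus 0 and 1
Sigma = np.array([[1.0, 0.9], [0.9, 1.0]])      # strongly correlated noise

s = rng.integers(0, 2, size=n)                  # true stimulus on each trial
r = means[s] + rng.multivariate_normal(np.zeros(2), Sigma, size=n)

def decode(responses, cov):
    # maximum-likelihood choice between the two Gaussian response models
    icov = np.linalg.inv(cov)
    d0, d1 = responses - means[0], responses - means[1]
    m0 = np.einsum('ij,jk,ik->i', d0, icov, d0)   # Mahalanobis distance to stimulus 0
    m1 = np.einsum('ij,jk,ik->i', d1, icov, d1)   # Mahalanobis distance to stimulus 1
    return (m1 < m0).astype(int)

acc_full = np.mean(decode(r, Sigma) == s)                    # correlation-aware
acc_ind = np.mean(decode(r, np.diag(np.diag(Sigma))) == s)   # assumes independence
print(f"correlation-aware: {acc_full:.3f}  independence-assuming: {acc_ind:.3f}")

With these made-up parameters the correlation-aware decoder recovers noticeably more of the stimulus identity, which is the qualitative point behind the reference's 20% and 40% information figures.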

        Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task.

        Decisions about the visual world can take time to form, especially when information is unreliable. We studied the neural correlate of gradual decision formation by recording activity from the lateral intraparietal cortex (area LIP) of rhesus monkeys during a combined motion-discrimination reaction-time task. Monkeys reported the direction of random-dot motion by making an eye movement to one of two peripheral choice targets, one of which was within the response field of the neuron. We varied the difficulty of the task and measured both the accuracy of direction discrimination and the time required to reach a decision. Both the accuracy and speed of decisions increased as a function of motion strength. During the period of decision formation, the epoch between onset of visual motion and the initiation of the eye movement response, LIP neurons underwent ramp-like changes in their discharge rate that predicted the monkey's decision. A steeper rise in spike rate was associated with stronger stimulus motion and shorter reaction times. The observations suggest that neurons in LIP integrate time-varying signals that originate in the extrastriate visual cortex, accumulating evidence for or against a specific behavioral response. A threshold level of LIP activity appears to mark the completion of the decision process and to govern the tradeoff between accuracy and speed of perception.
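The ramp-to-threshold description above maps naturally onto accumulation-to-bound models of decision making. As a hedged illustration (with invented parameter values, and not a model of LIP firing rates themselves), the sketch below accumulates noisy evidence to a fixed bound and reproduces the reported pattern: stronger motion yields decisions that are both more accurate and faster.

# Toy accumulation-to-bound sketch: drift scales with "motion strength";
# a decision is made when the accumulated evidence hits +bound or -bound.
import numpy as np

rng = np.random.default_rng(2)

def accumulate_to_bound(coherence, n_trials=2000, bound=10.0, noise=1.0):
    """Return (accuracy, mean number of steps to reach the bound)."""
    drift = 0.02 * coherence            # arbitrary scaling of evidence per step
    correct, steps = 0, []
    for _ in range(n_trials):
        x, t = 0.0, 0
        while abs(x) < bound:
            x += drift + noise * rng.standard_normal()
            t += 1
        correct += x > 0                # positive bound = correct direction
        steps.append(t)
    return correct / n_trials, float(np.mean(steps))

for coh in (1, 4, 16):                  # arbitrary "motion strengths"
    acc, rt = accumulate_to_bound(coh)
    print(f"coherence {coh:2d}: accuracy {acc:.2f}, mean steps {rt:.0f}")

In this caricature the steeper effective drift at high coherence plays the role of the steeper ramp in LIP firing rates, and the fixed bound plays the role of the threshold level of activity that marks completion of the decision.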

          Variational inference for Dirichlet process mixtures

            Author and article information

            arXiv preprint: 1512.01408
            Subjects: Machine learning, Neurosciences
