
Orientation Transfer in Vernier and Stereoacuity Training




      Human performance on various visual tasks can be improved substantially through training. However, the enhancements are frequently specific to relatively low-level stimulus dimensions. While such specificity has often been taken to indicate a low-level neural locus of learning, recent research suggests that the same effects can be accounted for by changes in higher-level areas, in particular in how those areas read out information from lower-level areas in the service of highly practiced decisions. Here we contrast the degree of orientation transfer seen after training on two different tasks: vernier acuity and stereoacuity. Importantly, while a decision rule that could improve vernier acuity (i.e., a discriminant in the image plane) would not transfer across orientations, the simplest rule that could be learned to solve the stereoacuity task (i.e., a discriminant in the depth plane) would be insensitive to changes in orientation. Thus, under a read-out hypothesis, more substantial transfer would be expected after stereoacuity training than after vernier acuity training. To test this prediction, participants were trained (7,500 total trials) on either a stereoacuity (N = 9) or vernier acuity (N = 7) task with the stimuli in either a vertical or horizontal configuration (balanced across participants). Following training, transfer to the untrained orientation was assessed. As predicted, relatively orientation-specific learning was observed in vernier-trained participants, while no evidence of specificity was observed in stereo-trained participants. These results build upon the emerging view that perceptual learning (even very specific learning effects) may reflect changes in the inferences made by high-level areas, rather than necessarily reflecting changes in the receptive field properties of low-level areas.
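      The contrast between the two decision rules can be made concrete with a toy simulation. This is purely illustrative, not the authors' stimuli or analysis, and all names and parameter values are hypothetical: an image-plane discriminant learned for one vernier orientation compares positions along one image axis, so it falls to chance when the stimulus is rotated 90 degrees, whereas a depth-plane discriminant compares disparities, which an image-plane rotation leaves untouched.

```python
import numpy as np

rng = np.random.default_rng(0)


def rotate90(x, y):
    """Rotate image-plane coordinates by 90 degrees."""
    return -y, x


def vernier_trial():
    """Two dots with a small horizontal offset; label = sign of the offset."""
    dx = rng.choice([-0.1, 0.1])
    top, bottom = (dx / 2, 1.0), (-dx / 2, -1.0)
    return top, bottom, np.sign(dx)


def image_plane_rule(top, bottom):
    """Learned vernier rule: a discriminant on image-plane x positions."""
    return np.sign(top[0] - bottom[0])


def stereo_trial():
    """Two elements with a small depth (disparity) difference; label = sign."""
    dz = rng.choice([-0.1, 0.1])
    upper, lower = (0.0, 1.0, dz / 2), (0.0, -1.0, -dz / 2)
    return upper, lower, np.sign(dz)


def depth_plane_rule(upper, lower):
    """Learned stereo rule: a discriminant on depth; ignores x and y."""
    return np.sign(upper[2] - lower[2])


def accuracies(n=2000, rotated=False):
    """Proportion correct for each rule, with or without a 90-degree
    image-plane rotation of the stimulus (depth z is unaffected)."""
    v_hits = s_hits = 0
    for _ in range(n):
        top, bottom, label = vernier_trial()
        if rotated:
            top, bottom = rotate90(*top), rotate90(*bottom)
        v_hits += image_plane_rule(top, bottom) == label

        upper, lower, label = stereo_trial()
        if rotated:
            upper = (*rotate90(upper[0], upper[1]), upper[2])
            lower = (*rotate90(lower[0], lower[1]), lower[2])
        s_hits += depth_plane_rule(upper, lower) == label
    return v_hits / n, s_hits / n
```

      At the trained orientation both rules classify perfectly; after rotation the image-plane vernier rule drops to chance while the depth-plane stereo rule is unchanged, which is the asymmetry the experiment was designed to test.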


      Most cited references (26)


      The Psychophysics Toolbox.

      The Psychophysics Toolbox is a software package that supports visual psychophysics. Its routines provide an interface between a high-level interpreted language (MATLAB on the Macintosh) and the video display hardware. A set of example programs is included with the Toolbox distribution.

        The VideoToolbox software for visual psychophysics: transforming numbers into movies.

        The VideoToolbox is a free collection of two hundred C subroutines for Macintosh computers that calibrates and controls the computer-display interface to create accurately specified visual stimuli. High-level platform-independent languages like MATLAB are best for creating the numbers that describe the desired images. Low-level, computer-specific VideoToolbox routines control the hardware that transforms those numbers into a movie. Transcending the particular computer and language, we discuss the nature of the computer-display interface, and how to calibrate and control it.

          Practising orientation identification improves orientation coding in V1 neurons.

           R. Vogels, G. Orban, N. Qian (2001)
          The adult brain shows remarkable plasticity, as demonstrated by the improvement in fine sensorial discriminations after intensive practice. The behavioural aspects of such perceptual learning are well documented, especially in the visual system. Specificity for stimulus attributes clearly implicates an early cortical site, where receptive fields retain fine selectivity for these attributes; however, the neuronal correlates of a simple visual discrimination task remained unidentified. Here we report electrophysiological correlates in the primary visual cortex (V1) of monkeys for learning orientation identification. We link the behavioural improvement in this type of learning to an improved neuronal performance of trained compared to naive neurons. Improved long-term neuronal performance resulted from changes in the characteristics of orientation tuning of individual neurons. More particularly, the slope of the orientation tuning curve that was measured at the trained orientation increased only for the subgroup of trained neurons most likely to code the orientation identified by the monkey. No modifications of the tuning curve were observed for orientations for which the monkey had not been trained. Thus training induces a specific and efficient increase in neuronal sensitivity in V1.
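          The tuning-slope result can be connected to behavioral sensitivity with a standard signal-detection sketch (our illustration, not the paper's analysis; all parameter values are hypothetical). For a neuron with tuning curve f(θ) and roughly Poisson variability, discriminability for small orientation changes near θ scales with the Fisher information f′(θ)²/f(θ), so steepening the slope at the trained orientation, as reported for the flank-tuned subgroup, directly raises sensitivity there:

```python
import numpy as np


def tuning(theta, pref, gain=30.0, width=20.0):
    """Gaussian orientation tuning curve (mean rate in spikes/s)."""
    return gain * np.exp(-0.5 * ((theta - pref) / width) ** 2)


def slope(theta, pref, **kw):
    """Numerical derivative of the tuning curve at theta (central difference)."""
    d = 1e-4
    return (tuning(theta + d, pref, **kw) - tuning(theta - d, pref, **kw)) / (2 * d)


def fisher_info(theta, pref, **kw):
    """Single-neuron Fisher information under Poisson noise: f'(theta)^2 / f(theta).
    Orientation sensitivity near theta scales with this quantity."""
    return slope(theta, pref, **kw) ** 2 / tuning(theta, pref, **kw)


trained = 0.0  # trained orientation in degrees (hypothetical)

# A neuron preferring exactly the trained orientation has zero slope there
# and contributes no information; a flank neuron (preference offset from the
# trained orientation) has a steep slope at that orientation and carries the
# most information -- the subgroup in which the slope increase was observed.
info_at_peak = fisher_info(trained, pref=0.0)
info_at_flank = fisher_info(trained, pref=20.0)
```

          This is why learning-related slope changes concentrated in flank-tuned neurons are an efficient way to improve identification at the trained orientation without altering tuning elsewhere.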

            Author and article information

            [1 ]Department of Psychology, University of Wisconsin-Madison, Madison, WI, United States of America
            [2 ]Department of Neuroscience, Brown University, Providence, Rhode Island, United States of America
            Durham University, United Kingdom
            Author notes

            Competing Interests: The authors have declared that no competing interests exist.

            Conceived and designed the experiments: BR CSG. Performed the experiments: NS. Analyzed the data: NS FK BR CSG. Contributed reagents/materials/analysis tools: NS FK BR CSG. Wrote the paper: NS FK BR CSG.

            Role: Editor
            PLoS ONE
            Public Library of Science (San Francisco, CA, USA)
            23 December 2015; 10(12)
            PMID: 26700311; PMCID: PMC4689363; DOI: 10.1371/journal.pone.0145770; PONE-D-15-24937
            © 2015 Snell et al.

            This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

            Figures: 4, Tables: 0, Pages: 12
            The authors have no support or funding to report.
            Research Article
            Custom metadata
            Relevant aggregated data are provided in the supplementary materials.


