Open Access

      High reward enhances perceptual learning

      research-article


          Abstract

          Studies of perceptual learning have revealed a great deal of plasticity in adult humans. In this study, we systematically investigated the effects and mechanisms of several forms (trial-by-trial, block, and session rewards) and levels (no, low, high, subliminal) of monetary reward on the rate, magnitude, and generalizability of perceptual learning. We found that high monetary reward can greatly promote the rate and boost the magnitude of learning and enhance performance in untrained spatial frequencies and eye without changing interocular, interlocation, and interdirection transfer indices. High reward per se made unique contributions to the enhanced learning through improved internal noise reduction. Furthermore, the effects of high reward on perceptual learning occurred in a range of perceptual tasks. The results may have major implications for the understanding of the nature of the learning rule in perceptual learning and for the use of reward to enhance perceptual learning in practical applications.
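
The abstract attributes part of the reward benefit to improved internal noise reduction, analyzed within the perceptual template model (PTM) named in the article keywords. As a rough sketch only (a common formulation from the PTM literature; the notation below is illustrative and not reproduced from this article), sensitivity to a signal of contrast c embedded in external noise of contrast N_ext can be written as

d' = \frac{(\beta c)^{\gamma}}{\sqrt{N_{\mathrm{ext}}^{2\gamma} + N_{\mathrm{mul}}^{2}\left[(\beta c)^{2\gamma} + N_{\mathrm{ext}}^{2\gamma}\right] + N_{\mathrm{add}}^{2}}}

where \beta is the perceptual template gain, \gamma the transducer nonlinearity, and N_{\mathrm{add}} and N_{\mathrm{mul}} the additive and multiplicative internal noises. In this framing, "internal noise reduction" corresponds to training scaling N_{\mathrm{add}} down by a factor below 1, which improves d' mainly at low external-noise levels; external-noise exclusion and multiplicative-noise reduction are modeled analogously by scaling N_{\mathrm{ext}} and N_{\mathrm{mul}}.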

          Related collections

Most cited references (71)


          The reverse hierarchy theory of visual perceptual learning.

          Perceptual learning can be defined as practice-induced improvement in the ability to perform specific perceptual tasks. We previously proposed the Reverse Hierarchy Theory as a unifying concept that links behavioral findings of visual learning with physiological and anatomical data. Essentially, it asserts that learning is a top-down guided process, which begins at high-level areas of the visual system, and when these do not suffice, progresses backwards to the input levels, which have a better signal-to-noise ratio. This simple concept has proved powerful in explaining a broad range of findings, including seemingly contradicting data. We now extend this concept to describe the dynamics of skill acquisition and interpret recent behavioral and electrophysiological findings.

            Reward processing in primate orbitofrontal cortex and basal ganglia.

This article reviews and interprets neuronal activities related to the expectation and delivery of reward in the primate orbitofrontal cortex, in comparison with slowly discharging neurons in the striatum (caudate, putamen and ventral striatum, including nucleus accumbens) and midbrain dopamine neurons. Orbitofrontal neurons showed three principal forms of reward-related activity during the performance of delayed response tasks, namely responses to reward-predicting instructions, activations during the expectation period immediately preceding reward and responses following reward. These activations discriminated between different rewards, often on the basis of the animals' preferences. Neurons in the striatum were also activated in relation to the expectation and detection of reward but in addition showed activities related to the preparation, initiation and execution of movements which reflected the expected reward. Dopamine neurons responded to rewards and reward-predicting stimuli, and coded an error in the prediction of reward. Thus, the investigated cortical and basal ganglia structures showed multiple, heterogeneous, partly simultaneous activations which were related to specific aspects of rewards. These activations may represent the neuronal substrates of rewards during learning and established behavioral performance. The processing of reward expectations suggests an access to central representations of rewards which may be used for the neuronal control of goal-directed behavior.
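
The "error in the prediction of reward" coded by dopamine neurons is commonly formalized with the temporal-difference rule from reinforcement learning (a standard textbook formalization, not taken from this article):

\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)

where r_t is the reward delivered at time t, V(s) the learned value of state s, and \gamma a temporal discount factor. Dopamine firing tracks \delta_t: it increases for unpredicted rewards (\delta_t > 0), remains near baseline for fully predicted rewards (\delta_t \approx 0), and dips when a predicted reward is omitted (\delta_t < 0).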

              Task difficulty and the specificity of perceptual learning.

              Practising simple visual tasks leads to a dramatic improvement in performing them. This learning is specific to the stimuli used for training. We show here that the degree of specificity depends on the difficulty of the training conditions. We find that the pattern of specificities maps onto the pattern of receptive field selectivities along the visual pathway. With easy conditions, learning generalizes across orientation and retinal position, matching the spatial generalization of higher visual areas. As task difficulty increases, learning becomes more specific with respect to both orientation and position, matching the fine spatial retinotopy exhibited by lower areas. Consequently, we enjoy the benefits of learning generalization when possible, and of fine grain but specific training when necessary. The dynamics of learning show a corresponding feature. Improvement begins with easy cases (when the subject is allowed long processing times) and only subsequently proceeds to harder cases. This learning cascade implies that easy conditions guide the learning of hard ones. Taken together, the specificity and dynamics suggest that learning proceeds as a countercurrent along the cortical hierarchy. Improvement begins at higher generalizing levels, which, in turn, direct harder-condition learning to the subdomain of their lower-level inputs. As predicted by this reverse hierarchy model, learning can be effective using only difficult trials, but on condition that learning onset has previously been enabled. A single prolonged presentation suffices to initiate learning. We call this single-encounter enabling effect 'eureka'.

                Author and article information

Contributors
bdosher@uci.edu
lu.535@osu.edu
huangcb@psych.ac.cn
Journal
Journal of Vision (J Vis)
The Association for Research in Vision and Ophthalmology
ISSN: 1534-7362
Published: 24 August 2018
Volume 18, Issue 8, Article 11
Affiliations
CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
Department of Psychology, University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing, China
Laboratory of Brain Processes (LOBES), Center for Cognitive and Brain Sciences, Center for Cognitive and Behavioral Brain Imaging, and Departments of Psychology, The Ohio State University, Columbus, OH, USA
School of Ophthalmology & Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, Zhejiang, China
School of Arts and Design, Zhengzhou University of Light Industry, Zhengzhou, Henan, China
Department of Cognitive Sciences and Institute of Mathematical Behavioral Sciences, University of California, Irvine, CA, USA
Article
jovi-18-08-02; JOV-06007-2017
DOI: 10.1167/18.8.11
PMCID: PMC6108453
PMID: 30372760
                Copyright 2018 The Authors

                This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

History
Received: 2 December 2017
Accepted: 12 May 2018
                Categories
                Article

Keywords: perceptual learning, reward, transfer, perceptual template model
