
      Temporal structure of motor variability is dynamically regulated and predicts motor learning ability.

      Nature Neuroscience
      Adolescent, Adult, Female, Humans, Individuality, Learning, physiology, Male, Middle Aged, Movement, Nonlinear Dynamics, Predictive Value of Tests, Psychomotor Performance, Reward, Time Factors, Young Adult


          Abstract

          Individual differences in motor learning ability are widely acknowledged, yet little is known about the factors that underlie them. Here we explore whether movement-to-movement variability in motor output, a ubiquitous if often unwanted characteristic of motor performance, predicts motor learning ability. Surprisingly, we found that higher levels of task-relevant motor variability predicted faster learning both across individuals and across tasks in two different paradigms, one relying on reward-based learning to shape specific arm movement trajectories and the other relying on error-based learning to adapt movements in novel physical environments. We proceeded to show that training can reshape the temporal structure of motor variability, aligning it with the trained task to improve learning. These results provide experimental support for the importance of action exploration, a key idea from reinforcement learning theory, showing that motor variability facilitates motor learning in humans and that our nervous systems actively regulate it to improve learning.
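The abstract's central claim, that more task-relevant variability (exploration) speeds reward-based learning, can be illustrated with a toy simulation. This is a hypothetical sketch, not the authors' experimental paradigm: `trials_to_learn` and its parameters are invented for illustration. A learner nudges a single "trajectory parameter" toward a target by keeping only perturbations that improve reward, and the size of its motor noise sets how fast it finds the target.

```python
import random

def trials_to_learn(sigma, target=1.0, tol=0.1, max_trials=10000, seed=0):
    """Reward-based hill climbing: perturb the current parameter with
    Gaussian motor noise (std = sigma) and keep the perturbation only
    if it improves reward (negative squared error to the target)."""
    rng = random.Random(seed)
    x = 0.0
    best = -(x - target) ** 2
    for trial in range(1, max_trials + 1):
        candidate = x + rng.gauss(0.0, sigma)
        reward = -(candidate - target) ** 2
        if reward > best:                 # exploit only the improvements
            x, best = candidate, reward
        if abs(x - target) < tol:
            return trial
    return max_trials

low  = trials_to_learn(sigma=0.02)   # low motor variability
high = trials_to_learn(sigma=0.3)    # high motor variability
print(low, high)  # in this toy model, more exploration noise tends to need fewer trials
```

This only captures the exploration side of the story; the paper's stronger result is that the *structure* of variability, not just its magnitude, is regulated and aligned with the task.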


          Most cited references (30)


          Reinforcement Learning: A Survey

            This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
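The exploration–exploitation trade-off that this survey highlights is commonly introduced with an epsilon-greedy multi-armed bandit. The sketch below is a standard textbook illustration, not code from the survey; the arm probabilities and `run_bandit` name are assumptions chosen for the example.

```python
import random

def run_bandit(epsilon, true_means=(0.2, 0.5, 0.8), steps=5000, seed=1):
    """Epsilon-greedy agent on a 3-armed Bernoulli bandit: with
    probability epsilon pick a random arm (explore), otherwise pick
    the arm with the highest estimated value (exploit)."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    values = [0.0] * len(true_means)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_means))   # explore
        else:
            arm = values.index(max(values))        # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        # incremental mean update of the action-value estimate
        values[arm] += (reward - values[arm]) / counts[arm]
        total += reward
    return total / steps

greedy   = run_bandit(epsilon=0.0)   # pure exploitation can lock onto a poor arm
explorer = run_bandit(epsilon=0.1)   # a little exploration finds the best arm
print(greedy, explorer)
```

The purely greedy agent can commit to whichever arm it samples first, while even 10% exploration lets the agent discover the high-reward arm, the same tension the citing article maps onto motor variability.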

            Learning of action through adaptive combination of motor primitives.

            Understanding how the brain constructs movements remains a fundamental challenge in neuroscience. The brain may control complex movements through flexible combination of motor primitives, where each primitive is an element of computation in the sensorimotor map that transforms desired limb trajectories into motor commands. Theoretical studies have shown that a system's ability to learn action depends on the shape of its primitives. Using a time-series analysis of error patterns, here we show that humans learn the dynamics of reaching movements through a flexible combination of primitives that have Gaussian-like tuning functions encoding hand velocity. The wide tuning of the inferred primitives predicts limitations on the brain's ability to represent viscous dynamics. We find close agreement between the predicted limitations and the subjects' adaptation to new force fields. The mathematical properties of the derived primitives resemble the tuning curves of Purkinje cells in the cerebellum. The activity of these cells may encode primitives that underlie the learning of dynamics.
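The adaptive-combination idea can be sketched as error-driven reweighting of Gaussian-tuned basis elements. This is a minimal sketch under assumed parameters (primitive centers, width, learning rate are all invented here), not the paper's fitted model: a delta rule adapts combination weights until the primitives approximate a viscous force field F = b·v.

```python
import math

def gaussian_basis(v, centers, width=0.5):
    """Primitive activations: Gaussian tuning over hand velocity v."""
    return [math.exp(-((v - c) ** 2) / (2 * width ** 2)) for c in centers]

def train(target, centers, lr=0.1, epochs=200):
    """Adapt combination weights from trial-by-trial errors (delta rule)."""
    w = [0.0] * len(centers)
    samples = [i / 10.0 for i in range(-20, 21)]     # velocities in [-2, 2]
    for _ in range(epochs):
        for v in samples:
            g = gaussian_basis(v, centers)
            err = target(v) - sum(wi * gi for wi, gi in zip(w, g))
            for i in range(len(w)):
                w[i] += lr * err * g[i]              # error-driven weight update
    return w

centers = [c / 2.0 for c in range(-4, 5)]            # primitive centers in [-2, 2]
viscous = lambda v: 1.5 * v                          # a viscous force field: F = b*v
w = train(viscous, centers)

# the trained combination should lie near viscous(1.0) = 1.5
approx = sum(wi * gi for wi, gi in zip(w, gaussian_basis(1.0, centers)))
print(approx)
```

The paper's point about wide tuning follows directly: if `width` is made much larger than the spacing of the centers, the basis can no longer represent sharp features of the field, limiting what can be learned.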

              The uncontrolled manifold concept: identifying control variables for a functional task
