
      Stable readout of observed actions from format-dependent activity of monkey’s anterior intraparietal neurons

      research-article


          Significance

          The anterior intraparietal area (AIP) is a crucial hub in the observed manipulative action (OMA) network of primates. While macaques observe manipulative-action videos, their AIP neuronal activity robustly encodes first the viewpoint from which the action is observed, then the actor’s body posture, and finally the identity of the observed action. Despite the lack of fully invariant OMA-selective single neurons, OMA exemplars could be decoded accurately from the activity of a set of units that maintain stable OMA selectivity while rescaling their firing rates across formats. We propose that, by multiplicatively integrating signals about others’ actions and their visual format, the AIP can provide a stable readout of OMA identity at the population level.

          Abstract

          Humans accurately identify observed actions despite large dynamic changes in their retinal images and a variety of visual presentation formats. A large network of brain regions in primates participates in the processing of others’ actions, with the anterior intraparietal area (AIP) playing a major role in routing information about observed manipulative actions (OMAs) to the other nodes of the network. This study investigated whether the AIP also contributes to invariant coding of OMAs across different visual formats. We recorded AIP neuronal activity from two macaques while they observed videos portraying seven manipulative actions (drag, drop, grasp, push, roll, rotate, squeeze) in four visual formats. Each format resulted from the combination of two of the actor’s body postures (standing, sitting) and two viewpoints (lateral, frontal). Of the 297 recorded units, 38% were OMA-selective in at least one format. A robust population code for viewpoint and the actor’s body posture emerged shortly after stimulus presentation, followed by OMA selectivity. Although we found no fully invariant OMA-selective neurons, we discovered a population code that allowed us to classify action exemplars irrespective of the visual format. This code relies on multiplicative mixing of signals about OMA identity and visual format, evidenced in particular by a set of units that maintained relatively stable OMA selectivity across formats despite considerable rescaling of their firing rates with the visual specifics of each format. These findings suggest that the AIP integrates format-dependent information with the visual features of others’ actions, enabling a stable readout of observed manipulative action identity.
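The cross-format population readout described in the Abstract can be illustrated with a toy simulation. This is not the authors' analysis: the unit count, the gamma/uniform tuning model, the noise level, and the nearest-centroid decoder are all illustrative assumptions. It only shows how multiplicative mixing of action tuning and format-dependent gain still permits decoding action identity in a format the decoder never saw.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_actions, n_formats, n_trials = 60, 7, 4, 20

# Each unit has a fixed action-tuning profile and a format-dependent gain;
# firing rates are their product (multiplicative mixing), plus noise.
tuning = rng.gamma(2.0, 1.0, size=(n_units, n_actions))   # action selectivity
gain = rng.uniform(0.5, 2.0, size=(n_units, n_formats))   # format rescaling

def rates(action, fmt, n):
    mu = tuning[:, action] * gain[:, fmt]
    return mu + 0.3 * rng.standard_normal((n, n_units))

def norm(x):
    # Unit-normalize each population vector so overall rate rescaling
    # matters less than the pattern of selectivity across units.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Train a nearest-centroid decoder on format 0 only.
centroids = norm(np.stack(
    [norm(rates(a, 0, n_trials)).mean(0) for a in range(n_actions)]))

# Test on a different format (format 3): the gain pattern changes,
# but the action-tuning pattern is preserved.
correct = 0
for a in range(n_actions):
    pred = np.argmax(norm(rates(a, 3, n_trials)) @ centroids.T, axis=1)
    correct += int((pred == a).sum())
accuracy = correct / (n_actions * n_trials)
print(round(accuracy, 2))  # expect well above the 1/7 ≈ 0.14 chance level
```

Because the format-dependent gain multiplies, rather than overwrites, each unit's action tuning, the relative selectivity pattern across the population survives the change of format, which is the intuition behind the stable readout the paper reports.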

          Related collections

          Most cited references (41)


          How does the brain solve visual object recognition?

          Mounting evidence suggests that 'core object recognition,' the ability to rapidly recognize objects despite substantial appearance variation, is solved in the brain via a cascade of reflexive, largely feedforward computations that culminate in a powerful neuronal representation in the inferior temporal cortex. However, the algorithm that produces this solution remains poorly understood. Here we review evidence ranging from individual neurons and neuronal populations to behavior and computational models. We propose that understanding this algorithm will require using neuronal and psychophysical data to sift through many computational models, each based on building blocks of small, canonical subnetworks with a common functional goal.

            Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP.

            In this study, we mainly investigated the visual selectivity of hand-manipulation-related neurons in the anterior intraparietal area (area AIP) while the animal was grasping or fixating on three-dimensional (3D) objects of different geometric shapes, sizes, and orientations. We studied the activity of 132 task-related neurons during the hand-manipulation tasks in the light and in the dark, as well as during object fixation. Seventy-seven percent (101/132) of the hand-manipulation-related neurons were visually responsive, showing either lesser activity during manipulation in the dark than during that in the light (visual-motor neurons) or no activation in the dark (visual-dominant neurons). Of these visually responsive neurons, more than half (n = 66) responded during the object-fixation task (object-type). Among these, 55 were tested for their shape selectivity during the object-fixation task, and many (n = 25) were highly selective, preferring one particular shape of the six different shapes presented (ring, cube, cylinder, cone, sphere, and square plate). For 28 moderately selective object-type neurons, we performed multidimensional scaling (MDS) to examine how the neurons encode the similarity of objects. The results suggest that some moderately selective neurons responded preferentially to common geometric features shared by similar objects (flat, round, elongated, etc.). Moderately selective nonobject-type visually responsive neurons, which did not respond during object fixation, were found by MDS to be more closely related to the handgrip than to the object shape. We found a similar selectivity for handgrip in motor-dominant neurons that did not show any visual response. With regard to the size of the objects, 16 of 26 object-type neurons tested were selective for both size and shape, whereas 9 object-type neurons were selective for shape but not for size. 
Seven of 12 nonobject-type and all (8/8) of the motor-dominant neurons examined were selective for size, and almost all of them were also selective for objects. Many hand-manipulation-related neurons that preferred the plate and/or ring were selective for the orientation of the objects (17/20). These results suggest that the visual responses of object-type neurons represent the shape, size, and/or orientation of 3D objects, whereas those of the nonobject-type neurons probably represent the shape of the handgrip, grip size, or hand-orientation. The activity of motor-dominant neurons was also, in part, likely to represent these parameters of hand movement. This suggests that the dorsal visual pathway is concerned with the aspect of form, orientation, and/or size perception that is relevant for the visual control of movements.

              A Fully Automated Approach to Spike Sorting.

              Understanding the detailed dynamics of neuronal networks will require the simultaneous measurement of spike trains from hundreds of neurons (or more). Currently, approaches to extracting spike times and labels from raw data are time consuming, lack standardization, and involve manual intervention, making it difficult to maintain data provenance and assess the quality of scientific results. Here, we describe an automated clustering approach and associated software package that addresses these problems and provides novel cluster quality metrics. We show that our approach has accuracy comparable to or exceeding that achieved using manual or semi-manual techniques with desktop central processing unit (CPU) runtimes faster than acquisition time for up to hundreds of electrodes. Moreover, a single choice of parameters in the algorithm is effective for a variety of electrode geometries and across multiple brain regions. This algorithm has the potential to enable reproducible and automated spike sorting of larger scale recordings than is currently possible.

                Author and article information

                Journal
                Proceedings of the National Academy of Sciences of the United States of America (PNAS)
                National Academy of Sciences
                ISSN: 0027-8424 (print); 1091-6490 (electronic)
                Published online: 24 June 2020; issue date: 14 July 2020
                Volume: 117; Issue: 28; Pages: 16596-16605
                Affiliations
                [1] aDepartment of Psychology, University of Turin , 10124 Turin, Italy;
                [2] bDepartment of Medicine and Surgery, University of Parma , 43125 Parma, Italy;
                [3] cDepartment of Neuroscience, Washington University in St. Louis , St. Louis, MO 63110
                Author notes
                1To whom correspondence may be addressed. Email: marco.lanzilotto@unito.it or luca.bonini@unipr.it .

                Edited by Peter L. Strick, University of Pittsburgh, Pittsburgh, PA, and approved May 22, 2020 (received for review April 14, 2020)

                Author contributions: G.A.O. and L.B. designed research; M.L., M.M., A.L., C.G.F., and L.B. performed research; G.A.O. and L.B. contributed new reagents/analytic tools; M.L. analyzed data; and M.L., G.A.O., and L.B. wrote the paper.

                2G.A.O. and L.B. contributed equally to this work.

                Author information
                https://orcid.org/0000-0002-3854-7875
                https://orcid.org/0000-0002-8179-9584
                https://orcid.org/0000-0002-3485-2127
                Article
                DOI: 10.1073/pnas.2007018117
                PMCID: PMC7369316
                PMID: 32581128
                Copyright © 2020 the Author(s). Published by PNAS.

                This open access article is distributed under Creative Commons Attribution License 4.0 (CC BY).

                Pages: 10
                Funding
                Funded by: EC | H2020 | H2020 Priority Excellent Science | H2020 Future and Emerging Technologies (FET) 100010664
                Award ID: 600925
                Award Recipient: Guy A. Orban
                Funded by: EC | H2020 | H2020 Priority Excellent Science | H2020 European Research Council (ERC) 100010663
                Award ID: 678307
                Award Recipient: Luca Bonini
                Categories
                Biological Sciences
                Neuroscience

                Keywords: parietal cortex, action observation, visual invariance, neural decoding
