      Event-driven visual attention for the humanoid robot iCub

      research-article


          Abstract

Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system determines the location of the focus of attention quickly and with low latency. The performance is benchmarked against a state-of-the-art artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend.
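The abstract describes the system only at a high level. As an illustration of the general idea it names (an event-driven sensor stream feeding a saliency map from which the focus of attention is selected), the following minimal Python sketch is offered. It is not the authors' implementation: the EventDrivenAttention class, the (x, y, timestamp, polarity) event format, the 128x128 resolution, and the decay and threshold parameters are all assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's method): accumulate address-events from a
# DVS-like sensor into a saliency map that decays over time, and pick the
# focus of attention as the most salient pixel. All names and parameters
# here are assumptions for illustration only.
import numpy as np

class EventDrivenAttention:
    def __init__(self, width=128, height=128, tau_us=50_000, threshold=2.0):
        self.saliency = np.zeros((height, width), dtype=np.float32)
        self.tau_us = tau_us          # assumed exponential decay constant (us)
        self.threshold = threshold    # assumed minimum salience to report a focus
        self.last_t = 0

    def process_event(self, x, y, t_us, polarity):
        """Decay the whole map up to this event's timestamp, then add the event.
        Polarity is ignored in this simplified sketch."""
        dt = t_us - self.last_t
        if dt > 0:
            self.saliency *= np.exp(-dt / self.tau_us)
            self.last_t = t_us
        self.saliency[y, x] += 1.0    # each event contributes unit salience

    def focus_of_attention(self):
        """Return (x, y) of the current winner, or None if nothing is salient enough."""
        idx = np.argmax(self.saliency)
        y, x = np.unravel_index(idx, self.saliency.shape)
        if self.saliency[y, x] < self.threshold:
            return None
        return int(x), int(y)

# Usage: feed events as they arrive and query the focus at any time.
attention = EventDrivenAttention()
for x, y, t_us, pol in [(10, 20, 100, 1), (10, 20, 250, 0), (10, 20, 400, 1)]:
    attention.process_event(x, y, t_us, pol)
print(attention.focus_of_attention())  # -> (10, 20) for this toy event stream
```

Because updates happen per event rather than per frame, a scheme of this kind avoids waiting for full-frame acquisition, which is the intuition behind the latency advantage the abstract reports.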


Most cited references (30)

A 128 × 128 120 dB 15 μs Latency Asynchronous Temporal Contrast Vision Sensor


            State-of-the-art in visual attention modeling.

Modeling visual attention, particularly stimulus-driven, saliency-based attention, has been a very active research area over the past 25 years. Many different models of attention are now available which, aside from lending theoretical contributions to other fields, have demonstrated successful applications in computer vision, mobile robotics, and cognitive systems. Here we review, from a computational perspective, the basic concepts of attention implemented in these models. We present a taxonomy of nearly 65 models, which provides a critical comparison of approaches, their capabilities, and shortcomings. In particular, 13 criteria derived from behavioral and computational studies are formulated for qualitative comparison of attention models. Furthermore, we address several challenging issues with models, including biological plausibility of the computations, correlation with eye movement datasets, bottom-up and top-down dissociation, and constructing meaningful performance measures. Finally, we highlight current research trends in attention modeling and provide insights for future work.

              Spatial summation in the receptive fields of simple cells in the cat's striate cortex.

1. We have examined the responses of simple cells in the cat's striate cortex to visual patterns that were designed to reveal the extent to which these cells may be considered to sum light-evoked influences linearly across their receptive fields. We used one-dimensional luminance-modulated bars and gratings as stimuli; their orientation was always the same as the preferred orientation of the neurone under study. The stimuli were presented on an oscilloscope screen by a digital computer, which also accumulated neuronal responses and controlled a randomized sequence of stimulus presentations.

2. The majority of simple cells respond to sinusoidal gratings that are moving, or whose contrast is modulated in time, in a manner consistent with the hypothesis that they have linear spatial summation. Their responses to moving gratings of all spatial frequencies are modulated in synchrony with the passage of the gratings' bars across their receptive fields, and they do not produce unmodulated responses even at the highest spatial frequencies. Many of these cells respond to temporally modulated stationary gratings simply by changing their response amplitude sinusoidally as the spatial phase of the grating is varied. Nonetheless, their behavior appears to indicate linear spatial summation, since we show in an Appendix that the absence of a 'null' phase in a visual neurone need not indicate non-linear spatial summation, and further that a linear neurone lacking a 'null' phase should give responses of the form that we have observed in this type of simple cell.

3. A minority of simple cells appears to have significant non-linearities of spatial summation. These neurones respond to moving gratings of high spatial frequency with a partially or totally unmodulated elevation of firing rate. They have no 'null' phases when tested with stationary gratings, and reveal their non-linearity by giving responses to gratings of some spatial phases that are composed partly or wholly of even harmonics of the stimulus frequency ('on-off' responses).

4. We compared simple receptive fields with their sensitivity to sinusoidal gratings of different spatial frequencies. Qualitatively, the most sensitive subregions of simple cells' receptive fields are roughly the same width as the individual bars of the gratings to which they are most sensitive. Quantitatively, their receptive field profiles, measured with thin stationary lines, agree well with predicted profiles derived by Fourier synthesis of their spatial frequency tuning curves.
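The claim in point 4, that a linear cell's receptive-field profile can be recovered by Fourier synthesis of its spatial-frequency tuning, can be illustrated numerically. The sketch below is not from the paper: the Gabor-shaped profile, the sampling grid, and all numbers are assumptions chosen only to show that, under linearity, the profile and the (complex) grating response form a Fourier-transform pair.

```python
# Illustration (assumed example, not the paper's data): for a linear neuron,
# the spatial receptive-field profile and the spatial-frequency tuning curve
# are a Fourier-transform pair, so synthesizing the tuning back into space
# recovers the profile.
import numpy as np

n = 512                                   # samples across the receptive field
x = np.linspace(-2.0, 2.0, n)             # position in degrees (assumed range)
dx = x[1] - x[0]

# Assumed receptive-field profile: a Gabor (Gaussian-windowed sinusoid).
profile = np.exp(-x**2 / (2 * 0.4**2)) * np.cos(2 * np.pi * 2.0 * x)

# Linear prediction of grating responses: the Fourier transform of the profile.
# Its magnitude plays the role of the measured spatial-frequency tuning curve.
tuning = np.fft.rfft(profile)
freqs = np.fft.rfftfreq(n, d=dx)          # cycles per degree

# Fourier synthesis: invert the complex tuning to reconstruct the profile.
reconstructed = np.fft.irfft(tuning, n=n)

print("peak spatial frequency ~", freqs[np.argmax(np.abs(tuning))], "c/deg")
print("max reconstruction error:", np.max(np.abs(reconstructed - profile)))
```

In the toy case the reconstruction error is at machine precision; for a real cell the agreement is only approximate, which is what the abstract's "agree well" refers to.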

                Author and article information

                Journal
Frontiers in Neuroscience (Front. Neurosci.)
Publisher: Frontiers Media S.A.
ISSN: 1662-4548 (print); 1662-453X (electronic)
                08 September 2013
                13 December 2013
                2013
Volume: 7
Article: 234
                Affiliations
[1] Robotics, Brain and Cognitive Science, Istituto Italiano di Tecnologia, Genova, Italy
[2] iCub Facility, Istituto Italiano di Tecnologia, Genova, Italy
                Author notes

                Edited by: Tobi Delbruck, University of Zurich and ETH Zurich, Switzerland

                Reviewed by: Theodore Yu, Texas Instruments Inc., USA; Nabil Imam, Cornell University, USA

*Correspondence: Francesco Rea, Robotics, Brain and Cognitive Science, Istituto Italiano di Tecnologia, via Morego 30, 16163 Genova, Italy. E-mail: francesco.rea@iit.it

                This article was submitted to Neuromorphic Engineering, a section of the journal Frontiers in Neuroscience.

                Article
DOI: 10.3389/fnins.2013.00234
PMCID: PMC3862023
PMID: 24379753
                Copyright © 2013 Rea, Metta and Bartolozzi.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
Received: 13 August 2013
Accepted: 20 November 2013
                Page count
                Figures: 5, Tables: 2, Equations: 1, References: 38, Pages: 11, Words: 7938
                Categories
                Neuroscience
                Original Research Article

Neurosciences
visual attention, neuromorphic, humanoid robotics, event-driven, saliency map
