
      An Application of Deep Learning to Tactile Data for Object Recognition under Visual Guidance †

      research-article


          Abstract

          Drawing inspiration from haptic exploration of objects by humans, this work proposes a novel framework for robotic tactile object recognition in which visual information, in the form of a set of visually interesting points, is employed to guide the process of tactile data acquisition. Neuroscience research confirms that, for object recognition, humans integrate cutaneous data sensed in response to surface changes with data from joints, muscles, and bones (kinesthetic cues). Psychological studies, in turn, demonstrate that humans tend to follow object contours to perceive global shape, which leads to object recognition. Consistent with these findings, a series of contours is determined around a set of 24 virtual objects, from which bimodal tactile data (kinesthetic and cutaneous) are acquired sequentially, while the size of the sensor surface is adapted to each object's geometry. A virtual Force Sensing Resistor (FSR) array is employed to capture cutaneous cues. Two different methods for sequential data classification are then implemented, using Convolutional Neural Networks (CNN) and conventional classifiers, including support vector machines and k-nearest neighbors. For the conventional classifiers, the contourlet transformation is exploited to extract features from tactile images. For the CNN, two networks are trained on cutaneous and kinesthetic data, respectively, and a novel hybrid decision-making strategy is proposed for object recognition. The proposed framework is tested both for contours determined blindly (randomly determined contours of objects) and for contours determined using a model of visual attention. The trained classifiers are tested on 4560 new sequential tactile data samples; the CNN trained on tactile data from object contours selected by the model of visual attention yields an accuracy of 98.97%, the highest among the implemented approaches.
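
          To make the decision-level combination of the two networks concrete, the Python sketch below fuses the class-probability outputs of a cutaneous network and a kinesthetic network with a simple convex weighting over the 24 object classes. The function name, the equal weighting, and the random placeholder inputs are illustrative assumptions only; the paper's actual hybrid decision-making strategy is described in the full text and may differ.

# Minimal sketch of decision-level fusion for two tactile classifiers,
# assuming each network already outputs per-class softmax probabilities.
# The class count (24 virtual objects) follows the abstract; the fusion
# rule itself is an illustrative assumption, not the authors' method.
import numpy as np

NUM_CLASSES = 24  # the abstract reports a set of 24 virtual objects

def fuse_predictions(p_cutaneous, p_kinesthetic, w_cutaneous=0.5):
    """Combine the softmax outputs of the cutaneous and kinesthetic networks.

    p_cutaneous, p_kinesthetic: length-24 probability vectors (each sums to 1).
    A convex combination of the two distributions is formed and the index of
    the most probable object class is returned.
    """
    p_cutaneous = np.asarray(p_cutaneous)
    p_kinesthetic = np.asarray(p_kinesthetic)
    fused = w_cutaneous * p_cutaneous + (1.0 - w_cutaneous) * p_kinesthetic
    return int(np.argmax(fused))

# Usage with random placeholder outputs standing in for the two trained CNNs:
rng = np.random.default_rng(0)
p_cut = rng.dirichlet(np.ones(NUM_CLASSES))   # cutaneous (FSR image) network
p_kin = rng.dirichlet(np.ones(NUM_CLASSES))   # kinesthetic (contour) network
print("Predicted object class:", fuse_predictions(p_cut, p_kin))

          Equal weights are used here purely for illustration; in practice the weighting, or a more elaborate rule, would be selected from the validation performance of the two networks.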


          Most cited references (25)


          The contourlet transform: an efficient directional multiresolution image representation


            Haptic perception: a tutorial.

            This tutorial focuses on the sense of touch within the context of a fully active human observer. It is intended for graduate students and researchers outside the discipline who seek an introduction to the rapidly evolving field of human haptics. The tutorial begins with a review of peripheral sensory receptors in skin, muscles, tendons, and joints. We then describe an extensive body of research on "what" and "where" channels, the former dealing with haptic perception of objects, surfaces, and their properties, and the latter with perception of spatial layout on the skin and in external space relative to the perceiver. We conclude with a brief discussion of other significant issues in the field, including vision-touch interactions, affective touch, neural plasticity, and applications.

              Visuo-haptic object-related activation in the ventral visual pathway.

              The ventral pathway is involved in primate visual object recognition. In humans, a central stage in this pathway is an occipito-temporal region termed the lateral occipital complex (LOC), which is preferentially activated by visual objects compared to scrambled images or textures. However, objects have characteristic attributes (such as three-dimensional shape) that can be perceived both visually and haptically. Therefore, object-related brain areas may hold a representation of objects in both modalities. Using fMRI to map object-related brain regions, we found robust and consistent somatosensory activation in the occipito-temporal cortex. This region showed clear preference for objects compared to textures in both modalities. Most somatosensory object-selective voxels overlapped a part of the visual object-related region LOC. Thus, we suggest that neuronal populations in the occipito-temporal cortex may constitute a multimodal object-related network.

                Author and article information

                Journal
                Sensors (Basel, Switzerland)
                Publisher: MDPI
                ISSN: 1424-8220
                Published: 29 March 2019 (April 2019 issue)
                Volume: 19, Issue: 7, Article number: 1534
                Affiliations
                Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada; acretu@sce.carleton.ca
                Author notes
                [†]

                This paper is an extended version of our paper published in Rouhafzay, G.; Cretu, A.-M. A Data-Driven Visuo-Haptic Framework for Object Recognition Inspired by Human Tactile Perception. In Proceedings of the 5th International Electronic Conference on Sensors and Applications, Online, 15–30 November 2018; Sciforum Electronic Conference Series; Volume 4; doi:10.3390/ecsa-5-05754.

                Author information
                https://orcid.org/0000-0003-3762-0900
                https://orcid.org/0000-0002-0434-1393
                Article
                Publisher ID: sensors-19-01534
                DOI: 10.3390/s19071534
                PMC: 6480322
                PMID: 30934907
                © 2019 by the authors.

                Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

                History
                Received: 27 February 2019
                Accepted: 25 March 2019
                Categories
                Article

                Subject: Biomedical engineering
                Keywords: haptic exploration, visual attention, visuo-haptic interaction, tactile object recognition, convolutional neural network
