
      Brain-inspired multimodal hybrid neural network for robot place recognition


          Abstract

Place recognition is an essential spatial intelligence capability for robots to understand and navigate the world. However, recognizing places in natural environments remains a challenging task for robots because of resource limitations and changing environments. In contrast, humans and animals can robustly and efficiently recognize hundreds of thousands of places in different conditions. Here, we report a brain-inspired general place recognition system, dubbed NeuroGPR, that enables robots to recognize places by mimicking the neural mechanism of multimodal sensing, encoding, and computing through a continuum of space and time. Our system consists of a multimodal hybrid neural network (MHNN) that encodes and integrates multimodal cues from both conventional and neuromorphic sensors. Specifically, to encode different sensory cues, we built various neural networks of spatial view cells, place cells, head direction cells, and time cells. To integrate these cues, we designed a multiscale liquid state machine that can process and fuse multimodal information effectively and asynchronously using diverse neuronal dynamics and bioinspired inhibitory circuits. We deployed the MHNN on Tianjic, a hybrid neuromorphic chip, and integrated it into a quadruped robot. Our results show that NeuroGPR achieves better performance than conventional and existing biologically inspired approaches, exhibiting robustness to diverse environmental uncertainty, including perceptual aliasing, motion blur, and changes in lighting or weather. Running NeuroGPR as an overall multi-neural-network workload on Tianjic showcases its advantages, with 10.5 times lower latency and 43.6% lower power consumption than the commonly used mobile robot processor Jetson Xavier NX.

Summary

NeuroGPR combines multimodal sensing, encoding, and computing to enable robots to recognize places robustly and efficiently in natural environments.
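
The paper's implementation is not reproduced in this record, but the fusion stage the abstract names, a liquid state machine, is a well-known construct: a fixed, sparsely and randomly connected recurrent reservoir of spiking neurons, driven by input spike trains and read out linearly. Below is a minimal, hypothetical sketch of such a reservoir fusing two spike streams (standing in for visual and inertial cues); every size and constant, the inhibitory fraction, and the run_reservoir helper are illustrative assumptions, not values or code from the paper.

```python
# Minimal illustrative liquid state machine (LSM) reservoir fusing two
# sensory spike streams. NOT the paper's MHNN: all sizes, constants,
# and connection probabilities are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

N = 200               # reservoir ("liquid") neurons
T = 100               # simulation steps
dt, tau = 1.0, 20.0   # step (ms) and membrane time constant (ms)
v_thresh, v_reset = 1.0, 0.0

# Sparse random recurrent weights; ~20% of neurons are made inhibitory,
# loosely echoing the bioinspired inhibitory circuits in the abstract.
W = rng.normal(0.0, 0.1, (N, N)) * (rng.random((N, N)) < 0.1)
inhibitory = rng.random(N) < 0.2
W[:, inhibitory] = -2.0 * np.abs(W[:, inhibitory])

# Separate input projections for two modalities, e.g. camera and IMU.
W_vis = rng.normal(0.0, 0.5, (N, 32)) * (rng.random((N, 32)) < 0.3)
W_imu = rng.normal(0.0, 0.5, (N, 8)) * (rng.random((N, 8)) < 0.3)

def run_reservoir(vis_spikes, imu_spikes):
    """Drive the LIF reservoir with two spike streams and return the
    time-averaged liquid state that a linear readout would consume."""
    v = np.zeros(N)           # membrane potentials
    spikes = np.zeros(N)      # spikes emitted at the previous step
    state_sum = np.zeros(N)
    for t in range(T):
        i_in = W_vis @ vis_spikes[t] + W_imu @ imu_spikes[t] + W @ spikes
        v += dt / tau * (-v) + i_in            # leaky integration
        spikes = (v >= v_thresh).astype(float) # threshold crossing
        v = np.where(spikes > 0, v_reset, v)   # reset fired neurons
        state_sum += spikes
    return state_sum / T      # mean firing rate per reservoir neuron

# Toy multimodal input: Poisson spike trains standing in for encoded cues.
vis = (rng.random((T, 32)) < 0.10).astype(float)
imu = (rng.random((T, 8)) < 0.05).astype(float)
state = run_reservoir(vis, imu)
print("liquid state, first 5 neurons:", np.round(state[:5], 3))
```

A linear readout trained on such liquid states (e.g., ridge regression against place labels) would complete a toy place classifier; the MHNN described in the abstract additionally has dedicated encoder networks for spatial view, place, head direction, and time cells, which this sketch omits.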


Most cited references (61)


          Microstructure of a spatial map in the entorhinal cortex.

          The ability to find one's way depends on neural algorithms that integrate information about place, distance and direction, but the implementation of these operations in cortical microcircuits is poorly understood. Here we show that the dorsocaudal medial entorhinal cortex (dMEC) contains a directionally oriented, topographically organized neural map of the spatial environment. Its key unit is the 'grid cell', which is activated whenever the animal's position coincides with any vertex of a regular grid of equilateral triangles spanning the surface of the environment. Grids of neighbouring cells share a common orientation and spacing, but their vertex locations (their phases) differ. The spacing and size of individual fields increase from dorsal to ventral dMEC. The map is anchored to external landmarks, but persists in their absence, suggesting that grid cells may be part of a generalized, path-integration-based map of the spatial environment.
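
The triangular firing pattern described above is commonly idealized in later modeling work (not in this reference itself) as the sum of three plane waves whose wave vectors lie 60° apart, which yields exactly a lattice of equilateral-triangle firing fields. A minimal sketch of that idealization, with arbitrary spacing, orientation, and phase:

```python
# Idealized grid-cell firing map: sum of three cosines with wave vectors
# 60 degrees apart. A standard modeling idealization from the broader
# literature; spacing/orientation/phase below are arbitrary examples.
import numpy as np

def grid_rate(x, y, spacing=0.5, orientation=0.0, phase=(0.0, 0.0)):
    """Normalized firing rate of one idealized grid cell at (x, y)."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)   # wave number for the spacing
    rate = 0.0
    for i in range(3):
        theta = orientation + i * np.pi / 3  # three axes, 60 deg apart
        kx, ky = k * np.cos(theta), k * np.sin(theta)
        rate += np.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    # Rectify: ~1 at grid vertices, 0 in the troughs between fields.
    return max(0.0, rate / 3.0)

# The rate peaks periodically as position sweeps across the environment,
# tracing the vertices of the triangular grid.
for x in np.linspace(0.0, 1.0, 5):
    print(f"rate at ({x:.2f}, 0.00): {grid_rate(x, 0.0):.2f}")
```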

            The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat


              How does the brain solve visual object recognition?

Mounting evidence suggests that 'core object recognition,' the ability to rapidly recognize objects despite substantial appearance variation, is solved in the brain via a cascade of reflexive, largely feedforward computations that culminate in a powerful neuronal representation in the inferior temporal cortex. However, the algorithm that produces this solution remains poorly understood. Here we review evidence ranging from individual neurons and neuronal populations to behavior and computational models. We propose that understanding this algorithm will require using neuronal and psychophysical data to sift through many computational models, each based on building blocks of small, canonical subnetworks with a common functional goal.
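
The review's central architectural claim, a cascade of largely feedforward stages built from small canonical subnetworks, can be illustrated with a toy stack of filter-rectify-pool stages. The sketch below is a generic illustration of that idea only, with arbitrary layer sizes, random weights, and pairwise pooling; it is not a model proposed by the review.

```python
# Toy feedforward cascade: repeated filter -> rectify -> pool stages,
# each halving the representation while building tolerance to small
# input changes. Purely illustrative; not a model of visual cortex.
import numpy as np

rng = np.random.default_rng(1)

def stage(x, w):
    """One canonical subnetwork: linear filtering, rectification,
    and local max pooling over pairs of units."""
    h = np.maximum(0.0, w @ x)           # filter + rectify
    return h.reshape(-1, 2).max(axis=1)  # pool: keep the stronger of each pair

x = rng.random(64)                        # stand-in "retinal" input
weights = [rng.normal(0.0, 0.3, (64, 64)),
           rng.normal(0.0, 0.3, (32, 32)),
           rng.normal(0.0, 0.3, (16, 16))]
for w in weights:
    x = stage(x, w)

print("final representation size after 3 stages:", x.size)  # 8 units
```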

                Author and article information

Journal: Science Robotics (Sci. Robot.)
Publisher: American Association for the Advancement of Science (AAAS)
ISSN: 2470-9476
Published: May 17, 2023
Volume: 8, Issue: 78
Affiliations:
[1] Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
[2] Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
[3] IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
[4] THU-CET HIK Joint Research Center for Brain-Inspired Computing, Tsinghua University, Beijing 100084, China
DOI: 10.1126/scirobotics.abm6996
PMID: 37163608
© 2023

                Free to read
