      Is Open Access

      Novel cartographer using an OAK-D smart camera for indoor robots location and navigation

      Journal of Physics: Conference Series
      IOP Publishing


          Abstract

          In recent years, service robots have become widely used in daily life, and as robots grow more intelligent, users place higher demands on their autonomous positioning and navigation capabilities. Like outdoor navigation, indoor navigation requires supporting navigation data. Although indoor positioning and navigation schemes based on cameras, lidar, and other sensors are gradually maturing, the complexity of indoor structures makes manual production of indoor navigation data time-consuming, laborious, and inefficient. To address this low productivity and improve the accuracy of automatic robot navigation, we added a new type of smart camera, the OpenCV AI Kit (OAK-D), and propose a method that automatically builds data files usable for indoor navigation and location services from indoor 3D point cloud data. This smart camera performs neural inference on-chip without a GPU. It can also estimate depth with its stereo camera pair and run neural network models with 4K color camera images as input. Its Python API can be called to detect doors, windows, and other static indoor objects in real time. With this AI-camera-based object detection, the robot can reliably identify such objects and accurately mark them on the indoor map. In this paper, a high-performance indoor robot navigation system is developed and a multisensor fusion technique is designed: environmental information is collected through the AI camera (OAK-D) and a lidar, and the data are fused.
          In the experimental part of this paper, a static fused-map module is built from the laser sensor data and the depth camera data, a layered dynamic cost-map module is created during real-time navigation, and global localization of the robot is achieved by combining a bag-of-words model with laser point cloud matching. A software system is then realized by integrating these modules. Experiments show that the system is practical and effective.
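          As a hedged illustration of the navigation-data step described above, the sketch below shows one plausible way to reduce an indoor 3D point cloud to a 2D occupancy grid of the kind a planner can consume. The function name, point format, grid resolution, and height band are illustrative assumptions, not the authors' actual pipeline.

```python
def point_cloud_to_occupancy_grid(points, resolution=0.05,
                                  z_min=0.1, z_max=1.8):
    """Project 3D points (x, y, z) in metres onto a 2D occupancy grid.

    Points outside the [z_min, z_max] height band (floor, ceiling) are
    discarded; each remaining point marks its grid cell as occupied.
    Returns the grid (list of rows of 0/1) and the world coordinates
    of the grid origin (minimum x, minimum y).
    """
    kept = [(x, y) for x, y, z in points if z_min <= z <= z_max]
    if not kept:
        return [], (0.0, 0.0)
    min_x = min(x for x, _ in kept)
    min_y = min(y for _, y in kept)
    max_x = max(x for x, _ in kept)
    max_y = max(y for _, y in kept)
    width = int((max_x - min_x) / resolution) + 1
    height = int((max_y - min_y) / resolution) + 1
    grid = [[0] * width for _ in range(height)]
    for x, y in kept:
        col = int((x - min_x) / resolution)
        row = int((y - min_y) / resolution)
        grid[row][col] = 1  # cell contains at least one obstacle point
    return grid, (min_x, min_y)

# Example: two wall points at waist height plus one ceiling point,
# which the height-band filter discards.
cloud = [(0.0, 0.0, 0.5), (0.10, 0.0, 0.5), (0.05, 0.0, 2.5)]
grid, origin = point_cloud_to_occupancy_grid(cloud, resolution=0.05)
```

          In a real system the grid would typically also distinguish unknown from free cells and be fused with lidar scans, as the paper's static fusion map module does; this sketch covers only the point-cloud projection step.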


          Most cited references (13)

          Is Open Access

          A Comprehensive Survey of Visual SLAM Algorithms

          Simultaneous localization and mapping (SLAM) techniques are widely researched, since they allow a map to be built and the sensors' pose to be estimated simultaneously in an unknown environment. Visual SLAM techniques play a significant role in this field because they rely on a low-cost, compact sensor system, which gives them advantages over SLAM techniques based on other sensors. The literature presents different approaches and methods for implementing visual SLAM systems. Among this variety of publications, a beginner in the domain may have trouble identifying and analyzing the main algorithms and selecting the most appropriate one for his or her project constraints. We therefore present the three main visual SLAM approaches (visual-only, visual-inertial, and RGB-D SLAM), reviewing the main algorithms of each approach through diagrams and flowcharts and highlighting the main advantages and disadvantages of each technique. Furthermore, we propose six criteria that ease the analysis of SLAM algorithms and consider both the software and hardware levels. In addition, we present some major issues and future directions in the visual SLAM field, and provide a general overview of some existing benchmark datasets. This work aims to be a first step for those initiating a SLAM project, giving them a good perspective on the main elements and characteristics of SLAM techniques.

            Towards Autonomous Drone Racing without GPU Using an OAK-D Smart Camera


              Analysis of influential factors on a space target's laser radar cross-section


                Author and article information

                Journal: Journal of Physics: Conference Series (J. Phys.: Conf. Ser.)
                Publisher: IOP Publishing
                ISSN: 1742-6588 (print); 1742-6596 (online)
                Publication date: May 01 2023
                Volume: 2467, Issue: 1, Article number: 012029
                DOI: 10.1088/1742-6596/2467/1/012029
                ID: ee944aba-a637-47e1-865e-0cc8ca250b9a
                © 2023
                License: http://creativecommons.org/licenses/by/3.0/
