
      Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects

      Preprint


          Abstract

          Using synthetic data for training deep neural networks for robotic manipulation holds the promise of an almost unlimited amount of pre-labeled training data, generated safely out of harm's way. One of the key challenges of synthetic data, to date, has been to bridge the so-called reality gap, so that networks trained on synthetic data operate correctly when exposed to real-world data. We explore the reality gap in the context of 6-DoF pose estimation of known objects from a single RGB image. We show that for this problem the reality gap can be successfully spanned by a simple combination of domain randomized and photorealistic data. Using synthetic data generated in this manner, we introduce a one-shot deep neural network that is able to perform competitively against a state-of-the-art network trained on a combination of real and synthetic data. To our knowledge, this is the first deep network trained only on synthetic data that is able to achieve state-of-the-art performance on 6-DoF object pose estimation. Our network also generalizes better to novel environments including extreme lighting conditions, for which we show qualitative results. Using this network we demonstrate a real-time system estimating object poses with sufficient accuracy for real-world semantic grasping of known household objects in clutter by a real robot.
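
          The data recipe above, training only on a mixture of domain-randomized and photorealistic renders, can be illustrated with a short sketch. The code below is not the authors' released implementation; the dataset class, directory names, tensor shapes, and hyperparameters are hypothetical placeholders showing one way to concatenate the two synthetic sources into a single PyTorch training stream.

          import torch
          from torch.utils.data import Dataset, ConcatDataset, DataLoader

          class SyntheticPoseDataset(Dataset):
              """Hypothetical stand-in for a folder of rendered RGB images with 6-DoF pose labels."""
              def __init__(self, root):
                  self.root = root
                  # A real pipeline would index rendered image files and their pose annotations here.
                  self.samples = list(range(1000))

              def __len__(self):
                  return len(self.samples)

              def __getitem__(self, idx):
                  image = torch.rand(3, 400, 400)   # stand-in for a rendered RGB image
                  pose = torch.rand(7)              # stand-in for a translation + quaternion label
                  return image, pose

          # Domain-randomized renders (random textures, lighting, distractors) combined
          # with photorealistic renders of the same objects, as described in the abstract.
          dr_data = SyntheticPoseDataset("synthetic/domain_randomized")
          photo_data = SyntheticPoseDataset("synthetic/photorealistic")
          train_loader = DataLoader(ConcatDataset([dr_data, photo_data]),
                                    batch_size=32, shuffle=True)

          According to the abstract, this simple mixture of synthetic sources, with no real images at all, is sufficient to reach state-of-the-art 6-DoF pose accuracy.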


                Author and article information

                Journal: arXiv preprint
                Published: 27 September 2018
                Article type: Article
                arXiv ID: 1809.10790
                Record ID: c08ec2cb-dffb-4121-91c1-ab59141317ba
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
                Custom metadata: Conference on Robot Learning (CoRL) 2018
                Subject: Robotics (cs.RO)
