      A Unified Framework for Multi-View Multi-Class Object Pose Estimation

      Preprint


          Abstract

One core challenge in object pose estimation is to ensure accurate and robust performance for large numbers of diverse foreground objects amidst complex background clutter. In this work, we present a scalable framework for accurately inferring the six degree-of-freedom (6-DoF) pose of a large number of object classes from single or multiple views. To learn discriminative pose features, we integrate three new capabilities into a deep Convolutional Neural Network (CNN): an inference scheme that combines both classification and pose regression based on a uniform tessellation of SE(3), fusion of a class prior into the training process via a tiled class map, and an additional regularization using deep supervision with an object mask. Further, an efficient multi-view framework is formulated to address single-view ambiguity. We show that this consistently improves the performance of the single-view network. We evaluate our method on three large-scale benchmarks: YCB-Video, JHUScene-50 and ObjectNet-3D. Our approach achieves competitive or superior performance over the current state-of-the-art methods.
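The "classification plus regression over a uniform tessellation" idea in the abstract can be illustrated with a toy decoder. This is a minimal sketch under loose assumptions, not the authors' implementation: bin centers here are Euler-angle triples on a coarse grid (the paper tessellates SE(3)), and all names (`make_bins`, `decode_pose`) are hypothetical.

```python
import numpy as np

def make_bins(n_per_axis=4):
    """Toy uniform grid of Euler-angle bin centers over [0, 2*pi)^3.

    Stands in for the paper's uniform tessellation of SE(3).
    """
    ticks = np.linspace(0.0, 2.0 * np.pi, n_per_axis, endpoint=False)
    grid = np.stack(np.meshgrid(ticks, ticks, ticks, indexing="ij"), axis=-1)
    return grid.reshape(-1, 3)  # shape (n_per_axis**3, 3)

def decode_pose(class_logits, residuals, bins):
    """Classify the most likely bin, then refine with a regressed residual."""
    k = int(np.argmax(class_logits))                 # discrete bin classification
    return (bins[k] + residuals[k]) % (2.0 * np.pi)  # continuous refinement

rng = np.random.default_rng(0)
bins = make_bins()                                   # 64 coarse rotation bins
logits = rng.standard_normal(len(bins))              # per-bin class scores
residuals = rng.standard_normal((len(bins), 3)) * 0.05  # small per-bin offsets
euler = decode_pose(logits, residuals, bins)         # refined Euler angles
```

The split lets the classifier handle the coarse, multi-modal part of the problem while the regressor only predicts a small, unimodal offset within the winning bin.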

          Related collections

Most cited references (2)

          Articulated pose estimation with flexible mixtures-of-parts

Deva Ramanan, Yi Yang (2011)

            Deep Learning of Local RGB-D Patches for 3D Object Detection and 6D Pose Estimation

            We present a 3D object detection method that uses regressed descriptors of locally-sampled RGB-D patches for 6D vote casting. For regression, we employ a convolutional auto-encoder that has been trained on a large collection of random local patches. During testing, scene patch descriptors are matched against a database of synthetic model view patches and cast 6D object votes which are subsequently filtered to refined hypotheses. We evaluate on three datasets to show that our method generalizes well to previously unseen input data, delivers robust detection results that compete with and surpass the state-of-the-art while being scalable in the number of objects.

              Author and article information

Date: 21 March 2018
Article identifier: 1803.08103
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Status: In review
Subject: cs.CV
