      Unsupervised Monocular Depth Learning in Dynamic Scenes

      Preprint


          Abstract

          We present a method for jointly training the estimation of depth, ego-motion, and a dense 3D translation field of objects relative to the scene, with monocular photometric consistency being the sole source of supervision. We show that this apparently heavily underdetermined problem can be regularized by imposing the following prior knowledge about 3D translation fields: they are sparse, since most of the scene is static, and they tend to be constant for rigid moving objects. We show that this regularization alone is sufficient to train monocular depth prediction models that exceed the accuracy achieved in prior work for dynamic scenes, including methods that require semantic input. Code is available at https://github.com/google-research/google-research/tree/master/depth_and_motion_learning.
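
          The abstract names two priors on the residual 3D translation field: sparsity (most of the scene is static) and approximate constancy within rigid objects. The sketch below illustrates one plausible way to express such priors as regularization losses; the tensor shape, functional forms, and loss weights are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

# Illustrative sketch, not the authors' implementation: two regularizers on a
# per-pixel residual 3D translation field of assumed shape [B, 3, H, W].
# The L1 penalty and gradient penalty below are assumed stand-ins for the
# "sparse" and "piecewise-constant" priors described in the abstract.
import torch


def sparsity_loss(translation: torch.Tensor) -> torch.Tensor:
    """Pushes the residual translation toward zero over most of the (static) scene."""
    return translation.abs().mean()


def constancy_loss(translation: torch.Tensor) -> torch.Tensor:
    """Penalizes spatial gradients so that rigid objects translate as a whole."""
    dx = translation[..., :, 1:] - translation[..., :, :-1]  # horizontal differences
    dy = translation[..., 1:, :] - translation[..., :-1, :]  # vertical differences
    return dx.abs().mean() + dy.abs().mean()


# Hypothetical usage: the abstract states that monocular photometric consistency
# is the sole source of supervision, so these terms would simply be added to
# that objective (the weights 0.1 and 1.0 here are placeholders).
# total_loss = photometric_loss + 0.1 * sparsity_loss(t) + 1.0 * constancy_loss(t)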


          Author and article information

          Journal
          Published: 30 October 2020
          Article ID: arXiv:2010.16404

          License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

          History
          Comments: Accepted at the 4th Conference on Robot Learning (CoRL 2020)
          Subjects: cs.CV, cs.GR, cs.LG, cs.RO

          Keywords: Computer vision & Pattern recognition, Robotics, Artificial intelligence, Graphics & Multimedia design
