
      DeepMoCap: Deep Optical Motion Capture Using Multiple Depth Sensors and Retro-Reflectors

      Research article


          Abstract

          In this paper, a marker-based, single-person optical motion capture method (DeepMoCap) is proposed using multiple spatio-temporally aligned infrared-depth sensors and retro-reflective straps and patches (reflectors). DeepMoCap explores motion capture by automatically localizing and labeling reflectors on depth images and, subsequently, in 3D space. Introducing a non-parametric representation to encode the temporal correlation among pairs of colorized depthmaps and 3D optical flow frames, a multi-stage Fully Convolutional Network (FCN) architecture is proposed to jointly learn reflector locations and their temporal dependency among sequential frames. The extracted 2D reflector locations are spatially mapped to 3D space, resulting in robust 3D optical data extraction. The subject’s motion is efficiently captured by applying a template-based fitting technique to the extracted optical data. Two datasets have been created and made publicly available for evaluation purposes: one comprising multi-view depth and 3D optical flow annotated images (DMC2.5D), and a second consisting of spatio-temporally aligned multi-view depth images along with skeleton, inertial and ground-truth MoCap data (DMC3D). The FCN model outperforms its competitors on the DMC2.5D dataset using the 2D Percentage of Correct Keypoints (PCK) metric, while the motion capture outcome is evaluated against RGB-D and inertial data fusion approaches on DMC3D, outperforming the next best method by 4.5% in total 3D PCK accuracy.
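          The Percentage of Correct Keypoints (PCK) metric used in the evaluation above counts a predicted keypoint as correct when it falls within a distance threshold of the ground truth. A minimal sketch follows; the function name and the threshold convention (a fraction `alpha` of a reference length, such as a torso dimension) are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of a 2D PCK computation. The threshold is taken as
# alpha * ref_length, a common convention; the paper's exact reference
# length and alpha are not specified here.

def pck_2d(pred, gt, ref_length, alpha=0.5):
    """Fraction of keypoints whose prediction lies within
    alpha * ref_length pixels of the ground truth."""
    assert len(pred) == len(gt) and len(pred) > 0
    threshold = alpha * ref_length
    correct = 0
    for (px, py), (gx, gy) in zip(pred, gt):
        dist = ((px - gx) ** 2 + (py - gy) ** 2) ** 0.5
        if dist <= threshold:
            correct += 1
    return correct / len(pred)

# Example: three keypoints, reference length 100 px, so threshold 50 px.
preds = [(10, 10), (200, 200), (55, 60)]
gts   = [(12, 14), (300, 300), (50, 50)]
print(pck_2d(preds, gts, ref_length=100.0))  # 2 of 3 within 50 px -> ~0.667
```

          A higher PCK means more keypoints land within the tolerance; the "total 3D PCK" reported above is the analogous count in 3D space.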

          Related collections

          Most cited references (49)


          Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications

          Consumer-grade range cameras such as the Kinect sensor have the potential to be used in mapping applications where accuracy requirements are less strict. To realize this potential, insight into the geometric quality of the data acquired by the sensor is essential. In this paper we discuss the calibration of the Kinect sensor, and provide an analysis of the accuracy and resolution of its depth data. Based on a mathematical model of depth measurement from disparity, a theoretical error analysis is presented, which provides an insight into the factors influencing the accuracy of the data. Experimental results show that the random error of depth measurement increases with increasing distance to the sensor, and ranges from a few millimeters up to about 4 cm at the maximum range of the sensor. The quality of the data is also found to be influenced by the low resolution of the depth measurements.
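            The quadratic growth of depth error with range described above follows from the disparity model: with depth Z = f·b/d, a constant disparity noise sigma_d propagates to sigma_Z = (Z²/(f·b))·sigma_d. A short sketch under assumed parameters; the focal length, baseline, and disparity-noise values below are illustrative, not calibrated Kinect constants.

```python
# Sketch of the disparity-based depth error model: sigma_Z grows with
# the square of the range. Parameter values are assumptions chosen to
# roughly match the "few millimeters up to about 4 cm" behavior quoted
# in the abstract, not measured Kinect calibration values.

def depth_std(z, focal_px=580.0, baseline_m=0.075, sigma_d_px=0.08):
    """Predicted random error (meters) of a depth measurement at range z (meters)."""
    return (z ** 2 / (focal_px * baseline_m)) * sigma_d_px

for z in (1.0, 3.0, 5.0):
    print(f"z = {z:.0f} m -> sigma_Z ~ {depth_std(z) * 100:.1f} cm")
```

            With these assumed parameters the model yields millimeter-level error near 1 m and a few centimeters near the sensor's maximum range, consistent with the trend the reference reports.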

            Real-time human pose recognition in parts from single depth images


              Microsoft COCO: Common Objects in Context


                Author and article information

                Journal
                Sensors (Basel, Switzerland)
                Publisher: MDPI
                ISSN: 1424-8220
                Published: 11 January 2019 (January 2019 issue)
                Volume: 19
                Issue: 2
                Article number: 282
                Affiliations
                [1 ]Centre for Research and Technology Hellas, Information Technologies Institute, 6th km Charilaou-Thermi, 57001 Thermi, Thessaloniki, Greece; zarpalas@iti.gr (D.Z.); daras@iti.gr (P.D.)
                [2 ]National Technical University of Athens, School of Electrical and Computer Engineering, Zografou Campus, Iroon Polytechniou 9, 15780 Zografou, Athens, Greece; stefanos@cs.ntua.gr
                [3 ]School of Computer Science, University of Lincoln, Brayford, Lincoln LN6 7TS, UK
                Author notes
                [* ]Correspondence: tofis@iti.gr or tofis3d@central.ntua.gr; Tel.: +30-231-046-4160
                Author information
                https://orcid.org/0000-0002-3848-4210
                https://orcid.org/0000-0003-2899-0598
                https://orcid.org/0000-0003-3814-6710
                Article
                Publisher ID: sensors-19-00282
                DOI: 10.3390/s19020282
                PMCID: 6359336
                PMID: 30642017
                dc0ac3ba-c7e1-4782-8e94-fffab8152561
                © 2019 by the authors.

                Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

                History
                Received: 13 December 2018
                Accepted: 07 January 2019
                Categories
                Article

                Biomedical engineering
                motion capture, deep learning, retro-reflectors, retro-reflective markers, multiple depth sensors, low-cost, deep mocap, depth data, 3D data, 3D vision, optical mocap, marker-based mocap
