
      A Review: Point Cloud-Based 3D Human Joints Estimation

Review article


          Abstract

Joint estimation of the human body is applicable to many fields, such as human–computer interaction, autonomous driving, video analysis, and virtual reality. Although much depth-based research has been classified and summarized in previous review and survey papers, point cloud-based pose estimation of the human body remains difficult owing to the unordered nature and rotation invariance of point clouds. In this review, we summarize recent developments in point cloud-based pose estimation of the human body. Existing works are divided into three categories according to their working principles: template-based methods, feature-based methods, and machine learning-based methods. In particular, significant works are highlighted with a detailed introduction that analyzes their characteristics and limitations. The datasets widely used in the field are summarized, and quantitative comparisons are provided for representative methods. Moreover, this review helps further understanding of pertinent applications in many frontier research directions. Finally, we conclude with the challenges involved and the problems to be solved in future research.
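Among the three categories, machine learning-based methods typically cope with the disorder of the point cloud by aggregating per-point features with a symmetric function, in the spirit of PointNet. The minimal sketch below is not taken from the review or from any particular method; the layer sizes, joint count, and function names are illustrative assumptions, chosen only to show why a max-pooled global feature makes a toy joint regressor insensitive to the ordering of the input points.

import numpy as np

rng = np.random.default_rng(0)

def shared_mlp(points, w1, b1, w2, b2):
    # Apply the same two-layer MLP to every point independently: (N, 3) -> (N, F2).
    h = np.maximum(points @ w1 + b1, 0.0)   # ReLU
    return np.maximum(h @ w2 + b2, 0.0)

def global_feature(points, params):
    # Symmetric aggregation (max over points) makes the descriptor order-invariant.
    return shared_mlp(points, *params).max(axis=0)

def joint_regressor(points, params, w_head, b_head, num_joints=15):
    # Toy regression head mapping the global feature to (num_joints, 3) coordinates.
    g = global_feature(points, params)
    return (g @ w_head + b_head).reshape(num_joints, 3)

# Random weights stand in for a trained network; sizes are illustrative only.
F1, F2, J = 32, 64, 15
params = (rng.normal(size=(3, F1)), np.zeros(F1),
          rng.normal(size=(F1, F2)), np.zeros(F2))
w_head, b_head = 0.01 * rng.normal(size=(F2, 3 * J)), np.zeros(3 * J)

cloud = rng.normal(size=(1024, 3))                  # synthetic body point cloud
shuffled = cloud[rng.permutation(len(cloud))]       # same points, different order

pred_a = joint_regressor(cloud, params, w_head, b_head, J)
pred_b = joint_regressor(shuffled, params, w_head, b_head, J)
print(np.allclose(pred_a, pred_b))                  # True: prediction ignores point order

Note that this symmetric pooling addresses only the ordering problem; invariance to rotation would still have to be handled separately, for example by normalizing the orientation of the input cloud before regression.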


                Author and article information

                Contributors
                Role: Academic Editor
Journal
Sensors (Basel, Switzerland), MDPI
ISSN: 1424-8220
Published: 01 March 2021 (March 2021 issue)
Volume 21, Issue 5, Article 1684
                Affiliations
[1] Institute of Modern Optics, Nankai University, Tianjin 300350, China; 1120180105@mail.nankai.edu.cn (T.X.); 1120190109@mail.nankai.edu.cn (D.A.); 1911343@mail.nankai.edu.cn (Y.J.)
[2] Angle AI (Tianjin) Technology Company Ltd., Tianjin 300450, China
                Author notes
[*] Correspondence: yueyang@nankai.edu.cn
[†] These authors contributed equally to this work.

                Author information
ORCID: https://orcid.org/0000-0002-4442-3676
Article
Article ID: sensors-21-01684
DOI: 10.3390/s21051684
PMCID: PMC7957572
PMID: 33804411
                © 2021 by the authors.

                Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ( http://creativecommons.org/licenses/by/4.0/).

History
Received: 28 January 2021
Accepted: 23 February 2021
                Categories
                Review

Subject: Biomedical engineering
Keywords: point cloud, joint estimation, skeleton extraction, depth sensor, skeleton tracking, computer vision, human representation, convolutional neural network, random tree walk, random forest, geodesic features, global features, deformation model, hand pose tracking, action recognition
