

      An Evaluation of DTW Approaches for Whole-of-Body Gesture Recognition

Suranjith De Silva et al.

      Proceedings of the 28th International BCS Human Computer Interaction Conference (HCI 2014) (HCI)

      BCS Human Computer Interaction Conference (HCI 2014)

      9 - 12 September 2014

Keywords: Whole-of-Body Gestures, Dynamic Time Warping, Spatio-Temporal Pattern Recognition


          Abstract

This paper systematically explores the capabilities of different forms of Dynamic Time Warping (DTW) algorithms and their parameter configurations in recognising whole-of-body gestures. The standard DTW (SDTW) (Sakoe and Chiba 1978), globally feature-weighted DTW (Reyes et al. 2011) and locally feature-weighted DTW (Arici et al. 2013) algorithms are particularly considered, and an enhanced version of the globally feature-weighted DTW (EDTW) algorithm is presented. A wide range of configurable parameters is tested in order to understand their impact on final recognition accuracy: distance measures (Euclidean and Mahalanobis), combinations of features (Cartesian velocity, angular velocity and acceleration), combinations of skeletal elements, reference signal count and k-nearest-neighbour count. The study is conducted by collecting gesturing data from 10 participants for 9 different whole-of-body gesture commands. The results suggest that the proposed enhanced version of the globally feature-weighted DTW algorithm performs significantly better than the other DTW algorithms. Given sufficient training data, this study suggests that the Mahalanobis distance can better differentiate certain gestures than the Euclidean distance. Of the features, Cartesian velocity combined with angular velocity provides the highest gesture-discriminant capability, while acceleration provides the lowest. When highly informative and stable skeletal elements are selected, the overall performance gain obtained by adding extra skeletal data is marginal. The recognition accuracies are also sensitive to the reference signal count and the KNN percentage. Additionally, the presented results summarise the unique capabilities of certain configurations over others, highlighting the importance of selecting the appropriate DTW algorithm and configuration to achieve optimal gesture recognition performance.
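As a point of reference for readers, below is a minimal Python sketch of the standard DTW distance (Sakoe and Chiba 1978) extended with an optional per-feature weight vector, the mechanism that globally feature-weighted variants build on. This is an illustrative reconstruction, not the authors' implementation; the function name dtw_distance and its parameters are invented for illustration.

    # Illustrative sketch (not the authors' implementation): standard DTW with
    # an optional per-feature weight vector, as used by globally
    # feature-weighted variants. The frame distance here is weighted Euclidean;
    # a Mahalanobis variant would instead compute
    # sqrt((x - y)^T S^-1 (x - y)) for a feature covariance matrix S.
    import numpy as np

    def dtw_distance(a, b, weights=None):
        """DTW alignment cost between two multivariate sequences.

        a       : (n, d) array, query gesture with n frames of d features
        b       : (m, d) array, reference gesture
        weights : (d,) array of per-feature weights, uniform if None
        """
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        n, m = len(a), len(b)
        w = np.ones(a.shape[1]) if weights is None else np.asarray(weights, dtype=float)

        def frame_dist(x, y):
            # Weighted Euclidean distance between two skeleton frames.
            diff = x - y
            return np.sqrt(np.sum(w * diff * diff))

        # Classic O(n*m) dynamic-programming recurrence.
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = frame_dist(a[i - 1], b[j - 1])
                D[i, j] = cost + min(D[i - 1, j],     # insertion
                                     D[i, j - 1],     # deletion
                                     D[i - 1, j - 1]) # match
        return D[n, m]

A k-nearest-neighbour classifier, as evaluated in the paper, would then label a query gesture by computing its DTW distance to every reference signal and voting among the k closest.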


                Author and article information

Published: September 2014, pp. 11-21
Affiliations
The University of New South Wales, Canberra ACT 2600, Australia
Simcentric Technologies
DOI: 10.14236/ewic/HCI2014.5
                © Suranjith De Silva et al. Published by BCS Learning and Development Ltd. Proceedings of the 28th International BCS Human Computer Interaction Conference (HCI 2014), Southport, UK

This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/

Electronic Workshops in Computing (eWiC)
ISSN: 1477-9358, BCS Learning & Development
Journal page: https://ewic.bcs.org/
