
      Detecting Mistakes in CPR Training with Multimodal Data and Neural Networks

      research-article


          Abstract

          This study investigated to what extent multimodal data can be used to detect mistakes during Cardiopulmonary Resuscitation (CPR) training. We complemented the Laerdal QCPR ResusciAnne manikin with the Multimodal Tutor for CPR, a multi-sensor system consisting of a Microsoft Kinect for tracking body position and a Myo armband for collecting electromyogram information. We collected multimodal data from 11 medical students, each of them performing two sessions of two-minute chest compressions (CCs). We gathered in total 5254 CCs, all labelled according to five performance indicators corresponding to common CPR training mistakes. Three of the five indicators, CC rate, CC depth and CC release, were assessed automatically by the ResusciAnne manikin. The remaining two, related to arms and body position, were annotated manually by the research team. We trained five neural networks, one for classifying each of the five indicators. The results of the experiment show that multimodal data can provide accurate mistake detection as compared to the ResusciAnne manikin baseline. We also show that the Multimodal Tutor for CPR can detect additional CPR training mistakes, such as the correct use of arms and body weight, that thus far have been identified only by human instructors. Finally, to inform future implementations of the Multimodal Tutor for CPR, we administered a questionnaire to collect participant feedback on aspects of CPR training.
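The abstract describes training one neural network per performance indicator, each detecting one class of mistake from per-compression multimodal features. As a rough illustration only (this is not the authors' implementation), the sketch below trains one minimal binary classifier per indicator on synthetic feature vectors; the indicator names follow the abstract, while the feature dimension, the synthetic data, and the plain logistic-regression stand-in for the paper's neural networks are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: each chest compression (CC) is summarised as a fixed-length
# feature vector fusing Kinect body-tracking and Myo EMG channels. One binary
# classifier per indicator (mistake vs. correct), mirroring the paper's
# one-network-per-indicator design.
N_CCS, N_FEATURES = 512, 24
INDICATORS = ["cc_rate", "cc_depth", "cc_release", "arm_position", "body_weight"]

X = rng.normal(size=(N_CCS, N_FEATURES))
# Synthetic labels: each indicator loosely depends on one feature plus noise.
labels = {name: (X[:, i] + 0.3 * rng.normal(size=N_CCS) > 0).astype(float)
          for i, name in enumerate(INDICATORS)}

def train_logreg(X, y, lr=0.1, epochs=200):
    """Logistic regression by batch gradient descent (NN stand-in)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted mistake probability
        grad = p - y                              # gradient of log-loss
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

models = {}
for name, y in labels.items():
    w, b = train_logreg(X, y)
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    acc = float(((p > 0.5) == y.astype(bool)).mean())
    models[name] = (w, b)
    print(f"{name}: training accuracy {acc:.2f}")
```

In the paper's actual setting the five classifiers are neural networks fed with sensor data, and the labels for rate, depth and release come from the manikin while the arm- and body-position labels are human annotations.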


          Most cited references (38)


          Multimodal Machine Learning: A Survey and Taxonomy

          Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research.

            The Relative Effectiveness of Human Tutoring, Intelligent Tutoring Systems, and Other Tutoring Systems


              Multi-sensor fusion in body sensor networks: State-of-the-art and research challenges


                Author and article information

                Journal: Sensors (Basel, Switzerland)
                Publisher: MDPI
                ISSN: 1424-8220
                Published: 13 July 2019 (July 2019 issue)
                Volume: 19, Issue: 14, Article: 3099
                Affiliations
                [1] Welten Institute, Research Centre for Learning, Teaching and Technology, Open University of the Netherlands, Valkenburgerweg 177, 6401 AT Heerlen, The Netherlands
                [2] DIPF - Leibniz Institute for Research and Information in Education, Rostocker Straße 6, 60323 Frankfurt, Germany
                Author notes
                [*] Correspondence: daniele.dimitri@ou.nl
                Author information
                https://orcid.org/0000-0002-9331-6893
                Article
                Article ID: sensors-19-03099
                DOI: 10.3390/s19143099
                PMCID: 6679577
                PMID: 31337029
                © 2019 by the authors.

                Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ( http://creativecommons.org/licenses/by/4.0/).

                History
                Received: 20 May 2019
                Accepted: 05 July 2019
                Categories
                Article

                Subject: Biomedical engineering
                Keywords: multimodal data, neural networks, psychomotor learning, training mistakes, medical simulation, learning analytics, signal processing, activity recognition, sensors
