
      X-CHAR: A Concept-based Explainable Complex Human Activity Recognition Model


          Abstract

          End-to-end deep learning models are increasingly applied to safety-critical human activity recognition (HAR) applications, e.g., healthcare monitoring and smart home control, to reduce developer burden and to increase the performance and robustness of prediction models. However, integrating HAR models into safety-critical applications requires trust, and recent approaches have aimed to balance the performance of deep learning models with explainable decision-making for complex activity recognition. Prior works have exploited the compositionality of complex HAR (i.e., higher-level activities composed of lower-level activities) to build models with symbolic interfaces, such as concept-bottleneck architectures, that are inherently interpretable. However, feature engineering for symbolic concepts, as well as for the relationships between the concepts, requires precise annotation of lower-level activities by domain experts, usually with fixed time windows, all of which imposes a heavy and error-prone workload on the domain expert. In this paper, we introduce X-CHAR, an eXplainable Complex Human Activity Recognition model that does not require precise annotation of low-level activities, offers explanations in the form of human-understandable, high-level concepts, and maintains the robust performance of end-to-end deep learning models for time-series data. X-CHAR learns to model a complex activity as a sequence of concepts. For each classification, X-CHAR outputs a sequence of concepts and a counterfactual example as the explanation. We show that the sequence information of the concepts can be modeled using Connectionist Temporal Classification (CTC) loss without accurate start and end times for the low-level annotations in the training dataset, significantly reducing developer burden. We evaluate our model on several complex activity datasets and demonstrate that it offers explanations without compromising prediction accuracy in comparison to baseline models. Finally, we conducted a Mechanical Turk study to show that the explanations provided by our model are more understandable than those from existing methods for complex activity recognition.
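          To make the CTC-based training idea in the abstract concrete, here is a minimal, hypothetical PyTorch sketch. It is not the authors' released code; the encoder architecture, tensor shapes, concept vocabulary, and all names are assumptions for illustration only. It shows how per-timestep concept logits computed from a sensor window can be trained against an unaligned concept-label sequence with CTC loss, so no start/end times for the low-level activities are required.

          import torch
          import torch.nn as nn

          NUM_CONCEPTS = 10   # hypothetical size of the low-level concept vocabulary
          BLANK = 0           # index reserved for the CTC blank symbol

          class ConceptEncoder(nn.Module):
              """Maps a multichannel sensor window to per-timestep concept logits."""
              def __init__(self, in_channels=6, hidden=64):
                  super().__init__()
                  # Architecture assumed for illustration: 1D conv + GRU over time.
                  self.conv = nn.Conv1d(in_channels, hidden, kernel_size=5, padding=2)
                  self.rnn = nn.GRU(hidden, hidden, batch_first=True)
                  self.head = nn.Linear(hidden, NUM_CONCEPTS + 1)  # +1 for the blank

              def forward(self, x):                                # x: (batch, channels, time)
                  h = torch.relu(self.conv(x)).transpose(1, 2)     # (batch, time, hidden)
                  h, _ = self.rnn(h)
                  return self.head(h)                              # (batch, time, classes)

          model = ConceptEncoder()
          ctc_loss = nn.CTCLoss(blank=BLANK, zero_infinity=True)

          x = torch.randn(4, 6, 200)                   # 4 sensor windows, 6 channels, 200 steps
          log_probs = model(x).log_softmax(dim=-1)     # (batch, time, classes)

          # Unaligned concept sequences (class indices >= 1), concatenated per CTC
          # convention; only their order is given, never their start/end times.
          targets = torch.tensor([1, 2, 3, 1, 2, 1, 3, 2, 1, 2])
          target_lengths = torch.tensor([3, 2, 3, 2])          # lengths of the 4 sequences
          input_lengths = torch.full((4,), 200, dtype=torch.long)

          # nn.CTCLoss expects (time, batch, classes) log-probabilities.
          loss = ctc_loss(log_probs.transpose(0, 1), targets, input_lengths, target_lengths)
          loss.backward()

          At inference time, a greedy or beam-search decode of the per-timestep logits (collapsing repeats and removing blanks) would yield the concept sequence that serves as the explanation.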


          Most cited references (37)


          A Unified Approach to Interpreting Model Predictions

          Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between accuracy and interpretability. In response, various methods have recently been proposed to help users interpret the predictions of complex models, but it is often unclear how these methods are related and when one method is preferable over another. To address this problem, we present a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature an importance value for a particular prediction. Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable properties. The new class unifies six existing methods, notable because several recent methods in the class lack the proposed desirable properties. Based on insights from this unification, we present new methods that show improved computational performance and/or better consistency with human intuition than previous approaches. To appear in NIPS 2017
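            For context on the additive feature importance claim above: SHAP explains a prediction with an additive surrogate model over simplified binary inputs, and the unique attribution satisfying the stated properties is the classical Shapley value. The formulas below use standard notation and are reproduced for reference rather than quoted from the paper.

            \[
              g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z'_i, \qquad z' \in \{0,1\}^M,
            \]
            \[
              \phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F|-|S|-1)!}{|F|!}
              \Bigl[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \Bigr],
            \]

            where F is the full feature set, M = |F|, and f_S denotes the model evaluated with only the features in S present.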

            Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

            Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems in healthcare, criminal justice, and other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward: design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.

              Clinically applicable deep learning for diagnosis and referral in retinal disease


                Author and article information

                Journal
                Proc ACM Interact Mob Wearable Ubiquitous Technol
                Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
                ISSN: 2474-9567
                Journal IDs: 101719413; 47236
                Dates: 11 September 2023; March 2023; 28 March 2023; 25 March 2024
                Volume 7, Issue 1, Article 17

                Affiliations
                University of California Los Angeles, USA
                University of California Los Angeles, USA
                University of Southern California, Information Sciences Institute, USA
                University of California Los Angeles, USA

                Article
                NIHMS1928656
                DOI: 10.1145/3580804
                10961595
                38529008
                009512ea-b2ad-43c5-9050-4814d59b70d1

                This work is licensed under a Creative Commons Attribution 4.0 International License.

                Categories: Article

                Keywords: activity recognition, neural networks, explainable AI, interpretability
