      Leveraging Advances in Artificial Intelligence to Improve the Quality and Timing of Palliative Care



          Abstract

          In recent years, research on artificial intelligence (AI) in medicine has seen great advances, especially with regard to the detection of diseases [1,2]. While definitions of what exactly constitutes AI vary, most mention computer-based systems solving tasks that would normally require "natural", especially human, intelligence [3]. Among the many subfields of AI, deep learning, in which information is extracted from the input through many layers, typically by so-called neural networks, has emerged as an important tool due to its ability to extract meaningful information from imaging data [4]. State-of-the-art deep learning algorithms have been able to compete with, and in some cases even surpass, trained physicians in terms of diagnostic accuracy for certain indications [1,2].

          While other areas of interest such as drug discovery have emerged, the application of AI is still largely limited to projects with the potential for great commercial gain, and research on its impact in other fields such as global health is comparatively slow [5,6]. One field that has not been a focus of this research, and has therefore remained largely unaffected by the recent advances, is palliative care, a discipline of increasing importance in the aging populations of industrialized nations [7]. Palliative care is an interdisciplinary concept that strives to improve quality of life and to prevent or alleviate symptoms for patients with severe, complex, and in some cases terminal illnesses [8]. As of March 2020, searching PubMed for publications whose titles contain the word "palliative" and any of the terms "deep learning", "machine learning" or "artificial intelligence" yielded only four publications [9,10,11,12]. One of these was a rapid review of databases for publications using machine learning to improve palliative care, which identified only three studies, all attempting to predict short-term mortality [10].

          However, as palliative care becomes increasingly relevant, the opportunities to apply AI techniques to the field's research questions grow rapidly. Since many AI methods require massive amounts of training data to reach their full predictive potential, well-curated datasets are a mandatory prerequisite [4]. As "conventional", non-AI research in palliative care becomes increasingly established, with more patients available for analysis as well as more funding and interest, the datasets created as a byproduct can serve as a starting point for introducing AI to the field. If initial results are promising, a collaboration of palliative care centers to create even larger datasets could further improve them. Imaging studies using deep learning in particular could yield more generalizable predictions if the training images are acquired on a variety of heterogeneous scanners rather than on the same machines at a single hospital [4]. To reduce the amount of data needed for initial studies, researchers should also take advantage of advances in AI research such as image augmentation, which applies modifications to training images to expand the dataset [13].
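          As a purely illustrative sketch of the augmentation idea above (not taken from the commentary or its references), a minimal pipeline could look as follows, assuming PyTorch/torchvision; the image size and dataset path are placeholders.

          # Minimal image augmentation sketch (assumes PyTorch + torchvision).
          from torchvision import transforms

          # Every epoch sees a slightly different, randomly modified version of each
          # training image, which effectively enlarges a small imaging dataset.
          augment = transforms.Compose([
              transforms.RandomHorizontalFlip(p=0.5),                # mirror half of the images
              transforms.RandomRotation(degrees=10),                 # small random rotations
              transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),   # mild random crops
              transforms.ColorJitter(brightness=0.1, contrast=0.1),  # intensity variation
              transforms.ToTensor(),
          ])
          # e.g. train_set = torchvision.datasets.ImageFolder("scans/", transform=augment)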
          One challenge of particular interest could be the determination of the optimal timing of palliative care involvement. A growing number of publications have confirmed the positive impact of early palliative care for patients with cancer [14]. Likewise, the American Society of Clinical Oncology (ASCO) recommended in 2012 that palliative care be integrated into standard oncology care beyond end-of-life care, and updated this recommendation in 2016 to further strengthen the role of palliative care and to detail what it should consist of [15,16]. However, the recommended involvement of palliative care early in the course of the disease is often not feasible, as the capacity or availability of many palliative care facilities is still limited. Additionally, patients may be reluctant to receive palliative care, especially while the symptom burden is still low. It is therefore of great importance to time the involvement of palliative care well, allowing resources to be allocated efficiently while avoiding any compromise to patients' quality of life. Current approaches to this problem are largely based on clinical scores that try to predict mortality [17,18].

          While the timing of palliative care is certainly an important aspect that could benefit from AI, it is far from the only one. Every decision in which the clinician has to weigh the possible benefits of an intervention against the stress caused by performing it could benefit from better predictions of whether the benefits that doctor and patient hope for will actually materialize. This includes predicting whether the pain caused by a bone metastasis will respond to palliative radiation, or how long whole-brain radiotherapy can delay further neurologic deterioration. One might also try to assess more accurately whether palliative chemotherapy will result in a quicker decline in quality of life than the disease itself for a given patient. Using AI to address these problems could improve current predictions by incorporating not only clinical but also imaging data, thereby predicting not only mortality but also the probability of an increased symptom burden and a decrease in quality of life throughout the course of a patient's disease.

          This use of AI in palliative care is especially promising at a time when advances in model explainability, such as Grad-CAM, which displays the areas of an image on which a neural network bases its predictions, and Bayesian neural networks, which quantify how certain a model is of its predictions, can be implemented much more easily than before [19,20]. These tools help physicians not to blindly trust the artificial intelligence's decisions but to assess them and ultimately incorporate its advice into the clinical decision of when to suggest which modality of palliative care to a patient. As detecting diseases is already a major focus of AI research in medicine, and the suggested use cases for leveraging AI in palliative care could affect adequate symptom management as well as advance care planning and end-of-life care, one can hope for improvements in all areas of palliative care using the different modalities of AI.
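          To make the uncertainty point above concrete: one common, lightweight approximation of a Bayesian neural network is Monte Carlo dropout, where dropout stays active at prediction time and the spread across repeated forward passes indicates how much a prediction should be trusted. The following is a generic, hedged PyTorch sketch, not code from the commentary or its references; the helper name is hypothetical and `model` is assumed to be any network containing dropout layers.

          import torch

          def mc_dropout_predict(model, x, n_samples=30):
              """Return the mean prediction and its spread by sampling with dropout enabled."""
              model.train()  # keep dropout layers active at inference time
              with torch.no_grad():
                  preds = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
              model.eval()
              return preds.mean(dim=0), preds.std(dim=0)

          # A large standard deviation flags a prediction that the physician
          # should weigh more cautiously before acting on it.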

          Related collections

          Most cited references (14)


          Deep Learning to Improve Breast Cancer Detection on Screening Mammography

          The rapid development of deep learning, a family of machine learning techniques, has spurred much interest in its application to medical imaging problems. Here, we develop a deep learning algorithm that can accurately detect breast cancer on screening mammograms using an “end-to-end” training approach that efficiently leverages training datasets with either complete clinical annotation or only the cancer status (label) of the whole image. In this approach, lesion annotations are required only in the initial training stage, and subsequent stages require only image-level labels, eliminating the reliance on rarely available lesion annotations. Our all convolutional network method for classifying screening mammograms attained excellent performance in comparison with previous methods. On an independent test set of digitized film mammograms from the Digital Database for Screening Mammography (CBIS-DDSM), the best single model achieved a per-image AUC of 0.88, and four-model averaging improved the AUC to 0.91 (sensitivity: 86.1%, specificity: 80.1%). On an independent test set of full-field digital mammography (FFDM) images from the INbreast database, the best single model achieved a per-image AUC of 0.95, and four-model averaging improved the AUC to 0.98 (sensitivity: 86.7%, specificity: 96.1%). We also demonstrate that a whole image classifier trained using our end-to-end approach on the CBIS-DDSM digitized film mammograms can be transferred to INbreast FFDM images using only a subset of the INbreast data for fine-tuning and without further reliance on the availability of lesion annotations. These findings show that automatic deep learning methods can be readily trained to attain high accuracy on heterogeneous mammography platforms, and hold tremendous promise for improving clinical tools to reduce false positive and false negative screening mammography results. Code and model available at: https://github.com/lishen/end2end-all-conv.
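            The transfer described above (training on CBIS-DDSM film mammograms, then fine-tuning on a small subset of INbreast FFDM images) follows the standard fine-tuning recipe. As a generic, hedged sketch only, and not the authors' published code (which is linked in the abstract), this could look roughly as follows in PyTorch; the backbone, class count, and data loader are placeholders.

            import torch
            from torchvision import models

            # Start from a pretrained backbone, swap the classification head, and
            # fine-tune on the new, smaller dataset with a low learning rate so the
            # transferred weights are largely preserved.
            model = models.resnet50(weights="IMAGENET1K_V1")      # any pretrained weights
            model.fc = torch.nn.Linear(model.fc.in_features, 2)   # e.g. benign vs. malignant
            optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

            # for images, labels in finetune_loader:              # small fine-tuning set
            #     loss = torch.nn.functional.cross_entropy(model(images), labels)
            #     optimizer.zero_grad(); loss.backward(); optimizer.step()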

            Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization

            We propose a technique for producing "visual explanations" for decisions from a large class of CNN-based models, making them more transparent. Our approach, Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept, flowing into the final convolutional layer to produce a coarse localization map highlighting important regions in the image for predicting the concept. Grad-CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers, (2) CNNs used for structured outputs, (3) CNNs used in tasks with multimodal inputs or reinforcement learning, without any architectural changes or re-training. We combine Grad-CAM with fine-grained visualizations to create a high-resolution class-discriminative visualization and apply it to off-the-shelf image classification, captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into their failure modes, (b) are robust to adversarial images, (c) outperform previous methods on localization, (d) are more faithful to the underlying model and (e) help achieve generalization by identifying dataset bias. For captioning and VQA, we show that even non-attention based models can localize inputs. We devise a way to identify important neurons through Grad-CAM and combine it with neuron names to provide textual explanations for model decisions. Finally, we design and conduct human studies to measure if Grad-CAM helps users establish appropriate trust in predictions from models and show that Grad-CAM helps untrained users successfully discern a 'stronger' model from a 'weaker' one even when both make identical predictions. Our code is available at https://github.com/ramprs/grad-cam/, along with a demo at http://gradcam.cloudcv.org, and a video at youtu.be/COjUB9Izk6E. This version was published in International Journal of Computer Vision (IJCV) in 2019; a previous version of the paper was published at International Conference on Computer Vision (ICCV'17).
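            As a compressed illustration of the computation described above, and not the authors' released implementation (linked in the abstract), the core of Grad-CAM can be written in a few lines of PyTorch: the gradients of the target class score with respect to the final convolutional feature maps are averaged into per-channel weights, and the weighted, ReLU-clipped sum of those maps gives the localization heatmap. The helper name below is hypothetical.

            import torch
            import torch.nn.functional as F

            def grad_cam(feature_maps, class_score):
                """feature_maps: final conv activations (1, C, H, W) kept in the autograd graph;
                class_score: scalar logit of the target class computed from those activations."""
                grads, = torch.autograd.grad(class_score, feature_maps, retain_graph=True)
                weights = grads.mean(dim=(2, 3), keepdim=True)     # global-average-pool the gradients
                cam = F.relu((weights * feature_maps).sum(dim=1))  # weighted sum of the maps, then ReLU
                return cam / (cam.max() + 1e-8)                    # normalize to a [0, 1] heatmap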

              Improving palliative care with deep learning

                Background: Access to palliative care is a key quality metric which most healthcare organizations strive to improve. The primary challenges to increasing palliative care access are a combination of physicians over-estimating patient prognoses and a shortage of palliative staff in general. This, in combination with treatment inertia, can result in a mismatch between patient wishes and their actual care towards the end of life. Methods: In this work, we address this problem, with Institutional Review Board approval, using machine learning and Electronic Health Record (EHR) data of patients. We train a Deep Neural Network model on the EHR data of patients from previous years to predict mortality of patients within the next 3-12 month period. This prediction is used as a proxy decision for identifying patients who could benefit from palliative care. Results: The EHR data of all admitted patients are evaluated every night by this algorithm, and the palliative care team is automatically notified of the list of patients with a positive prediction. In addition, we present a novel technique for decision interpretation, using which we provide explanations for the model’s predictions. Conclusion: The automatic screening and notification saves the palliative care team the burden of time-consuming chart reviews of all patients, and allows them to take a proactive approach in reaching out to such patients rather than relying on referrals from the treating physicians.
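                As a hedged illustration of the general setup described above, and not the authors' actual model, feature set, or notification threshold, a minimal PyTorch stand-in for an EHR-based 3-12 month mortality predictor could look as follows; the class name, feature count, and risk cutoff are placeholders that would require clinical validation.

                import torch.nn as nn

                class MortalityNet(nn.Module):
                    """Toy feed-forward net: tabular EHR features in, mortality probability out."""
                    def __init__(self, n_features):
                        super().__init__()
                        self.net = nn.Sequential(
                            nn.Linear(n_features, 256), nn.ReLU(), nn.Dropout(0.3),
                            nn.Linear(256, 64), nn.ReLU(),
                            nn.Linear(64, 1), nn.Sigmoid(),
                        )

                    def forward(self, x):
                        return self.net(x)

                # risk = MortalityNet(n_features=500)(patient_features)
                # patients above a validated risk cutoff are flagged for the palliative care team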

                Author and article information

                Journal
                Cancers (Basel)
                MDPI
                ISSN: 2072-6694
                03 May 2020 (May 2020 issue)
                Volume 12, Issue 5, Article 1149
                Affiliations
                [1] Department of Radiation Oncology, Kantonsspital Winterthur, 8400 Winterthur, Switzerland; daniel.zwahlen@ksw.ch (D.Z.); robert.foerster@ksw.ch (R.F.)
                [2] Competence Center for Palliative Care, University Hospital Zurich, 8091 Zurich, Switzerland; Caroline.Hertler@usz.ch (C.H.); david.blum@usz.ch (D.B.)
                Author notes
                [*] Correspondence: paul.windisch@ksw.ch; Tel.: +41-52-266-26-53
                Author information
                https://orcid.org/0000-0003-1040-4888
                https://orcid.org/0000-0001-6181-2895
                https://orcid.org/0000-0003-4359-7735
                https://orcid.org/0000-0002-7664-9207
                Article
                cancers-12-01149
                DOI: 10.3390/cancers12051149
                PMCID: PMC7281519
                PMID: 32375249
                © 2020 by the authors.

                Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ( http://creativecommons.org/licenses/by/4.0/).

                History
                Received: 01 April 2020
                Accepted: 02 May 2020
                Categories
                Commentary
