      Discovering Digital Tumor Signatures—Using Latent Code Representations to Manipulate and Classify Liver Lesions

      research-article

          Simple Summary

We use a generative deep learning paradigm to identify digital signatures in radiological imaging data. The model is trained on a small in-house data set and evaluated on publicly available data. Apart from using the learned signatures to characterize lesions, in analogy to radiomics features, we also demonstrate that by manipulating them we can create realistic synthetic CT image patches. This generation of synthetic data can be carried out at user-defined spatial locations. Moreover, liver lesions can be discriminated from normal liver tissue with high accuracy, sensitivity, and specificity.
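As a rough illustration of what "manipulating" a learned signature means, the sketch below encodes a CT patch into its latent code, nudges the code towards a target region of the latent space (for example, one occupied by lesion patches), and decodes the result into a synthetic patch. The `encoder` and `decoder` objects and the interpolation weight `alpha` are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of latent-code manipulation for synthetic patch generation.
# `encoder` and `decoder` stand in for the trained halves of a generative model;
# nothing here reproduces the paper's actual implicit autoencoder.
import torch

def synthesize_patch(encoder, decoder, patch, target_code, alpha=0.5):
    """Encode a CT patch, move its latent code towards a target, and decode.

    patch:       tensor of shape (1, 1, H, W), a 2D CT image patch
    target_code: latent vector taken from the region of latent space that
                 corresponds to the desired content (lesion or normal tissue)
    alpha:       interpolation weight; 0 keeps the original code, 1 uses the target
    """
    with torch.no_grad():
        z = encoder(patch)                               # the patch's digital signature
        z_edit = (1 - alpha) * z + alpha * target_code   # move towards the target region
        return decoder(z_edit)                           # synthetic patch with lesion inserted/removed
```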

          Abstract

Modern generative deep learning (DL) architectures allow for unsupervised learning of latent representations that can be exploited in several downstream tasks. Within the field of oncological medical imaging, we term these latent representations “digital tumor signatures” and hypothesize that they can be used, in analogy to radiomics features, to differentiate between lesions and normal liver tissue. Moreover, we conjecture that they can be used for the generation of synthetic data, specifically for the artificial insertion and removal of liver tumor lesions at user-defined spatial locations in CT images. Our approach utilizes an implicit autoencoder, an unsupervised model architecture that combines an autoencoder and two generative adversarial network (GAN)-like components. The model was trained on liver patches from 25 or 57 in-house abdominal CT scans, depending on the experiment, demonstrating that only minimal data is required for synthetic image generation. The model was evaluated on a publicly available data set of 131 scans. We show that a PCA embedding of the latent representation captures the structure of the data, providing the foundation for the targeted insertion and removal of tumor lesions. To assess the quality of the synthetic images, we conducted two experiments with five radiologists. In experiment 1, only one rater and the ensemble rater were marginally above the chance level in distinguishing real from synthetic data. In the second experiment, no rater was above the chance level. To illustrate that the “digital signatures” can also be used to differentiate lesions from normal tissue, we employed several machine learning methods. The best performing method, a LinearSVM, obtained 95% (97%) accuracy, 94% (95%) sensitivity, and 97% (99%) specificity, depending on whether all data or only normal-appearing patches were used for training of the implicit autoencoder. Overall, we demonstrate that the proposed unsupervised learning paradigm can be utilized for the removal and insertion of liver lesions at user-defined spatial locations and that the digital signatures can be used to discriminate between lesions and normal liver tissue in abdominal CT scans.
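For orientation, here is a minimal skeleton of the kind of model the abstract describes: an encoder-decoder pair combined with two GAN-like critics, one judging latent codes and one judging image patches. The 32x32 patch size, the latent dimension of 64, and all layer widths are illustrative assumptions and do not reproduce the authors' architecture or training objective.

```python
# Hedged skeleton of an autoencoder with two GAN-like components, assuming
# 32x32 single-channel CT patches and a 64-dimensional latent code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a CT patch to its latent code (the 'digital signature')."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a CT patch from a latent code."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 8x8 -> 16x16
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),   # 16x16 -> 32x32
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 8, 8)
        return self.net(h)

class Critic(nn.Module):
    """GAN-like discriminator; instantiate once for latent codes
    (in_features=64) and once for flattened patches (in_features=32 * 32)."""
    def __init__(self, in_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.net(x.flatten(1))
```

During training, the latent critic would push encoded signatures towards a chosen prior while the image critic encourages realistic outputs; the downstream lesion-versus-tissue classifier (e.g., a linear SVM on the latent codes, as in the abstract) would then operate on the encoder outputs alone.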

                Author and article information

                Contributors
                Role: Academic Editor
                Role: Academic Editor
Journal
Cancers (Basel)
MDPI
ISSN: 2072-6694
Published: 22 June 2021 (July 2021 issue)
Volume: 13
Issue: 13
Article number: 3108
Affiliations
[1] Institute for AI in Medicine (IKIM), University Medicine Essen, 45131 Essen, Germany; j.murray@dkfz-heidelberg.de
[2] Division of Radiology, German Cancer Research Center (DKFZ), 69120 Heidelberg, Germany; mail@benedikt-kersjes.de (B.K.); h.schlemmer@dkfz-heidelberg.de (H.-P.S.)
[3] German Cancer Consortium (DKTK), Partner Site Heidelberg, 69120 Heidelberg, Germany
[4] Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen (WTZ), 45122 Essen, Germany
[5] Department of General Psychiatry, Center of Psychosocial Medicine, Heidelberg University, 69115 Heidelberg, Germany; Kai.Ueltzhoeffer@med.uni-heidelberg.de
[6] Medical Faculty Heidelberg, Heidelberg University, 69120 Heidelberg, Germany
[7] Visual Learning Lab, Heidelberg University, 69120 Heidelberg, Germany; carsten.rother@iwr.uni-heidelberg.de (C.R.); ullrich.koethe@iwr.uni-heidelberg.de (U.K.)
                Author information
                https://orcid.org/0000-0001-8686-0682
                https://orcid.org/0000-0003-3485-1201
                https://orcid.org/0000-0001-6036-1287
Article
Publisher ID: cancers-13-03108
DOI: 10.3390/cancers13133108
PMCID: PMC8269051
PMID: 34206336
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

History
Received: 17 April 2021
Accepted: 16 June 2021
                Categories
                Article

Keywords: unsupervised learning, latent code, synthetic image generation, machine learning
