
      Co-Learning Feature Fusion Maps from PET-CT Images of Lung Cancer

      Preprint

          Abstract

          The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images requires combining the sensitivity of PET for detecting abnormal regions with the anatomical localization provided by CT. However, current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on prior knowledge about the image analysis task. These methods generally do not consider the spatially varying visual characteristics that encode different information across the different modalities, information whose relevance differs by location. For example, high abnormal PET uptake in the lungs is more meaningful for tumor detection than physiological PET uptake in the heart. Our aim is to improve the fusion of complementary information in multi-modality PET-CT with a new supervised convolutional neural network (CNN) that learns to fuse complementary information for multi-modality medical image analysis. Our CNN first encodes modality-specific features and then uses them to derive a spatially varying fusion map that quantifies the relative importance of each modality's features across different spatial locations. These fusion maps are then multiplied with the modality-specific feature maps to obtain a representation of the complementary multi-modality information at different locations, which can then be used for image analysis, e.g., region detection. We evaluated our CNN on a region detection problem using a dataset of PET-CT images of lung cancer. We compared our method to baseline techniques for multi-modality image analysis (pre-fused inputs, multi-branch techniques, and multi-channel techniques) and demonstrated that our approach had significantly higher accuracy (\(p < 0.05\)) than the baselines.
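          The fusion-map mechanism described above can be sketched in a few lines of NumPy. This is a hedged illustration, not the authors' implementation: the paper derives the fusion maps with a learned, supervised CNN branch, whereas here a softmax over per-location feature energy merely stands in for that learned map, and all names (`fuse_modalities`, `feat_pet`, `feat_ct`) are illustrative.

```python
import numpy as np

def fuse_modalities(feat_pet, feat_ct):
    """Sketch of spatially varying fusion: compute a per-location weight
    for each modality, then reweight and sum the modality features."""
    # Per-location "importance" scores; the paper learns these with a CNN
    # branch, here we use feature energy purely for illustration.
    score_pet = (feat_pet ** 2).sum(axis=0, keepdims=True)   # (1, H, W)
    score_ct = (feat_ct ** 2).sum(axis=0, keepdims=True)     # (1, H, W)
    scores = np.stack([score_pet, score_ct])                 # (2, 1, H, W)
    # Softmax across modalities: the two fusion maps sum to 1 per location.
    exp = np.exp(scores - scores.max(axis=0, keepdims=True))
    w = exp / exp.sum(axis=0, keepdims=True)
    # Multiply each modality's feature map by its fusion map and combine.
    fused = w[0] * feat_pet + w[1] * feat_ct                 # (C, H, W)
    return fused, w

rng = np.random.default_rng(0)
pet = rng.normal(size=(8, 16, 16))   # toy modality-specific feature maps
ct = rng.normal(size=(8, 16, 16))
fused, w = fuse_modalities(pet, ct)
```

Note that the softmax guarantees the two fusion maps are non-negative and sum to one at every spatial location, so the fused representation is a convex, location-dependent blend of the two modalities.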

          Related collections

          Most cited references (21)


          From RECIST to PERCIST: Evolving Considerations for PET response criteria in solid tumors.

          The purpose of this article is to review the status and limitations of anatomic tumor response metrics including the World Health Organization (WHO) criteria, the Response Evaluation Criteria in Solid Tumors (RECIST), and RECIST 1.1. This article also reviews qualitative and quantitative approaches to metabolic tumor response assessment with (18)F-FDG PET and proposes a draft framework for PET Response Criteria in Solid Tumors (PERCIST), version 1.0.

          PubMed searches, including searches for the terms RECIST, positron, WHO, FDG, cancer (including specific types), treatment response, region of interest, and derivative references, were performed. Abstracts and articles judged most relevant to the goals of this report were reviewed with emphasis on limitations and strengths of the anatomic and PET approaches to treatment response assessment. On the basis of these data and the authors' experience, draft criteria were formulated for PET tumor response to treatment.

          Approximately 3,000 potentially relevant references were screened. Anatomic imaging alone using standard WHO, RECIST, and RECIST 1.1 criteria is widely applied but still has limitations in response assessments. For example, despite effective treatment, changes in tumor size can be minimal in tumors such as lymphomas, sarcoma, hepatomas, mesothelioma, and gastrointestinal stromal tumor. CT tumor density, contrast enhancement, or MRI characteristics appear more informative than size but are not yet routinely applied. RECIST criteria may show progression of tumor more slowly than WHO criteria. RECIST 1.1 criteria (assessing a maximum of 5 tumor foci, vs. 10 in RECIST) result in a higher complete response rate than the original RECIST criteria, at least in lymph nodes. Variability appears greater in assessing progression than in assessing response.

          Qualitative and quantitative approaches to (18)F-FDG PET response assessment have been applied and require a consistent PET methodology to allow quantitative assessments. Statistically significant changes in tumor standardized uptake value (SUV) occur in careful test-retest studies of high-SUV tumors, with a change of 20% in SUV of a region 1 cm or larger in diameter; however, medically relevant beneficial changes are often associated with a 30% or greater decline. The more extensive the therapy, the greater the decline in SUV with most effective treatments.

          Important components of the proposed PERCIST criteria include assessing normal reference tissue values in a 3-cm-diameter region of interest in the liver, using a consistent PET protocol, using a fixed small region of interest about 1 cm(3) in volume (1.2-cm diameter) in the most active region of metabolically active tumors to minimize statistical variability, assessing tumor size, treating SUV lean measurements in the 1 (up to 5 optional) most metabolically active tumor focus as a continuous variable, requiring a 30% decline in SUV for "response," and deferring to RECIST 1.1 in cases that do not have (18)F-FDG avidity or are technically unsuitable. Criteria to define progression of tumor-absent new lesions are uncertain but are proposed.

          Anatomic imaging alone using standard WHO, RECIST, and RECIST 1.1 criteria have limitations, particularly in assessing the activity of newer cancer therapies that stabilize disease, whereas (18)F-FDG PET appears particularly valuable in such cases. The proposed PERCIST 1.0 criteria should serve as a starting point for use in clinical trials and in structured quantitative clinical reporting. Undoubtedly, subsequent revisions and enhancements will be required as validation studies are undertaken in varying diseases and treatments.
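          The response thresholds quoted in this abstract can be sketched as a small classifier. This is a hedged illustration only: it encodes the 30% decline for response and uses the same 30% cut-off for a rise (progression), but the function name, category abbreviations, and the handling of new lesions are assumptions, and the full PERCIST 1.0 rule set (reference-tissue checks, absolute SUL changes, measurability rules) is not modeled.

```python
def percist_response(sul_baseline, sul_followup, new_lesion=False):
    """Toy PERCIST-style categorization from peak lean-body-mass SUV (SUL).
    Thresholds follow the abstract's 30% figure; everything else is a
    simplification for illustration."""
    if new_lesion:
        return "PMD"  # progressive metabolic disease
    if sul_followup == 0:
        return "CMR"  # complete metabolic response (uptake resolved)
    change = (sul_followup - sul_baseline) / sul_baseline
    if change <= -0.30:
        return "PMR"  # partial metabolic response: >=30% SUL decline
    if change >= 0.30:
        return "PMD"  # >=30% SUL rise treated as progression here
    return "SMD"      # stable metabolic disease

print(percist_response(10.0, 6.5))   # 35% decline
print(percist_response(10.0, 14.0))  # 40% rise
print(percist_response(10.0, 9.0))   # 10% decline
```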
            Is Open Access

            U-Net: Convolutional Networks for Biomedical Image Segmentation

            There is broad consensus that successful training of deep networks requires many thousands of annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC), we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .
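            The contracting/expanding data flow described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (which uses stacked convolutions in Caffe); only the pooling, upsampling, and skip-connection skeleton is shown, and the names `down`, `up`, `unet_shapes`, and `depth` are illustrative.

```python
import numpy as np

def down(x):
    # 2x2 max pooling: halves spatial size (one contracting-path step)
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def up(x):
    # Nearest-neighbour upsampling: doubles spatial size (expanding-path step)
    return x.repeat(2, axis=1).repeat(2, axis=2)

def unet_shapes(x, depth=3):
    """Skeleton of the U-shaped data flow: the contracting path stores
    feature maps, and the symmetric expanding path upsamples and
    concatenates each stored map as a skip connection. Convolutions
    and nonlinearities are omitted."""
    skips = []
    for _ in range(depth):
        skips.append(x)  # saved for the skip connection
        x = down(x)      # contracting path
    for skip in reversed(skips):
        x = up(x)        # expanding path
        x = np.concatenate([skip, x], axis=0)  # channel-wise skip concat
    return x

out = unet_shapes(np.zeros((4, 16, 16)))
```

The concatenation is why the expanding path "enables precise localization": high-resolution features from the contracting path are reinjected at each scale, so the output retains spatial detail that pooling alone would discard.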

              Improving predictive inference under covariate shift by weighting the log-likelihood function


                Author and article information

                Journal
                Date: 04 October 2018
                Type: Article
                arXiv: 1810.02492
                ID: 66f94e86-c35d-4457-9f71-4cccd67ba46e
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

                History
                Custom metadata
                15 pages (10 main paper, 5 supplementary), 11 images (6 main paper, 5 supplementary), 2 tables
                cs.CV

                Computer vision & Pattern recognition
