
      Correlating nuclear morphometric patterns with estrogen receptor status in breast cancer pathologic specimens

      research-article


          Abstract

          In this pilot study, we introduce a machine learning framework to identify relationships between cancer tissue morphology and hormone receptor pathway activation in hematoxylin and eosin (H&E)-stained breast cancer pathology samples. As a proof of concept, we focus on predicting clinical estrogen receptor (ER) status, defined as greater than one percent of cells positive for the estrogen receptor by immunohistochemistry staining, from the spatial arrangement of nuclear features. Our learning pipeline segments nuclei from H&E images, extracts their position, shape, and orientation descriptors, and passes them to a deep neural network to predict ER status. After training on 57 tissue cores of invasive ductal carcinoma (IDC), our pipeline predicted ER status in an independent test set of patient samples (AUC ROC = 0.72, 95% CI = 0.55–0.89, n = 56). This proof of concept shows that machine-derived descriptors of morphologic histology patterns can be correlated with signaling pathway status. Unlike other deep learning approaches to pathology, our system uses deep neural networks to learn spatial relationships between pre-defined biological features, which improves the interpretability of the system and sheds light on the features the neural network uses to predict ER status. Future studies will correlate morphometry with quantitative measures of estrogen receptor status and, ultimately, response to hormonal therapy.
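          The pipeline described above (segment nuclei, extract per-nucleus position, shape, and orientation descriptors, score a learned classifier by ROC AUC) can be sketched as follows. This is an illustrative reimplementation on synthetic masks, not the authors' code: the segmentation, features, classifier, and data are all stand-ins (scikit-image `regionprops` for morphometry and a logistic-regression head in place of the paper's deep network).

```python
# Hedged sketch of a nuclear-morphometry pipeline: segment nuclei, extract
# position/shape/orientation descriptors per nucleus, aggregate per tissue
# core, and score "ER status" with ROC AUC. Synthetic data only.
import numpy as np
from skimage.draw import disk
from skimage.measure import label, regionprops
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def synthetic_core(n_nuclei, r_lo, r_hi):
    """Binary mask standing in for a segmented H&E tissue core."""
    img = np.zeros((256, 256), dtype=np.uint8)
    for _ in range(n_nuclei):
        rr, cc = disk((rng.integers(20, 236), rng.integers(20, 236)),
                      rng.integers(r_lo, r_hi), shape=img.shape)
        img[rr, cc] = 1
    return img

def morphometry(mask):
    """Per-nucleus position, shape and orientation descriptors, mean per core."""
    props = regionprops(label(mask))
    feats = [[p.centroid[0], p.centroid[1], p.area,
              p.eccentricity, p.orientation] for p in props]
    return np.mean(feats, axis=0)

# 57 "training" cores; the synthetic label is encoded in nuclear size
X, y = [], []
for i in range(57):
    er_pos = i % 2
    radii = (8, 13) if er_pos else (4, 8)
    X.append(morphometry(synthetic_core(30, *radii)))
    y.append(er_pos)

clf = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(np.array(X))[:, 1])
print(f"training AUC = {auc:.2f}")
```

On real slides the segmentation step would be a dedicated nucleus-segmentation model rather than synthetic masks, and evaluation would use a held-out test set as in the study.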

          Digital pathology: Hormone receptor status predicted from pathology slides

          An artificial intelligence tool that analyzes the morphology of cell nuclei can help pathologists predict whether a breast cancer sample expresses the estrogen receptor (ER) or not. David Agus from the University of Southern California in Los Angeles, USA, and colleagues designed a machine-learning algorithm to correlate ER status — which is usually determined via immunohistochemistry assays — with visual patterns of shape, orientation and other nuclear features that a pathologist normally sees on a stained biopsy specimen. The researchers trained the algorithm on samples taken from 57 women with untreated invasive ductal carcinoma. They then tested the model’s accuracy on a separate set of 56 patient samples. The algorithm could predict ER status with reasonable precision and accuracy, suggesting that, with improvements, it could form the basis of a diagnostic aid for guiding treatment decisions.


          Most cited references (18)


          Tamoxifen in the treatment of breast cancer.


            Systematic analysis of breast cancer morphology uncovers stromal features associated with survival.

            The morphological interpretation of histologic sections forms the basis of diagnosis and prognostication for cancer. In the diagnosis of carcinomas, pathologists perform a semiquantitative analysis of a small set of morphological features to determine the cancer's histologic grade. Physicians use histologic grade to inform their assessment of a carcinoma's aggressiveness and a patient's prognosis. Nevertheless, the determination of grade in breast cancer examines only a small set of morphological features of breast cancer epithelial cells, which has been largely unchanged since the 1920s. A comprehensive analysis of automatically quantitated morphological features could identify characteristics of prognostic relevance and provide an accurate and reproducible means for assessing prognosis from microscopic image data. We developed the C-Path (Computational Pathologist) system to measure a rich quantitative feature set from the breast cancer epithelium and stroma (6642 features), including both standard morphometric descriptors of image objects and higher-level contextual, relational, and global image features. These measurements were used to construct a prognostic model. We applied the C-Path system to microscopic images from two independent cohorts of breast cancer patients [from the Netherlands Cancer Institute (NKI) cohort, n = 248, and the Vancouver General Hospital (VGH) cohort, n = 328]. The prognostic model score generated by our system was strongly associated with overall survival in both the NKI and the VGH cohorts (both log-rank P ≤ 0.001). This association was independent of clinical, pathological, and molecular factors. Three stromal features were significantly associated with survival, and this association was stronger than the association of survival with epithelial characteristics in the model. These findings implicate stromal morphologic structure as a previously unrecognized prognostic determinant for breast cancer.
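            The log-rank test that anchors the survival associations above can be computed directly. The sketch below is a minimal two-group log-rank implementation on synthetic survival times; it does not reproduce the C-Path features or prognostic model, and the group labels merely stand in for a dichotomized high/low prognostic score.

```python
# Minimal two-group log-rank test, as used to associate a dichotomized
# prognostic score with overall survival. Synthetic, fully observed data.
import numpy as np
from scipy.stats import chi2

def logrank(time, event, group):
    """Two-sided log-rank test. group is 0/1; event is 1 = death, 0 = censored."""
    time, event, group = map(np.asarray, (time, event, group))
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t                      # risk set just before t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()   # deaths at t, both groups
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n             # observed minus expected, group 1
        if n > 1:                                # hypergeometric variance term
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    stat = o_minus_e**2 / var
    return stat, chi2.sf(stat, df=1)             # chi-square, 1 degree of freedom

rng = np.random.default_rng(1)
t_low = rng.exponential(10.0, 100)     # low-risk group: longer survival
t_high = rng.exponential(4.0, 100)     # high-risk group: shorter survival
time = np.concatenate([t_low, t_high])
event = np.ones(200, dtype=int)        # no censoring in this toy example
group = np.concatenate([np.zeros(100, int), np.ones(100, int)])
stat, p = logrank(time, event, group)
print(f"log-rank chi2 = {stat:.1f}, p = {p:.2e}")
```

With this degree of separation between the groups the test returns p well below the 0.001 threshold quoted in the abstract.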

              Classifying and segmenting microscopy images with deep multiple instance learning

              Motivation: High-content screening (HCS) technologies have enabled large scale imaging experiments for studying cell biology and for drug screening. These systems produce hundreds of thousands of microscopy images per day and their utility depends on automated image analysis. Recently, deep learning approaches that learn feature representations directly from pixel intensity values have dominated object recognition challenges. These tasks typically have a single centered object per image and existing models are not directly applicable to microscopy datasets. Here we develop an approach that combines deep convolutional neural networks (CNNs) with multiple instance learning (MIL) in order to classify and segment microscopy images using only whole image level annotations.

              Results: We introduce a new neural network architecture that uses MIL to simultaneously classify and segment microscopy images with populations of cells. We base our approach on the similarity between the aggregation function used in MIL and pooling layers used in CNNs. To facilitate aggregating across large numbers of instances in CNN feature maps we present the Noisy-AND pooling function, a new MIL operator that is robust to outliers. Combining CNNs with MIL enables training CNNs using whole microscopy images with image level labels. We show that training end-to-end MIL CNNs outperforms several previous methods on both mammalian and yeast datasets without requiring any segmentation steps.

              Availability and implementation: Torch7 implementation available upon request. Contact: oren.kraus@mail.utoronto.ca
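              A pooling operator with the Noisy-AND behavior described above (a bag-level probability that ramps up once the mean instance probability crosses a learnable threshold) can be sketched in a few lines. This is one common formulation; the slope `a` and threshold `b` values here are illustrative, not trained parameters, and the exact normalization should be checked against the published definition.

```python
# Numpy sketch of a Noisy-AND MIL pooling operator: maps instance-level
# probabilities to a bag-level probability, normalized so that an all-zero
# bag yields exactly 0 and an all-one bag yields exactly 1.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def noisy_and(p, a=10.0, b=0.5):
    """p: (n_instances,) instance probabilities -> scalar bag probability.
    a is a fixed slope; b is a (normally learned) activation threshold."""
    m = p.mean()
    return (sigmoid(a * (m - b)) - sigmoid(-a * b)) / (
            sigmoid(a * (1 - b)) - sigmoid(-a * b))

# A mostly-negative bag stays near 0; a bag where many instances fire saturates
quiet = np.full(100, 0.05)
active = np.full(100, 0.90)
print(noisy_and(quiet), noisy_and(active))
```

In the paper's architecture this scalar aggregation is applied per class over a CNN feature map, so the whole model trains end-to-end from image-level labels.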

                Author and article information

                Contributors
                agus@usc.edu
                Journal
                NPJ Breast Cancer
                Nature Publishing Group UK (London)
                ISSN: 2374-4677
                Published: 4 September 2018
                Volume 4, Article 32
                Affiliations
                [1] Lawrence J. Ellison Institute for Transformative Medicine, University of Southern California, 2250 Alcazar Street, CSC 240, Los Angeles, CA 90089-9075, USA
                [2] Intelligent Systems Engineering, Indiana University, 700 N. Woodlawn Ave., Bloomington, IN 47408, USA
                [3] Department of Pathology, BML 116, Yale University School of Medicine, 310 Cedar St, PO Box 208023, New Haven, CT 06520-8023, USA
                Author information
                http://orcid.org/0000-0003-2977-5611
                http://orcid.org/0000-0001-5787-5009
                http://orcid.org/0000-0002-9925-0151
                http://orcid.org/0000-0002-7499-7822
                Article
                DOI: 10.1038/s41523-018-0084-4
                PMCID: PMC6123433
                PMID: 30211313
                © The Author(s) 2018

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 9 May 2018
                Revised: 7 August 2018
                Accepted: 10 August 2018
                Funding
                Funded by: Breast Cancer Research Foundation (BCRF), FundRef https://doi.org/10.13039/100001006
                Award ID: BCRF-16-103
                Categories
                Article
