
      Deep learning extended depth-of-field microscope for fast and slide-free histology

Research Article

          Significance

          Traditional microscopy suffers from a fixed trade-off between depth-of-field (DOF) and spatial resolution—the higher the desired spatial resolution, the narrower the DOF. We present DeepDOF, a computational microscope that allows us to break free from this constraint and achieve >5× larger DOF while retaining cellular-resolution imaging—obviating the need for z-scanning and significantly reducing the time needed for imaging. The key ingredients that allow this advance are 1) an optimized phase mask placed at the microscope aperture; and 2) a deep-learning-based algorithm that turns sensor data into high-resolution, large-DOF images. DeepDOF offers an inexpensive means for fast and slide-free histology, suited for improving tissue sampling during intraoperative assessment and in resource-constrained settings.

          Abstract

          Microscopic evaluation of resected tissue plays a central role in the surgical management of cancer. Because optical microscopes have a limited depth-of-field (DOF), resected tissue is either frozen or preserved with chemical fixatives, sliced into thin sections placed on microscope slides, stained, and imaged to determine whether surgical margins are free of tumor cells—a costly and time- and labor-intensive procedure. Here, we introduce a deep-learning extended DOF (DeepDOF) microscope to quickly image large areas of freshly resected tissue to provide histologic-quality images of surgical margins without physical sectioning. The DeepDOF microscope consists of a conventional fluorescence microscope with the simple addition of an inexpensive (less than $10) phase mask inserted in the pupil plane to encode the light field and enhance the depth-invariance of the point-spread function. When used with a jointly optimized image-reconstruction algorithm, diffraction-limited optical performance to resolve subcellular features can be maintained while significantly extending the DOF (200 µm). Data from resected oral surgical specimens show that the DeepDOF microscope can consistently visualize nuclear morphology and other important diagnostic features across highly irregular resected tissue surfaces without serial refocusing. With the capability to quickly scan intact samples with subcellular detail, the DeepDOF microscope can improve tissue sampling during intraoperative tumor-margin assessment, while offering an affordable tool to provide histological information from resected tissue specimens in resource-limited settings.
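The abstract attributes the extended DOF to a pupil-plane phase mask that makes the point-spread function (PSF) nearly depth-invariant. The NumPy sketch below illustrates that general effect using a classic cubic (wavefront-coding) phase profile as a stand-in for the learned DeepDOF mask; the grid size, mask strength, and defocus range are arbitrary assumptions chosen only to show the trend.

```python
# Sketch: compare how much the PSF changes with defocus, with and without a
# pupil-plane phase mask. A fixed cubic profile stands in for the learned mask.
import numpy as np

N = 128
x, y = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
aperture = (x**2 + y**2 <= 1.0).astype(float)
cubic = 20.0 * (x**3 + y**3)          # cubic phase mask (radians), illustrative strength

def psf(phase, defocus):
    """Incoherent PSF from a pupil function with the given phase and defocus."""
    pupil = aperture * np.exp(1j * (phase + defocus * (x**2 + y**2)))
    field = np.fft.fftshift(np.fft.fft2(pupil))
    p = np.abs(field) ** 2
    return p / p.sum()

def correlation(a, b):
    """Normalized correlation between two PSFs (1.0 = identical shape)."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for defocus in (0.0, 5.0, 10.0):
    plain = correlation(psf(np.zeros_like(x), 0.0), psf(np.zeros_like(x), defocus))
    masked = correlation(psf(cubic, 0.0), psf(cubic, defocus))
    print(f"defocus={defocus:4.1f}  PSF similarity  no mask: {plain:.3f}  cubic mask: {masked:.3f}")
```

With the mask in place the PSF at different defocus values stays far more similar to the in-focus PSF than it does for the open pupil, which is what lets a single reconstruction algorithm recover sharp images across an irregular tissue surface without refocusing.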

                Author and article information

Journal: Proceedings of the National Academy of Sciences of the United States of America (Proc Natl Acad Sci U S A; PNAS)
Publisher: National Academy of Sciences
ISSN: 0027-8424 (print); 1091-6490 (electronic)
Published: 14 December 2020 (online); 29 December 2020 (issue)
Volume: 117
Issue: 52
Pages: 33051-33060
                Affiliations
[1] Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005;
[2] Department of Bioengineering, Rice University, Houston, TX 77005;
[3] Department of Applied Physics, Rice University, Houston, TX 77005;
[4] Department of Head and Neck Surgery, University of Texas MD Anderson Cancer Center, Houston, TX 77030;
[5] Department of Pathology, University of Texas MD Anderson Cancer Center, Houston, TX 77030
                Author notes
To whom correspondence may be addressed. Email: rkortum@rice.edu or vashok@rice.edu.

                Contributed by Rebecca R. Richards-Kortum, November 9, 2020 (sent for review July 9, 2020; reviewed by Stephen A. Boppart, Peter T. C. So, and Lei Tian)

                Author contributions: L.J., Y.T., A.M.G., R.R.R.-K., and A.V. designed research; L.J., Y.T., Y.W., J.B.C., M.T.T., X.Z., H.B., J.T.R., M.D.W., A.M.G., R.R.R.-K., and A.V. performed research; L.J., Y.T., Y.W., J.B.C., M.T.T., M.D.W., A.M.G., R.R.R.-K., and A.V. analyzed data; and L.J., Y.T., R.R.R.-K., and A.V. wrote the paper.

                Reviewers: S.A.B., Beckman Institute for Advanced Science and Technology; P.T.C.S., Massachusetts Institute of Technology; and L.T., Boston University.

L.J. and Y.T. contributed equally to this work.

                Author information
                https://orcid.org/0000-0003-2568-8940
                https://orcid.org/0000-0002-8821-4139
                https://orcid.org/0000-0001-9918-4062
                https://orcid.org/0000-0002-3509-3054
                https://orcid.org/0000-0003-2347-9467
                https://orcid.org/0000-0001-5043-7460
                Article
Publisher ID: 202013571
DOI: 10.1073/pnas.2013571117
PMCID: 7776814
PMID: 33318169
                Copyright © 2020 the Author(s). Published by PNAS.

                This open access article is distributed under Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND).

                Page count
                Pages: 10
                Funding
Funded by: National Science Foundation (NSF)
Award ID: 1730574
Award Recipients: Lingbo Jin, Yubo Tang, Jackson B. Coole, Rebecca R. Richards-Kortum, Ashok Veeraraghavan
Funded by: National Science Foundation (NSF)
Award ID: 1648451
Award Recipients: Lingbo Jin, Yubo Tang, Jackson B. Coole, Rebecca R. Richards-Kortum, Ashok Veeraraghavan
Funded by: HHS | NIH | National Cancer Institute (NCI)
Award ID: CA16672
Award Recipients: Melody T. Tan, Hawraa Badaoui, Michelle D. Williams, Ann M. Gillenwater
Funded by: DOD | Defense Advanced Research Projects Agency (DARPA)
Award ID: N66001-17-C-4012
Award Recipients: Yicheng Wu, Xuan Zhao, Jacob T. Robinson
Funded by: National Science Foundation (NSF)
Award ID: 1652633
Award Recipients: Lingbo Jin, Yubo Tang, Jackson B. Coole, Rebecca R. Richards-Kortum, Ashok Veeraraghavan
Funded by: HHS | National Institutes of Health (NIH)
Award ID: 1RF1NS110501
Award Recipients: Xuan Zhao, Jacob T. Robinson
                Categories
                Physical Sciences
                Engineering

Keywords: deep learning, extended depth-of-field microscopy, end-to-end optimization, phase mask, pathology
