
      A deep learning framework for 18F-FDG PET imaging diagnosis in pediatric patients with temporal lobe epilepsy

      research-article


          Abstract

          Purpose

          Epilepsy is one of the most disabling neurological disorders; it affects all age groups and often results in severe consequences. Since misdiagnoses are common, many pediatric patients fail to receive the correct treatment. Recently, 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) imaging has been used for the evaluation of pediatric epilepsy. However, the epileptic focus is very difficult to identify by visual assessment, since it may present as either a hypo- or hyper-metabolic abnormality with an unclear boundary. This study aimed to develop a novel symmetricity-driven deep learning framework for PET imaging to identify epileptic foci in pediatric patients with temporal lobe epilepsy (TLE).

          Methods

          We retrospectively included 201 pediatric patients with TLE and 24 age-matched controls who underwent 18F-FDG PET-CT studies. 18F-FDG PET images were quantitatively investigated using 386 symmetricity features, and a pair-of-cube (PoC)-based Siamese convolutional neural network (CNN) was proposed for precise localization of the epileptic focus; the metabolic abnormality level of the predicted focus was then calculated automatically using an asymmetry index (AI). The performance of the proposed framework was compared with visual assessment, statistical parametric mapping (SPM) software, and Jensen-Shannon divergence-based logistic regression (JS-LR) analysis.
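
          To make the pair-of-cube idea concrete, below is a minimal sketch (not the authors' implementation) of a Siamese 3D CNN that scores a pair of mirrored PET cubes for asymmetry, together with one common definition of an asymmetry index; the cube size, layer widths, and the AI formula used here are illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch: a Siamese 3D CNN scoring a pair of
# mirrored PET cubes (left cube vs. mirrored contralateral cube), plus a
# simple asymmetry index AI = 2*|L - R| / (L + R) * 100.
# Architecture details are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class CubeEncoder(nn.Module):
    """Shared 3D-CNN branch that embeds one PET cube."""
    def __init__(self, emb_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(32, emb_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.features(x).flatten(1))

class SiamesePoC(nn.Module):
    """Classifies a cube pair as symmetric (normal) vs. asymmetric (suspected focus)."""
    def __init__(self, emb_dim: int = 64):
        super().__init__()
        self.encoder = CubeEncoder(emb_dim)          # weights shared across the pair
        self.head = nn.Sequential(nn.Linear(2 * emb_dim, 32), nn.ReLU(),
                                  nn.Linear(32, 1))

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.encoder(left), self.encoder(right)], dim=1)
        return self.head(z)                          # logit; apply sigmoid for probability

def asymmetry_index(left_mean: float, right_mean: float) -> float:
    """One common AI definition: 2*|L - R| / (L + R) * 100."""
    return 200.0 * abs(left_mean - right_mean) / (left_mean + right_mean)

if __name__ == "__main__":
    model = SiamesePoC()
    left = torch.randn(2, 1, 16, 16, 16)   # batch of 16^3 PET cubes (toy data)
    right = torch.randn(2, 1, 16, 16, 16)  # mirrored contralateral cubes (toy data)
    print(model(left, right).shape)         # torch.Size([2, 1])
    print(asymmetry_index(5.2, 4.1))        # ~23.7
```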

          Results

          The proposed deep learning framework detected epileptic foci accurately, with a Dice coefficient of 0.51, which was significantly higher than that of SPM (0.24, P < 0.01) and significantly (or marginally) higher than that of visual assessment (0.31–0.44, P = 0.005–0.27). The area under the curve (AUC) of the PoC classification was higher than that of the JS-LR analysis (0.93 vs. 0.72). The metabolic level detection accuracy of the proposed method was significantly higher than that of visual assessment, whether blinded or unblinded to clinical information (90% vs. 56% or 68%, P < 0.01).
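
          For reference, the Dice coefficient reported above quantifies the overlap between a predicted focus mask and a reference mask; a minimal sketch of how it can be computed on binary 3D masks (the arrays below are toy data, not study data):

```python
# Minimal sketch: Dice coefficient between two binary 3D masks.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice = 2*|pred ∩ ref| / (|pred| + |ref|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.zeros((8, 8, 8), dtype=bool); pred[2:5, 2:5, 2:5] = True
ref  = np.zeros((8, 8, 8), dtype=bool); ref[3:6, 3:6, 3:6] = True
print(round(dice(pred, ref), 3))  # two 3x3x3 cubes overlapping in 8 voxels -> 2*8/54 ≈ 0.296
```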

          Conclusion

          The proposed deep learning framework for 18F-FDG PET imaging identified epileptic foci accurately and efficiently, and it might be applied as a computer-assisted approach in the future diagnosis of patients with epilepsy.

          Trial registration

          NCT04169581. Registered November 13, 2019.

          Public site: https://clinicaltrials.gov/ct2/show/NCT04169581

          Supplementary Information

          The online version contains supplementary material available at 10.1007/s00259-020-05108-y.

          Related collections

          Most cited references (38)


          Radiomics: Images Are More than Pictures, They Are Data

          This report describes the process of radiomics, its challenges, and its potential power to facilitate better clinical decision making, particularly in the care of patients with cancer.

            Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain.

            One of the most challenging problems in modern neuroimaging is detailed characterization of neurodegeneration. Quantifying spatial and longitudinal atrophy patterns is an important component of this process. These spatiotemporal signals will aid in discriminating between related diseases, such as frontotemporal dementia (FTD) and Alzheimer's disease (AD), which manifest themselves in the same at-risk population. Here, we develop a novel symmetric image normalization method (SyN) for maximizing the cross-correlation within the space of diffeomorphic maps and provide the Euler-Lagrange equations necessary for this optimization. We then turn to a careful evaluation of our method. Our evaluation uses gold standard, human cortical segmentation to contrast SyN's performance with a related elastic method and with the standard ITK implementation of Thirion's Demons algorithm. The new method compares favorably with both approaches, in particular when the distance between the template brain and the target brain is large. We then report the correlation of volumes gained by algorithmic cortical labelings of FTD and control subjects with those gained by the manual rater. This comparison shows that, of the three methods tested, SyN's volume measurements are the most strongly correlated with volume measurements gained by expert labeling. This study indicates that SyN, with cross-correlation, is a reliable method for normalizing and making anatomical measurements in volumetric MRI of patients and at-risk elderly individuals.
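
            A minimal usage sketch of this kind of symmetric diffeomorphic (SyN) registration, assuming the antspyx Python package rather than the original ANTs implementation evaluated in the cited paper; the file paths are placeholders, not data from either study.

```python
# Hypothetical example: warp a subject brain image onto a template with SyN
# registration driven by a cross-correlation metric ("SyNCC" in antspyx).
# File paths below are placeholders.
import ants

fixed = ants.image_read("template.nii.gz")    # template brain (placeholder path)
moving = ants.image_read("subject.nii.gz")    # subject scan to be normalized (placeholder path)

reg = ants.registration(fixed=fixed, moving=moving, type_of_transform="SyNCC")
warped = reg["warpedmovout"]                  # subject resampled into template space
ants.image_write(warped, "subject_in_template_space.nii.gz")
print(reg["fwdtransforms"])                   # forward transform files (affine + warp field)
```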

              End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography

              With an estimated 160,000 deaths in 2018, lung cancer is the most common cause of cancer death in the United States [1]. Lung cancer screening using low-dose computed tomography has been shown to reduce mortality by 20-43% and is now included in US screening guidelines [1-6]. Existing challenges include inter-grader variability and high false-positive and false-negative rates [7-10]. We propose a deep learning algorithm that uses a patient's current and prior computed tomography volumes to predict the risk of lung cancer. Our model achieves a state-of-the-art performance (94.4% area under the curve) on 6,716 National Lung Cancer Screening Trial cases, and performs similarly on an independent clinical validation set of 1,139 cases. We conducted two reader studies. When prior computed tomography imaging was not available, our model outperformed all six radiologists, with absolute reductions of 11% in false positives and 5% in false negatives. Where prior computed tomography imaging was available, the model performance was on par with the same radiologists. This creates an opportunity to optimize the screening process via computer assistance and automation. While the vast majority of patients remain unscreened, we show the potential for deep learning models to increase the accuracy, consistency and adoption of lung cancer screening worldwide.

                Author and article information

                Contributors
                hzhang21@zju.edu.cn
                czhuo@zju.edu.cn
                meitian@zju.edu.cn
                Journal
                Eur J Nucl Med Mol Imaging
                European Journal of Nuclear Medicine and Molecular Imaging
                Springer Berlin Heidelberg (Berlin/Heidelberg)
                ISSN: 1619-7070 (print); 1619-7089 (electronic)
                Published online: 9 January 2021
                2021; 48(8): 2476-2485
                Affiliations
                [1] Department of Nuclear Medicine and PET-CT Center, The Second Hospital of Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
                [2] College of Information Science & Electronic Engineering, Zhejiang University, Hangzhou, Zhejiang, China
                [3] Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, Zhejiang, China
                [4] Department of Pediatrics, The Second Hospital of Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
                [5] Department of Neurology, Epilepsy Center, The Second Hospital of Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
                [6] Center of Clinical Epidemiology & Biostatistics, The Second Hospital of Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
                [7] College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
                Author information
                http://orcid.org/0000-0002-1587-2114
                Article
                Article number: 5108
                DOI: 10.1007/s00259-020-05108-y
                PMCID: 8241642
                PMID: 33420912
                © The Author(s) 2021

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 29 May 2020
                Accepted: 8 November 2020
                Funding
                Funded by: National Natural Science Foundation of China (FundRef: http://dx.doi.org/10.13039/501100001809)
                Award ID: 81725009
                Categories
                Original Article
                Custom metadata
                © Springer-Verlag GmbH Germany, part of Springer Nature 2021

                Radiology & Imaging
                deep learning, epilepsy, pediatrics, positron emission tomography (PET), glucose metabolism
