Automated segmentation and volume prediction in pediatric Wilms’ tumor CT using nnU-Net

Research article


          Abstract

          Background

          Radiologic volumetric evaluation of Wilms’ tumor (WT) is an important indicator to guide treatment decisions. However, because the tumors are heterogeneous, radiologists’ delineations can differ substantially, which may lead to misdiagnosis and suboptimal treatment. The aim of this study was to explore whether CT-based outlining of WT lesions can be automated using deep learning.

          Methods

          We included CT intravenous-phase images of 105 patients with WT, with lesions outlined independently (double-blinded) by two radiologists. We then trained an automatic segmentation model using nnU-Net. The Dice similarity coefficient (DSC) and the 95th percentile Hausdorff distance (HD95) were used to assess segmentation performance. Next, we optimized the automatic segmentation results based on the ratio of the three-dimensional diameters of the lesion to improve the volumetric assessment.
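          To make the two evaluation metrics concrete, here is a minimal sketch of how DSC and HD95 can be computed for a pair of binary 3D masks. It uses NumPy and SciPy distance transforms rather than the authors' actual evaluation code, assumes both masks are non-empty, and the voxel-spacing argument is an assumption about how distances are converted to millimetres.

```python
import numpy as np
from scipy import ndimage


def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0


def hd95(pred: np.ndarray, ref: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance (in mm) between mask surfaces.

    Assumes both masks are non-empty; `spacing` is the voxel size in mm per axis.
    """
    def surface(mask: np.ndarray) -> np.ndarray:
        # Boundary voxels: mask minus its erosion.
        return np.logical_and(mask, ~ndimage.binary_erosion(mask))

    pred, ref = pred.astype(bool), ref.astype(bool)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dt_ref = ndimage.distance_transform_edt(~surface(ref), sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~surface(pred), sampling=spacing)
    d_pred_to_ref = dt_ref[surface(pred)]
    d_ref_to_pred = dt_pred[surface(ref)]
    return float(np.percentile(np.hstack([d_pred_to_ref, d_ref_to_pred]), 95))
```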

          Results

          The DSC and HD95 were 0.83 ± 0.22 and 10.50 ± 8.98 mm, respectively. The absolute and percentage differences in tumor volume were 72.27 ± 134.84 cm³ and 21.08% ± 30.46%. After optimization with our method, these decreased to 40.22 ± 96.06 cm³ and 10.16% ± 9.70%.

          Conclusion

          We introduce a novel method that improves the accuracy of WT volume prediction by integrating AI-automated outlining with 3D tumor diameters. This approach is more accurate than using the AI output alone and has the potential to improve the clinical evaluation of pediatric patients with WT. By combining AI output with clinical measurements, the method becomes more interpretable and offers promising applications beyond Wilms’ tumor, extending to other pediatric diseases.
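          The abstract does not spell out the exact correction rule, but the two volume estimates being combined can be written down directly: the voxel-count volume implied by the automated mask and the classic ellipsoid approximation from three orthogonal tumor diameters. The sketch below illustrates both under those assumptions; it is not the authors' published procedure.

```python
import numpy as np


def mask_volume_cm3(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Volume implied by the automated segmentation: voxel count x voxel volume."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0  # mm^3 -> cm^3


def ellipsoid_volume_cm3(d1_mm: float, d2_mm: float, d3_mm: float) -> float:
    """Ellipsoid approximation from three orthogonal diameters: pi/6 * d1 * d2 * d3."""
    return (np.pi / 6.0) * d1_mm * d2_mm * d3_mm / 1000.0


def percent_volume_error(pred_cm3: float, ref_cm3: float) -> float:
    """Percentage difference relative to the reference (manually outlined) volume."""
    return 100.0 * abs(pred_cm3 - ref_cm3) / ref_cm3
```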

          Most cited references (24)


          Computational Radiomics System to Decode the Radiographic Phenotype

          Radiomics aims to quantify phenotypic characteristics on medical imaging through the use of automated algorithms. Radiomic artificial intelligence (AI) technology, either based on engineered hard-coded algorithms or deep learning methods, can be used to develop non-invasive imaging-based biomarkers. However, lack of standardized algorithm definitions and image processing severely hampers reproducibility and comparability of results. To address this issue, we developed PyRadiomics, a flexible open-source platform capable of extracting a large panel of engineered features from medical images. PyRadiomics is implemented in Python and can be used standalone or using 3D-Slicer. Here, we discuss the workflow and architecture of PyRadiomics and demonstrate its application in characterizing lung lesions. Source code, documentation, and examples are publicly available at www.radiomics.io. With this platform, we aim to establish a reference standard for radiomic analyses, provide a tested and maintained resource, and to grow the community of radiomic developers addressing critical needs in cancer research.
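            For context, PyRadiomics exposes a small Python API for feature extraction. The sketch below shows a minimal call; the file names are hypothetical placeholders and default extractor settings are assumed.

```python
# Minimal PyRadiomics sketch: extract engineered features for one image/mask pair.
# "ct.nrrd" and "tumor_mask.nrrd" are placeholder file names, not from the article.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()  # default settings
features = extractor.execute("ct.nrrd", "tumor_mask.nrrd")

for name, value in features.items():
    if name.startswith("original_"):  # skip the diagnostics entries
        print(name, value)
```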

            nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation

            Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.
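              As a concrete illustration of the out-of-the-box workflow this paragraph describes, the sketch below drives the nnU-Net v2 command-line entry points from Python. The dataset ID and folder names are placeholders, and the use of the v2 CLI is an assumption; the study itself may have used a different nnU-Net version or invocation.

```python
import subprocess

# Hypothetical dataset ID; nnU-Net expects data in its Dataset<ID>_<Name> layout
# and the nnUNetv2_* commands on PATH (both assumptions for this sketch).
dataset_id = "501"

# 1. Fingerprint the dataset and let nnU-Net self-configure preprocessing,
#    network architecture, and training settings.
subprocess.run(["nnUNetv2_plan_and_preprocess", "-d", dataset_id,
                "--verify_dataset_integrity"], check=True)

# 2. Train the full-resolution 3D configuration (fold 0 of the default 5-fold CV).
subprocess.run(["nnUNetv2_train", dataset_id, "3d_fullres", "0"], check=True)

# 3. Predict segmentations for unseen CT volumes (placeholder folder names).
subprocess.run(["nnUNetv2_predict", "-i", "imagesTs", "-o", "predictions",
                "-d", dataset_id, "-c", "3d_fullres"], check=True)
```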

              Deep learning in cancer diagnosis, prognosis and treatment selection

              Deep learning is a subdiscipline of artificial intelligence that uses a machine learning technique called artificial neural networks to extract patterns and make predictions from large data sets. The increasing adoption of deep learning across healthcare domains together with the availability of highly characterised cancer datasets has accelerated research into the utility of deep learning in the analysis of the complex biology of cancer. While early results are promising, this is a rapidly evolving field with new knowledge emerging in both cancer biology and deep learning. In this review, we provide an overview of emerging deep learning techniques and how they are being applied to oncology. We focus on the deep learning applications for omics data types, including genomic, methylation and transcriptomic data, as well as histopathology-based genomic inference, and provide perspectives on how the different data types can be integrated to develop decision support tools. We provide specific examples of how deep learning may be applied in cancer diagnosis, prognosis and treatment management. We also assess the current limitations and challenges for the application of deep learning in precision oncology, including the lack of phenotypically rich data and the need for more explainable deep learning models. Finally, we conclude with a discussion of how current obstacles can be overcome to enable future clinical utilisation of deep learning.

                Author and article information

                Contributors
                rudra@zju.edu.cn
                hongxizhang11@zju.edu.cn
                Journal
                BMC Pediatrics (BMC Pediatr)
                BioMed Central (London)
                ISSN: 1471-2431
                Published: 9 May 2024
                Volume: 24
                Article number: 321
                Affiliations
                [1] Department of Radiology, The Children’s Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, No. 3333 Binsheng Rd, Hangzhou, China (GRID: grid.13402.34; ISNI: 0000 0004 1759 700X)
                [2] Wenzhou Medical University (https://ror.org/00rd5t069), Wenzhou, China
                Article
                DOI: 10.1186/s12887-024-04775-2
                PMCID: 11080230
                PMID: 38724944
                © The Author(s) 2024

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

                History
                Received: 25 October 2023
                Accepted: 18 April 2024
                Funding
                Funded by: National Key Research and Development Program of China (FundRef: http://dx.doi.org/10.13039/501100012166); Award ID: 2023YFC2706100
                Funded by: National Natural Science Foundation of China (FundRef: http://dx.doi.org/10.13039/501100001809); Award ID: 82373971
                Funded by: Natural Science Foundation of Zhejiang Province (FundRef: http://dx.doi.org/10.13039/501100004731); Award ID: LY24H180002
                Categories
                Research
                Custom metadata
                © BioMed Central Ltd., part of Springer Nature 2024

                Pediatrics
                CT, Wilms’ tumor, deep learning
