
      Deep learning enables automated MRI-based estimation of uterine volume also in patients with uterine fibroids undergoing high-intensity focused ultrasound therapy

      Research article


          Abstract

          Background

          High-intensity focused ultrasound (HIFU) is used for the treatment of symptomatic leiomyomas. We aimed to automate uterine volumetry with a 3D deep learning approach in order to track volume changes after therapy.

          Methods

          A 3D nnU-Net model, both in its default configuration and in a modified version including convolutional block attention modules (CBAMs), was developed on 3D T2-weighted MRI scans. Uterine segmentation was performed in 44 patients with routine pelvic MRI (standard group) and 56 patients with uterine fibroids undergoing ultrasound-guided HIFU therapy (HIFU group). For the HIFU group, preHIFU scans (n = 56), postHIFU imaging acquired at most one day after HIFU (n = 54), and the last available follow-up examination (n = 53; 420 ± 377 days after HIFU) were included. Training was performed on 80% of the data with fivefold cross-validation; the remaining data were used as a hold-out test set. Ground truth was generated by a board-certified radiologist and a radiology resident. To assess inter-reader agreement, all preHIFU examinations were segmented independently by both readers.
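          The volumetry step itself is simple once a binary segmentation mask exists: uterine volume is the number of foreground voxels times the volume of one voxel. A minimal sketch (NumPy; the mask shape and voxel spacing below are illustrative, not from the study):

```python
import numpy as np

def mask_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a binary segmentation mask in millilitres:
    foreground-voxel count * volume of one voxel (mm^3 -> mL)."""
    voxel_mm3 = float(np.prod(spacing_mm))          # volume of a single voxel in mm^3
    n_voxels = int(mask.astype(bool).sum())          # number of segmented voxels
    return n_voxels * voxel_mm3 / 1000.0             # 1 mL = 1000 mm^3

# Illustrative example: a 10x10x10 block of 1-mm isotropic voxels is exactly 1 mL
mask = np.zeros((32, 32, 32), dtype=np.uint8)
mask[:10, :10, :10] = 1
print(mask_volume_ml(mask, (1.0, 1.0, 1.0)))  # → 1.0
```

          In practice the spacing would be read from the MRI header (e.g. via the image I/O library used in the pipeline) rather than hard-coded.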

          Results

          High segmentation performance was already observed for the default 3D nnU-Net (mean Dice score = 0.95 ± 0.05) on the validation sets. Since the CBAM nnU-Net showed no significant benefit, the less complex default model was applied to the hold-out test set, resulting in accurate uterus segmentation (Dice scores: standard group 0.92 ± 0.07; HIFU group 0.96 ± 0.02), comparable to the agreement between the two readers.
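          The Dice score reported here is the standard overlap metric 2·|A ∩ B| / (|A| + |B|) between predicted and ground-truth masks. A minimal sketch (NumPy; the 1D arrays are illustrative, not the study's data):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Illustrative example: 2 overlapping voxels out of 3 + 3 foreground voxels
a = np.array([0, 1, 1, 1, 0])
b = np.array([0, 0, 1, 1, 1])
print(round(dice_score(a, b), 3))  # → 0.667
```

          The same function applies unchanged to 3D volumes, since the sums run over all array elements.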

          Conclusions

          This study presents a method for automatic uterus segmentation that allows fast and consistent assessment of uterine volume. It could therefore be used in the clinical setting to objectively assess therapeutic response to HIFU therapy.

          Supplementary information

          The online version contains supplementary material available at 10.1186/s13244-022-01342-0.

          Key points

          • Deep learning methods enable accurate segmentation of the uterus in T2-weighted MRI.

          • Automatic uterine volumetry is possible in patients with and without leiomyomas.

          • Automated volumetry enables an objective assessment of response to high-intensity focused ultrasound therapy.


          Most cited references (36)


          3D Slicer as an image computing platform for the Quantitative Imaging Network.

          Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open-source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. 
To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer.

            seaborn: statistical data visualization


              nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation

              Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.

                Author and article information

                Contributors
                sprinkart@uni-bonn.de
                Journal
                Insights into Imaging (Insights Imaging)
                Springer Vienna (Vienna)
                ISSN: 1869-4101
                Published online: 5 January 2023
                Issue date: December 2023
                Volume: 14
                Issue: 1
                Affiliations
                [1] Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
                [2] Department of Radiotherapy and Radiation Oncology, University Hospital Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
                [3] Department of Neuroradiology, University Hospital Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
                [4] Department of Gynaecology and Gynaecological Oncology, University Hospital Bonn, Bonn, Germany
                [5] Department of Nuclear Medicine, University Hospital Bonn, Bonn, Germany
                Author information
                http://orcid.org/0000-0002-1435-9562
                Article
                Article number: 1342
                DOI: 10.1186/s13244-022-01342-0
                PMCID: 9813298
                PMID: 36600120
                © The Author(s) 2022

                Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 2 September 2022
                Accepted: 2 December 2022
                Funding
                Funded by: Universitätsklinikum Bonn (8930)
                Categories
                Original Article

                Radiology & Imaging
                Keywords: deep learning, magnetic resonance imaging, uterus, leiomyoma
