      Segmentation of dental cone‐beam CT scans affected by metal artifacts using a mixed‐scale dense convolutional neural network


          Abstract

          Purpose

To obtain anatomical models, surgical guides, and implants for computer‐assisted surgery, accurate segmentation of bony structures in cone‐beam computed tomography (CBCT) scans is required. However, this segmentation step is often impeded by metal artifacts. This study therefore aimed to develop a mixed‐scale dense convolutional neural network (MS‐D network) for bone segmentation in CBCT scans affected by metal artifacts.

          Method

Training data were acquired from 20 dental CBCT scans affected by metal artifacts. An experienced medical engineer segmented the bony structures in all CBCT scans using global thresholding and manually removed all remaining noise and metal artifacts. The resulting gold standard segmentations were used to train an MS‐D network comprising 100 convolutional layers, which uses far fewer trainable parameters than alternative convolutional neural network (CNN) architectures. The bone segmentation performance of the MS‐D network was evaluated using a leave‐2‐out scheme and compared with a clinical snake evolution algorithm and two state‐of‐the‐art CNN architectures (U‐Net and ResNet). All segmented CBCT scans were subsequently converted into standard tessellation language (STL) models and geometrically compared with the gold standard.
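The global thresholding used to prepare the gold standard can be sketched as follows; this is a minimal illustration, not the authors' pipeline, and the threshold value (400, in Hounsfield-like units) and the toy volume are hypothetical. The engineer's manual removal of residual noise and metal artifacts is not reproduced here.

```python
import numpy as np

def threshold_bone(volume, threshold=400):
    """Global thresholding: label voxels at or above `threshold`
    as bone (1) and everything else as background (0).
    The threshold value is illustrative, not from the study."""
    return (volume >= threshold).astype(np.uint8)

# Toy 2x2x2 "scan": exactly two voxels (500 and 1200) reach the threshold.
scan = np.array([[[-1000, 300], [500, 1200]],
                 [[100, -200], [50, 399]]], dtype=np.int16)
mask = threshold_bone(scan)
print(int(mask.sum()))  # -> 2 voxels labeled as bone
```

In practice the binary mask would then be cleaned manually before serving as a training label.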

          Results

CBCT scans segmented using the MS‐D network, U‐Net, ResNet, and the snake evolution algorithm demonstrated mean Dice similarity coefficients of 0.87 ± 0.06, 0.87 ± 0.07, 0.86 ± 0.05, and 0.78 ± 0.07, respectively. The STL models acquired using the MS‐D network, U‐Net, ResNet, and the snake evolution algorithm demonstrated mean absolute deviations of 0.44 ± 0.13 mm, 0.43 ± 0.16 mm, 0.40 ± 0.12 mm, and 0.57 ± 0.22 mm, respectively. In contrast to the MS‐D network, the ResNet introduced wave‐like artifacts in the STL models, whereas the U‐Net incorrectly labeled background voxels as bone around the vertebrae in 4 of the 9 CBCT scans containing vertebrae.
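The Dice similarity coefficient reported above measures the voxel-wise overlap between a predicted segmentation and the gold standard, defined as 2|A ∩ B| / (|A| + |B|). A minimal sketch of the metric (not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred, gold):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|). Equals 1.0 for identical
    non-empty masks and 0.0 for disjoint ones."""
    pred = np.asarray(pred).astype(bool)
    gold = np.asarray(gold).astype(bool)
    intersection = np.logical_and(pred, gold).sum()
    total = pred.sum() + gold.sum()
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * intersection / total if total > 0 else 1.0

a = np.array([1, 1, 0, 0])
b = np.array([1, 0, 1, 0])
print(dice_coefficient(a, a))  # -> 1.0
print(dice_coefficient(a, b))  # -> 0.5 (one shared voxel, 2+2 labeled)
```

A mean Dice of 0.87, as achieved by the MS‐D network, thus indicates substantial but imperfect overlap with the manually cleaned gold standard.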

          Conclusion

          The MS‐D network was able to accurately segment bony structures in CBCT scans affected by metal artifacts.


                Author and article information

                Contributors
                j.minnema@vumc.nl
Journal
Med Phys (Medical Physics)
John Wiley and Sons Inc. (Hoboken)
Journal DOI: 10.1002/(ISSN)2473-4209
ISSN: 0094-2405 (print); 2473-4209 (electronic)
Published: 13 September 2019
Issue: November 2019
Volume 46, Issue 11 (doi: 10.1002/mp.v46.11), pages 5027-5035
Affiliations
[1] Department of Oral and Maxillofacial Surgery/Pathology, Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, Amsterdam Movement Sciences, 3D Innovationlab, 1081 HV Amsterdam, The Netherlands
[2] Centrum Wiskunde & Informatica (CWI), 1090 GB Amsterdam, The Netherlands
[3] Medical Technology, Amsterdam UMC, Vrije Universiteit Amsterdam, 3D Innovationlab, 1081 HV Amsterdam, The Netherlands
[4] Department of Oral and Maxillofacial Surgery, Division for Regenerative Orofacial Medicine, University Hospital Hamburg‐Eppendorf, 20246 Hamburg, Germany
[5] Fraunhofer Research Institution for Additive Manufacturing Technologies IAPT, Am Schleusengraben 13, 21029 Hamburg, Germany
                Author notes
[*] Author to whom correspondence should be addressed. Electronic mail: j.minnema@vumc.nl; Telephone: +31 681073639.

Article
Publisher ID: MP13793
doi: 10.1002/mp.13793
PMCID: 6900023
PMID: 31463937
                © 2019 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

This is an open access article under the terms of the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

History: 25 April 2019; 19 August 2019; 19 August 2019
Page count: Figures: 5; Tables: 2; Pages: 9; Words: 6076
Funding
Funded by: Netherlands Organisation for Scientific Research (Open Funder Registry: 10.13039/501100003246); Award ID: 639.073.506
                Categories
                Research Article
                QUANTITATIVE IMAGING AND IMAGE PROCESSING
                Research Articles

Keywords: cone‐beam computed tomography (CBCT), image segmentation, metal artifacts
