
      Fully automatic multi-organ segmentation for head and neck cancer radiotherapy using shape representation model constrained fully convolutional neural networks

      Medical Physics
      Wiley


          Abstract

Purpose

Intensity modulated radiation therapy (IMRT) is commonly employed for treating head and neck (H&N) cancer with uniform tumor dose and conformal critical organ sparing. Accurate delineation of organs-at-risk (OARs) on H&N CT images is therefore essential to treatment quality. Manual contouring, used in current clinical practice, is tedious, time-consuming, and can produce inconsistent results. Existing automated segmentation methods are challenged by substantial inter-patient anatomical variation and low CT soft-tissue contrast. To overcome these challenges, we developed a novel automated H&N OAR segmentation method that combines a fully convolutional neural network (FCNN) with a shape representation model (SRM).

Methods

Based on manually segmented H&N CT, the SRM and FCNN were trained in two steps: 1) the SRM learned the latent shape representation of H&N OARs from the training dataset; 2) the pre-trained SRM, with fixed parameters, was used to constrain the FCNN training. The combined segmentation network was then used to delineate nine OARs, including the brainstem, optic chiasm, mandible, optic nerves, parotid glands, and submandibular glands, on unseen H&N CT images. Twenty-two and ten H&N CT scans provided by the Public Domain Database for Computational Anatomy (PDDCA) were used for training and validation, respectively. Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), average surface distance (ASD), and 95% maximum surface distance (95%SD) were calculated to quantitatively evaluate the segmentation accuracy of the proposed method. The proposed method was compared with an active appearance model that won the 2015 MICCAI H&N Segmentation Grand Challenge on the same dataset, as well as an atlas-based method and a deep learning method evaluated on different patient datasets.

Results

Average DSC values of 0.870 (brainstem), 0.583 (optic chiasm), 0.937 (mandible), 0.653 (left optic nerve), 0.689 (right optic nerve), 0.835 (left parotid), 0.832 (right parotid), 0.755 (left submandibular gland), and 0.813 (right submandibular gland) were achieved. The segmentation results are consistently superior to those of atlas-based and statistical-shape-based methods as well as a patch-wise convolutional neural network method. Once the networks are trained offline, the average time to segment all nine OARs in an unseen CT scan is 9.5 seconds.

Conclusion

Experiments on clinical datasets of H&N patients demonstrated the effectiveness of the proposed deep neural network method for multi-organ segmentation on volumetric CT scans. The accuracy and robustness of the segmentation were further increased by incorporating shape priors using the SRM. Compared with state-of-the-art methods, the proposed method showed competitive performance and took less time to segment multiple organs.
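The overlap metrics reported in the abstract have standard definitions. As a minimal sketch (not the authors' evaluation code), assuming the binary masks are given as sets of voxel indices labeled as the organ:

```python
def overlap_metrics(pred, truth):
    """Compute DSC, PPV, and SEN for two binary segmentation masks.

    pred, truth: sets of voxel indices labeled as the organ.
    """
    tp = len(pred & truth)   # true positives: voxels in both masks
    fp = len(pred - truth)   # false positives: predicted but not in ground truth
    fn = len(truth - pred)   # false negatives: ground-truth voxels that were missed
    dsc = 2 * tp / (len(pred) + len(truth))  # Dice similarity coefficient
    ppv = tp / (tp + fp)                     # positive predictive value (precision)
    sen = tp / (tp + fn)                     # sensitivity (recall)
    return dsc, ppv, sen

# Toy example: four predicted voxels, four ground-truth voxels, three overlap
dsc, ppv, sen = overlap_metrics({1, 2, 3, 4}, {2, 3, 4, 5})
# dsc = 2*3/8 = 0.75, ppv = 3/4 = 0.75, sen = 3/4 = 0.75
```

The surface-distance metrics (ASD, 95%SD) additionally require extracting mask boundaries and computing point-to-surface distances, which is omitted here.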

          Related collections

Most cited references (17)


          Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks.

Accurate segmentation of organs-at-risk (OARs) is the key step in efficient planning of radiation therapy for head and neck (HaN) cancer treatment. In this work, we proposed the first deep learning-based algorithm for segmentation of OARs in HaN CT images and compared its performance against state-of-the-art automated segmentation algorithms, commercial software, and inter-observer variability.

            Deep 3D Convolutional Encoder Networks With Shortcuts for Multiscale Feature Integration Applied to Multiple Sclerosis Lesion Segmentation.

            We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that consists of two interconnected pathways, a convolutional pathway, which learns increasingly more abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. The joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We have evaluated our method on two publicly available data sets (MICCAI 2008 and ISBI 2015 challenges) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we have compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes.
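The shortcut idea described above can be illustrated schematically. A minimal sketch (not the paper's implementation), treating feature maps as plain lists of numbers, downsampling/upsampling as the two pathways, and a shortcut as element-wise addition of an encoder feature into the matching decoder stage:

```python
def downsample(x):
    """Toy encoder stage: halve resolution by averaging adjacent pairs."""
    return [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]

def upsample(x):
    """Toy decoder stage: double resolution by repeating each value."""
    return [v for v in x for _ in range(2)]

def encode_decode_with_shortcut(x):
    """Two interconnected pathways: a downsampling (convolutional) pathway
    producing a coarser, higher-level feature, and an upsampling
    (deconvolutional) pathway predicting at full resolution. The shortcut
    adds the high-resolution encoder feature back into the decoder output,
    so fine spatial detail is not lost."""
    skip = x                # low-level, high-resolution feature
    deep = downsample(x)    # higher-level, coarser feature
    up = upsample(deep)     # decoder restores full resolution
    # Shortcut: integrate low- and high-level features element-wise
    return [s + u for s, u in zip(skip, up)]

out = encode_decode_with_shortcut([1.0, 3.0, 2.0, 4.0])
# deep = [2.0, 3.0]; up = [2.0, 2.0, 3.0, 3.0]; out = [3.0, 5.0, 5.0, 7.0]
```

In the actual network the stages are learned 3D convolutions and deconvolutions rather than fixed averaging, but the data flow is the same.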

              3D deeply supervised network for automated segmentation of volumetric medical images.

              While deep convolutional neural networks (CNNs) have achieved remarkable success in 2D medical image segmentation, it is still a difficult task for CNNs to segment important organs or structures from 3D medical images owing to several mutually affected challenges, including the complicated anatomical environments in volumetric images, optimization difficulties of 3D networks and inadequacy of training samples. In this paper, we present a novel and efficient 3D fully convolutional network equipped with a 3D deep supervision mechanism to comprehensively address these challenges; we call it 3D DSN. Our proposed 3D DSN is capable of conducting volume-to-volume learning and inference, which can eliminate redundant computations and alleviate the risk of over-fitting on limited training data. More importantly, the 3D deep supervision mechanism can effectively cope with the optimization problem of gradients vanishing or exploding when training a 3D deep model, accelerating the convergence speed and simultaneously improving the discrimination capability. Such a mechanism is developed by deriving an objective function that directly guides the training of both lower and upper layers in the network, so that the adverse effects of unstable gradient changes can be counteracted during the training procedure. We also employ a fully connected conditional random field model as a post-processing step to refine the segmentation results. We have extensively validated the proposed 3D DSN on two typical yet challenging volumetric medical image segmentation tasks: (i) liver segmentation from 3D CT scans and (ii) whole heart and great vessels segmentation from 3D MR images, by participating two grand challenges held in conjunction with MICCAI. We have achieved competitive segmentation results to state-of-the-art approaches in both challenges with a much faster speed, corroborating the effectiveness of our proposed 3D DSN.
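The deep supervision mechanism described above amounts to adding auxiliary losses, computed from intermediate-layer predictions, to the main objective so that gradients reach lower layers directly. A minimal sketch with hypothetical weights (not the 3D DSN code):

```python
def deeply_supervised_loss(main_loss, aux_losses, weights):
    """Total objective = main segmentation loss plus weighted auxiliary
    losses from intermediate (lower-layer) predictions. The auxiliary
    terms supply gradient signal directly to lower layers, counteracting
    vanishing/exploding gradients when training a deep 3D network."""
    assert len(aux_losses) == len(weights)
    return main_loss + sum(w * l for w, l in zip(weights, aux_losses))

# Hypothetical values: one main loss and two auxiliary branches,
# with the deeper auxiliary branch weighted more lightly.
total = deeply_supervised_loss(0.8, [1.2, 1.5], [0.4, 0.2])
# total = 0.8 + 0.4*1.2 + 0.2*1.5, i.e. about 1.58
```

The branch weights are typically decayed over training so that the final objective is dominated by the main prediction.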

                Author and article information

Journal
Medical Physics (Med. Phys.)
Wiley
ISSN: 0094-2405
Published: October 2018 (online September 19, 2018)
Volume: 45
Issue: 10
Pages: 4558-4567
                Affiliations
                [1 ]Key Lab of Intelligent Perception and Image Understanding of Ministry of Education; Xidian University; Xi'an Shaanxi 710071 China
                [2 ]Department of Radiation Oncology; University of California-Los Angeles; Los Angeles CA 90095 USA
Article
DOI: 10.1002/mp.13147
PMCID: PMC6181786
PMID: 30136285
                © 2018

License and terms:
http://doi.wiley.com/10.1002/tdm_license_1.1
http://onlinelibrary.wiley.com/termsAndConditions#am
http://onlinelibrary.wiley.com/termsAndConditions#vor
