      Low-Grade Glioma Segmentation Based on CNN with Fully Connected CRF

          Abstract

This work proposes a novel automatic three-dimensional (3D) magnetic resonance imaging (MRI) segmentation method for glioma, the most common and aggressive brain tumor, intended for wide use in clinical diagnosis. The method combines a multipathway convolutional neural network (CNN) with a fully connected conditional random field (CRF). First, 3D information is introduced into the CNN, which enables more accurate recognition of gliomas with low contrast. Then, a fully connected CRF is applied as a postprocessing step to delineate the glioma boundary more precisely. The method was applied to T2-FLAIR MRI images of 160 low-grade glioma patients. With 59 cases used for training and manual segmentation as the ground truth, our method achieved a Dice similarity coefficient (DSC) of 0.85 on the test set of 101 MRI images. This result surpasses that of another state-of-the-art CNN method, which achieved a DSC of 0.76 on the same dataset, demonstrating that our method produces better segmentations of low-grade gliomas.
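The abstract gives no implementation details, but a fully connected CRF refinement of a CNN's softmax output is commonly run per slice with the open-source pydensecrf package. The sketch below is a minimal illustration under that assumption: the kernel widths and compatibility weights are placeholder values, not the authors' parameters, and crf_refine, image, and probs are names introduced here for illustration.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, probs, n_iters=5):
    """Refine CNN softmax output `probs` (2, H, W) for one slice with a
    fully connected CRF; `image` is the (H, W, 3) uint8 intensity image."""
    h, w = image.shape[:2]
    d = dcrf.DenseCRF2D(w, h, 2)                 # two labels: background / tumor
    d.setUnaryEnergy(unary_from_softmax(probs))  # unary energy = -log(softmax)
    d.addPairwiseGaussian(sxy=3, compat=3)       # smoothness kernel (placeholder params)
    d.addPairwiseBilateral(sxy=60, srgb=10,      # appearance kernel (placeholder params)
                           rgbim=np.ascontiguousarray(image), compat=10)
    q = d.inference(n_iters)                     # approximate mean-field inference
    return np.argmax(np.array(q), axis=0).reshape(h, w).astype(np.uint8)
```

The appearance kernel encourages voxels with similar intensities to share a label, which is what sharpens the tumor boundary relative to the CNN's coarser prediction.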

Most cited references (20)

          The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).

In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients, manually annotated by up to four raters, and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked among the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
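For concreteness, a plain per-voxel majority vote over candidate label maps, together with the Dice score used throughout the benchmark, can be written in a few lines of NumPy. This is a simplified sketch (the benchmark's fusion is hierarchical, and these function names are introduced here), not the benchmark's implementation:

```python
import numpy as np

def majority_vote(segmentations):
    """Fuse binary label maps (equally shaped uint8 arrays) by per-voxel
    majority vote; ties (an even split) default to background."""
    stack = np.stack(segmentations, axis=0)
    return (stack.mean(axis=0) > 0.5).astype(np.uint8)

def dice(pred, truth):
    """Dice similarity coefficient: 2*|A intersect B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())
```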

            BrainSuite: an automated cortical surface identification tool.

            We describe a new magnetic resonance (MR) image analysis tool that produces cortical surface representations with spherical topology from MR images of the human brain. The tool provides a sequence of low-level operations in a single package that can produce accurate brain segmentations in clinical time. The tools include skull and scalp removal, image nonuniformity compensation, voxel-based tissue classification, topological correction, rendering, and editing functions. The collection of tools is designed to require minimal user interaction to produce cortical representations. In this paper we describe the theory of each stage of the cortical surface identification process. We then present classification validation results using real and phantom data. We also present a study of interoperator variability.
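The sequence of operations listed in the abstract can be made explicit as a pipeline. The stub functions below are hypothetical placeholders standing in for BrainSuite's tools (its real interface is a set of standalone modules, not this API); they serve only to illustrate the stage ordering:

```python
import numpy as np

# Hypothetical stubs mirroring the stage order in the abstract; each one
# stands in for a BrainSuite tool and is NOT its real API.
def strip_skull_and_scalp(vol):   return vol       # skull and scalp removal
def correct_nonuniformity(vol):   return vol       # image nonuniformity compensation
def classify_tissue(vol):         return vol       # voxel-based tissue classification
def extract_surface(labels):      return labels    # cortical surface extraction
def correct_topology(surface):    return surface   # enforce spherical topology

def cortical_surface_pipeline(mri_volume: np.ndarray):
    """Run the low-level operations in the order the abstract describes."""
    brain = strip_skull_and_scalp(mri_volume)
    brain = correct_nonuniformity(brain)
    labels = classify_tissue(brain)
    surface = extract_surface(labels)
    return correct_topology(surface)
```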

              Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning

Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and the revival of deep CNNs. CNNs enable learning data-driven, highly representative, layered hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully apply CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical image tasks. In this paper, we exploit three important but previously understudied factors in applying deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters and vary in their numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection, with 85% sensitivity at three false positives per patient, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis, and valuable insights can be extended to the design of high-performance CAD systems for other medical imaging tasks.
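The transfer-learning recipe the abstract evaluates (fine-tuning an ImageNet-pretrained CNN on a medical task) is commonly realized as below. This sketch uses PyTorch/torchvision purely as an illustrative framework, which is an assumption: the paper studies architectures such as AlexNet and GoogLeNet, not this exact model or toolkit.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Load an ImageNet-pretrained backbone and replace its classification head
# for a binary task (e.g., lymph node vs. non-lymph node candidate).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

# Fine-tune all layers with a smaller learning rate than training from scratch.
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```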

                Author and article information

Journal
Journal of Healthcare Engineering (J Healthc Eng)
Publisher: Hindawi
ISSN: 2040-2295 (print); 2040-2309 (electronic)
Published: 13 June 2017
Volume: 2017
Article ID: 9283480
                Affiliations
1. Department of Electronic Engineering, Fudan University, Shanghai, China
2. Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, China
3. Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
                Author notes
*Yuanyuan Wang: yywang@fudan.edu.cn

                Academic Editor: Ashish Khare

                Author information
                http://orcid.org/0000-0003-1984-1136
                http://orcid.org/0000-0002-0654-6034
                Article
DOI: 10.1155/2017/9283480
PMCID: PMC5485483
PMID: 29065666
                Copyright © 2017 Zeju Li et al.

                This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

                History
Received: 4 November 2016
Revised: 21 February 2017
Accepted: 20 March 2017
                Funding
                Funded by: National Natural Science Foundation of China
                Award ID: 11474071
                Funded by: National Basic Research Program of China
                Award ID: 2015CB755500
                Categories
                Research Article
