
      Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance

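The Jaccard distance named in the title is the complement of the intersection-over-union overlap between a predicted mask and the ground truth. As a rough illustration (not the paper's exact formulation), a soft, differentiable variant usable as a segmentation training loss can be sketched as:

```python
import numpy as np

def soft_jaccard_loss(y_true, y_pred, eps=1e-7):
    """Jaccard-distance loss: 1 - |A ∩ B| / |A ∪ B|.

    y_true is a binary ground-truth mask; y_pred holds predicted
    foreground probabilities in [0, 1]. Using products as a soft
    intersection keeps the loss differentiable for backpropagation.
    """
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - intersection
    return 1.0 - intersection / (union + eps)

mask = np.array([[0., 1.],
                 [1., 1.]])
print(soft_jaccard_loss(mask, mask))                 # ≈ 0 (perfect overlap)
print(soft_jaccard_loss(mask, np.zeros_like(mask)))  # 1.0 (no overlap)
```

Unlike pixel-wise cross-entropy, an overlap-based loss of this kind is insensitive to the large foreground/background imbalance typical of lesion masks.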


Most cited references (22)


          Final version of 2009 AJCC melanoma staging and classification.

The aim was to revise the staging system for cutaneous melanoma on the basis of data from an expanded American Joint Committee on Cancer (AJCC) Melanoma Staging Database. The melanoma staging recommendations were made on the basis of a multivariate analysis of 30,946 patients with stages I, II, and III melanoma and 7,972 patients with stage IV melanoma to revise and clarify TNM classifications and stage grouping criteria. Findings and new definitions include the following:

(1) In patients with localized melanoma, tumor thickness, mitotic rate (histologically defined as mitoses/mm²), and ulceration were the most dominant prognostic factors.
(2) Mitotic rate replaces level of invasion as a primary criterion for defining T1b melanomas.
(3) Among the 3,307 patients with regional metastases, the components that defined the N category were the number of metastatic nodes, tumor burden, and ulceration of the primary melanoma.
(4) For staging purposes, all patients with microscopic nodal metastases, regardless of extent of tumor burden, are classified as stage III; micrometastases detected by immunohistochemistry are specifically included.
(5) On the basis of a multivariate analysis of patients with distant metastases, the two dominant components in defining the M category continue to be the site of distant metastases (nonvisceral v lung v all other visceral metastatic sites) and an elevated serum lactate dehydrogenase level.

Using an evidence-based approach, revisions to the AJCC melanoma staging system have been made that reflect our improved understanding of this disease. These revisions were formally incorporated into the seventh edition (2009) of the AJCC Cancer Staging Manual and implemented by early 2010.

            Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks.

Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intraclass variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the existence of many artifacts in the images. To meet these challenges, we propose a novel method for melanoma recognition that leverages very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition. To take full advantage of very deep networks, we propose a set of schemes to ensure effective training and learning under limited training data. First, we apply residual learning to cope with the degradation and overfitting problems that arise as a network goes deeper. This technique ensures that our networks benefit from the performance gains achieved by increasing network depth. Then, we construct a fully convolutional residual network (FCRN) for accurate skin lesion segmentation, and further enhance its capability by incorporating a multi-scale contextual information integration scheme. Finally, we seamlessly integrate the proposed FCRN (for segmentation) and other very deep residual networks (for classification) into a two-stage framework. This framework enables the classification network to extract more representative and specific features from segmented results instead of whole dermoscopy images, further alleviating the insufficiency of training data. The proposed framework is extensively evaluated on the ISBI 2016 Skin Lesion Analysis Towards Melanoma Detection Challenge dataset. Experimental results demonstrate the significant performance gains of the proposed framework, which ranked first in classification (among 25 teams) and second in segmentation (among 28 teams). This study corroborates that very deep CNNs with effective training mechanisms can be employed to solve complicated medical image analysis tasks, even with limited training data.
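The residual learning this abstract relies on can be illustrated in a few lines. The toy numpy block below (a fully connected sketch, not the paper's convolutional architecture) shows the identity-shortcut idea: the layers learn only a correction to their input.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Identity-shortcut residual block: y = relu(relu(x·w1)·w2 + x).

    The stacked layers only need to learn the residual correction to x,
    which counters the degradation problem as networks grow deeper.
    """
    out = relu(x @ w1)    # first transformation
    out = out @ w2        # second transformation
    return relu(out + x)  # skip connection adds the input back

# With all-zero weights the block collapses to the identity (for x >= 0),
# so adding blocks cannot make the mapping worse than a shallower network.
x = np.array([1.0, 2.0, 3.0])
w = np.zeros((3, 3))
print(residual_block(x, w, w))  # → [1. 2. 3.]
```

This "no worse than identity" property is what lets networks of 50+ layers, as described above, keep training effectively.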

              Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning

Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and the revival of deep CNNs. CNNs enable learning data-driven, highly representative, layered hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques for successfully applying CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical image tasks. In this paper, we examine three important but previously understudied factors in applying deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters and vary in number of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet models (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection, with 85% sensitivity at 3 false positives per patient, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis, and insights can be extended to the design of high-performance CAD systems for other medical imaging tasks.
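The transfer-learning scheme described above amounts to keeping pre-trained feature layers fixed (or nearly so) and training only a small task-specific head on the scarce target data. A minimal numpy sketch, with a random matrix standing in for the pre-trained extractor (all names here are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pre-trained feature extractor: W_feat is
# fixed ("frozen") and never updated during fine-tuning.
W_feat = rng.normal(size=(8, 4))

def features(x):
    return np.maximum(x @ W_feat, 0.0)  # frozen ReLU features

# Small target-task dataset: only the new task-specific head is trained.
X = rng.normal(size=(64, 8))
y = (X[:, 0] > 0).astype(float)  # toy binary labels
w_head = np.zeros(4)

for _ in range(200):  # gradient descent on a logistic-regression head
    p = 1.0 / (1.0 + np.exp(-(features(X) @ w_head)))
    w_head -= 0.1 * features(X).T @ (p - y) / len(y)
    # W_feat is deliberately left untouched: that is the "freeze".
```

Fine-tuning in the full sense would additionally update the later feature layers with a small learning rate; freezing everything but the head is the limiting case that needs the least target data.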

                Author and article information

Journal: IEEE Transactions on Medical Imaging (IEEE Trans. Med. Imaging)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
ISSN: 0278-0062 (print); 1558-254X (electronic)
Publication date: September 2017
Volume 36, Issue 9, pp. 1876-1886
DOI: 10.1109/TMI.2017.2695227
PMID: 28436853
© 2017
