
      Classification of Skin Disease Using Deep Learning Neural Networks with MobileNet V2 and LSTM

      research-article


          Abstract

          Deep learning models are efficient at learning the features that help in understanding complex patterns precisely. This study proposes a computerized process for classifying skin disease using deep-learning-based MobileNet V2 and Long Short-Term Memory (LSTM). MobileNet V2 achieves good accuracy while remaining light enough to run on low-power computational devices, and the LSTM component maintains stateful information for precise predictions. A grey-level co-occurrence matrix is used to assess the progression of diseased growth. Performance was compared against other state-of-the-art models, including Fine-Tuned Neural Networks (FTNN), a Convolutional Neural Network (CNN), the Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture extended with a few changes. On the HAM10000 dataset, the proposed method outperformed the other methods with more than 85% accuracy, and it recognizes the affected region faster, with almost half the computation of the conventional MobileNet model. Furthermore, a mobile application was designed for instant and proper action: it helps patients and dermatologists identify the type of disease from an image of the affected region at an early stage of the skin disease. These findings suggest that the proposed system can help general practitioners diagnose skin conditions efficiently and effectively, thereby reducing further complications and morbidity.
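The grey-level co-occurrence matrix (GLCM) mentioned in the abstract is a classical texture descriptor: it tabulates how often pairs of grey levels co-occur at a fixed pixel offset, and statistics such as contrast summarize the texture. A minimal NumPy sketch of the idea (illustrative only, not the authors' implementation; in practice a library routine such as skimage.feature.graycomatrix would typically be used):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one pixel offset (dx, dy).

    Counts how often grey level i occurs next to grey level j,
    then normalizes the counts to probabilities.
    """
    m = np.zeros((levels, levels), dtype=np.float64)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum over i, j of (i - j)^2 * p[i, j]."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# A uniform patch has zero contrast; an alternating-stripe patch does not.
flat = np.zeros((4, 4), dtype=int)
stripes = np.tile([0, 7], (4, 2))
print(contrast(glcm(flat)), contrast(glcm(stripes)))
```

A uniform region yields zero contrast while alternating stripes yield a large value, which is the kind of texture change such a matrix can use to track diseased growth over time.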


          Most cited references (96)


          Very Deep Convolutional Networks for Large-Scale Image Recognition

          (2014)
          In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
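The case for small filters rests on simple arithmetic: each stacked stride-1 3x3 layer grows the effective receptive field by 2 pixels, so three such layers see the same 7x7 window as one 7x7 layer while using fewer weights and adding more non-linearities. A quick check (a sketch assuming C input and output channels and ignoring biases):

```python
def receptive_field(n_layers, kernel=3, stride=1):
    """Effective receptive field of n stacked stride-1 conv layers."""
    rf = 1
    for _ in range(n_layers):
        rf += (kernel - 1) * stride
    return rf

def conv_params(n_layers, channels, kernel=3):
    """Weight count of n stacked kernel x kernel layers, C channels in and out."""
    return n_layers * channels * channels * kernel * kernel

# Three 3x3 layers cover the same 7x7 window as a single 7x7 layer,
# with about 45% fewer weights (for C = 64) and two extra non-linearities.
C = 64
print(receptive_field(3), receptive_field(1, kernel=7))   # 7 7
print(conv_params(3, C), conv_params(1, C, kernel=7))     # 110592 200704
```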

            The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions

            Training of neural networks for automated diagnosis of pigmented skin lesions is hampered by the small size and lack of diversity of available datasets of dermatoscopic images. We tackle this problem by releasing the HAM10000 (“Human Against Machine with 10000 training images”) dataset. We collected dermatoscopic images from different populations acquired and stored by different modalities. Given this diversity we had to apply different acquisition and cleaning methods and developed semi-automatic workflows utilizing specifically trained neural networks. The final dataset consists of 10015 dermatoscopic images which are released as a training set for academic machine learning purposes and are publicly available through the ISIC archive. This benchmark dataset can be used for machine learning and for comparisons with human experts. Cases include a representative collection of all important diagnostic categories in the realm of pigmented lesions. More than 50% of lesions have been confirmed by pathology, while the ground truth for the rest of the cases was either follow-up, expert consensus, or confirmation by in-vivo confocal microscopy.
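For orientation, the HAM10000 metadata labels each image with one of seven diagnostic abbreviations: akiec, bcc, bkl, df, mel, nv, and vasc. A minimal label-encoding helper for a 7-way classifier over these categories (illustrative; not part of the dataset release):

```python
# The seven diagnostic categories in HAM10000, as abbreviated
# in the dataset's metadata, in alphabetical order.
CLASSES = ["akiec", "bcc", "bkl", "df", "mel", "nv", "vasc"]

def one_hot(dx):
    """One-hot encode a diagnosis abbreviation for a 7-way classifier."""
    vec = [0] * len(CLASSES)
    vec[CLASSES.index(dx)] = 1
    return vec

print(one_hot("mel"))  # [0, 0, 0, 0, 1, 0, 0]
```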

              Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks.

              Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intraclass variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the existence of many artifacts in the image. In order to meet these challenges, we propose a novel method for melanoma recognition by leveraging very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition. To take full advantage of very deep networks, we propose a set of schemes to ensure effective training and learning under limited training data. First, we apply the residual learning to cope with the degradation and overfitting problems when a network goes deeper. This technique can ensure that our networks benefit from the performance gains achieved by increasing network depth. Then, we construct a fully convolutional residual network (FCRN) for accurate skin lesion segmentation, and further enhance its capability by incorporating a multi-scale contextual information integration scheme. Finally, we seamlessly integrate the proposed FCRN (for segmentation) and other very deep residual networks (for classification) to form a two-stage framework. This framework enables the classification network to extract more representative and specific features based on segmented results instead of the whole dermoscopy images, further alleviating the insufficiency of training data. The proposed framework is extensively evaluated on ISBI 2016 Skin Lesion Analysis Towards Melanoma Detection Challenge dataset. 
Experimental results demonstrate the significant performance gains of the proposed framework, ranking the first in classification and the second in segmentation among 25 teams and 28 teams, respectively. This study corroborates that very deep CNNs with effective training mechanisms can be employed to solve complicated medical image analysis tasks, even with limited training data.
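The residual learning this abstract builds on is easy to state: each block computes y = relu(F(x) + x), so the identity shortcut gives gradients a direct path around F and lets depth grow without the degradation problem. A toy NumPy sketch (fully connected rather than convolutional, for brevity; not the paper's FCRN):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(F(x) + x): the skip connection lets signal bypass F."""
    out = relu(w1 @ x)    # first transformation with non-linearity
    out = w2 @ out        # second transformation, pre-shortcut
    return relu(out + x)  # identity shortcut added before activation

# With zero weights the block reduces to the identity (for x >= 0),
# which is why deeper stacks cannot do worse than shallower ones.
x = np.array([1.0, 2.0, 3.0])
zeros = np.zeros((3, 3))
print(residual_block(x, zeros, zeros))  # [1. 2. 3.]
```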

                Author and article information

                Contributors
                Role: Academic Editor
                Journal
                Sensors (Basel, Switzerland), MDPI
                ISSN: 1424-8220
                Published: 18 April 2021
                Volume 21, Issue 8, Article 2852
                Affiliations
                [1] Department of Computer Science and Engineering, Gitam Institute of Technology, GITAM Deemed to be University, Rushikonda, Visakhapatnam 530045, India; parvathanenins@gmail.com
                [2] Tata Consultancy Services, Gachibowli, Hyderabad 500019, India; siva.jalluri18@gmail.com
                [3] Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Korea; fazal@sejong.ac.kr
                [4] Department of Electrical and Electronics Engineering, Sikkim Manipal Institute of Technology, Sikkim Manipal University, Majitar 737136, India; akash.b@smit.smu.edu.in
                [5] Division of Future Convergence (HCI Science Major), Dongduk Women’s University, Seoul 02748, Korea
                [6] School of Science, Edith Cowan University, Joondalup 6027, Australia
                Author notes
                [*] Correspondence: wjkim@dongduk.ac.kr (W.K.); james.kang@ecu.edu.au (J.J.K.)
                [†] These authors contributed equally to this work and are first co-authors.

                Author information
                https://orcid.org/0000-0001-9247-9132
                https://orcid.org/0000-0001-9478-3840
                https://orcid.org/0000-0001-5206-272X
                https://orcid.org/0000-0003-2759-3224
                https://orcid.org/0000-0002-0242-4187
                Article
                sensors-21-02852
                DOI: 10.3390/s21082852
                PMCID: 8074091
                PMID: 33919583
                © 2021 by the authors.

                Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ( https://creativecommons.org/licenses/by/4.0/).

                History
                Received: 13 March 2021; Accepted: 16 April 2021
                Categories
                Article

                Biomedical engineering
                Keywords: skin disease, MobileNet V2, long short-term memory (LSTM), deep learning, neural network, grey-level correlation, mobile platform, convolutional neural network (CNN), MobileNet
