
      Comparison of Deep-Learning and Conventional Machine-Learning Methods for the Automatic Recognition of the Hepatocellular Carcinoma Areas from Ultrasound Images

      research-article


          Abstract

          The emergence of deep-learning methods in different computer vision tasks has been shown to offer increased detection, recognition, or segmentation accuracy when large annotated image datasets are available. In medical image processing and computer-aided diagnosis within ultrasound images, where the amount of available annotated data is smaller, a natural question arises: are deep-learning methods better than conventional machine-learning methods, and how do conventional machine-learning methods behave in comparison with deep-learning methods on the same dataset? Based on a study of various deep-learning architectures, a lightweight multi-resolution Convolutional Neural Network (CNN) architecture is proposed. It is suitable for differentiating, within ultrasound images, between Hepatocellular Carcinoma (HCC) and the cirrhotic parenchyma (PAR) on which the HCC has evolved. The proposed deep-learning model is compared with other CNN architectures adapted by transfer learning for the ultrasound binary classification task, as well as with conventional machine-learning (ML) solutions trained on textural features. The achieved results show that the deep-learning approach outperforms the classical machine-learning solutions, providing higher classification performance.
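As a minimal illustrative sketch (not the authors' code), the kind of first-order statistical textural features that conventional ML baselines of this sort are typically trained on can be computed from a grayscale ultrasound patch as follows; the patch sizes, gray-level ranges, and feature set here are assumptions for demonstration, not the paper's actual descriptors:

```python
import numpy as np

def texture_features(patch):
    """First-order statistical texture features of a grayscale patch:
    mean intensity, variance, and histogram entropy. These are the kind
    of hand-crafted descriptors a conventional classifier is trained on."""
    p = patch.astype(np.float64).ravel()
    mean = p.mean()
    var = p.var()
    # Entropy over 32 gray-level bins of the 8-bit intensity range
    hist, _ = np.histogram(p, bins=32, range=(0, 256))
    prob = hist / hist.sum()
    prob = prob[prob > 0]
    entropy = -np.sum(prob * np.log2(prob))
    return np.array([mean, var, entropy])

# Synthetic stand-ins for a lesion patch and a parenchyma patch
rng = np.random.default_rng(0)
hcc_like = rng.integers(80, 180, size=(56, 56))
par_like = rng.integers(40, 90, size=(56, 56))
f_hcc = texture_features(hcc_like)
f_par = texture_features(par_like)
```

A conventional pipeline would stack such feature vectors from many annotated patches and feed them to a classical classifier, whereas the CNN learns its features directly from the pixel data.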

          Related collections

          Most cited references (35)


          DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

          In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
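The atrous-convolution idea described above can be sketched in one dimension; this toy implementation is an assumption-laden illustration of the general technique, not the DeepLab code, using valid padding and stride 1:

```python
import numpy as np

def atrous_conv1d(signal, kernel, rate):
    """1-D atrous (dilated) convolution: kernel taps are spaced `rate`
    samples apart, so the same number of parameters covers a larger
    field of view. Effective span = (len(kernel) - 1) * rate + 1."""
    k = len(kernel)
    span = (k - 1) * rate + 1
    out_len = len(signal) - span + 1
    out = np.zeros(out_len)
    for i in range(out_len):
        for j in range(k):
            out[i] += signal[i + j * rate] * kernel[j]
    return out

x = np.arange(10, dtype=float)
taps = np.array([1.0, 1.0, 1.0])
y1 = atrous_conv1d(x, taps, rate=1)  # ordinary conv, field of view 3
y2 = atrous_conv1d(x, taps, rate=2)  # same 3 taps, field of view 5
```

With rate 1 the three taps see 3 consecutive samples; with rate 2 they see every other sample across a span of 5, enlarging context at no extra parameter cost.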

            Recent advances in convolutional neural networks


              Very Deep Convolutional Networks for Large-Scale Image Recognition

              (2014)
              In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
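The receptive-field argument behind stacking small filters can be checked with a short calculation; this sketch is an illustration of the standard arithmetic, not code from the paper:

```python
def stacked_receptive_field(num_layers, kernel=3):
    """Receptive field of a stack of stride-1 conv layers: each layer
    extends the region one output unit can see by (kernel - 1) pixels."""
    rf = 1
    for _ in range(num_layers):
        rf += kernel - 1
    return rf

def conv_params(kernel, channels):
    """Weight count of one conv layer with `channels` input and output
    feature maps (biases ignored)."""
    return kernel * kernel * channels * channels

# Two stacked 3x3 layers cover a 5x5 region; three cover 7x7, yet three
# 3x3 layers cost 27*C^2 weights versus 49*C^2 for a single 7x7 layer.
print(stacked_receptive_field(3), 3 * conv_params(3, 64), conv_params(7, 64))
```

This is why a very deep stack of 3x3 filters can match the coverage of large filters while using fewer parameters and interleaving more non-linearities.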

                Author and article information

                Journal
                Sensors (Basel, Switzerland)
                Publisher: MDPI
                ISSN: 1424-8220
                Published online: 29 May 2020 (June 2020 issue)
                Volume 20, Issue 11, Article 3085
                Affiliations
                [1] Computer Science Department, Technical University of Cluj-Napoca, 28 Memorandumului Street, 400114 Cluj-Napoca, Romania; delia.mitrea@cs.utcluj.ro (D.-A.M.); flaviu.vancea@cs.utcluj.ro (F.V.); tiberiu.marita@cs.utcluj.ro (T.M.); sergiu.nedevschi@cs.utcluj.ro (S.N.)
                [2] Regional Institute of Gastroenterology and Hepatology, Iuliu Hatieganu University of Medicine and Pharmacy, 19-21 Croitorilor Street, 400162 Cluj-Napoca, Romania; monica.lupsor@umfcluj.ro (M.L.-P.); rbadea@umfcluj.ro (R.I.B.)
                [3] Medical Imaging Department, Iuliu Hatieganu University of Medicine and Pharmacy, 8 Babes Street, 400012 Cluj-Napoca, Romania; rotaru.magda@umfcluj.ro
                Author notes
                Author information
                https://orcid.org/0000-0003-0978-7826
                https://orcid.org/0000-0003-4293-922X
                https://orcid.org/0000-0002-5987-9174
                https://orcid.org/0000-0003-2018-4647
                https://orcid.org/0000-0001-7918-1956
                https://orcid.org/0000-0002-3160-5489
                https://orcid.org/0000-0002-5330-090X
                Article
                sensors-20-03085
                DOI: 10.3390/s20113085
                PMCID: PMC7309124
                PMID: 32485986
                © 2020 by the authors.

                Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ( http://creativecommons.org/licenses/by/4.0/).

                History
                Received: 05 May 2020
                Accepted: 27 May 2020
                Categories
                Article

                Biomedical engineering
                Keywords: image processing, convolutional neural networks (CNN), pattern recognition, ultrasound images, hepatocellular carcinoma (HCC), automatic diagnosis
