      RAD-UNet: Research on an improved lung nodule semantic segmentation algorithm based on deep learning

      research-article


          Abstract

          Objective

          Due to the small proportion of target pixels in computed tomography (CT) images and their high similarity to the surrounding tissue, it is difficult for convolutional neural network-based semantic segmentation models to extract discriminative feature information, which often leads to under- or oversegmentation of lesions. In this paper, an improved convolutional neural network segmentation model known as RAD-UNet, based on the U-Net encoder-decoder architecture, is proposed and applied to lung nodule segmentation in CT images.

          Method

          The proposed RAD-UNet segmentation model includes several improved components: the U-Net encoder is replaced by a ResNet residual network module; an atrous spatial pyramid pooling module is added after the U-Net encoder; and the U-Net decoder is improved by introducing a cross-fusion feature module with channel and spatial attention.
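          The abstract does not include code, but a minimal sketch may make the composition concrete: a residual encoder feeding an ASPP-style bottleneck, with decoder skip connections gated by channel and spatial attention before fusion. All names, channel widths, dilation rates, the CBAM-style attention design, and the 3-channel input below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical PyTorch sketch of the RAD-UNet layout described above.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class ChannelSpatialAttention(nn.Module):
    """One plausible channel + spatial attention for the skip features."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)                     # reweight channels
        s = torch.cat([x.mean(1, keepdim=True),
                       x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial(s)                  # reweight positions

class DecoderBlock(nn.Module):
    """Upsample, attend to the skip features, then fuse (cross-fusion)."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.attn = ChannelSpatialAttention(skip_ch)
        self.fuse = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x, skip):
        x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear",
                          align_corners=False)
        return self.fuse(torch.cat([x, self.attn(skip)], dim=1))

class RADUNetSketch(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        r = torchvision.models.resnet34(weights=None)       # residual encoder
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu)   # 1/2,  64 ch
        self.e1 = nn.Sequential(r.maxpool, r.layer1)        # 1/4,  64 ch
        self.e2, self.e3, self.e4 = r.layer2, r.layer3, r.layer4  # 1/8..1/32
        # ASPP after the encoder: parallel dilated convs + 1x1 projection
        self.aspp = nn.ModuleList(
            [nn.Conv2d(512, 256, 3, padding=d, dilation=d) for d in (1, 6, 12)])
        self.proj = nn.Conv2d(3 * 256, 256, 1)
        self.d3 = DecoderBlock(256, 256, 128)
        self.d2 = DecoderBlock(128, 128, 64)
        self.d1 = DecoderBlock(64, 64, 64)
        self.d0 = DecoderBlock(64, 64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        s0 = self.stem(x); s1 = self.e1(s0)
        s2 = self.e2(s1); s3 = self.e3(s2)
        b = self.e4(s3)                                     # 1/32, 512 ch
        b = self.proj(torch.cat([a(b) for a in self.aspp], dim=1))
        y = self.d3(b, s3); y = self.d2(y, s2)
        y = self.d1(y, s1); y = self.d0(y, s0)
        y = F.interpolate(y, scale_factor=2, mode="bilinear",
                          align_corners=False)
        return self.head(y)                                 # per-pixel logits

logits = RADUNetSketch()(torch.randn(1, 3, 256, 256))       # -> (1, 2, 256, 256)
```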

          Results

          The segmentation model was applied to the LIDC dataset and a CT dataset collected by the Affiliated Hospital of Anhui Medical University. The experimental results show that, compared with the existing SegNet [14] and U-Net [15] methods, the proposed model achieves better lung lesion segmentation performance. On the two datasets, the mIoU reached 87.76% and 88.13%, and the F1-score reached 93.56% and 93.72%, respectively.

          Conclusion

          The experimental results show that the improved RAD-UNet segmentation method achieves more accurate pixel-level segmentation in CT images of lung tumours and identifies lung nodules better than the SegNet [14] and U-Net [15] models. The under- and oversegmentation problems that arise during segmentation are resolved, effectively improving image segmentation performance.

          Most cited references (38)


          Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.

          State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features; using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on the PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In the ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
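            As a rough illustration of the RPN idea described above (not the authors' released code): a sliding 3x3 convolution over the shared backbone features feeds two sibling 1x1 convolutions that emit objectness scores and box offsets for k anchors at every position. The sketch below uses one sigmoid logit per anchor, whereas the paper uses a two-way softmax (2k scores); channel counts are illustrative.

```python
import torch
import torch.nn as nn

class RPNHead(nn.Module):
    """Minimal RPN head: shared 3x3 conv, then sibling 1x1 convs that
    predict objectness and 4 box-regression offsets per anchor."""
    def __init__(self, in_ch=512, k=9):        # k = anchors per location
        super().__init__()
        self.conv = nn.Conv2d(in_ch, in_ch, 3, padding=1)
        self.cls = nn.Conv2d(in_ch, k, 1)      # objectness logit per anchor
        self.reg = nn.Conv2d(in_ch, 4 * k, 1)  # (dx, dy, dw, dh) per anchor

    def forward(self, feat):
        h = torch.relu(self.conv(feat))
        return self.cls(h), self.reg(h)

# The head slides over the full-image feature map, so proposals are nearly
# free once the backbone features are computed.
feat = torch.randn(1, 512, 38, 50)             # e.g. VGG-16 conv5 features
scores, boxes = RPNHead()(feat)
print(scores.shape, boxes.shape)               # (1, 9, 38, 50), (1, 36, 38, 50)
```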

            DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

            In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state of the art for the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
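            A minimal ASPP sketch, assuming a PyTorch implementation; the dilation rates and channel counts here are illustrative choices, not the paper's exact configuration. Each parallel branch sees the same feature map at a different effective field of view, and a 1x1 convolution projects the concatenated responses back down.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated 3x3 convolutions
    plus a 1x1 branch, concatenated and projected with a 1x1 conv."""
    def __init__(self, in_ch, out_ch, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1)] +                      # 1x1 branch
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r)  # atrous 3x3
             for r in rates])
        self.project = nn.Conv2d((len(rates) + 1) * out_ch, out_ch, 1)

    def forward(self, x):
        # Every branch preserves spatial size; dilation only widens the view.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 512, 32, 32)
print(ASPP(512, 256)(x).shape)   # torch.Size([1, 256, 32, 32])
```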

              SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation

              We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network and a corresponding decoder network, followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1]. The role of the decoder network is to map the low-resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower-resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well-known DeepLab-LargeFOV [3] and DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scene and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and the most efficient inference memory usage compared with other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet.
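              The pooling-indices trick is easy to demonstrate with PyTorch's built-in max-unpooling; this is a minimal illustration of the idea, not SegNet's Caffe code. The encoder's max-pooling records the argmax positions, and the decoder reuses them so upsampled values land exactly where the maxima came from.

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(2, stride=2, return_indices=True)   # encoder side
unpool = nn.MaxUnpool2d(2, stride=2)                    # decoder side

x = torch.randn(1, 64, 16, 16)      # an encoder feature map
y, idx = pool(x)                    # y: (1, 64, 8, 8); idx: argmax per window
up = unpool(y, idx)                 # sparse (1, 64, 16, 16): values placed at
                                    # the recorded argmax positions, zeros elsewhere
print(up.shape, (up != 0).float().mean())   # ~25% of positions are non-zero

# Trainable convolutions then densify the sparse map:
dense = nn.Conv2d(64, 64, 3, padding=1)(up)
```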

                Author and article information

                Journal
                Frontiers in Oncology (Front. Oncol.)
                Frontiers Media S.A.
                ISSN: 2234-943X
                Published: 23 March 2023
                Volume 13, Article 1084096
                Affiliations
                [1] Department of Computer Science, Anhui Medical University, Hefei, Anhui, China
                [2] Department of Radiology, First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
                [3] Department of General Thoracic Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
                Author notes

                Edited by: Liaqat Ali, University of Science and Technology Bannu, Pakistan

                Reviewed by: Zhe Min, Shandong University, China; Hong Huang, Chongqing University, China

                *Correspondence: Zezhi Wu, wuzezhi@ahmu.edu.cn

                This article was submitted to Cancer Imaging and Image-directed Interventions, a section of the journal Frontiers in Oncology

                Article
                DOI: 10.3389/fonc.2023.1084096
                PMCID: PMC10076852
                Copyright © 2023 Wu, Li and Zuo

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 01 November 2022
                Accepted: 01 March 2023
                Page count
                Figures: 10, Tables: 5, Equations: 13, References: 38, Pages: 16, Words: 7415
                Funding
                Funded by: Anhui Provincial Department of Education (doi: 10.13039/501100010814)
                This research was supported by the Natural Science Foundation of Anhui University of China (Nos. 2022AH050698 and KJ2021A0265).
                Categories
                Oncology
                Original Research

                Oncology & Radiotherapy
                Keywords: deep learning, lung lesions, CT imaging, semantic segmentation, U-Net, feature fusion, attention mechanism
