Open Access

      Two‐stage coarse‐to‐fine method for pathological images in medical decision‐making systems


          Abstract

Artificial intelligence decision systems play an important supporting role in medical informatics, and medical image analysis is a key component both of such systems and of medical diagnosis and treatment. The wealth of cellular information in histopathological images makes them a reliable means of diagnosing tumors. However, because pathology images are large, high-resolution, and have complex background structures, deep learning methods still face difficulties in recognizing them. This study therefore proposes a two-stage, coarse-to-fine approach for pathology image recognition in medical decision systems. Pathology images with complex backgrounds are first normalized and enhanced to remove the effects of noise, color casts, and light-dark inconsistencies on the segmentation network. A continuous-refinement PSPNet (CRPSPNet) is then designed for accurate recognition of the pathology images. CRPSPNet operates in two stages: a Pyramid Scene Parsing Network produces coarse segmentation results, and a continuous refinement model then refines the output of the first stage. Experiments on more than 1,000 osteosarcoma pathology images show that the method yields more accurate results with fewer computational resources and less processing time than traditional optimization models, achieving an Intersection over Union of 0.76.
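The reported 0.76 is the standard Intersection over Union metric for segmentation masks. A minimal NumPy sketch of that metric (illustrative only, not the authors' evaluation code; mask values are toy data):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between two binary masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection / union) if union else 1.0

# Toy 4x4 masks: the prediction covers 3 of the 4 target pixels
# and predicts nothing spurious, so IoU = 3 / 4 = 0.75.
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0]])
```

On whole-slide pathology images the same ratio is typically accumulated over tiles of the slide rather than computed on a single small mask.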


Most cited references: 65


          SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation

We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network, followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1]. The role of the decoder network is to map the low-resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower-resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well-known DeepLab-LargeFOV [3] and DeconvNet [4] architectures. This comparison reveals the memory-versus-accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient in terms of both memory and computational time during inference. It also has significantly fewer trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road-scene and SUN RGB-D indoor-scene segmentation tasks. These quantitative assessments show that SegNet provides good performance, with competitive inference time and the most efficient inference memory usage compared with other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet.
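SegNet's core trick is that the encoder's max-pooling remembers *where* each maximum came from, and the decoder scatters values back to those positions instead of learning to upsample. A small NumPy sketch of that mechanism on a single channel (illustrative only, not the paper's Caffe implementation; the input array is toy data):

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling that also records the flat index of each maximum."""
    h, w = x.shape
    pooled = np.zeros((h // 2, w // 2))
    indices = np.zeros((h // 2, w // 2), dtype=int)
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            window = x[i:i + 2, j:j + 2]
            k = window.argmax()                       # position within the window
            pooled[i // 2, j // 2] = window.flat[k]
            indices[i // 2, j // 2] = (i + k // 2) * w + (j + k % 2)
    return pooled, indices

def max_unpool_2x2(pooled, indices, shape):
    """SegNet-style non-linear upsampling: scatter values to stored positions.
    The result is sparse; in SegNet, trainable convolutions then densify it."""
    out = np.zeros(shape)
    out.flat[indices.ravel()] = pooled.ravel()
    return out

x = np.array([[1., 2., 0., 0.],
              [3., 4., 0., 5.],
              [0., 0., 6., 0.],
              [0., 7., 0., 0.]])
pooled, idx = max_pool_2x2(x)          # pooled = [[4, 5], [7, 6]]
restored = max_unpool_2x2(pooled, idx, x.shape)
```

Each retained value lands back at its original coordinates (e.g. the 4 returns to position (1, 1)), which is why the decoder needs no learned upsampling filters for this step.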

            Pyramid Scene Parsing Network


              Swin Transformer: Hierarchical Vision Transformer using Shifted Windows


Author and article information

Journal: IET Image Processing
Publisher: Institution of Engineering and Technology (IET)
ISSN: 1751-9659 (print); 1751-9667 (online)
Published online: 26 September 2023; Issue: January 2024
Volume 18, Issue 1, pp. 175-193

Affiliations
[1] School of Computer Science and Engineering, Changsha University, Changsha, China
[2] Hunan University of Medicine General Hospital, Huaihua, China
[3] Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Huaihua, China
[4] State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China
[5] Research Center for Artificial Intelligence, Monash University, Clayton, Melbourne, Australia

DOI: 10.1049/ipr2.12941
© 2024. Licensed under CC BY 4.0: http://creativecommons.org/licenses/by/4.0/
