      ESDiff: a joint model for low-quality retinal image enhancement and vessel segmentation using a diffusion model

Research article
Biomedical Optics Express, Optica Publishing Group


          Abstract

          In clinical screening, accurate diagnosis of various diseases relies on the extraction of blood vessels from fundus images. However, clinical fundus images often suffer from uneven illumination, blur, and artifacts caused by equipment or environmental factors. In this paper, we propose a unified framework called ESDiff to address these challenges by integrating retinal image enhancement and vessel segmentation. Specifically, we introduce a novel diffusion model-based framework for image enhancement, incorporating mask refinement as an auxiliary task via a vessel mask-aware diffusion model. Furthermore, we utilize low-quality retinal fundus images and their corresponding illumination maps as inputs to the modified UNet to obtain degradation factors that effectively preserve pathological features and pertinent information. This approach enhances the intermediate results within the iterative process of the diffusion model. Extensive experiments on publicly available fundus retinal datasets (i.e. DRIVE, STARE, CHASE_DB1 and EyeQ) demonstrate the effectiveness of ESDiff compared to state-of-the-art methods.
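The abstract describes a vessel mask-aware diffusion model conditioned on the low-quality fundus image and its illumination map. As a rough, hypothetical illustration only (not the authors' ESDiff implementation), the PyTorch sketch below shows one way such conditioning and a standard DDPM-style noise-prediction training step could be wired together; the network depth, channel counts, and concatenation-based conditioning are all assumptions.

```python
# Hypothetical sketch (assumed names and sizes), not the authors' ESDiff code.
import torch
import torch.nn as nn

class MaskAwareDenoiser(nn.Module):
    """Toy noise predictor conditioned on the low-quality image, its
    illumination map, and a coarse vessel mask, concatenated channel-wise."""
    def __init__(self, channels: int = 32):
        super().__init__()
        # 3 (noisy RGB) + 3 (low-quality RGB) + 1 (illumination) + 1 (mask) = 8 input channels
        self.net = nn.Sequential(
            nn.Conv2d(8, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1),  # predicted noise for the RGB image
        )

    def forward(self, x_t, lq_image, illum_map, vessel_mask):
        cond = torch.cat([x_t, lq_image, illum_map, vessel_mask], dim=1)
        return self.net(cond)

def training_step(model, clean, lq_image, illum_map, vessel_mask, alpha_bar):
    """One DDPM-style epsilon-prediction step under a fixed noise schedule
    (alpha_bar holds the cumulative products of 1 - beta_t)."""
    t = torch.randint(0, alpha_bar.numel(), (clean.size(0),))
    a = alpha_bar[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(clean)
    x_t = a.sqrt() * clean + (1.0 - a).sqrt() * noise   # forward diffusion q(x_t | x_0)
    pred = model(x_t, lq_image, illum_map, vessel_mask)
    return torch.mean((pred - noise) ** 2)              # simple noise-matching MSE loss
```

The mask-refinement auxiliary task and the learned degradation factors described in the abstract are omitted from this sketch for brevity.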


Most cited references (40)


          Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks


            U-Net: Convolutional Networks for Biomedical Image Segmentation

There is broad consensus that successful training of deep networks requires many thousands of annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC), we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast: segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.
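For orientation, the sketch below is a deliberately tiny PyTorch version of the contracting/expanding structure with a single skip connection; the published U-Net is much deeper, uses unpadded convolutions, and crops feature maps before concatenation, so treat this only as an illustration of the shape of the architecture.

```python
# Minimal sketch of the U-Net idea (assumed sizes), not the original network.
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch: int = 1, num_classes: int = 2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 64)                # contracting path, full resolution
        self.pool = nn.MaxPool2d(2)
        self.enc2 = conv_block(64, 128)                  # bottom of the "U", half resolution
        self.up = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = conv_block(128, 64)                  # 128 = 64 (skip) + 64 (upsampled)
        self.head = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        s1 = self.enc1(x)
        bottom = self.enc2(self.pool(s1))
        up = self.up(bottom)                             # expanding path
        merged = torch.cat([s1, up], dim=1)              # skip connection for precise localization
        return self.head(self.dec1(merged))

# Example: TinyUNet()(torch.randn(1, 1, 512, 512)).shape == (1, 2, 512, 512)
```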

              Ridge-based vessel segmentation in color images of the retina.

A method is presented for automated segmentation of vessels in two-dimensional color images of the retina. This method can be used in computer analyses of retinal images, e.g., in automated screening for diabetic retinopathy. The system is based on extraction of image ridges, which coincide approximately with vessel centerlines. The ridges are used to compose primitives in the form of line elements. With the line elements an image is partitioned into patches by assigning each image pixel to the closest line element. Every line element constitutes a local coordinate frame for its corresponding patch. For every pixel, feature vectors are computed that make use of properties of the patches and the line elements. The feature vectors are classified using a kNN classifier and sequential forward feature selection. The algorithm was tested on a database consisting of 40 manually labeled images. The method achieves an area under the receiver operating characteristic curve of 0.952. The method is compared with two recently published rule-based methods of Hoover et al. and Jiang et al. The results show that our method is significantly better than the two rule-based methods (p < 0.01). The accuracy of our method is 0.944 versus 0.947 for a second observer.
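The sketch below illustrates only the classification stage described above, with random placeholder data standing in for the per-pixel ridge/patch features; the feature dimensionality, neighbor count, and number of selected features are arbitrary assumptions rather than the paper's settings. It uses scikit-learn's KNeighborsClassifier and SequentialFeatureSelector.

```python
# Illustration of the classification stage only; features are random placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 27))      # placeholder per-pixel feature vectors (27 is arbitrary)
y = rng.integers(0, 2, size=1000)    # placeholder vessel / non-vessel labels

knn = KNeighborsClassifier(n_neighbors=30)
# Greedy sequential forward selection of a feature subset, scored by cross-validation.
selector = SequentialFeatureSelector(knn, n_features_to_select=10, direction="forward")
X_selected = selector.fit_transform(X, y)
knn.fit(X_selected, y)

print("selected feature indices:", np.flatnonzero(selector.get_support()))
```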

                Author and article information

Journal: Biomedical Optics Express (Biomed Opt Express; BOE)
Publisher: Optica Publishing Group
ISSN: 2156-7085
Published: 29 November 2023 (issue dated 01 December 2023)
Volume 14, Issue 12, pages 6563-6578
Affiliations
[1] School of Information Science and Engineering, Shandong Normal University, Jinan, 250300, China
Author information
ORCID: https://orcid.org/0000-0001-6927-5094
Article
Publisher ID: 506205
DOI: 10.1364/BOE.506205
PMCID: 10898574
PMID: 38420298
© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement (https://doi.org/10.1364/OA_License_v2#VOR-OA)

History: received 19 September 2023; revised 01 November 2023; accepted 13 November 2023
Funding
Funded by: Natural Sciences and Engineering Research Council of Canada (10.13039/501100000038)
Funded by: Natural Science Foundation of Shandong Province (10.13039/501100007129); Award IDs: ZR2018ZB0419, ZR2019ZD04, ZR2020QF032
Funded by: Taishan Scholar Foundation of Shandong Province (10.13039/501100010029); Award ID: TSHW201502038
Funded by: National Natural Science Foundation of China (10.13039/501100001809); Award IDs: 61572300, 61773246, 62003196, 62072289, 62073201, 81871508
                Categories
                Article

Vision sciences
