
      A Two-stage Method with a Shared 3D U-Net for Left Atrial Segmentation of Late Gadolinium-Enhanced MRI Images

      Published
      research-article

            Abstract

Objective: This study was aimed at validating the accuracy of a proposed algorithm for fully automatic 3D left atrial segmentation and at comparing its performance with that of existing deep learning algorithms.

            Methods: A two-stage method with a shared 3D U-Net was proposed to segment the 3D left atrium. In this architecture, the 3D U-Net was used to extract 3D features, a two-stage strategy was used to decrease segmentation error caused by the class imbalance problem, and the shared network was designed to decrease model complexity. Model performance was evaluated with the DICE score, Jaccard index and Hausdorff distance.

Results: Algorithm development and evaluation were performed with a set of 100 late gadolinium-enhanced cardiovascular magnetic resonance images. Our method achieved a DICE score of 0.918, a Jaccard index of 0.848 and a Hausdorff distance of 1.211, thus outperforming existing deep learning algorithms. On a publicly available 2013 image data set, the proposed model also achieved the best performance (DICE: 0.851; Jaccard: 0.750; Hausdorff distance: 4.382).

            Conclusion: The proposed two-stage method with a shared 3D U-Net is an efficient algorithm for fully automatic 3D left atrial segmentation. This study provides a solution for processing large datasets in resource-constrained applications.

            Significance Statement: Studying atrial structure directly is crucial for comprehending and managing atrial fibrillation (AF). Accurate reconstruction and measurement of atrial geometry for clinical purposes remains challenging, despite potential improvements in the visibility of AF-associated structures with late gadolinium-enhanced magnetic resonance imaging. This difficulty arises from the varying intensities caused by increased tissue enhancement and artifacts, as well as variability in image quality. Therefore, an efficient algorithm for fully automatic 3D left atrial segmentation is proposed in the present study.

            Main article text

            Introduction

Obtaining 3D atrial geometry from late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) is a crucial task for the structural analysis of patients with atrial fibrillation (AF). Direct segmentation and 3D reconstruction based on 2D images and the relationships between adjacent frames are commonly used in clinical studies. However, such manual segmentation is labor-intensive, structure-specific work, and its accuracy remains challenging because of imaging artifacts, varying intensities depending on the extent of fibrosis, and varying imaging quality [1]. Therefore, an intelligent algorithm is urgently needed to perform fully automated 3D segmentation, ensure precise reconstruction and measurement of atrial geometry, and thereby facilitate clinical applications.

            In 2018, the Statistical Atlases and Computational Modeling of the Heart workshop held the left atrial (LA) Segmentation Challenge; 27 teams participated in the final evaluation phase, and 18 teams attended the conference and proposed diverse approaches. These approaches included two traditional methods and 16 deep-learning models. These two traditional methods ranked second and third from the bottom, respectively. In contrast, the top ten were deep-learning models, and the Double 3D U-Net model from Xia et al. [2] achieved the best score in this challenge [3]. Therefore, deep-learning models are highly promising for fully automated LA segmentation.

            The objective of this research was to introduce an automated segmentation approach and verify its accuracy.

            Methods

            Image Acquisition and Pre-Processing

            The University of Utah provided 100 3D LGE-MRIs, which were randomly split into training (N = 80) and test (N = 20) sets. Each 3D LGE-MRI scan had a spatial size of either 640 × 640 or 576 × 576 pixels, and consisted of 88 slices. A 1.5 Tesla Avanto or 3.0 Tesla Verio clinical whole-body scanner was used to acquire LGE-MRIs with a spatial resolution of 0.625 × 0.625 × 0.625 mm [3]. To obtain the segmentation masks as the ground truths, experts manually segmented the LA, including the LA appendage, the mitral valve, and the pulmonary vein regions. To increase the performance of deep-learning models, data augmentation (e.g., elastic deformations, affine transformations and warping) was used to artificially increase the size of the training set without causing overfitting [4]. All images were cropped to the same size (576 × 576 × 80) for input into different networks.
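To make the fixed-size input step concrete, the sketch below center-crops (and zero-pads any smaller dimension of) a 3D volume to 576 × 576 × 80. The helper name and the use of NumPy are our own assumptions; the paper does not publish its pre-processing code.

```python
import numpy as np

def center_crop_3d(volume, target=(576, 576, 80)):
    """Center-crop (or zero-pad, if a dimension is smaller)
    a 3D volume to a fixed spatial size."""
    out = np.zeros(target, dtype=volume.dtype)
    # Offsets for reading from the source and writing into the target.
    src_off = [max((s - t) // 2, 0) for s, t in zip(volume.shape, target)]
    dst_off = [max((t - s) // 2, 0) for s, t in zip(volume.shape, target)]
    span = [min(s, t) for s, t in zip(volume.shape, target)]
    out[dst_off[0]:dst_off[0] + span[0],
        dst_off[1]:dst_off[1] + span[1],
        dst_off[2]:dst_off[2] + span[2]] = \
        volume[src_off[0]:src_off[0] + span[0],
               src_off[1]:src_off[1] + span[1],
               src_off[2]:src_off[2] + span[2]]
    return out

vol = np.random.rand(640, 640, 88).astype(np.float32)
cropped = center_crop_3d(vol)
print(cropped.shape)  # (576, 576, 80)
```

Both scanner matrix sizes in the data set (640 × 640 × 88 and 576 × 576 × 88) reduce to the same 576 × 576 × 80 input under this scheme.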

            Shared 3D U-Net

U-Net consists of an encoder and decoder that are connected with long skip connections. Like U-Net, U-Net++ has the same U-shaped encoder-decoder architecture, but its encoder and decoder are connected through a series of nested dense convolutional blocks. To improve the performance of U-Net, Double U-Net, which combines two U-Net architectures stacked on top of each other, was proposed. Inspired by Double U-Net, we propose Shared U-Net to achieve the performance of Double U-Net while decreasing the size of the model.

According to the convolution operation rules, a U-Net designed to segment images can also determine the position characteristics of the image. Therefore, Double U-Net can be used to complete these two tasks (i.e., detection and segmentation). First, one U-Net detects the region of interest (RoI), and the position coordinates of the RoI are extracted; according to these coordinates, the RoI area of the original image is cropped. Second, the RoI area of the original image is input into the second U-Net for LA cavity segmentation (Figure 1A). In contrast to Double U-Net, our proposed Shared U-Net achieves LA cavity segmentation with a single network that is used in both stages (Figure 1B). As shown in Figure 1C, our proposed method is also a two-stage approach: 1) in the first stage, the feature map representing the LA is obtained with the shared 3D U-Net, the bounding box is located with a fully connected layer, and the RoI, which decreases background predominance, is cropped from the 3D LGE-MRI; 2) in the second stage, the shared 3D U-Net precisely segments the LA cavity from the RoI, and zero-padding is applied to reconstruct the final 3D LA with the same size as the original input.
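The two-stage flow described above can be sketched as follows. A trivial thresholding function stands in for the trained shared 3D U-Net, and the `margin` parameter is a hypothetical choice of ours; only the locate-RoI, crop, segment, and zero-pad structure follows the text.

```python
import numpy as np

def shared_net(volume):
    # Stand-in for the shared 3D U-Net: a simple threshold map.
    # (Hypothetical placeholder; the real model is a trained network.)
    return (volume > volume.mean()).astype(np.uint8)

def two_stage_segment(volume, margin=4):
    # Stage 1: coarse pass over the full volume to locate the RoI.
    coarse = shared_net(volume)
    coords = np.argwhere(coarse)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, volume.shape)
    # Crop the RoI from the *original* image, as in Figure 1C.
    roi = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    # Stage 2: the same network refines the segmentation inside the RoI.
    fine = shared_net(roi)
    # Zero-pad the RoI prediction back to the original volume size.
    full = np.zeros(volume.shape, dtype=np.uint8)
    full[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine
    return full

vol = np.zeros((32, 32, 16), dtype=np.float32)
vol[10:20, 10:20, 4:12] = 1.0  # synthetic "atrium"
mask = two_stage_segment(vol)
print(mask.shape)  # (32, 32, 16)
```

Because the same `shared_net` is called in both stages, the model is stored and trained only once, which is the source of the parameter savings discussed later.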

            Figure 1

Differences Between Our Shared 3D U-Net and Double U-Net or Shared U-Net.

            The shared 3D U-Net used in this study follows the U-Net architecture, which can be divided into two main parts. The first part is the encoder, which extracts relevant image information. The second part is the decoder, which uses the extracted information to predict and reconstruct the segmentation of the image.

Our approach involves five encoding and decoding blocks in the network architecture. Each encoding block is a residual convolution block; that is, the main path comprises two 3 × 3 × 3 convolution operations, and the branch path is the residual connection. The residual convolution block is used for image downsampling and feature extraction, and the residual connection alleviates the problems of vanishing gradients, exploding gradients and overfitting caused by excessive network depth. As the depth of the network increases, the number of feature maps increases, as does the number of channels in the feature maps. Each decoding block consists of a 3 × 3 × 3 transposed convolution; a feature fusion module, which concatenates feature maps along the channel dimension; and a 3 × 3 × 3 convolution. A skip connection is added between each encoding block and the corresponding decoding block of the U-shaped structure, so that low-level feature information can be transmitted directly to the high level, thus enabling better recovery of the original image by the decoder. To introduce nonlinearity and avoid the vanishing gradient problem, each convolution layer is followed by a rectified linear unit (ReLU) activation function.
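A minimal PyTorch sketch of the encoding block described above, assuming plain convolutions with ReLU and a 1 × 1 × 1 shortcut when channel counts differ (normalization layers and exact channel widths are not specified in the text and are our assumptions):

```python
import torch
import torch.nn as nn

class ResidualConvBlock3D(nn.Module):
    """One encoding block: two 3x3x3 convolutions on the main path
    plus a residual (shortcut) branch, as described in the text."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        )
        # 1x1x1 convolution on the branch path so channel counts match.
        self.shortcut = (nn.Identity() if in_ch == out_ch
                         else nn.Conv3d(in_ch, out_ch, kernel_size=1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.main(x) + self.shortcut(x))

block = ResidualConvBlock3D(1, 16)
x = torch.randn(1, 1, 8, 8, 8)  # (batch, channels, D, H, W)
print(block(x).shape)           # torch.Size([1, 16, 8, 8, 8])
```

Downsampling between blocks (e.g., strided convolution or pooling) would sit outside this block and is omitted here.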

            Evaluation Metrics

To assess the precision of various deep-learning models, we conducted an evaluation against the ground truths. The DICE score and Jaccard index were used to verify performance at the volumetric level:

DICE(P, G) = 2|P ∩ G| / (|P| + |G|)

Jaccard(P, G) = |P ∩ G| / |P ∪ G|

where P is the 3D prediction, and G is the corresponding 3D ground truth.

In addition, the Hausdorff distance (HD) was used to evaluate the performance of different models. HD is defined as

HD(A, B) = max(h(A, B), h(B, A))

where h(A, B) is called the directed HD and is given by

h(A, B) = max_{a ∈ A} min_{b ∈ B} ||a − b||

where ||a − b|| is the Euclidean distance.
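Under these standard definitions, the three metrics can be computed directly on binary masks and point sets; the NumPy sketch below is a minimal illustration, not the evaluation code used in the study.

```python
import numpy as np

def dice(p, g):
    # DICE = 2|P ∩ G| / (|P| + |G|)
    inter = np.logical_and(p, g).sum()
    return 2.0 * inter / (p.sum() + g.sum())

def jaccard(p, g):
    # Jaccard = |P ∩ G| / |P ∪ G|
    inter = np.logical_and(p, g).sum()
    return inter / np.logical_or(p, g).sum()

def directed_hd(a_pts, b_pts):
    # h(A, B) = max over a in A of min over b in B of ||a - b||
    dists = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return dists.min(axis=1).max()

def hausdorff(a_pts, b_pts):
    # HD(A, B) = max(h(A, B), h(B, A))
    return max(directed_hd(a_pts, b_pts), directed_hd(b_pts, a_pts))

p = np.zeros((8, 8), dtype=bool); p[2:6, 2:6] = True
g = np.zeros((8, 8), dtype=bool); g[3:7, 3:7] = True
print(round(dice(p, g), 3))     # 0.562 (overlap 9, |P| = |G| = 16)
print(round(jaccard(p, g), 3))  # 0.391 (9 / 23)
print(hausdorff(np.array([[0.0, 0.0]]), np.array([[3.0, 4.0]])))  # 5.0
```

In practice the HD is computed on the surface points of the predicted and ground-truth masks; the pairwise-distance approach shown here is quadratic in the number of points and serves only to mirror the definition.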

            Experimental Settings

Our experiments were based on PyTorch and were run on an NVIDIA RTX 3090 GPU. The Adam optimizer was used, with an initial learning rate of 0.001; the number of epochs was set to 200, and the batch size was set to 64. In the training stage, we used five-fold cross-validation and early stopping, continuously monitoring the model's loss on the validation set. We adjusted the learning rate and other parameters appropriately, and stopped training when the validation loss reached a minimum, to prevent overfitting and ensure the model's generalizability. After training, we used the model weights that achieved the best DICE score on the validation set as the final model weights.
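The monitoring-and-stopping schedule described above can be skeletonized as follows; `step_fn` is a hypothetical hook that runs one training epoch and returns the validation loss, and the `patience` value is our own illustrative choice.

```python
def train_with_early_stopping(step_fn, max_epochs=200, patience=10):
    """Monitor validation loss each epoch; stop once it has not
    improved for `patience` epochs (a sketch of the schedule in
    the text, not the authors' actual training code)."""
    best_loss, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        val_loss = step_fn(epoch)
        if val_loss < best_loss:
            best_loss, best_epoch = val_loss, epoch
            # In practice: checkpoint the model weights here
            # (the weights with the best validation score are kept).
        elif epoch - best_epoch >= patience:
            break  # validation loss stopped improving
    return best_epoch, best_loss

# Toy loss curve that bottoms out at epoch 30 and then rises again.
losses = [abs(e - 30) / 30 + 0.1 for e in range(200)]
epoch, loss = train_with_early_stopping(lambda e: losses[e])
print(epoch)  # 30
```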

            Results

Figure 2 shows the segmentation results of our proposed method and other 3D networks on the 2018 and 2013 image data sets. Compared with the 3D U-Net and 3D U-Net++ models, our model showed a more comprehensive segmented region. Moreover, compared with Double 3D U-Net, our proposed model confers advantages in processing surface details, thus making the result more similar to the ground truth.

            Figure 2

            Sample Outputs of the Proposed Method and Various 3D Networks on a 2018 Image Data Set and 2013 Image Data Set.

The proposed network, using the same 3D U-Net to perform region of interest (RoI) positioning and LA cavity segmentation, handles surface details better than other models while ensuring the integrity of the segmented area, thus making the output more similar to the ground truth.

Our method achieved a DICE score of 0.918 and a Jaccard index of 0.848. Compared with 2D U-Net, 3D U-Net, U-Net++, 3D V-Net and Double 3D U-Net, our method performed best (Figure 3). In detail, the DICE score increased from 0.916 for Double 3D U-Net to 0.918 for our model with a shared 3D U-Net, and the Jaccard index increased from 0.845 to 0.848.

            Figure 3

            Performance of Various Networks, Evaluated with the DICE Score and Jaccard Index.

Despite only a slight increase in segmentation performance, our model required fewer parameters, lower memory consumption and shorter training times than the other methods. As the side length of the RoI increased, the memory consumption and training time of Double 3D U-Net increased dramatically, whereas those of our method were lower and showed little variation (Figure 4). Similarly, the number of parameters of our method was half that of Double 3D U-Net (88276 KB vs. 176620 KB).

            Figure 4

            Memory Consumption and Training Time.

            (A) Memory consumption between Double 3D U-Net and our method. (B) Training time of two methods.

The best performance of the proposed model is attributable to our modifications to the traditional 2D U-Net (Table 1). In contrast to 2D U-Net with 3 × 3 convolutions, 3D U-Net with 3 × 3 × 3 convolutions considers 3D information and is suitable for the segmentation of 3D volume data, thus resulting in better performance in experiment one (Exp. 1). The 3D U-Net++ model was developed on the basis of 3D U-Net by the addition of more skip connections, which extract richer shallow features to generate several nested "U-Nets." Compared with 3D U-Net, 3D U-Net++ has more parameters but only slightly improves the DICE score and Jaccard index (Exp. 2). Therefore, using 3D U-Net rather than 3D U-Net++, we adopted a two-stage strategy for 3D LA segmentation with two 3D U-Nets: one for extracting the RoI and the other for segmenting the LA. Compared with those of 3D U-Net, the DICE score and Jaccard index of Double 3D U-Net were significantly greater (by 1.3% and 2.3%, respectively; Exp. 3). However, Double 3D U-Net had twice as many parameters as 3D U-Net (176620 KB vs. 88276 KB). Therefore, we proposed a two-stage method with only one shared 3D U-Net to segment the 3D LA, thus maintaining performance while decreasing the number of parameters (88276 KB; Exp. 4).

            Table 1

            Results of Four Experiments.

Experiment | Network | DICE | HD | Jaccard
Exp. 1 (2D vs. 3D) | 2D U-Net | 0.890 | 14.051 | 0.802
 | 3D U-Net | 0.903 | 4.234 | 0.823
Exp. 2 (U-Net vs. U-Net++) | 2D U-Net++ | 0.903 | 12.386 | 0.823
 | 3D U-Net++ | 0.908 | 3.639 | 0.832
Exp. 3 (Single vs. Double) | 3D U-Net | 0.903 | 4.234 | 0.823
 | Double 3D U-Net | 0.916 | 1.300 | 0.845
Exp. 4 (Double vs. Ours) | Double 3D U-Net | 0.916 | 1.300 | 0.845
 | Shared 3D U-Net | 0.918 | 1.211 | 0.848

Exp. 1: 2D U-Net vs. 3D U-Net; Exp. 2: 2D/3D U-Net vs. 2D/3D U-Net++; Exp. 3: single 3D U-Net vs. Double 3D U-Net; Exp. 4: Double 3D U-Net vs. Shared 3D U-Net. Bold values indicate better results.
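The halving of the parameter count achieved by weight sharing follows directly from reusing one network in both stages rather than training two; the toy PyTorch check below uses a small hypothetical stand-in for the 3D U-Net.

```python
import torch.nn as nn

def n_params(m):
    """Count the trainable parameters of a module."""
    return sum(p.numel() for p in m.parameters())

# Small hypothetical stand-in for a 3D U-Net (illustration only).
def make_net():
    return nn.Sequential(nn.Conv3d(1, 8, 3, padding=1),
                         nn.Conv3d(8, 1, 3, padding=1))

shared = make_net()                               # one net, used in both stages
double = nn.ModuleList([make_net(), make_net()])  # separate net per stage
print(n_params(double) == 2 * n_params(shared))   # True
```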

Further evaluation was conducted on a publicly available 2013 image data set. The best performance (according to the DICE score, Jaccard index and HD) was again obtained with the proposed model (Shared 3D U-Net; Table 2). However, the DICE score and Jaccard index were lower, and the HD was greater, than those on the publicly available 2018 image data set (Tables 1 and 2).

            Table 2

            Performance of the Indicated Models on the Publicly Available 2013 Image Data Set.

Model | DICE | Jaccard | HD | Size
Double 3D U-Net | 0.842 | 0.738 | 5.216 | 45.18M
Shared 3D U-Net | 0.851 | 0.750 | 4.382 | 22.66M
3D U-Net++ | 0.815 | 0.727 | 7.051 | 20.50M
3D U-Net | 0.810 | 0.714 | 6.816 | 22.84M
2D U-Net++ | 0.799 | 0.674 | 39.487 | 32.04M
2D U-Net | 0.790 | 0.666 | 45.804 | 31.04M

            Bold values indicate better results.

            Discussion

            AF, a prevalent type of continuous irregular heart rhythm, is caused by different substrates that are extensively dispersed in both atrial chambers [5]. AF also produces further structural changes, such as dilatation, fibrosis and myofiber alterations [6]. Consequently, comprehensive research on the atrial anatomy and its transformations is essential to enhance understanding and management of AF [7]. Recent studies support the visibility of AF-associated structures by LGE-MRI for identifying AF substrates and predicting AF ablation outcomes [8]. On the one hand, the extent and distribution of atrial fibrosis have been suggested to be reliable predictors of the catheter ablation success rate [9, 10]. On the other hand, LA diameter and volume have been shown to provide reliable information for clinical diagnosis [11]. However, the above structural analysis is based on 3D atrial geometry.

Segmenting the LA is a crucial step in quantitatively analyzing the structural characteristics of the atria. However, LA segmentation on LGE-MRI images is difficult, owing to variations in intensity. Recently, deep learning techniques have been proposed for automatically segmenting cardiac structures from medical images in 3D [3, 8]. Multiple studies have demonstrated that convolutional neural networks exhibit superior performance to conventional techniques [2, 12–25]. Hence, we presented a two-stage method using a shared 3D U-Net for LA segmentation. Several notable findings emerged from this study. First, the 3D U-Net architecture achieved better performance than the traditional 2D U-Net architecture. Second, double sequential U-Net architectures (e.g., Double 3D U-Net) achieved superior segmentation results to those of a single U-Net (e.g., 3D U-Net). Finally, the Double 3D U-Net architecture can be optimized with a shared 3D U-Net to decrease model complexity while maintaining good performance.

In the 2018 LA Segmentation Challenge, 17 teams provided methods and their performance to the challenge organizer. As shown in Table 3, the top method, with a Double 3D U-Net architecture, achieved a DICE score of 0.932 and a Jaccard index of 0.874. The Double 3D U-Net design uses the initial 3D U-Net for automatic RoI detection, and a 3D U-Net is subsequently used for the refined regional segmentation. With a similar two-stage strategy, our method achieved superior results to those of one-stage networks (Figure 3). In contrast to the Double 3D U-Net architecture reported by Xia et al., pre-processing (e.g., down-sampling and contrast limited adaptive histogram equalization) and residual connections were not used in our study, and the two 3D U-Nets were replaced by a shared 3D U-Net. Under the same conditions, we observed similar performance between Double 3D U-Net and Shared 3D U-Net (Table 1), but Shared 3D U-Net had lower memory consumption, shorter training times and fewer parameters than Double 3D U-Net. In the present study, we also evaluated the performance of Double 3D U-Net, but our results were not consistent with those obtained in the challenge. This inconsistency is attributable to factors including different running devices, pre-processing approaches, development frameworks, hyperparameter settings and post-processing methods. Nevertheless, several tips for improving model performance may be considered in further work: (1) pre-processing methods (e.g., image resizing to multiple scales, normalization, cropping, use of de-noise filters and down-sampling); (2) post-processing methods (e.g., keeping the largest component, smoothing and dilation-erosion); and (3) network components (e.g., dense connections, dilated convolutions, spatial pyramid pooling, attention units, pre-trained networks and ensemble learning).

            Table 3

            Comparison of our Method with Methods Submitted to the 2018 LA Segmentation Challenge.

Methods | Train | Test | Networks | DICE | Jaccard
Xia et al. [2] | 100 | 54 | Double 3D U-Net | 0.932 | 0.874
Huang et al. [3] | 100 | 54 | Double 3D U-Net | 0.931 | 0.872
Bian et al. [14] | 100 | 54 | Dilated 2D ResNet | 0.926 | 0.869
Vesal et al. [24] | 100 | 54 | Dilated 3D U-Net | 0.925 | 0.861
Yang et al. [25] | 100 | 54 | Double 3D U-Net | 0.925 | 0.860
Li et al. [18] | 100 | 54 | Double 3D U-Net | 0.923 | 0.859
Puybareau et al. [21] | 100 | 54 | 2D FCN with VGG-Net | 0.923 | 0.857
Chen et al. [16] | 100 | 54 | Multi-task 2D U-Net | 0.921 | 0.854
Xu et al. [3] | 100 | 54 | Ensemble 2D U-Net | 0.915 | 0.845
Jia et al. [17] | 100 | 54 | Double Ensemble 2D U-Net | 0.907 | 0.832
Liu et al. [19] | 100 | 54 | 2D U-Net | 0.903 | 0.825
Borra et al. [15] | 100 | 54 | 3D U-Net | 0.898 | 0.817
De Vente et al. [23] | 100 | 54 | 2D U-Net | 0.897 | 0.815
Preetha et al. [20] | 100 | 54 | 2D U-Net | 0.887 | 0.799
Qiao et al. [26] | 100 | 54 | Multi-atlas segmentation | 0.861 | 0.758
Nuñez-Garcia et al. [27] | 100 | 54 | Multi-atlas segmentation | 0.859 | 0.758
Savioli et al. [22] | 100 | 54 | 3D FCN | 0.851 | 0.744
Liu et al. [28] | 80 | 20 | V-Net | 0.91 | 0.84
Milletari et al. [13] | 80 | 20 | V-Net | 0.90 | 0.82
Çiçek et al. [12] | 80 | 20 | 3D U-Net | 0.87 | 0.78
Liu et al. [29] | 80 | 20 | UNSMLNet | 0.92 | 0.85
Ours | 80 | 20 | Shared 3D U-Net | 0.918 | 0.848

            Bold values indicate better results.

            Training the shared 3D U-Net on more reliable data could potentially enhance its accuracy. In the present study, our model trained on a 2018 dataset achieved a DICE score of 0.918, but when tested on a 2013 dataset, it achieved a DICE score of 0.851. Therefore, its generalizability is insufficient, and this important aspect must be further studied in the future. Furthermore, we intend to use the shared 3D U-Net to segment both atrial chambers and fibrosis, given that AF is a bi-chamber disease. Our current focus is on developing a dataset that includes manual segmentations of atrial chamber masks, which may be used to train the 3D U-Net model collaboratively.

            Conclusions

We proposed an efficient algorithm for fully automatic 3D left atrial segmentation. In our method, a 3D U-Net is used to extract 3D features, and a two-stage strategy is used to decrease the segmentation error caused by the class imbalance problem. Our network architecture, with only one shared 3D U-Net, has relatively low complexity, low memory requirements and a short training time. Our automatic method was highly reproducible and objective, producing a DICE score of 0.918 and a Jaccard index of 0.848, thus outperforming the six existing methods. Before its application, its performance must be further evaluated on an edge computing platform, and its effectiveness must be assessed in clinical settings.

            Conflict of Interest

            The authors declare that they have no competing interests.

            Data and Materials Sharing

            Relevant data can be obtained from the following website: http://atriaseg2018.cardiacatlas.org/.

            Code is available at: https://github.com/Yaucleo/Shared.3D.U-Net.

            Citation Information

            References

1. MRI of the left atrium: predicting clinical outcomes in patients with atrial fibrillation. Expert Rev Cardiovasc Ther 2011;9(1):105–11.

2. Automatic 3D atrial segmentation from GE-MRIs using volumetric fully convolutional networks. In: International Workshop on Statistical Atlases and Computational Models of the Heart. Springer; 2018. pp. 211–20.

3. A global benchmark of algorithms for segmenting the left atrium from late gadolinium-enhanced cardiac magnetic resonance imaging. Med Image Anal 2021;67:101832.

4. Fully automatic left atrium segmentation from late gadolinium enhanced magnetic resonance imaging using a dual fully convolutional neural network. IEEE Trans Med Imaging 2019;38(2):515–24.

5. Adiposity-associated atrial fibrillation: molecular determinants, mechanisms, and clinical significance. Cardiovasc Res 2023;119(3):614–30.

6. Atrial fibrillation driven by micro-anatomic intramural re-entry revealed by simultaneous sub-epicardial and sub-endocardial optical mapping in explanted human hearts. Eur Heart J 2015;36(35):2390–401.

7. Three-dimensional integrated functional, structural, and computational mapping to define the structural "fingerprints" of heart-specific atrial fibrillation drivers in human heart ex vivo. J Am Heart Assoc 2017;6(8):e005922.

8. Medical image analysis on left atrial LGE MRI for atrial fibrillation studies: a review. Med Image Anal 2022;77:102360.

9. Assessment of left atrial fibrosis by late gadolinium enhancement magnetic resonance imaging: methodology and clinical implications. JACC Clin Electrophysiol 2017;3(8):791–802.

10. Quantification of left atrial fibrosis by 3D late gadolinium-enhanced cardiac magnetic resonance imaging in patients with atrial fibrillation: impact of different analysis methods. Eur Heart J Cardiovasc Imaging 2022;23(9):1182–90.

11. Reference left atrial dimensions and volumes by steady state free precession cardiovascular magnetic resonance. J Cardiovasc Magn Reson 2010;12(1):65.

12. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 2016. pp. 424–32.

13. U-Net: convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer; 2015.

14. Pyramid network with online hard example mining for accurate left atrium segmentation. In: International Workshop on Statistical Atlases and Computational Models of the Heart. Springer; 2018. pp. 237–45.

15. A semantic-wise convolutional neural network approach for 3-D left atrium segmentation from late gadolinium enhanced magnetic resonance imaging. In: International Workshop on Statistical Atlases and Computational Models of the Heart. Springer; 2018. pp. 329–38.

16. Multi-task learning for left atrial segmentation on GE-MRI. In: International Workshop on Statistical Atlases and Computational Models of the Heart. Springer; 2018. pp. 292–301.

17. Automatically segmenting the left atrium from cardiac images using successive 3D U-nets and a contour loss. In: International Workshop on Statistical Atlases and Computational Models of the Heart. Springer; 2018. pp. 221–29.

18. Attention based hierarchical aggregation network for 3D left atrial segmentation. In: International Workshop on Statistical Atlases and Computational Models of the Heart. Springer; 2018. pp. 255–64.

19. Deep learning based method for left atrial segmentation in GE-MRI. In: International Workshop on Statistical Atlases and Computational Models of the Heart. Springer; 2018. pp. 311–18.

20. Segmentation of the left atrium from 3D gadolinium-enhanced MR images with convolutional neural networks. In: International Workshop on Statistical Atlases and Computational Models of the Heart. Springer; 2018. pp. 265–72.

21. Left atrial segmentation in a few seconds using fully convolutional network and transfer learning. In: International Workshop on Statistical Atlases and Computational Models of the Heart. Springer; 2018. pp. 339–47.

22. V-FCNN: volumetric fully convolution neural network for automatic atrial segmentation. In: International Workshop on Statistical Atlases and Computational Models of the Heart. Springer; 2018. pp. 273–81.

23. Convolutional neural networks for segmentation of the left atrium from gadolinium-enhancement MRI images. In: International Workshop on Statistical Atlases and Computational Models of the Heart. Springer; 2018. pp. 348–56.

24. Dilated convolutions in neural networks for left atrial segmentation in 3D gadolinium enhanced-MRI. In: International Workshop on Statistical Atlases and Computational Models of the Heart. Springer; 2018. pp. 319–28.

25. Combating uncertainty with novel losses for automatic left atrium segmentation. In: International Workshop on Statistical Atlases and Computational Models of the Heart. Springer; 2018. pp. 246–54.

26. Fully automated left atrium cavity segmentation from 3D GE-MRI by multi-atlas selection and registration. In: International Workshop on Statistical Atlases and Computational Models of the Heart. Springer; 2018. pp. 230–36.

27. Left atrial segmentation combining multi-atlas whole heart labeling and shape-based atlas selection. In: International Workshop on Statistical Atlases and Computational Models of the Heart. Springer; 2018. pp. 302–10.

28. A contrastive consistency semi-supervised left atrium segmentation model. Comput Med Imaging Graph 2022;99:102092.

29. Uncertainty-guided symmetric multilevel supervision network for 3D left atrium segmentation in late gadolinium-enhanced MRI. Med Phys 2022;49(7):4554–65.

            Author and article information

            Journal
            CVIA
            Cardiovascular Innovations and Applications
            CVIA
            Compuscript (Ireland )
            2009-8782
            2009-8618
            16 June 2023
            : 8
            : 1
            : e976
            Affiliations
            [1] 1College of Information Science and Technology, Jinan University, Guangzhou, China
            [2] 2Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
            [3] 3Department of Magnetic Resonance, First Affiliated Hospital of Harbin Medical University, Harbin, China
            [4] 4Department of Cardiology, First Affiliated Hospital of Harbin Medical University, Harbin, China
            Author notes
Correspondence: Jieyun Bai, E-mail: bai_jieyun@126.com
            Article
            cvia.2023.0039
            10.15212/CVIA.2023.0039
            f2e03a34-5465-4493-a89e-f15eb0643c0d
            Copyright © 2023 Cardiovascular Innovations and Applications

            This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 Unported License (CC BY-NC 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See https://creativecommons.org/licenses/by-nc/4.0/.

            History
            : 05 December 2022
            : 12 March 2023
            : 23 May 2023
            Page count
            Figures: 4, Tables: 3, References: 29, Pages: 9
            Funding
            Funded by: Guangdong Provincial Natural Science Foundation
            Award ID: 2023A1515012833
            Funded by: Science and Technology Program of Guangzhou
            Award ID: 202201010544
            Funded by: National Natural Science Foundation of China
            Award ID: 61901192
            Funded by: National Key Research and Development Project
            Award ID: 2019YFC0120100
            Funded by: National Key Research and Development Project
            Award ID: 2019YFC0121907
            Funded by: National Key Research and Development Project
            Award ID: 2019YFC0121904
            Funded by: Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization
            Award ID: 2021B1212040007
            This research was funded by the Guangdong Provincial Natural Science Foundation (2023A1515012833; J.B.), Science and Technology Program of Guangzhou (202201010544; J.B.), the National Natural Science Foundation of China (61901192; J.B.), National Key Research and Development Project (2019YFC0120100, 2019YFC0121907 and 2019YFC0121904; H.W., J.B. and Y.L.) and Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization (2021B1212040007).
            Categories
            Research Article

General medicine, Medicine, Geriatric medicine, Transplantation, Cardiovascular Medicine, Anesthesiology & Pain management
Atrial fibrillation, Left atrium, Deep learning, Image segmentation, Magnetic resonance imaging
