AUTHOR=Cai Die, Pan Minmin, Liu Chenyuan, He Wenjie, Ge Xinting, Lin Jiaying, Li Rui, Liu Mengting, Xia Jun TITLE=Deep-learning-based segmentation of perivascular spaces on T2-weighted 3T magnetic resonance images JOURNAL=Frontiers in Aging Neuroscience VOLUME=16 YEAR=2024 URL=https://www.frontiersin.org/journals/aging-neuroscience/articles/10.3389/fnagi.2024.1457405 DOI=10.3389/fnagi.2024.1457405 ISSN=1663-4365 ABSTRACT=Purpose

Studying perivascular spaces (PVSs) is important for understanding the pathogenesis and pathological changes of neurological disorders. Although several methods for automated segmentation of PVSs have been proposed, most rely on 7T MR images acquired mainly from healthy young adults, and 7T MR imaging is rarely used in clinical practice. Herein, we propose a deep-learning-based method for automatic segmentation of PVSs on T2-weighted 3T MR images.

Method

Twenty patients with Parkinson’s disease (age range, 42–79 years) participated in this study. We introduce a multi-scale supervised dense nested attention network to segment the PVSs. The model promotes progressive interaction between high-level and low-level features and applies deep supervision to multi-scale foreground content, refining the segmentation results at each level.
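
To make the multi-scale deep-supervision idea concrete, the following is a minimal PyTorch sketch (not the authors' released code): an auxiliary 1×1×1 segmentation head is attached to the decoder features at each scale, every prediction is upsampled to the mask resolution, and the per-scale losses are summed. All module names, channel sizes, patch dimensions, and the choice of binary cross-entropy are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisionHead(nn.Module):
    # 1x1x1 convolution producing a single-channel PVS logit map at one scale
    def __init__(self, in_channels):
        super().__init__()
        self.conv = nn.Conv3d(in_channels, 1, kernel_size=1)

    def forward(self, feat):
        return self.conv(feat)

def multi_scale_supervised_loss(decoder_feats, heads, mask):
    # decoder_feats: feature maps (B, C_i, D_i, H_i, W_i) at several decoder scales
    # heads:         one SupervisionHead per scale
    # mask:          full-resolution binary PVS mask (B, 1, D, H, W)
    total = 0.0
    for feat, head in zip(decoder_feats, heads):
        # upsample each scale's prediction to the mask resolution before comparing
        logits = F.interpolate(head(feat), size=mask.shape[2:],
                               mode="trilinear", align_corners=False)
        total = total + F.binary_cross_entropy_with_logits(logits, mask)
    return total

# toy usage with hypothetical channel sizes and patch dimensions
feats = [torch.randn(1, c, s, s, s) for c, s in [(32, 32), (64, 16), (128, 8)]]
heads = nn.ModuleList(SupervisionHead(c) for c in (32, 64, 128))
mask = (torch.rand(1, 1, 32, 32, 32) > 0.95).float()
loss = multi_scale_supervised_loss(feats, heads, mask)
loss.backward()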

Result

Our method outperformed four other deep-learning-based methods, achieving a Dice similarity coefficient (DSC) of 0.702. PVS counts derived from our model correlated strongly with expert visual scores on the T2-weighted images (basal ganglia: rs = 0.845, P < 0.001 and rs = 0.868, P < 0.001; centrum semiovale: rs = 0.845, P < 0.001 and rs = 0.823, P < 0.001 for raters 1 and 2, respectively). These results indicate that the proposed method segments PVSs reliably.
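
For reference, the two reported measures can be sketched as follows, assuming binary segmentation masks and per-subject PVS counts; the array contents below are hypothetical illustrations, not study data, and the Spearman correlation uses scipy.stats.spearmanr.

import numpy as np
from scipy.stats import spearmanr

def dice_similarity_coefficient(pred, truth):
    # DSC = 2 * |P intersect T| / (|P| + |T|) for binary masks
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# agreement between automated PVS counts and an expert rater's visual counts
auto_counts = np.array([12, 7, 20, 15, 9])    # hypothetical per-subject counts
rater_counts = np.array([11, 8, 22, 14, 10])  # hypothetical expert counts
rs, p_value = spearmanr(auto_counts, rater_counts)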

Conclusion

The proposed method accurately segments PVSs, facilitating practical clinical applications, and is expected to replace visual counting performed directly on T1-weighted or T2-weighted images.