ORIGINAL RESEARCH article
Front. Med.
Sec. Nuclear Medicine
Volume 12 - 2025 | doi: 10.3389/fmed.2025.1530361
This article is part of the Research Topic "Methods and Strategies for Integrating Medical Images Acquired from Distinct Modalities".
Automatic Segmentation and Volume Measurement of Anterior Visual Pathway in Brain 3D-T1WI Using Deep Learning
Provisionally accepted
1 First Affiliated Hospital of Chongqing Medical University, Chongqing, China
2 School of Physics and Electronic Engineering, Chongqing Normal University, Chongqing, China
3 College of Computer and Information Science, Chongqing Normal University, Chongqing, China
Objective: Accurate segmentation of the anterior visual pathway (AVP) is vital for clinical applications, but manual delineation is time-consuming and resource-intensive. We aimed to explore the feasibility of automatic AVP segmentation and volume measurement in brain T1-weighted imaging (T1WI) using the 3D UX-Net deep-learning model.
Methods: Clinical data and brain 3D T1WI from 119 adults were retrospectively collected. Two radiologists annotated the course of the AVP in each participant's images. The dataset was randomly divided into training (n=89), validation (n=15), and test (n=15) sets. A 3D UX-Net segmentation model was trained on the training set, with hyperparameters optimized on the validation set. Model accuracy was evaluated on the test set using the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and average symmetric surface distance (ASSD). The performance of 3D UX-Net was compared against 3D U-Net, Swin UNEt TRansformers (Swin UNETR), UNETR++, and the Swin Soft Mixture Transformer (Swin SMT). The AVP volume in the test set was calculated from the model's effective voxel volume, with the volume difference (VD) assessing measurement accuracy. The average AVP volume across all subjects was derived from the automatic segmentation of 3D UX-Net.
Results: The 3D UX-Net achieved the highest DSC (0.893 ± 0.017), followed by Swin SMT (0.888 ± 0.018), 3D U-Net (0.875 ± 0.019), Swin UNETR (0.870 ± 0.017), and UNETR++ (0.861 ± 0.020). For surface distance metrics, 3D UX-Net demonstrated the lowest median ASSD (0.234 mm [0.188-0.273]). The VD of Swin SMT was significantly lower than that of 3D U-Net (p = 0.008), while no statistically significant differences were observed among the other groups. All models exhibited identical HD95 (1 mm [1-1]). Automatic segmentation across all subjects yielded a mean AVP volume of 1446.78 ± 245.62 mm³, closely matching manual segmentation (VD = 0.068 ± 0.064). Significant sex-based volume differences were identified (p < 0.001), but no correlation with age was observed.
Conclusion: We provide normative values for the automatic MRI measurement of the AVP in adults. The 3D UX-Net model based on brain T1WI achieves high accuracy in segmenting the AVP and measuring its volume.
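The abstract does not give the authors' implementation; as a minimal sketch under standard definitions, the DSC, the voxel-based volume measurement, and the relative volume difference (VD) reported above are typically computed from binary masks and the image's voxel spacing as follows (function names and the NumPy-based formulation are illustrative, not taken from the paper):

```python
import numpy as np


def dice_coefficient(pred, gt):
    """Dice similarity coefficient: 2|A∩B| / (|A|+|B|) for two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, gt).sum() / denom


def volume_mm3(mask, spacing):
    """Volume = (foreground voxel count) x (voxel volume from spacing in mm)."""
    return float(mask.astype(bool).sum()) * float(np.prod(spacing))


def volume_difference(pred_vol, gt_vol):
    """Relative volume difference |V_pred - V_gt| / V_gt."""
    return abs(pred_vol - gt_vol) / gt_vol
```

For example, with an isotropic 1 mm spacing, a predicted mask of 32 voxels and a ground-truth mask of 16 voxels that is fully contained in it give DSC = 2·16/(32+16) ≈ 0.667 and VD = |32−16|/16 = 1.0.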
Keywords: Magnetic Resonance Imaging, Optic Nerve, deep learning, Convolutional Neural Network, Medical Image Processing
Received: 18 Nov 2024; Accepted: 11 Apr 2025.
Copyright: © 2025 Han, Wang, Luo, Wang, Zeng, Zheng, Dai, Wei, Zhu, Lin, Cui and Li. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Shaoguo Cui, College of Computer and Information Science, Chongqing Normal University, Chongqing, 401331, China
Yongmei Li, First Affiliated Hospital of Chongqing Medical University, Chongqing, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.