
METHODS article
Front. Aging Neurosci.
Sec. Alzheimer's Disease and Related Dementias
Volume 17 - 2025 | doi: 10.3389/fnagi.2025.1532470
This article is part of the Research Topic Explainable AI for Neuroimaging Biomarkers in Disease Detection and Monitoring.
Conventional computer-aided diagnostic techniques for Alzheimer's disease (AD) rely predominantly on magnetic resonance imaging (MRI) in isolation. Imaging genetics methods, which link genetic variation to brain structure across disease progression, can facilitate early prediction of AD development. While deep learning methods based on MRI have demonstrated promising results for early AD diagnosis, limited dataset sizes have led most AD imaging-genetics studies to rely on statistical approaches. Existing deep learning approaches typically use pre-defined regions of interest and risk variants from known susceptibility genes, with relatively simple feature fusion schemes that fail to fully capture the relationship between images and genes. To address these limitations, we proposed a multi-modal deep learning classification network based on MRI and single nucleotide polymorphism (SNP) data for AD diagnosis and mild cognitive impairment (MCI) progression prediction. Our model leveraged a convolutional neural network (CNN) to extract whole-brain structural features, a Transformer network to capture genetic features, and a cross-transformer-based network for comprehensive feature fusion. Furthermore, we incorporated an attention-map-based interpretability method to analyze and elucidate the structural and risk variants associated with AD and their interrelationships. The proposed model was trained and evaluated on 1541 subjects from the ADNI database. Experimental results underscored the superior performance of our model in effectively integrating and leveraging information from both modalities, thereby enhancing the accuracy of AD diagnosis and prediction.
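To make the described architecture concrete, the following is a minimal PyTorch sketch of the general idea (a 3D CNN encoder for whole-brain MRI, a Transformer encoder for SNP genotypes, and bidirectional cross-attention fusion with inspectable attention maps). It is not the authors' implementation; all layer sizes, the SNP count, module names, and the two-class output head are illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch (not the authors' code) of CNN + Transformer + cross-attention fusion.
# All dimensions, the SNP count, and class names are assumed for illustration only.
import torch
import torch.nn as nn


class MRIEncoder(nn.Module):
    """3D CNN that maps a whole-brain MRI volume to a sequence of spatial tokens."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(64, dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):                       # x: (B, 1, D, H, W)
        f = self.conv(x)                        # (B, dim, d, h, w)
        return f.flatten(2).transpose(1, 2)     # (B, num_tokens, dim)


class SNPEncoder(nn.Module):
    """Transformer encoder over embedded SNP genotypes (minor-allele counts 0/1/2)."""
    def __init__(self, n_snps=1000, dim=128, depth=2, heads=4):
        super().__init__()
        self.embed = nn.Embedding(3, dim)                     # genotype -> vector
        self.pos = nn.Parameter(torch.zeros(1, n_snps, dim))  # per-SNP position
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, g):                       # g: (B, n_snps) integer genotypes
        return self.encoder(self.embed(g) + self.pos)         # (B, n_snps, dim)


class CrossTransformerFusion(nn.Module):
    """Bidirectional cross-attention: each modality queries the other."""
    def __init__(self, dim=128, heads=4, n_classes=2):
        super().__init__()
        self.img_from_gene = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gene_from_img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(2 * dim, n_classes)             # e.g. AD vs. CN

    def forward(self, img_tokens, gene_tokens):
        img_fused, attn_ig = self.img_from_gene(img_tokens, gene_tokens, gene_tokens)
        gene_fused, attn_gi = self.gene_from_img(gene_tokens, img_tokens, img_tokens)
        pooled = torch.cat([img_fused.mean(1), gene_fused.mean(1)], dim=-1)
        # attn_ig / attn_gi can be inspected post hoc as interpretability maps
        return self.head(pooled), attn_ig, attn_gi


if __name__ == "__main__":
    mri = torch.randn(2, 1, 64, 64, 64)          # toy MRI volumes
    snps = torch.randint(0, 3, (2, 1000))        # toy genotype vectors
    logits, attn_ig, attn_gi = CrossTransformerFusion()(MRIEncoder()(mri), SNPEncoder()(snps))
    print(logits.shape, attn_ig.shape, attn_gi.shape)
```

In this sketch the cross-attention weights (image tokens attending over SNPs and vice versa) play the role of the attention maps used for interpretability, under the assumption that a cross-transformer fusion resembles standard bidirectional cross-attention.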
Keywords: Multiscale deep convolutional networks, Alzheimer's disease, MRI, SNP, transformer
Received: 26 Nov 2024; Accepted: 05 Mar 2025.
Copyright: © 2025 Li, Niu, Qi, Liang and Long. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Xiaojing Long, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.