ORIGINAL RESEARCH article

Front. Artif. Intell.
Sec. Medicine and Public Health
Volume 7 - 2024 | doi: 10.3389/frai.2024.1456069

Advanced Interpretable Diagnosis of Alzheimer's Disease Using SECNN-RF Framework with Explainable AI

Provisionally accepted

The final, formatted version of the article will be published soon.

    Early detection of Alzheimer's disease (AD) is vital for effective treatment, as interventions are most successful in the disease's early stages. Combining Magnetic Resonance Imaging (MRI) with artificial intelligence (AI) offers significant potential for enhancing AD diagnosis. However, traditional AI models often lack transparency in their decision-making processes. Explainable Artificial Intelligence (XAI) is an evolving field that aims to make AI decisions understandable to humans, providing transparency and insight into AI systems. This research introduces the Squeeze-and-Excitation Convolutional Neural Network with Random Forest (SECNN-RF) framework for early AD detection from MRI scans. The SECNN-RF integrates Squeeze-and-Excitation (SE) blocks into a Convolutional Neural Network (CNN) to emphasize the most informative features and uses Dropout layers to prevent overfitting; a Random Forest classifier then categorizes the extracted deep features. The SECNN-RF demonstrates high accuracy (99.89%) and offers an explainability analysis that enhances the model's interpretability. Further exploration of the SECNN framework involved substituting the Random Forest classifier with other machine learning algorithms, namely Decision Tree, XGBoost, Support Vector Machine, and Gradient Boosting. While all of these classifiers improved model performance, Random Forest achieved the highest accuracy, followed closely by XGBoost, then Gradient Boosting and Support Vector Machine, with Decision Tree achieving the lowest accuracy.
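    The pipeline described in the abstract can be illustrated with a minimal sketch: an SE-augmented CNN extracts features from MRI slices, and a Random Forest classifies them. This is not the authors' released implementation; the layer counts, filter sizes, SE reduction ratio, input shape, and the placeholder arrays are illustrative assumptions only.

```python
# Minimal sketch of the SECNN-RF idea (assumed settings, not the
# authors' exact architecture): SE-augmented CNN feature extractor
# followed by a Random Forest classifier on the extracted features.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from sklearn.ensemble import RandomForestClassifier

def se_block(x, reduction=16):
    """Squeeze-and-Excitation: reweight channels using global context."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)                 # squeeze
    s = layers.Dense(channels // reduction, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)    # excitation
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])                       # channel-wise scaling

def build_secnn(input_shape=(128, 128, 1)):
    """CNN backbone with an SE block and Dropout after each conv stage."""
    inp = layers.Input(shape=input_shape)
    x = inp
    for filters in (32, 64, 128):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = se_block(x)
        x = layers.MaxPooling2D()(x)
        x = layers.Dropout(0.25)(x)                        # regularization
    x = layers.GlobalAveragePooling2D()(x)
    feats = layers.Dense(128, activation="relu", name="features")(x)
    return Model(inp, feats)

# In the paper's pipeline the CNN would first be trained end-to-end
# (e.g., with a softmax head) before its features are handed to the
# classifier; the random arrays here only exercise the shapes.
extractor = build_secnn()
X_train = np.random.rand(8, 128, 128, 1).astype("float32")  # hypothetical MRI slices
y_train = np.random.randint(0, 4, size=8)                   # hypothetical labels
features = extractor.predict(X_train, verbose=0)
rf = RandomForestClassifier(n_estimators=100).fit(features, y_train)
```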

    Keywords: explainable artificial intelligence (XAI), Squeeze-and-Excitation (SE block), Squeeze-and-Excitation Convolutional Neural Network with Random Forest (SECNN-RF), Explainability analysis, saliency map
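    The keywords name saliency maps as the explainability method. A minimal sketch of a vanilla gradient saliency map follows, assuming `model` is a trained classifier ending in softmax and `image` is one preprocessed MRI slice; the article's exact XAI procedure may differ.

```python
# Hedged sketch of a gradient-based saliency map: highlights which
# input pixels most influence the predicted class score.
import tensorflow as tf

def saliency_map(model, image):
    """Return |d(score)/d(input)| for the top predicted class."""
    x = tf.convert_to_tensor(image[None, ...])          # add batch dimension
    with tf.GradientTape() as tape:
        tape.watch(x)
        probs = model(x)
        top_class = tf.argmax(probs[0])
        score = probs[0, top_class]
    grads = tape.gradient(score, x)                     # sensitivity of score to pixels
    return tf.reduce_max(tf.abs(grads), axis=-1)[0]     # collapse channel axis
```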

    Received: 01 Jul 2024; Accepted: 12 Aug 2024.

    Copyright: © 2024 Abdelaziz, Said, AbdElhafeez and Ali. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Mohamed M. AbdElhafeez, Zagazig University, Zagazig, Egypt

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.