ORIGINAL RESEARCH article

Front. Oncol.
Sec. Cancer Epidemiology and Prevention
Volume 15 - 2025 | doi: 10.3389/fonc.2025.1535478
This article is part of the Research Topic: Harnessing Explainable AI for Precision Cancer Diagnosis and Prognosis.

Explainable AI in Medical Imaging: An Interpretable and Collaborative Federated Learning Model for Brain Tumor Classification

Provisionally accepted
  • 1 University of the West of England, Bristol, United Kingdom
  • 2 Prince Mohammad bin Fahd University, Khobar, Saudi Arabia
  • 3 Najran University, Najran, Saudi Arabia
  • 4 Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
  • 5 University of Essex, Colchester, East of England, United Kingdom

The final, formatted version of the article will be published soon.

    Introduction: A brain tumor is a collection of abnormal cells in the brain that can become life-threatening because of its ability to spread. Prompt and accurate classification of brain tumors is therefore essential in healthcare. Magnetic Resonance Imaging (MRI) produces high-quality images of soft tissue and is considered the principal technology for diagnosing brain tumors. Recently, computer vision techniques such as deep learning (DL) have played an important role in brain tumor classification. Most of these approaches rely on traditional centralized classification models, which face significant challenges due to the limited availability of diverse and representative datasets and the difficulty of obtaining a transparent model. This study proposes a collaborative federated learning model (CFLM) with explainable artificial intelligence (XAI) to mitigate these problems using state-of-the-art methods.

    Methods: The proposed method addresses a four-class classification problem to identify glioma, meningioma, no tumor, and pituitary tumors. We integrate GoogLeNet with a federated learning (FL) framework to enable collaborative learning across multiple devices while keeping sensitive information private on each device. This study also focuses on interpretability, making the model transparent using Gradient-weighted Class Activation Mapping (Grad-CAM) and saliency map visualizations.

    Results: In total, 10 clients were selected for the proposed model, each training on a decentralized local dataset over 50 communication rounds. The proposed approach achieves 94% classification accuracy. Moreover, we incorporate Grad-CAM heat maps and saliency maps to offer interpretability and meaningful graphical explanations for healthcare specialists.

    Discussion: This study presents an efficient and interpretable model for brain tumor classification by integrating FL with the GoogLeNet architecture. The proposed framework has great potential to make brain tumor classification more reliable and transparent for clinical use.
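    To illustrate the kind of setup the abstract describes, the sketch below shows a minimal FedAvg-style training loop in PyTorch with a GoogLeNet backbone adapted to four classes. It is an illustration under assumptions, not the authors' implementation: the abstract specifies only 10 clients, 50 communication rounds, and four classes, so the aggregation rule (weighted FedAvg), optimizer, learning rate, local epochs, and all function names (build_model, local_update, fed_avg, train_federated) are hypothetical.

    ```python
    # Minimal sketch of federated training with a GoogLeNet backbone.
    # Assumptions: PyTorch + torchvision; FedAvg-style weighted aggregation;
    # each client holds its own MRI DataLoader and raw data never leaves the client.
    import copy
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLIENTS = 10   # as stated in the abstract
    NUM_ROUNDS = 50    # communication rounds, as stated in the abstract
    NUM_CLASSES = 4    # glioma, meningioma, no tumor, pituitary

    def build_model():
        # GoogLeNet backbone with a 4-class head; auxiliary classifiers disabled for simplicity.
        model = models.googlenet(weights=None, aux_logits=False, init_weights=True)
        model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
        return model

    def local_update(global_model, loader, epochs=1, lr=1e-3, device="cpu"):
        # One client's local training on its private data shard.
        model = copy.deepcopy(global_model).to(device)
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        loss_fn = nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        return model.state_dict(), len(loader.dataset)

    def fed_avg(states_and_sizes):
        # Weighted average of client parameters, proportional to local dataset size.
        total = sum(n for _, n in states_and_sizes)
        avg = copy.deepcopy(states_and_sizes[0][0])
        for key in avg:
            stacked = torch.stack([sd[key].float() * (n / total) for sd, n in states_and_sizes])
            avg[key] = stacked.sum(dim=0).to(avg[key].dtype)
        return avg

    def train_federated(client_loaders, device="cpu"):
        # Server loop: broadcast global weights, collect local updates, aggregate.
        global_model = build_model()
        for rnd in range(NUM_ROUNDS):
            results = [local_update(global_model, loader, device=device)
                       for loader in client_loaders]
            global_model.load_state_dict(fed_avg(results))
            print(f"round {rnd + 1}/{NUM_ROUNDS} aggregated")
        return global_model
    ```

    For the interpretability step, Grad-CAM heat maps and saliency maps of the kind the abstract mentions can be produced from the trained global model by hooking the activations and gradients of the last convolutional block, but the specific layers and visualization settings used by the authors are not given in the abstract.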

    Keywords: Explainable AI, Federated learning, brain tumors, GoogLeNet, medical diagnosis

    Received: 27 Nov 2024; Accepted: 23 Jan 2025.

    Copyright: © 2025 Mastoi, Latif, Brohi, Ahmad, Alqhatani, Alshehri, Al Mazroa and Ullah. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Rahmat Ullah, University of Essex, Colchester, CO4 3SQ, East of England, United Kingdom

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.