Original Research Article
Front. Artif. Intell.
Sec. Medicine and Public Health
Volume 7 - 2024 | doi: 10.3389/frai.2024.1482141
This article is part of the Research Topic "Outbreak Oracles: How AI's Journey through COVID-19 Shapes Future Epidemic Strategy".
Towards Explainable Deep Learning in Healthcare through Transition Matrix and User-Friendly Features
Provisionally accepted
- 1 Khmelnytskyi National University, Khmel’nyts’kyy, Ukraine
- 2 Taras Shevchenko National University of Kyiv, Kyiv, Ukraine
- 3 National Aerospace University – Kharkiv Aviation Institute, Kharkiv, Ukraine
- 4 V. N. Karazin Kharkiv National University, Kharkiv, Ukraine
Modern artificial intelligence (AI) solutions often face challenges due to the “black box” nature of deep learning (DL) models, which limits their transparency and trustworthiness in critical medical applications. In this study, we propose and evaluate a scalable approach based on a transition matrix to enhance the interpretability of DL models in medical signal and image processing by translating complex model decisions into user-friendly and justifiable features for healthcare professionals. The criteria for choosing interpretable features were clearly defined, incorporating clinical guidelines and expert rules to align model outputs with established medical standards. The proposed approach was tested on two medical datasets: electrocardiography (ECG) for arrhythmia detection and magnetic resonance imaging (MRI) for heart disease classification. The outputs of the DL models were compared with expert annotations using Cohen’s Kappa coefficient to assess agreement, yielding coefficients of 0.89 for the ECG dataset and 0.80 for the MRI dataset. These results demonstrate strong agreement, underscoring the reliability of the approach in providing accurate, understandable, and justifiable explanations of DL model decisions. The scalability of the approach suggests its potential applicability across various medical domains, enhancing the generalizability and utility of DL models in healthcare while addressing practical challenges and ethical considerations.
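Since this page carries only the abstract, the transition-matrix mechanism is not spelled out here. The sketch below is a minimal illustration, assuming the matrix linearly projects a model's internal feature vector onto a small set of clinician-defined features, with agreement against expert labels then scored via Cohen's Kappa (scikit-learn's cohen_kappa_score). All array shapes, the feature names, and the decision rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): a transition matrix T is
# assumed to map a DL model's internal feature vector onto clinician-defined,
# user-friendly features; agreement with expert labels is scored with Cohen's
# Kappa. All names, shapes, and the random data are illustrative assumptions.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical model embeddings for N samples (e.g., ECG beats), D dims each.
N, D, K = 200, 64, 5          # K = number of user-friendly features/classes
embeddings = rng.normal(size=(N, D))

# Assumed transition matrix: maps embedding dimensions to interpretable
# features (e.g., "QRS duration elevated", "ST-segment deviation", ...).
T = rng.normal(size=(D, K))

# Project model decisions into the interpretable feature space.
interpretable_features = embeddings @ T          # shape (N, K)

# Illustrative decision rule: the dominant interpretable feature per sample
# stands in for the model's label in the clinician-facing space.
model_labels = interpretable_features.argmax(axis=1)

# Expert annotations over the same K classes (random stand-ins here; in the
# study these were expert labels for arrhythmia / heart-disease categories).
expert_labels = rng.integers(0, K, size=N)

# Cohen's Kappa quantifies chance-corrected agreement, the metric behind the
# abstract's reported 0.89 (ECG) and 0.80 (MRI) coefficients.
kappa = cohen_kappa_score(model_labels, expert_labels)
print(f"Cohen's Kappa: {kappa:.2f}")
```

In practice the two label arrays would come from the ECG/MRI pipelines rather than a random generator; when the classes are ordinal, passing weights="quadratic" to cohen_kappa_score is a common choice.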
Keywords: healthcare, artificial intelligence, deep learning, medical signal processing, medical image analysis, model interpretability
Received: 17 Aug 2024; Accepted: 06 Nov 2024.
Copyright: © 2024 Barmak, Krak, Yakovlev, Manziuk, Radiuk and Kuznetsov. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Sergiy Yakovlev, National Aerospace University – Kharkiv Aviation Institute, Kharkiv, 61070, Ukraine