About this Research Topic
Despite these technological strides, the fusion of AI with biosensor technology raises challenges for the transparency and interpretability of computational decisions and outcomes. This gap in explainability is a barrier to broader acceptance of, and trust in, AI applications within medical diagnostics, where understanding the basis of AI recommendations is crucial. Healthcare professionals often remain skeptical of AI outputs that do not clearly outline the factors contributing to a diagnostic conclusion. Improving the clarity and comprehensibility of AI processes in medical diagnostics is therefore not merely beneficial but essential for integrating these technologies into healthcare practice, thereby strengthening both practitioner trust and patient care outcomes.
This Research Topic aims to elevate the precision and usability of medical biosensor technologies through the development of interpretable and explainable models. To gather further insights within this scope, we welcome articles addressing, but not limited to, the following themes:
• Explainable Machine Learning in Medical Biosensors
• Explainable Deep Learning in Medical Biosensors
• Explainable Models for Biosensing Data Analysis
• Explainable Models for Biosensing Imaging Technology
• User-Friendly Interfaces for Biosensor Data Interpretation
• Human-AI Interaction in Medical Diagnostics
• Explaining AI-Driven Automatic Biomedical Diagnosis
• Optimization-Driven Biomedical Data Enhancement
• Explainable Models for Biomedical Engineering Data Analysis
• Explainable Methods for Biomedical Engineering Foundation Models
Keywords: artificial intelligence, deep learning, biosensors, biosensing
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.