In the dynamic fields of biosensors and biosensing technology, artificial intelligence (AI) and deep learning (DL) have catalyzed significant innovations, enabling advanced data interpretation across numerous biomedical applications. As biosensors generate extensive datasets, AI and DL are pivotal in decoding complex biological data, such as DNA sequences, protein structures, and cellular imagery, improving accuracy and enabling rapid classification of biological samples. Furthermore, AI-enhanced biosensors support real-time health monitoring and diagnostics, providing critical insights into patient health by analyzing physiological parameters through AI-driven predictive models that anticipate disease progression and individual health risks.
Despite these technological strides, the fusion of AI with biosensor technology poses challenges to the transparency and interpretability of computational decisions and outcomes. This gap in explainability is a barrier to broader acceptance of, and trust in, AI applications within medical diagnostics, where understanding the basis of AI recommendations is crucial. Healthcare professionals often remain skeptical of AI outputs that do not clearly identify the factors contributing to a diagnostic conclusion. Consequently, improving the clarity and comprehensibility of AI processes in medical diagnostics is not merely beneficial but essential for integrating these technologies into healthcare practice, thereby enhancing both practitioner trust and patient care outcomes.
This Research Topic aims to elevate the precision and usability of medical biosensor technologies through the development of interpretable and explainable models. To gather further insights within this scope, we welcome articles addressing, but not limited to, the following themes:
• Explainable Machine Learning in Medical Biosensors
• Explainable Deep Learning in Medical Biosensors
• Explainable Models for Biosensing Data Analysis
• Explainable Models for Biosensing Imaging Technology
• User-Friendly Interfaces for Biosensor Data Interpretation
• Human-AI Interaction in Medical Diagnostics
• Explaining AI-Driven Automatic Biomedical Diagnosis
• Optimization-Driven Biomedical Data Enhancement
• Explainable Models for Biomedical Engineering Data Analysis
• Explainable Methods for Biomedical Engineering Foundation Models
Keywords:
artificial intelligence, deep learning, biosensors, biosensing
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.