The rapid development of modeling techniques has produced novel and powerful methods for machine learning (ML) and artificial intelligence (AI). By integrating multiple data sources for biomedical applications, these methods achieve high precision and accuracy, which facilitates their transition from purely exploratory research to real-world use, from drug development to clinical applications and therapy personalization. However, despite good training and validation results, some of these models remain difficult to explain. In the biomedical field this is problematic: models produce compelling predictions that are hard to justify, yet may influence end users' decisions in an area where high confidence in the models is obligatory. This distrust of AI models among practitioners is one reason why they are not widely used in the medical field.
In recent years, a variety of methods have been developed to make ML and AI more transparent, so that results can be explained in terms of both the model and the data structure. Such methods are typically used in image recognition, for example to understand how anomalies in a radiograph are detected before a diagnosis is made. In general, transparency concerns not only the model structure, its mathematical background, and the selection and balancing of training databases, but also the way models are validated and how the validation metrics should be interpreted. In this Research Topic, we follow current advances in harnessing explainability in ML and AI models, exploring the opportunities offered by new explainable AI and mathematical models for various applications in biomedicine, with the aim of better understanding how model parameters capture and influence the underlying biological mechanisms.
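As a concrete illustration of such transparency methods in image recognition, the sketch below computes a gradient-based saliency map, which highlights the pixels that most influence a classifier's prediction. It is a minimal sketch under assumed conditions: the pretrained torchvision ResNet and the random input tensor are stand-ins for a real radiograph classifier and a preprocessed image, not part of any specific study.

```python
# A minimal sketch of gradient-based saliency, assuming a PyTorch/torchvision
# setup; the pretrained ResNet and random tensor are illustrative stand-ins
# for a real radiograph classifier and a preprocessed image.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Dummy input standing in for a preprocessed radiograph: (batch, channels, H, W).
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Gradient of the top class score with respect to the input pixels.
scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Per-pixel gradient magnitude: large values mark regions driving the prediction.
saliency = image.grad.abs().max(dim=1).values.squeeze()
print(saliency.shape)  # torch.Size([224, 224])
```

More elaborate variants, such as class activation mapping, follow the same principle of tracing a prediction back to the input regions that drive it.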
As such, the goal of this Research Topic is to collect research on explainable ML and AI in biomedicine. This concerns inherently explainable models, post-hoc explanations, and methods that provide an acceptable level of model causality, i.e., how a model prediction relates to a particular input. The scope is therefore not limited to explainable AI methods: it also includes methods aimed at improving data interpretability, as well as reports of contradictory results that help delineate the limits of methods reported in the literature, also with regard to the input data. Topics may span different fields, for example drug discovery, in silico analyses, or clinical applications such as the evaluation of health records with different ML techniques. To this end, we are looking for review articles, expert opinions, scope articles, and research on novel AI techniques that employ model transparency.
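As a hedged illustration of a post-hoc explanation relating a prediction to a particular input, the sketch below applies SHAP feature attributions to a tree-based classifier trained on synthetic tabular data standing in for health records; the features, labels, and model choice are assumptions made purely for illustration, not a prescribed pipeline.

```python
# A minimal sketch of a post-hoc explanation with SHAP, assuming scikit-learn
# and the shap package; the synthetic features, labels, and model are
# illustrative stand-ins for a health-record classifier.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                  # stand-in for five clinical features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic binary outcome

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features, quantifying
# how a particular model output relates to a particular input.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])

# Per-sample, per-feature (and per-class) attributions; the exact array
# layout depends on the shap version.
print(np.array(shap_values).shape)
```

Inherently explainable models, such as sparse linear models or decision trees, make this input-prediction relationship visible without a separate attribution step.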