Human health is of vital importance to our society, especially in the era of the COVID-19 pandemic. With the enormous amount of health-related data, especially data from electronic health records (EHRs), state-of-the-art AI approaches have been applied to predict health outcomes and optimize healthcare tools. Machine learning (ML) is a subfield of AI that learns patterns from data without rules being defined a priori. Traditional machine learning depends heavily on laborious feature engineering, and such techniques often cannot discover the complex patterns in high-dimensional EHR data, thereby yielding suboptimal performance. Deep learning (DL) methods allow a machine to automatically detect intricate relationships among features and extract salient knowledge from data. More recently, advanced DL models such as transformers and autoencoders have shown promise for representing complex clinical data from multiple modalities. These models have significantly improved downstream tasks such as finding similar patients, predicting disease onset, and deep phenotyping.
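As an informal illustration of the representation-learning idea above, the sketch below shows a minimal autoencoder that compresses a high-dimensional EHR-style feature vector into a low-dimensional patient embedding. The data, dimensions, and architecture are hypothetical and stand in for the far richer models (e.g., transformers) used in practice.

```python
# Minimal sketch (hypothetical data and dimensions): an autoencoder that learns
# low-dimensional patient representations from high-dimensional EHR-style vectors.
import torch
from torch import nn

n_patients, n_codes, emb_dim = 256, 1000, 32          # assumed sizes, for illustration only
x = (torch.rand(n_patients, n_codes) < 0.05).float()  # simulated binary diagnosis-code matrix

class PatientAutoencoder(nn.Module):
    def __init__(self, n_features: int, emb_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(),
                                     nn.Linear(128, emb_dim))
        self.decoder = nn.Sequential(nn.Linear(emb_dim, 128), nn.ReLU(),
                                     nn.Linear(128, n_features))

    def forward(self, x):
        z = self.encoder(x)          # compact patient embedding
        return self.decoder(z), z    # reconstruction and embedding

model = PatientAutoencoder(n_codes, emb_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()     # reconstruct the binary code indicators

for epoch in range(20):
    optimizer.zero_grad()
    recon, z = model(x)
    loss = loss_fn(recon, x)
    loss.backward()
    optimizer.step()

# The embeddings z could then support downstream tasks such as patient similarity search.
```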
Because DL algorithms capture high-order interactions between input features through multi-layer nonlinear structures with many neurons, they are typically regarded as black-box models. To justify, rationalize, and trust the prediction of a DL model in high-stakes medical applications, medical professionals have to understand how the system arrives at an outcome prediction. More importantly, such explanations are crucial for ensuring fairness and accountability in the clinical decision-making process. This matters because a single incorrect prediction from the system may lead to serious medical errors. The European Union's General Data Protection Regulation (GDPR) was recently enacted, requiring organizations that use patient data for classification and recommendation to provide on-demand explanations. The White House has issued guidance on artificial intelligence applications, in which transparency is one of the central principles for the stewardship of AI. Sufficient explanations of AI models allow medical doctors to understand and trust AI-based clinical decision support systems. Thus, research on explainable AI (XAI) in healthcare is growing.
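To make the notion of a post-hoc explanation concrete, here is a minimal, hedged sketch using permutation feature importance on a synthetic risk-prediction task. The dataset, feature names, and model are invented for illustration, and permutation importance is only one of many XAI techniques (alongside SHAP, LIME, attention-based methods, and others) that submissions might study.

```python
# Minimal sketch (synthetic data, illustrative feature names): explaining a
# black-box risk model with permutation feature importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical EHR-style features; the data are synthetic, not real patient records.
feature_names = ["age", "bmi", "systolic_bp", "hba1c", "creatinine", "num_prior_visits"]
X, y = make_classification(n_samples=2000, n_features=len(feature_names),
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any opaque classifier could stand in for the "black box" here.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the drop in
# held-out performance; larger drops indicate features the model relies on more.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, mean, std in sorted(zip(feature_names, result.importances_mean,
                                  result.importances_std), key=lambda t: -t[1]):
    print(f"{name:>16}: {mean:.3f} +/- {std:.3f}")
```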
This Research Topic aims to provide an open-access platform for researchers to disseminate novel methods and applications of XAI in healthcare. It welcomes original research articles that present innovative XAI methodologies and applications, as well as review articles that provide an in-depth understanding of a particular subtopic. Contributions that address the following topics will be considered for publication:
• Explainable AI for healthcare applications
• Interpretable machine learning models
• Case-based reasoning
• Human-Computer Interaction (HCI) for XAI
• Fairness, accountability, and trustworthiness of AI systems
• AI-enabled clinical decision support tools
• AI-based healthcare applications