As machine learning models become more complex and AI applications become more prevalent in high-risk domains such as finance, justice, and healthcare, serious concerns are being raised about their transparency and fairness. In particular, state-of-the-art research in health informatics is dominated by deep learning, as opposed to the use of a small set of intelligible features and an inherently interpretable family of models. Combined with the risks inherent in health applications, black-box models in health informatics lack the necessary trust and the ability to be debugged, audited, and corrected, which ultimately results in their ineffective use. Thus, there is a growing and urgent need to develop methods for data-driven healthcare applications that are interpretable, explainable, accountable, and fair.
In this Research Topic, we welcome state-of-the-art methodological research and applications that make black-box models explainable, transform black-box models into intelligible and interpretable counterparts, analyze models for their interpretability and fairness, and propose solutions to mitigate bias, among others.
Keywords:
fairness in machine learning, interpretability in machine learning, explainable AI, accountability in high-risk applications
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.