Despite the tremendous success of Machine Learning (ML) in many domains, people remain hesitant to adopt ML methods because they cannot understand the internal decision-making process of these models (their black-box nature). Especially in human-in-the-loop systems, users need to understand these algorithms in order to apply them appropriately, trust their decisions, and improve them by understanding their weaknesses. To address this problem, the research area of Explainable Machine Learning (XAI / Interpretable AI) has emerged. Explainable ML aims to provide reasoning for ML model outputs, allowing humans to understand and trust a model's decision-making process. It concentrates on building machine learning models that can explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. Further, XAI seeks to produce AI models with high levels of performance while allowing users to understand, trust, and effectively manage them.
Our goal in this Frontiers Research Topic is to develop explainable machine learning methodologies addressing four overlapping areas: explain to justify, providing reasons and justifications for the outcomes of machine learning algorithms; explain to improve, enabling continuous improvement of algorithms through an understanding of their decision-making process; explain to control, protecting models from producing wrongful outcomes by exposing unknown vulnerabilities and flaws and helping to identify and correct errors through debugging; and explain to discover, using explanations to learn new facts, gather information, and gain knowledge.
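As one concrete illustration of the "explain to justify" goal, a post-hoc, model-agnostic attribution such as permutation importance can show which inputs most influence a black-box model's outcomes. The sketch below is a minimal example, assuming scikit-learn and its bundled breast-cancer dataset; the model and dataset choices are illustrative only, not a prescribed method for this Topic.

```python
# Minimal sketch: post-hoc "explain to justify" via permutation importance.
# Assumes scikit-learn; the dataset and model choices are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# A black-box model: accurate, but its internal decision process is opaque.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Post-hoc explanation: shuffle each feature and measure the drop in accuracy.
# Large drops identify the features the model relies on for its outcomes.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```

Such an attribution justifies individual or aggregate model outcomes to a user without requiring access to, or changes in, the underlying model.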
Further, this Topic also focuses on evaluation strategies for explainable ML models, connecting them to use cases, evaluation metrics, and impact. We also hope to develop new approaches that expand the boundaries of the field by exploring the above areas from a multi-disciplinary perspective.
This Research Topic will provide a forum for researchers to submit articles presenting new ideas and techniques in explainable machine learning, with a particular focus on human-in-the-loop machine learning systems. It will also provide a platform for the convergence of interdisciplinary research that combines methods from computer science, machine learning, and the social sciences toward designing, developing, and evaluating explainable AI systems. The scope of this special issue on Explainable AI research includes, but is not limited to:
- Designing and developing novel post-hoc explainable machine learning methods
- Developing inherently explainable machine learning models that outperform black-box methods while maintaining explainability (a minimal sketch of such a model follows this list)
- Effective customizations of existing explainable ML methods and their outputs to satisfy the requirements of practical use cases
- Application-grounded and human-grounded methods and metrics for evaluating the effectiveness of explainable ML methods in different settings
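To contrast with the post-hoc sketch above, an inherently explainable model exposes its full decision logic directly, without a separate explanation step. The sketch below is a minimal illustration, assuming scikit-learn and the same bundled dataset; a shallow decision tree is used purely as an example of an interpretable model family, not as an endorsement of any particular approach.

```python
# Minimal sketch: an inherently explainable model whose learned rules
# can be printed verbatim. Assumes scikit-learn; choices are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# A shallow tree trades some accuracy for a decision process humans can read.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print(f"Held-out accuracy: {tree.score(X_test, y_test):.3f}")
# The model *is* its explanation: every prediction follows these printed rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```

Submissions in this area would aim to close the accuracy gap between such transparent models and black-box methods while preserving this kind of direct readability.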