About this Research Topic
The rapid development of computer-mediated communication technologies (e.g., blogs, chats, posts on social networks) has fostered the production of user-generated content that can be delivered to a large audience. Detecting whether online digital information is truthful or deceptive is not a trivial task. Artificial intelligence (AI)-based methods and systems that autonomously detect mis/disinformation have been developed to address this problem. However, these systems often provide unexplainable results because they work as black boxes, taking a huge amount of input data without explaining how their outcomes are generated. Explainable AI (XAI) was conceived to overcome this issue: it relies on design principles, methodologies, and processes that turn AI algorithms into white boxes, adding transparency and making the rationale behind their outcomes understandable. This is a key factor for ethics in AI. The field of XAI can strongly contribute to explaining the decisions made by automatic mis/disinformation detection systems by incorporating fairness and/or explainability properties into these systems. People, indeed, need to understand how decisions are made and to judge their correctness, fairness, and transparency, which requires human-centred design approaches to XAI. XAI solutions for fighting misinformation increase the transparency of fact-checking processes, enabling end users to make informed decisions and helping to counter the "infodemic", i.e., the uncontrolled, rapid propagation of disinformation.
This Research Topic has a twofold aim. First, it covers challenging topics in fact-checking misinformation using automated, computational, and crowd-sourced approaches, including provenance and sources, critical thinking, knowledge graphs, network flows, social media proliferation, emotion, viral models, contexts, satire, manipulated content, imposter content, deep fakes, social bots, malicious actors, reporting and tracking of real-world events, rating and reviewing systems, recommendation systems, and more. Second, it is devoted to new methodologies and applications of XAI for fighting mis/disinformation, drawing on a wide range of cross-disciplinary and collaborative domain knowledge related to computing, communication, journalism, social psychology, law, etc., with the goals of highlighting new achievements and developments and featuring promising new directions and extensions.
Both Original Research articles that enhance the existing body of work on fact-checking mis/disinformation and Review articles based on survey research are highly solicited. The topics of interest include, but are not limited to:
● Computational approaches and applications to fact-checking
● Crowd-sourced approaches to fact-checking
● Deep learning techniques for fact-checking
● AI for detecting fake news on social media
● Graph algorithms for mis/disinformation detection
● Natural language processing and fact-checking
● Generation and identification of deep fakes
● XAI algorithms to detect fake news, misinformation, and disinformation
● Methodologies and approaches for designing AI explanations in mis/disinformation detection systems
● Design principles for human-centric XAI for mis/disinformation detection
● Explainable machine learning for mis/disinformation detection
● Metrics and evaluation of AI transparency, explainability, and interpretability in mis/disinformation detection systems
● Trust and social acceptance of XAI models for mis/disinformation detection
● Social, ethical, and legal implications of XAI for mis/disinformation
● Applications of XAI for fighting mis/disinformation
Keywords: explainable AI, Explainable Artificial Intelligence (XAI), social engagement, human-centric XAI, responsible XAI, mis/disinformation detection
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.