About this Research Topic
As XAI is still a growing field, there is plenty of room for innovation to improve the explainability of NLP systems. In recent work, explainable NLP methods have been used to capture the linguistic knowledge encoded in neural networks, explain model predictions, stress-test models via challenge sets or adversarial examples, and interpret language embeddings.
The goal of this Research Topic is to better understand the present status of XAI in NLP by identifying: new dimensions for better explanations, evaluation techniques for measuring the quality of explanations, new approaches or software toolkits for XAI in NLP, and transparent deep learning models for different NLP tasks.
The scope of this Research Topic covers (but is not restricted to) the following topics:
• Surveys of XAI in NLP in general or for a particular NLP task such as NER, QA, sentiment analysis, social media (SocialNLP), etc.
• Explainable neural models in Machine Translation
• Explainable neural models in Named Entity Recognition
• Explainable neural models in Question Answering
• Explainable neural models in Sentiment Analysis
• Explainable neural models in Opinion Mining
• Explainable neural models in SocialNLP
• Evaluation techniques used to measure the quality of explanations
• Tools and software toolkits for model explainability
• Resources related to XAI in the context of NLP
The Research Topic welcomes contributions on interpretable models that offer efficient solutions to NLP research problems and that demonstrate the explainability of the proposed model using suitable explainability technique(s) (e.g., example-driven, provenance, feature importance, induction, surrogate models), visualization technique(s) (e.g., raw examples, saliency, raw declarative representations), and other aspects. Software toolkits or approaches that help users add explainability to their models and ML pipelines are also welcome.
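By way of illustration only (not a template for submissions), the minimal sketch below shows one of the explainability techniques named above, feature importance, applied to a hypothetical toy sentiment classifier. The corpus, labels, and model choice are assumptions made for the example; only standard scikit-learn APIs are used.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy corpus; a real study would use an actual labeled dataset.
texts = ["great movie, loved it", "terrible plot, boring",
         "loved the acting", "boring and terrible"]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Feature importance for a linear model: each vocabulary term's coefficient
# indicates how strongly its presence pushes the prediction toward the
# positive class (positive weight) or the negative class (negative weight).
terms = vectorizer.get_feature_names_out()
ranked = sorted(zip(terms, clf.coef_[0]), key=lambda pair: pair[1])
print("terms pushing toward 'negative':", ranked[:3])
print("terms pushing toward 'positive':", ranked[-3:])
```

Reading explanations directly off linear coefficients is only feasible for inherently transparent models; for deep networks, the analogous role is played by post-hoc techniques such as saliency maps or surrogate models fit locally around a prediction.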
Keywords: NLP, Explainable AI, Explainability, Interpretability, Deep Learning
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.