AUTHOR=Leben, Derek
TITLE=Explainable AI as evidence of fair decisions
JOURNAL=Frontiers in Psychology
VOLUME=14
YEAR=2023
URL=https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1069426
DOI=10.3389/fpsyg.2023.1069426
ISSN=1664-1078
ABSTRACT=
This paper proposes that explanations are valuable to those impacted by a model's decisions (model patients) to the extent that they provide evidence that a past adverse decision was unfair. Under this proposal, we should favor models and explainability methods which generate counterfactuals of two types. The first type of counterfactual is positive evidence of fairness: a set of states under the control of the patient which (if changed) would have led to a beneficial decision. The second type of counterfactual is negative evidence of fairness: a set of irrelevant group or behavioral attributes which (if changed) would not have led to a beneficial decision. Each of these counterfactual statements is related to fairness under the Liberal Egalitarian idea that treating one person differently from another is justified only on the basis of features which were plausibly under each person's control. Other aspects of an explanation, such as feature importance and actionable recourse, are not essential under this view and need not be a goal of explainable AI.
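
The following is a minimal sketch, not from the paper, of how the abstract's two counterfactual types could be checked against a toy approval model. The model, the feature names, the feature partition, and the brute-force search are all illustrative assumptions, not Leben's method.

```python
# Hypothetical illustration of the abstract's two counterfactual types.
# The model, features, and search below are assumptions for this sketch,
# not an implementation from the paper.

from itertools import product

# Toy decision model: approve (1) if a weighted score clears a threshold.
WEIGHTS = {"income": 0.5, "debt": -0.4, "zip_code": 0.0, "age_group": 0.0}
THRESHOLD = 0.6

def decide(applicant: dict) -> int:
    score = sum(WEIGHTS[f] * v for f, v in applicant.items())
    return 1 if score >= THRESHOLD else 0

# Features plausibly under the patient's control, vs. attributes the
# Liberal Egalitarian view treats as irrelevant to differential treatment.
CONTROLLABLE = {"income": [0.0, 1.0, 2.0], "debt": [0.0, 0.5, 1.0]}
IRRELEVANT = {"zip_code": [0.0, 1.0], "age_group": [0.0, 1.0]}

def positive_evidence(applicant: dict) -> dict | None:
    """Type 1: a change to controllable features that would have
    flipped the adverse decision to a beneficial one."""
    names = list(CONTROLLABLE)
    for values in product(*(CONTROLLABLE[n] for n in names)):
        candidate = {**applicant, **dict(zip(names, values))}
        if decide(candidate) == 1:
            return {n: v for n, v in zip(names, values) if v != applicant[n]}
    return None  # no controllable counterfactual found

def negative_evidence(applicant: dict) -> bool:
    """Type 2: no change to irrelevant attributes alone would have
    produced a beneficial decision."""
    names = list(IRRELEVANT)
    for values in product(*(IRRELEVANT[n] for n in names)):
        candidate = {**applicant, **dict(zip(names, values))}
        if decide(candidate) == 1:
            return False  # an irrelevant attribute swayed the outcome
    return True

applicant = {"income": 0.5, "debt": 1.0, "zip_code": 0.0, "age_group": 1.0}
assert decide(applicant) == 0  # past adverse decision
print("positive evidence of fairness:", positive_evidence(applicant))
print("negative evidence of fairness:", negative_evidence(applicant))
```

On this toy model, the search returns a controllable change (raising income, clearing debt) that would have flipped the decision, and confirms that varying zip code or age group alone never would: together, the two counterfactuals the abstract describes as evidence that the adverse decision was fair.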