AUTHOR=Majumdar Puspita, Chhabra Saheb, Singh Richa, Vatsa Mayank
TITLE=Subgroup Invariant Perturbation for Unbiased Pre-Trained Model Prediction
JOURNAL=Frontiers in Big Data
VOLUME=3
YEAR=2021
URL=https://www.frontiersin.org/journals/big-data/articles/10.3389/fdata.2020.590296
DOI=10.3389/fdata.2020.590296
ISSN=2624-909X
ABSTRACT=
Modern deep learning systems have achieved unparalleled success, and several applications have significantly benefited from these technological advancements. However, these systems have also shown vulnerabilities with strong implications for their fairness and trustworthiness. Among these vulnerabilities, bias has been an Achilles' heel problem. Many applications, such as face recognition and language translation, have shown high levels of bias against particular demographic subgroups. Unbalanced representation of these subgroups in the training data is one of the primary reasons for biased behavior. To address this important challenge, we propose a two-fold contribution: first, a bias estimation metric termed Precise Subgroup Equivalence (PSE) to jointly measure the bias in model prediction and the overall model performance; second, a novel bias mitigation algorithm that is inspired by adversarial perturbation and uses the PSE metric. The mitigation algorithm learns a single uniform perturbation, termed Subgroup Invariant Perturbation, which is added to the input dataset to generate a transformed dataset. The transformed dataset, when given as input to the pre-trained model, reduces the bias in model prediction. Multiple experiments performed on four publicly available face datasets showcase the effectiveness of the proposed algorithm for race and gender prediction.