ORIGINAL RESEARCH article

Front. Psychol.
Sec. Media Psychology
Volume 15 - 2024 | doi: 10.3389/fpsyg.2024.1416504

Michael is Better Than Mehmet: Exploring the Perils of Algorithmic Biases and Selective Adherence to Advice from Automated Decision Support Systems in Hiring

Provisionally accepted
  • RWTH Aachen University, Aachen, Germany


    Artificial intelligence algorithms are increasingly adopted as decision aids in many contexts, such as human resources, often with the promise of being fast, efficient, and even capable of overcoming the biases of human decision-makers. At the same time, this promise of objectivity and the increasingly supervisory role of humans may make existing biases in algorithms more likely to be overlooked, as humans are prone to over-rely on such automated systems. This study therefore investigates reliance on biased algorithmic advice in a hiring context. Simulating the algorithmic pre-selection of applicants, we confronted participants with biased or unbiased recommendations in a one-factor, two-condition between-subjects online experiment (n = 260). The findings suggest that about 60% of participants in the bias condition did not notice the algorithmic bias when asked about it explicitly. Overall, however, individuals relied less on biased algorithms, making more changes to the algorithmic scores, and this reduced reliance in turn increased noticing of the bias. Biased recommendations did not lower general attitudes towards algorithms, only evaluations of this specific hiring algorithm, whereas explicitly noticing the bias affected both. Individuals with more negative attitudes towards the decision subjects were more likely to miss the bias. This study extends the literature by examining the interplay of (biased) human operators and biased algorithmic decision support systems, highlighting the potential negative impacts of such automation for vulnerable and disadvantaged individuals.

    Keywords: Algorithmic decision-making, Algorithmic bias, selective adherence, human bias, Discrimination, hiring, Human Resources

    Received: 12 Apr 2024; Accepted: 08 Jul 2024.

    Copyright: © 2024 Rosenthal-von der Pütten and Sach. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Astrid M. Rosenthal-von der Pütten, RWTH Aachen University, Aachen, Germany

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.