When Eliminating Bias Isn't Fair: Algorithmic Reductionism and Procedural Justice in Human Resource Decisions. Organizational Behavior and Human Decision Processes.
David T. Newman, Nathanael J. Fast, and Derek J. Harmon
The perceived fairness of decision-making procedures is a key concern for organizations, particularly when evaluating employees and determining personnel outcomes. Algorithms have created opportunities for increasing fairness by overcoming biases commonly displayed by human decision makers. However, while HR algorithms may remove human bias from decision-making, we argue that those being evaluated may perceive the process as reductionistic, leading them to think that certain qualitative information or contextualization has not been taken into account. This perceived reductionism can undermine beliefs about the procedural fairness of using HR algorithms to evaluate performance by promoting the assumption that decisions made by algorithms are based on less accurate information than identical decisions made by humans. Results from four laboratory experiments (N = 798) and a large-scale randomized experiment in an organizational setting (N = 1,654) support this hypothesis. Theoretical and practical implications for organizations using algorithms and data analytics are discussed.