Research Shows Effect of Inventor Bias on Technology, Algorithms, and AI

In an article published in the International Journal of Human-Computer Interaction, PhD Candidate Maya Cratsley and Professor Nathanael Fast explored how identity shapes attitudes toward the use of HR algorithms in organizations.

Cratsley and Fast's research suggests inventors may have a bias concerning applications of their work. [Photo/iStock]

A 2022 report from McKinsey found that the use of artificial intelligence (AI) in organizations has more than doubled since 2017, and organizational investment in AI has also increased substantially. As the market is flooded with new AI products, leaders are faced with difficult decisions about whether and how to adopt these new tools in their organizations.

Due to the opaque nature of these technologies, organizational leaders and policymakers often rely on AI developers and engineers to explain the fairness and usefulness of their products. However, while researchers have explored how organizational decision-makers, employees, and the general public feel about the use of AI, less is known about how developers perceive the tools they create and how those perceptions compare to those of other stakeholders.

In a recent article published in the International Journal of Human-Computer Interaction, Marshall PhD Candidate Maya Cratsley and Associate Professor of Management and Organization and Jorge Paulo and Susanna Lemann Chair in Entrepreneurship Nathanael Fast explored how people's roles shape attitudes toward the use of algorithms in organizations. Across two experiments, they found initial evidence for what they call the "Inventor's Bias Effect," the propensity for inventors to be over-optimistic about the positive features and uses of the products they create. Their study explored this effect in the context of the development of an HR decision-making algorithm for layoffs and promotions.

In their first experiment, the authors found that participants who played the role of "inventors" of the HR algorithm viewed the tool as an extension of their self-identity. Because these inventors personally identified with the products they created, they rated the algorithm as fairer than other stakeholders (CEOs, employees, and the general public) did. This difference held even when participants were given quantifiable feedback on the low performance of their algorithm.

A second experiment showed that the elevated perceptions of fairness among those playing the role of inventor translated into an increased desire for the organization to continue using the HR algorithm, even though it was reportedly inaccurate for a third of all decisions.

According to Cratsley and Fast, such findings reveal the dilemma leaders face when evaluating the potential risks and benefits of new products. Although the individuals who created the products are arguably most capable of educating others about how they work, treating AI developers as trusted experts and weighting their evaluations too heavily may lead decision-makers to form overly positive expectations about the benefits and fairness of AI systems.

Overall, Cratsley and Fast’s research provides important insights into the psychology of AI developers and highlights the need for multiple diverse voices and perspectives when evaluating the potential benefits and harms of new technology.