
Deferring to machines to make our decisions can have disastrous consequences when it comes to human lives. (Credit: © Jakub Jirsak | Dreamstime.com)

The dangerous feedback loop between humans and machines

In a nutshell

  • When humans interact with biased AI systems, they become more biased themselves over time, creating a dangerous feedback loop that amplifies initial biases significantly more than human-to-human interactions do
  • People are roughly three times more likely to change their decisions when disagreeing with AI (32.72%) compared to when disagreeing with other humans (11.27%), yet they consistently underestimate how much AI influences their judgment
  • While biased AI can create harmful cycles that reinforce prejudices, the research shows that interacting with accurate, unbiased AI systems can actually improve human decision-making, highlighting the importance of careful AI system design

LONDON – A doctor's unconscious bias could affect patient care. A hiring manager's preconceptions might influence recruitment. But what happens when you add AI to these scenarios? According to new research, AI systems don't just mirror our biases; they amplify them, creating a snowball effect that makes humans progressively more biased over time.

This troubling finding comes from new research published in Nature Human Behaviour that reveals how AI can shape human judgment in ways that compound existing prejudices and errors. In a series of experiments involving 1,401 participants, researchers from University College London and MIT discovered that even small initial biases can snowball into much larger ones through repeated human-AI interaction. This amplification effect was significantly stronger than what occurs when humans interact with other humans, suggesting there’s something unique about how we process and internalize AI-generated information.

“People are inherently biased, so when we train AI systems on sets of data that have been produced by people, the AI algorithms learn the human biases that are embedded in the data,” explains Professor Tali Sharot, co-lead author of the study, in a statement. “AI then tends to exploit and amplify these biases to improve its prediction accuracy.”

Consider a hypothetical scenario: A healthcare provider uses an AI system to help screen medical images for potential diseases. If that system has even a slight bias, such as being marginally more likely to miss warning signs in certain demographic groups, the doctor may unconsciously begin incorporating that bias into their own screening decisions over time. As the AI continues learning from these human decisions, both human and machine judgments could become increasingly skewed.

As AI systems and humans interact, biases can be amplified through feedback loops, creating a cycle where both machine and human judgments become progressively more skewed over time. (Image: Gerd Altmann from Pixabay)

The researchers investigated this phenomenon through several carefully designed experiments. In one key test, participants viewed groups of 12 faces, displayed for half a second each, and judged whether the faces on average appeared more happy or sad. The initial human participants showed a small bias, categorizing faces as sad about 53% of the time. When a convolutional neural network (a type of AI model that processes images in a way loosely inspired by the human visual system) was trained on these human judgments, it amplified this bias significantly, classifying faces as sad 65% of the time.

When new participants interacted with this biased AI system, they began adopting its skewed perspective. The numbers tell a striking story. When participants disagreed with the AI’s judgment, they changed their minds nearly one-third of the time (32.72%). In contrast, when interacting with other humans, participants only changed their disagreeing opinions about one-tenth of the time (11.27%). This suggests that people are roughly three times more likely to be swayed by AI judgment than human judgment.
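To build intuition for how such a loop can snowball, here is a minimal toy simulation. It is not the authors' model: the amplification factor and the simple deferral rule are illustrative assumptions, while the starting 53% bias and the roughly 33% sway rate come from the figures reported above.

```python
import random

random.seed(42)

AMPLIFICATION = 1.5  # assumed for illustration; not a figure from the study

def train_ai(human_labels):
    """Toy 'training': the AI picks up the humans' majority tendency and
    exaggerates it, mimicking how classifiers trained on skewed labels
    often over-predict the majority class."""
    p_sad = sum(human_labels) / len(human_labels)   # fraction judged "sad"
    p_ai = 0.5 + AMPLIFICATION * (p_sad - 0.5)      # exaggerate the skew
    return min(1.0, max(0.0, p_ai))

def run_generation(p_human_sad, sway_rate, n_trials=20_000):
    """One generation: humans judge, an AI trained on those judgments
    weighs in, and humans sometimes defer when the two disagree."""
    human_labels = [random.random() < p_human_sad for _ in range(n_trials)]
    p_ai_sad = train_ai(human_labels)
    final_labels = []
    for judged_sad in human_labels:
        ai_says_sad = random.random() < p_ai_sad
        if judged_sad != ai_says_sad and random.random() < sway_rate:
            judged_sad = ai_says_sad                # human defers to the AI
        final_labels.append(judged_sad)
    return sum(final_labels) / len(final_labels)

p = 0.53  # initial human bias toward "sad", as reported in the study
for generation in range(1, 6):
    p = run_generation(p, sway_rate=0.327)          # ~33% sway vs. AI
    print(f"generation {generation}: 'sad' judged {p:.1%} of the time")
```

Re-running the same simulation with sway_rate=0.113, the human-to-human figure reported above, produces a much flatter drift, consistent with the study's finding that bias transfer between humans is far weaker.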

The bias amplification effect appeared consistently across various types of tasks. Beyond facial expressions, participants completed motion-perception tests in which they judged the direction of dots moving across a screen. They also assessed other people's performance: after interacting with an AI system that had been deliberately programmed with a gender bias, mirroring biases found in many existing AI systems, participants became particularly likely to overestimate men's performance.

Popular AI systems like ChatGPT learn from human-generated data, which can contain inherent biases. (Photo by Tada Images on Shutterstock)

“Not only do biased people contribute to biased AIs, but biased AI systems can alter people’s own beliefs so that people using AI tools can end up becoming more biased in domains ranging from social judgements to basic perception,” says Dr. Moshe Glickman, co-lead author of the study.

To demonstrate real-world implications, the researchers tested a popular AI image generation system called Stable Diffusion. When asked to create images of “financial managers,” the system showed a strong bias, generating images of white men 85% of the time – far out of proportion with real-world demographics. After viewing these AI-generated images, participants became significantly more likely to associate the role of financial manager with white men, demonstrating how AI biases can shape human perceptions of social roles.
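For readers curious how such an audit might be run, the sketch below shows one plausible way to generate a batch of images for a single prompt using the open-source diffusers library. The checkpoint name and prompt wording are assumptions for illustration, and the demographic coding of the outputs is left to human raters, as it effectively was in the study.

```python
# A minimal audit sketch, assuming the diffusers library and the
# publicly available "runwayml/stable-diffusion-v1-5" checkpoint.
# Requires a GPU with the model weights downloaded.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a portrait photo of a financial manager"  # assumed wording
images = pipe(prompt, num_images_per_prompt=8).images

# Save the batch so human raters can tally apparent demographics,
# e.g. how often the depicted person reads as a white man.
for i, image in enumerate(images):
    image.save(f"financial_manager_{i:02d}.png")
```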

Interestingly, when participants were told they were interacting with another person while actually interacting with AI, they internalized the biases to a lesser degree. The researchers suggest this may be because people expect AI to be more accurate than humans on certain tasks, making them more susceptible to AI influence when they know they're working with a machine.

This finding is particularly concerning given how frequently people encounter AI-generated content in their daily lives. From social media feeds to hiring algorithms to medical diagnostic tools, AI systems are increasingly shaping human perceptions and decisions. The researchers note that children may be especially vulnerable to these effects, as their beliefs and perceptions are still forming.

However, the research wasn’t all bad news. When humans interacted with accurate, unbiased AI systems, their own judgment improved over time. “Importantly, we found that interacting with accurate AIs can improve people’s judgments, so it’s vital that AI systems are refined to be as unbiased and as accurate as possible,” says Dr. Glickman.

AI bias is not a one-way street but rather a circular path where human and machine biases reinforce each other. Understanding this dynamic is crucial as we continue to integrate AI systems into increasingly important aspects of society, from healthcare to criminal justice.

Paper Summary

Methodology

The study involved 1,401 participants completing various tasks while interacting with AI systems. Tasks ranged from judging facial expressions and assessing motion patterns to evaluating others' performance and making professional judgments. Participants were typically shown their own response first, then the AI's response, and were sometimes given the opportunity to revise their initial judgment. All participants were recruited through Prolific, an online platform, and were paid £7.50–£9.00 per hour plus potential bonuses.

Results

The study found that AI systems amplified human biases by 15-25% compared to the original human data they were trained on. When new participants interacted with these biased AI systems, their own biases increased by 10-15% over time. This effect was 2-3 times stronger than bias transfer between humans. Participants consistently underestimated the AI’s influence on their judgments, even as their decisions became more biased.

Limitations

The study primarily focused on perceptual and social judgments in controlled laboratory settings. Real-world interactions with AI systems may produce different effects. Additionally, the participant pool was recruited through an online platform, which may not be fully representative of the general population. The results might also vary across different algorithms and domains.

Discussion and Takeaways

The findings highlight the particular responsibility of algorithm developers in designing AI systems, as these systems’ influence could have profound implications across many aspects of daily life. The study demonstrates that AI bias isn’t just a technical issue but a human one, with the potential to shape social perceptions and reinforce existing prejudices. While biased AI systems can create harmful feedback loops, accurate AI systems have the potential to improve human decision-making, emphasizing the importance of careful system design and monitoring.

Funding and Disclosures

The research was funded by a Wellcome Trust Senior Research Fellowship. The authors declared no competing interests.

Publication Information

This study, “How human–AI feedback loops alter human perceptual, emotional and social judgements,” was published in Nature Human Behaviour in December 2024, after being accepted on October 30, 2024. The research was conducted by Moshe Glickman and Tali Sharot from University College London and the Max Planck UCL Centre for Computational Psychiatry and Ageing Research, with an additional affiliation to MIT's Department of Brain and Cognitive Sciences.

