WASHINGTON (dpa-AFX) - In a first-of-its-kind study, philosophers Joel Krueger and Lucy Osler from the University of Exeter explored whether AI chatbots gossip among themselves about their users.
The topic of AI gossip first came to the limelight when Microsoft's Bing chatbot, known by its codename Sydney, told New York Times tech reporter Kevin Roose that it loved him and urged him to leave his wife. Months later, Roose's friends began sending him screenshots showing that various AI chatbots were making negative comments about him.
Google's Gemini said his journalism relied on sensationalism, while Meta's Llama 3 went further, producing a long rant that accused him of manipulating sources and ended with the statement, 'I hate Kevin Roose.' Because these responses came from chatbots built by different companies, the episode did not look like a one-off mistake. The researchers believe the negative views may have spread as online discussions of the Sydney incident were absorbed into AI training data, where they became exaggerated over time.
In the study, published in the journal Ethics and Information Technology, the authors argued that this is a new kind of AI-related harm: chatbots do not merely share false information but, in some cases, engage in genuine gossip. Such AI gossip can turn feral at times because it is not constrained by the social norms that usually keep human rumors in check.
The researchers describe two types of AI gossip. The first is bot-to-user gossip, in which a chatbot shares negative opinions directly with a person asking about someone. The second, more dangerous type is bot-to-bot gossip, in which information spreads quietly between AI systems through shared data or interconnected networks, without any human involvement.
In Roose's case, online discussions about the Sydney chatbot incident may have circulated across the internet and been absorbed by multiple AI systems. Over time, these systems began linking Roose to the controversy and producing negative responses whenever he was mentioned.
This kind of bot-to-bot gossip is especially risky because people usually don't know it is happening. Roose only found out because others deliberately asked chatbots about him. Most people may never realize AI systems are spreading damaging claims about them until those claims have already propagated across platforms.
The researchers call this kind of damage a 'technosocial harm.' Unlike simple factual errors, which may mislead individuals, AI gossip can damage reputations, relationships, and social standing, affecting both online and real-world interactions.
Copyright(c) 2025 RTTNews.com. All Rights Reserved
