AI chatbots fail at accurate news, major study reveals

AI Summary
A study conducted by 22 international public broadcasters, including DW and the BBC, found that AI chatbots such as ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity AI misrepresent news content in 45% of their responses. The research evaluated responses on accuracy, sourcing, provision of context, editorializing, and the ability to distinguish fact from opinion. Nearly half of all responses had significant issues: 31% showed serious sourcing problems and 20% contained major factual errors. For instance, Olaf Scholz was incorrectly named German Chancellor after Friedrich Merz had taken office, and Jens Stoltenberg was listed as NATO secretary general when Mark Rutte held the position. The study points to systemic issues across languages and territories that could endanger public trust in news.