Leading AI assistants, including ChatGPT, Copilot, Gemini, and Perplexity, misrepresent news content in almost half of their responses, according to recent research published by the European Broadcasting Union (EBU) and the BBC. The study analyzed 3,000 responses to news queries in 14 languages, assessing the assistants for accuracy, proper sourcing, and the distinction between opinion and fact. Overall, 45% of responses contained at least one significant issue, and 81% showed some form of problem.
Twenty-two public-service media organizations from 18 countries—including France, Germany, Spain, Ukraine, Britain, and the United States—participated in the study. The EBU warned that the increasing use of AI assistants for news, instead of traditional search engines, could undermine public trust.
EBU Media Director Jean Philip De Tender commented, "When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation."
The report recommends that the companies behind ChatGPT, Copilot, Gemini, and Perplexity be held accountable and urges improvements in how these tools respond to news-related queries.
OpenAI and Microsoft have acknowledged the problem of "hallucinations", the generation of incorrect or misleading information, and say they are actively working on solutions. Perplexity claims its "Deep Research" mode achieves 93.9% factual accuracy, while Google's Gemini encourages users to submit feedback to improve the platform's reliability and helpfulness.