AI assistants struggle with news accuracy, new research shows
New research indicates that popular AI assistants misrepresent news content in nearly half of their responses to questions about news and current affairs.
In the study, published by the European Broadcasting Union (EBU), researchers analyzed about 3,000 responses to news questions from leading conversational AI assistants.
The assistants, including ChatGPT, Copilot, Gemini, and Perplexity, were assessed in 14 languages for accuracy, sourcing, and the ability to distinguish opinion from fact.
Overall, 45% of the AI responses contained at least one significant issue, and 81% had some kind of problem.
OpenAI and Microsoft have previously said they are working to reduce misleading generated information, which often stems from factors such as insufficient data.
In addition, a third of responses showed serious sourcing errors, such as misleading, missing, or incorrect attribution.
The study found that 72% of responses from Gemini, Google's AI assistant, had major sourcing issues, compared with less than 25% for each of the other assistants.
Accuracy issues, including outdated information, were found in 20% of responses across all the assistants.
Twenty-two public service media organizations from 18 countries, including France, Germany, the United Kingdom, Spain, Ukraine, and the United States, participated in the study.
EBU Media Director Jean Philip De Tender said in a statement: "When people don't know what to trust they end up trusting nothing at all, and that can deter democratic participation."
The report urged AI companies to be held accountable for providing reliable sourcing in news summaries and for ensuring accuracy in responses to news-related queries.