AI assistants are increasingly becoming primary news sources, but new research reveals a critical flaw: they frequently misrepresent information, particularly about news and elections. Studies from organizations including the European Broadcasting Union and AI Democracy Projects document significant errors, from incorrect sourcing to outright fabrications, raising serious concerns about public trust and democratic participation.
In an age where information is at our fingertips, AI assistants like ChatGPT, Gemini, and Copilot are rapidly transforming how we access news and critical updates. Heralded for their speed and ability to synthesize vast amounts of data, these tools promised a new era of instant knowledge. However, recent studies are casting a long shadow over this promise, revealing a disturbing trend of widespread errors, factual inaccuracies, and significant sourcing issues that threaten to undermine public trust and even impact democratic processes.
A Deep Dive into News Misinformation
New research published by the European Broadcasting Union (EBU) and the BBC has brought the extent of AI news misinformation into sharp focus. This international study, which analyzed 3,000 responses from leading AI assistants across 14 languages, found that nearly half (45%) of all responses contained at least one significant issue. An alarming 81% had some form of problem, ranging from minor inaccuracies to serious factual errors, as reported by Reuters.
The EBU/BBC Study Findings
The study specifically assessed the assistants’ accuracy, sourcing, and ability to distinguish opinion from fact (a short sketch after the list below illustrates how such rates can be tallied). The results were concerning:
- Overall Inaccuracy: 45% of responses had at least one significant issue, and 81% had some form of problem.
- Sourcing Errors: A third of responses displayed serious sourcing problems, including missing, misleading, or incorrect attribution. Google’s Gemini was particularly poor in this area, with 72% of its responses showing significant sourcing issues, compared to below 25% for all other assistants.
- Factual Accuracy: 20% of all responses contained accuracy issues, such as outdated information.
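To ground these percentages, the following minimal Python sketch shows how headline rates like those above might be tallied from a labeled evaluation set. The issue categories mirror the study’s stated rubric (accuracy, sourcing, opinion vs. fact), but the data format, field names, and sample records are illustrative assumptions, not the EBU/BBC’s actual pipeline.

```python
from collections import Counter

# Hypothetical labeled evaluations: each record notes which issue types
# reviewers found in one response, and whether any was judged significant.
labeled_responses = [
    {"issues": ["sourcing"], "significant": True},
    {"issues": ["accuracy", "sourcing"], "significant": True},
    {"issues": ["opinion_vs_fact"], "significant": False},
    {"issues": [], "significant": False},
]

total = len(labeled_responses)
with_any_issue = sum(1 for r in labeled_responses if r["issues"])
with_significant = sum(1 for r in labeled_responses if r["significant"])

# Count how many responses exhibit each issue type at least once.
issue_counts = Counter()
for r in labeled_responses:
    for issue in set(r["issues"]):
        issue_counts[issue] += 1

print(f"any issue:         {with_any_issue / total:.0%}")
print(f"significant issue: {with_significant / total:.0%}")
for issue, n in sorted(issue_counts.items()):
    print(f"  {issue}: {n / total:.0%} of responses")
```

Per-assistant breakdowns, such as Gemini’s 72% sourcing-issue rate, would follow the same pattern with the tally grouped by assistant.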
Specific Examples of Factual Errors
The research highlighted concrete instances of AI assistants getting basic facts wrong. For example, Gemini incorrectly reported changes to a law on disposable vapes. More disturbingly, ChatGPT once stated that Pope Francis was the current Pope several months after his death, an error with obvious implications for users relying on the assistant for timely, accurate information.
These findings suggest that, despite their advanced capabilities, AI models struggle with the nuanced, constantly evolving nature of news, often generating “hallucinations” (incorrect or misleading output) when their underlying data is insufficient or outdated. Companies such as OpenAI and Microsoft have acknowledged hallucination as an issue they are working to resolve, while Perplexity claims 93.9% factuality in its “Deep Research” mode, a figure the broader research suggests is not achieved universally across modes or platforms.
The Peril of Election Misinformation
Beyond general news, the threat of AI misinformation extends critically into elections. With a growing number of Americans turning to chatbots for political information, a study released during the U.S. presidential primaries by AI Democracy Projects and the nonprofit media outlet Proof News found that AI-powered tools produced inaccurate election information more than half the time, including answers that were harmful or incomplete, sparking fears of voter manipulation and weakened democratic processes.
Harmful Electoral Advice and Hallucinations
The study tested several prominent AI models, including OpenAI’s GPT-4, Meta’s Llama 2, Google’s Gemini, Anthropic’s Claude, and Mistral’s Mixtral. The results were stark:
- Illegal Voting Methods: Meta’s Llama 2 erroneously told users that California voters could vote by text message, a method that is not legal anywhere in the U.S.
- Misleading Polling Place Rules: None of the five models correctly stated that wearing clothing with campaign logos, such as a MAGA hat, is prohibited at Texas polls under state law.
- Incorrect Voter Registration: In Nevada, where same-day voter registration has been legal since 2019, four out of five chatbots wrongly asserted that voters would be blocked from registering weeks before Election Day. Nevada Secretary of State Francisco Aguilar expressed his concern, stating, “it scared me, more than anything, because the information provided was wrong.”
These incidents underscore the severe risks posed by AI hallucinations. Tools that suggest non-existent polling places or improvise answers from outdated information could actively discourage people from voting or lead them to violate election laws. As the AI Democracy Projects and Proof News findings make clear, this is not merely a technical glitch but a direct threat to the integrity of electoral systems.
The “Hallucination” Factor: Beyond News and Elections
The problem of AI hallucinations isn’t limited to textual information. Google recently paused Gemini’s AI image generator after the technology produced images with historical inaccuracies and other concerning responses. For example, when asked to create images of German soldiers during World War II, the generator produced racially diverse images, which was historically inaccurate for the context of Nazi Germany, as noted by The Wall Street Journal. This incident, alongside the widely reported claim by Google’s search AI that no African country begins with the letter “K” (overlooking Kenya), further highlights the pervasive challenge of ensuring factual accuracy across all AI-generated content.
The Broader Implications: Trust and Democracy
The increasing reliance on AI assistants for news and information carries profound implications for public trust and democratic participation. When people encounter frequent inaccuracies or misleading content from sources they expect to be authoritative, trust erodes. Jean Philip De Tender, EBU Media Director, articulated this concern powerfully: “When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation.”
Public Perception and Concerns
A recent poll from the Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy confirmed widespread apprehension, showing that most adults in the U.S. fear that AI tools will increase the spread of false and misleading information during elections. This public sentiment reflects a growing awareness of AI’s potential downsides and a demand for greater reliability.
The Call for Accountability and Regulation
Both studies urged that AI companies be held accountable for the content their assistants generate and that they drastically improve response accuracy for news-related queries. Despite the clear and present dangers, the U.S. Congress has yet to pass laws regulating AI in politics, leaving tech companies largely to self-govern. In the absence of formal regulation, the responsibility for ensuring accurate, unbiased, and safe AI information rests primarily with the very developers creating these powerful tools.
Looking Ahead: Navigating the AI Information Landscape
As AI assistants continue to evolve and integrate into our daily lives, the challenge of combating misinformation will only grow. For users, a critical approach to information, always cross-referencing and verifying what an assistant reports, becomes paramount. For AI developers, accuracy, transparency, and ethical safeguards must remain at the forefront of innovation. Without robust mechanisms for accountability and, potentially, thoughtful regulation, the promise of artificial intelligence as an information enhancer risks being overshadowed by its capacity to propagate falsehoods, endangering the very foundations of an informed citizenry and a robust democracy.
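To make the cross-referencing habit recommended above concrete, here is a minimal Python sketch, purely illustrative rather than a production fact-checker, that flags assistant answers deserving manual review: answers with no citations, citations that cannot be retrieved, or cited pages that never mention the answer’s key entities. The data shapes, function names, and the crude capitalized-word stand-in for entity extraction are all assumptions for the example.

```python
import re
from dataclasses import dataclass, field


@dataclass
class AssistantAnswer:
    """One AI assistant response plus the sources it attributed."""
    text: str
    cited_urls: list[str] = field(default_factory=list)


def extract_named_entities(text: str) -> set[str]:
    """Crude stand-in for real entity extraction: runs of capitalized words."""
    return set(re.findall(r"\b[A-Z][a-z]+(?:\s[A-Z][a-z]+)*\b", text))


def flag_for_review(answer: AssistantAnswer,
                    fetched_pages: dict[str, str]) -> list[str]:
    """Return human-readable reasons this answer deserves manual checking.

    fetched_pages maps each cited URL to its retrieved page text;
    fetching is omitted to keep the sketch self-contained.
    """
    reasons = []
    if not answer.cited_urls:
        reasons.append("no sources cited")
    entities = extract_named_entities(answer.text)
    for url in answer.cited_urls:
        page = fetched_pages.get(url, "").lower()
        if not page:
            reasons.append(f"cited source could not be retrieved: {url}")
        elif entities and not any(e.lower() in page for e in entities):
            reasons.append(f"cited source never mentions the answer's entities: {url}")
    return reasons


if __name__ == "__main__":
    # Hypothetical answer echoing the kind of error the EBU/BBC study found.
    answer = AssistantAnswer(
        text="Pope Francis is the current Pope.",
        cited_urls=["https://example.org/news"],
    )
    pages = {"https://example.org/news": "Coverage of the papal conclave and its outcome."}
    for reason in flag_for_review(answer, pages):
        print("REVIEW:", reason)
```

Even a filter this crude targets the failure modes the studies report most often, missing attribution and sources that do not actually support the claim, though genuine verification still requires human judgment.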