Voters can be rapidly swayed by AI chatbots
12-07-2025

A growing number of people ask AI chatbots for help with homework, recipes, travel plans, even break-up texts.

Now research shows that a brief back-and-forth with an artificial intelligence system can also shift how people feel about presidents, prime ministers, and hot-button policies.

New scientific studies reveal that chatting with large language model (LLM) systems can move voters’ views by 10 percentage points or more in many situations.

These systems were not using secret psychological tricks. They were mostly doing something more straightforward – piling up lots of claims that support their side of an argument.

AI chatbots enter the campaign trail

Researchers at Cornell University wanted to see how strong AI-driven persuasion could be when used to talk about real candidates and policies.

David Rand, a professor of information science and of marketing and management communications, is a senior author on both papers.

“LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side,” said Rand.

“But those claims aren’t necessarily accurate – and even arguments built on accurate claims can still mislead by omission.”

Voters and AI chatbots

In the Nature study, the researchers set up text conversations between voters and AI chatbots that were programmed to advocate for one of two sides in high-stakes elections.

Participants were randomly assigned to talk to a chatbot supporting one candidate or the other, and the bots were instructed to focus on policy, not personality or insults. After the chat, the team measured any change in attitudes and voting intentions.
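
To make that design concrete, here is a minimal sketch of how such an effect could be estimated. The numbers, sample sizes, and simple difference-in-means approach are illustrative assumptions, not the authors' actual analysis.

```python
import numpy as np

# Hypothetical data: post-chat support for Candidate A on a 0-100 scale,
# split by which chatbot each participant was randomly assigned to.
rng = np.random.default_rng(0)
pro_a_chat = rng.normal(loc=55, scale=15, size=1000)  # talked to the pro-A bot
pro_b_chat = rng.normal(loc=51, scale=15, size=1000)  # talked to the pro-B bot

# Because assignment was random, the difference in average support
# estimates the persuasive effect of the chat, in scale points.
effect = pro_a_chat.mean() - pro_b_chat.mean()
print(f"Estimated persuasion effect: {effect:.1f} points on a 100-point scale")
```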

The team ran this experiment in three countries. One version focused on the 2024 U.S. presidential election, another on the 2025 Canadian federal election, and a third on the 2025 Polish presidential election.

The idea was to see if AI persuasion worked in different political systems – not just in one country with one set of issues.

Shifts among opposition voters

In the United States, more than 2,300 Americans took part about two months before the election. The researchers used a 100-point scale to track support for each candidate.

When the chatbot argued for Vice President Kamala Harris’s policies, it nudged likely Donald Trump voters 3.9 points toward Harris.

That effect was roughly four times larger than effects seen in tests of some traditional political ads during the 2016 and 2020 campaigns.

When the chatbot argued for Trump’s policies, it pulled likely Harris voters 1.51 points toward Trump on the same scale. Those changes may sound small, but in a close race, small shifts among opposition voters can matter.

In the Canadian and Polish experiments, which included 1,530 and 2,118 participants respectively, the pattern was similar but the effects were larger.

In these two countries, AI chats shifted opposition voters’ attitudes and voting intentions by about 10 percentage points.

“This was a shockingly large effect to me, especially in the context of presidential politics,” Rand said.

How chatbots influence voters

The chatbots did not rely on insults or emotional appeals. They used a mix of tactics, but polite tone and evidence-based arguments appeared most often.

When the researchers blocked the bots from using factual claims and limited them to vague reasoning, the systems became much less persuasive. That result pointed to a key ingredient: specific claims that sound factual.

To understand these claims better, the team used a separate AI model to fact-check the arguments, after validating that model against professional human fact-checkers.

On average, the chatbots’ statements contained more correct than incorrect information.

However, in all three countries, bots instructed to push right-leaning candidates produced more inaccurate claims than bots backing left-leaning candidates.

This pattern matched earlier work showing that social media users on the political right share more inaccurate information than users on the left, according to co-senior author Gordon Pennycook.

Tuning AI for persuasion

The Science study, conducted with colleagues at the AI Security Institute, zoomed out from elections to a huge set of political questions.

Nearly 77,000 participants in the United Kingdom chatted with AI systems about more than 700 policy issues, ranging from taxes to environmental rules.

The researchers tested different model sizes, instructions, and training strategies. They found a clear pattern.

“Bigger models are more persuasive, but the most effective way to boost persuasiveness was instructing the models to pack their arguments with as many facts as possible, and giving the models additional training focused on increasing persuasiveness,” Rand said.

“The most persuasion-optimized model shifted opposition voters by a striking 25 percentage points.”

That jump came from a model explicitly tuned to win people over, not just to answer questions.

Persuasion versus accuracy

As the models became more persuasive, their accuracy slipped. The more the chatbot tried to supply factual claims, the more it bumped into the limits of what it knew.

At some point, it started producing incorrect statements that sounded plausible. Rand suspects that pressure to keep producing “facts” caused the systems to fabricate when they ran out of correct information.

This tension between persuasiveness and accuracy raises concerns for election seasons.

A tool that can quickly assemble a long list of reasons to support one side, but that becomes less reliable as it gets more persuasive, poses a challenge for voters trying to sort truth from fiction.

Conspiracy theories and AI chats

The third study – published in PNAS Nexus by Rand, Pennycook and colleagues – stepped away from candidates and policy debates and into the world of conspiracy theories.

In this study, arguments crafted by AI chatbots reduced belief in conspiracy claims, even when participants thought they were talking to a human expert instead of a machine.

The results suggest that the power of the messages is linked to the content of the arguments rather than any special trust people placed in AI as an authority.

The systems’ ability to assemble coherent, fact-rich explanations appeared to matter more than who people believed was talking to them.

Ethics, safeguards, and real-world limits

Across the studies, the researchers built in ethical safeguards. Every participant was told they were chatting with an AI system, not a human.

The direction of persuasion was randomized so that, on average, the experiments did not push public opinion toward any one party or candidate overall. Everyone was fully debriefed afterward.

The team notes that chatbots can only influence people who choose to interact with them. Getting large numbers of voters to spend time in political chats is not guaranteed.

Still, as political campaigns and advocacy groups look for new tools, AI systems sit high on the list.

“The challenge now is finding ways to limit the harm – and to help people recognize and resist AI persuasion,” Rand said.

The studies were published in the journals Nature, Science, and PNAS Nexus.
