
Artificial intelligence was supposed to answer questions, not change minds. Yet a new wave of research shows that political chatbots are not just helpful explainers but highly effective persuaders, capable of shifting real-world opinions in measurable ways. The findings suggest that conversational systems are quietly becoming one of the most powerful tools in modern campaigning, with the potential to reshape how voters learn, deliberate and decide.
Instead of passive ads or one-way broadcasts, these systems offer tailored, back-and-forth dialogue that feels personal and responsive. That intimacy, combined with the ability to generate endless arguments on demand, is turning chatbots into engines of influence whose reach and impact are only starting to come into focus.
From experiment to evidence: how much can chatbots really move voters?
The central claim emerging from the latest research is stark: political chatbots can move voter preferences by margins that would make a campaign consultant jealous. In controlled experiments tied to real elections, participants who chatted with AI systems about candidates and issues shifted their choices by up to 15 percentage points, a swing large enough to flip close races and redraw the map of competitive districts. That effect was not confined to a single country or contest; it appeared across multiple democracies where the same basic pattern held, with the conversational agents consistently nudging people toward the positions they promoted, as detailed in a recent December study.
What makes those numbers more unsettling is that the chatbots did not need sophisticated psychological profiles or months of microtargeting to achieve that impact. In many cases, they relied on short conversations that unfolded over a single session, yet still produced durable changes in how people said they would vote. Follow-up surveys suggested that these were not fleeting impressions but shifts that persisted long enough to matter in an election cycle, especially among undecided or lightly attached voters who were open to new information.
Why a chatbot can outperform a political ad
Traditional political advertising is built on repetition and reach, blasting the same message across television, social media and billboards in the hope that something sticks. By contrast, chatbots offer a bespoke experience that adapts to each person’s questions, objections and values. In one set of experiments tied to a Democratic congressional candidate in Pennsylvania, researchers found that AI systems designed to talk through policy positions outperformed standard campaign outreach and even beat conventional digital ads at persuading voters to support the candidate. The key advantage was not flashy slogans but the ability to respond in real time to doubts and counterarguments, a dynamic that helped the bots quietly outclass familiar forms of political marketing, according to a detailed December analysis.
That same work showed that the bots did not need to be aggressive or emotionally manipulative to be effective. Instead, they leaned on patient explanation, walking users through the candidate’s record and policy proposals while addressing specific concerns. Because the system could generate virtually unlimited variations of the same core message, it could tailor its pitch to a retiree worried about Social Security, a small business owner focused on taxes or a college student anxious about climate policy, all without the cost or coordination headaches that come with human phone banks and door-knocking operations.
The mechanics of persuasion: evidence, tone and repetition
Under the hood, the most successful political chatbots share a few design choices that help explain their influence. First, they are trained and instructed to present long chains of evidence, citing studies, statistics and historical examples to back up their claims. Rather than rely on a single talking point, they bombard users with supporting material, which makes the argument feel thorough and well researched. One research team found that “the more factual claims” the system produced, the more it shifted opinions, a pattern that held even when the facts were not especially novel; a December report observed that the largest gains came from how the model was trained and instructed to present evidence.
Second, the tone of these systems is typically calm, polite and nonjudgmental, which lowers users’ defenses. Instead of the combative style that dominates cable news or social media, the bots adopt a conversational voice that invites questions and treats skepticism as an opportunity rather than a threat. In experiments where people discussed contentious topics like immigration or abortion, that approach helped keep them engaged long enough for the arguments to sink in, and it made them more willing to entertain views they might otherwise dismiss. Over time, the combination of detailed evidence and patient tone created a sense of trust that traditional campaign messaging struggles to match.
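To make those two design choices concrete, here is a minimal sketch of how such behavior can be induced purely through instructions, with no special model architecture. The prompt wording and the generate() placeholder are illustrative assumptions, not the configuration used in any of the studies.

```python
# Illustrative sketch (not the researchers' actual setup): the
# evidence-dense, calm-toned behavior described above is induced
# entirely through a system prompt. generate() stands in for
# whatever chat-completion API a deployment would call.

EVIDENCE_HEAVY_SYSTEM_PROMPT = """\
You are discussing {issue} with a voter.
Back every claim with a supporting study, statistic, or historical
example, and prefer several independent pieces of evidence over a
single talking point. Stay calm and polite, and treat skepticism
as a question to answer, not an attack to rebut."""


def build_messages(issue: str, user_message: str) -> list[dict]:
    """Assemble the message list for a chat-completion call."""
    return [
        {"role": "system",
         "content": EVIDENCE_HEAVY_SYSTEM_PROMPT.format(issue=issue)},
        {"role": "user", "content": user_message},
    ]


def generate(messages: list[dict]) -> str:
    """Placeholder for a real chat-completion API call."""
    raise NotImplementedError("wire up a model provider here")


messages = build_messages("tax policy", "Why should I trust these numbers?")
# reply = generate(messages)  # uncomment once generate() is implemented
```

Nothing about the persuasive style lives in the model weights here; swap the prompt and the same system behaves differently, which is part of what makes this pattern so cheap to deploy.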
What makes chatbot arguments so convincing?
Researchers who have watched these systems in action point to a simple but powerful factor: volume. When a human canvasser or friend argues about politics, they might offer a handful of reasons before the conversation drifts or ends. A chatbot, by contrast, can generate dozens of distinct arguments, each tailored to the user’s prior responses, and it can do so without fatigue or frustration. One study concluded that what made the chatbots so persuasive was “the sheer amount of evidence they cited to support their arguments,” a pattern that turned even skeptical participants into more receptive listeners, according to a December account highlighting what researchers saw as a new kind of high-volume persuasion.
That firehose of reasoning is not just about quantity; it is also about personalization. As the conversation unfolds, the system can notice which arguments resonate and which fall flat, then adjust its strategy accordingly. If a user reacts strongly to economic concerns, the bot can lean into jobs, wages and inflation. If moral or identity-based appeals seem to land better, it can pivot to fairness, rights or national pride. This adaptive loop, powered by large language models that excel at pattern recognition, allows the chatbot to refine its pitch in real time, something static campaign literature or pre-recorded ads simply cannot do.
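One way to picture that adaptive loop is a toy routine like the following, which scores each user reply against a few argument themes and leans into whichever resonates. The keyword matching is a deliberately crude stand-in for inference a real system would delegate to the model itself, and the themes and argument text are invented for illustration.

```python
# Toy sketch of the adaptive loop described above: score each user
# reply against a few argument themes and lean into whichever theme
# resonates. Real systems would let the model infer this; keyword
# counting is a deliberately crude stand-in.

THEMES = {
    "economic": {"jobs", "wages", "taxes", "inflation", "prices"},
    "moral": {"fair", "fairness", "rights", "values", "justice"},
    "identity": {"community", "country", "pride", "heritage"},
}

ARGUMENTS = {
    "economic": "Here is what the plan would mean for jobs and wages...",
    "moral": "At its core, this is a question of basic fairness...",
    "identity": "This is about the kind of community we want to be...",
}


def score_themes(user_reply: str) -> dict[str, int]:
    """Count how many keywords from each theme appear in the reply."""
    words = set(user_reply.lower().split())
    return {theme: len(words & keywords) for theme, keywords in THEMES.items()}


def next_argument(user_reply: str) -> str:
    """Pick the argument frame whose theme resonated most."""
    scores = score_themes(user_reply)
    best = max(scores, key=scores.get)
    return ARGUMENTS[best]


print(next_argument("I'm mostly worried about inflation and prices."))
# -> the economic framing
```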
When “facts” are wrong: the disinformation problem
The same qualities that make chatbots effective advocates also make them dangerous when their information is flawed. In one major study, the systems were caught using inaccurate claims to change people’s political views, mixing correct statistics with subtle errors and outright falsehoods. Within the reams of information they produced, some of the most persuasive talking points turned out to be misleading or unverified, yet users rarely spotted the problems. The researchers warned that this blend of confident tone and occasional fabrication could turn AI into a potent engine of persuasive propaganda with limited effort, a concern underscored by December findings that the persuasiveness of AI chatbots was not entirely on the up-and-up and that errors were common within the generated content.
Part of the problem is structural. Large language models are designed to predict plausible text, not to verify every statement against a database of vetted facts. When they are deployed in political contexts without strict guardrails, they can confidently invent polling numbers, misstate a candidate’s record or oversimplify complex legislation, all while sounding authoritative. Users who are not experts on the topic have little way to distinguish accurate summaries from hallucinated details, especially when the bot wraps its claims in citations or references that appear legitimate at a glance. That asymmetry of knowledge gives the system enormous power to shape perceptions, even when its underlying information is shaky.
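A minimal guardrail against that failure mode might look something like the sketch below, which refuses to surface any reply citing a percentage that is not in a curated table of verified figures. The fact table and the regex extraction are illustrative assumptions, far simpler than a production fact-checking pipeline would need to be.

```python
# Minimal guardrail sketch: before a political reply is shown, any
# percentage it cites is checked against a curated table of verified
# figures. The table contents and the crude regex extraction are
# illustrative assumptions, not a production fact-checking pipeline.

import re

VERIFIED_FIGURES = {
    # claim key -> verified value in percent (assumed, for illustration)
    "turnout_2020": 66.6,
}


def extract_percentages(reply: str) -> list[float]:
    """Pull every 'NN%' or 'NN.N%' figure out of the model's reply."""
    return [float(m) for m in re.findall(r"(\d+(?:\.\d+)?)\s*%", reply)]


def passes_guardrail(reply: str) -> bool:
    """Reject replies citing percentages not in the verified table."""
    verified = set(VERIFIED_FIGURES.values())
    return all(value in verified for value in extract_percentages(reply))


print(passes_guardrail("Turnout in 2020 reached 66.6% of eligible voters."))  # True
print(passes_guardrail("Polls show 83% of voters back the plan."))            # False
```

Even this toy version illustrates the asymmetry: the guardrail can only vouch for claims someone has already verified, while the model can generate plausible-sounding figures without limit.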
Facts, falsehoods and the power of framing
Even when chatbots stick to verifiable information, the way they frame those facts can tilt opinions. Experiments have shown that a basic prompt telling the system to be persuasive, to focus on emotional resonance or to appeal to users’ morals can significantly change how it presents the same underlying data. In some tests, simply instructing the bot to be more compelling led it to emphasize vivid anecdotes, moral language or fear-based scenarios that made its arguments more memorable, according to a December investigation that tracked how chatbots spewing facts, and falsehoods, can sway voters, and how people conversing with chatbots about politics find those tailored framings surprisingly influential.
This flexibility means that the same model can be tuned to sound like a neutral explainer or a partisan attack dog, depending on the instructions it receives. A campaign that wants to mobilize its base might ask the system to highlight threats and injustices, while a group focused on persuasion might prioritize empathy and shared values. In both cases, the user experiences a smooth, coherent conversation that feels organic, even though the tone and framing have been carefully engineered behind the scenes. That malleability raises hard questions about transparency and consent, since people rarely know whether they are talking to a system optimized for balance or one explicitly designed to change their minds.
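That tuning is easy to see in code: swapping one instruction string for another is all it takes to move the same model between registers. Both prompts below are invented examples of the pattern the experiments describe, not text drawn from the studies.

```python
# Illustration of how the same model can be steered toward different
# framings purely by its instructions. Both prompt templates are
# invented examples of the pattern described above; the model weights
# never change between modes.

NEUTRAL_EXPLAINER = (
    "Explain both sides of {issue} accurately and dispassionately. "
    "Flag uncertainty, and do not advocate for either position."
)

PERSUASIVE_ADVOCATE = (
    "Make the most compelling case you can for {position} on {issue}. "
    "Use vivid anecdotes, moral language, and emotionally resonant "
    "framing while keeping every factual claim accurate."
)


def system_prompt(mode: str, issue: str, position: str = "") -> str:
    """Select the framing instruction for this conversation."""
    template = NEUTRAL_EXPLAINER if mode == "neutral" else PERSUASIVE_ADVOCATE
    return template.format(issue=issue, position=position)


print(system_prompt("neutral", "immigration"))
print(system_prompt("advocate", "immigration", "expanded legal pathways"))
```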
Scale without humans: 700 issues, countless voters
One of the most striking demonstrations of chatbot scale came in a project where the systems discussed over 700 political issues, ranging from abortion and immigration to tax policy and climate change. Participants who engaged with the bots over time reported noticeable shifts in their views, and the researchers argued that these were long-term changes rather than short-term illusions. The breadth of topics mattered, because it allowed the system to meet users wherever their interests lay, then gradually connect those concerns to broader ideological frameworks that aligned with a particular party or candidate.
Crucially, this kind of outreach does not require a large staff or a network of volunteers. Once the models are trained and deployed, they can handle thousands of simultaneous conversations, each tailored to the individual on the other side of the screen. The research team behind the 700-issue experiment noted that the same infrastructure could be repurposed to spread disinformation from conservative circles or from any other political faction that gains access to the tools. That prospect turns what might look like a clever campaign tactic into a systemic risk, since it lowers the cost of mass persuasion to almost zero and removes many of the practical limits that have historically constrained political messaging.
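The economics of that scale are visible in a few lines of ordinary concurrency code. In the sketch below, a single process juggles a thousand conversations at once; the handle_turn function is a placeholder for a real model call, and everything else is standard Python asyncio.

```python
# Sketch of the "no volunteers needed" scale point: one process can
# hold thousands of conversations concurrently. handle_turn() is a
# placeholder for a real model call; the rest is standard asyncio.

import asyncio


async def handle_turn(voter_id: int, message: str) -> str:
    """Stand-in for a model call; a real system would await an API here."""
    await asyncio.sleep(0.01)  # simulated model latency
    return f"[voter {voter_id}] tailored reply to: {message!r}"


async def main() -> None:
    # One coroutine per voter; no human staffing grows with this list.
    turns = [handle_turn(i, "What does the candidate say about taxes?")
             for i in range(1000)]
    replies = await asyncio.gather(*turns)
    print(f"handled {len(replies)} simultaneous conversations")


if __name__ == "__main__":
    asyncio.run(main())
```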
The coming era of AI persuasion in elections
For years, the dominant fear around AI and democracy centered on deepfakes and synthetic images that could flood social media with realistic but fake content. Those threats are real, but the new research suggests that text-based chatbots may pose an even more immediate challenge. The era of AI persuasion in elections is about to begin, with systems that can engage voters one by one, answer their questions and nudge their choices without any human operative in the loop. One legal scholar, Tal Feldman, a JD candidate at Yale Law School, has argued that this shift will force regulators to rethink existing rules on political advertising and data protection, since the old frameworks were not built for interactive agents that can adapt on the fly, a point highlighted in a December briefing on how campaigns are already experimenting with these tools.
At the same time, technologists warn that the infrastructure for large-scale AI persuasion is rapidly maturing. Cloud platforms make it easy to deploy chatbots across messaging apps, campaign websites and social networks, while advances in language modeling improve the fluency and coherence of their responses. The result is a landscape where a small team with modest resources can field a virtual army of conversational agents that never sleep, never get bored and never deviate from the script. As more campaigns, advocacy groups and even foreign actors explore these capabilities, the line between genuine grassroots conversation and automated influence will become increasingly hard for ordinary voters to see.
Regulators race to catch up with “no humans required” influence
Policymakers are only beginning to grapple with what it means to have political persuasion that operates at scale with no humans required. Existing election laws tend to focus on who pays for an ad, how it is labeled and where it runs, not on whether the message is delivered by a human or a machine. Yet the latest research shows that chatbots can quietly reshape opinions in ways that traditional ads cannot, raising questions about disclosure, consent and accountability. One analysis warned that the fear that elections could be overwhelmed by realistic fake media has gone mainstream, but that text-based persuasion may be even more insidious because it feels like a private conversation rather than a public broadcast; a December overview noted that fake media is only part of the story, since chatbots can now run entire influence operations with no humans required.
Some experts argue for strict rules that would require clear labeling whenever voters interact with an AI system about politics, along with transparency about who designed and funded the model. Others push for limits on how these systems can be optimized, for example by banning prompts that explicitly instruct the bot to maximize persuasion or target specific demographic groups. There is also growing interest in independent audits that would test political chatbots for bias, misinformation and undue influence before they are deployed at scale. Whatever path regulators choose, the window for proactive action is closing fast, because the technology is already in the wild and the incentives to use it in high-stakes elections are only getting stronger.
How I see the next phase of political conversation
As I look across these studies, I see a political information ecosystem that is shifting from broadcast to dialogue, and from human-limited capacity to machine-scale persuasion. The evidence that chatbots can move voter preferences by up to 15 percentage points, outperform traditional ads and sustain long, tailored conversations across 700 issues suggests that they are not a sideshow but a central feature of future campaigns. That power can be used to deepen civic understanding, for example by patiently explaining ballot measures or helping people compare candidates’ records, but it can just as easily be turned toward manipulation, disinformation and hyper-targeted pressure that never sees the light of public scrutiny.
The challenge, in my view, is to build norms and rules that harness the best of this technology while constraining its worst uses. That means demanding transparency about when we are talking to machines, insisting on rigorous fact-checking and oversight for political chatbots, and recognizing that the most dangerous AI in politics may not be the one generating fake videos, but the one quietly talking us into changing our minds. The research now makes clear that these systems are already capable of persuading and swaying opinions at scale. What happens next will depend on whether voters, campaigns and regulators treat that finding as a warning, an opportunity or both.