Imagine scrolling through a heated political thread and seeing dozens of accounts, each with a distinct voice and posting history, all converging on the same talking point. None of them are real people. That scenario is no longer science fiction, according to a coalition of more than a dozen social scientists, AI researchers, and public figures who published a detailed warning in a preprint first posted in May 2025 and revised on arXiv in January 2026. Their central claim: networks of autonomous AI agents, powered by large language models, can now be orchestrated to flood online platforms with synthetic opinions and manufacture the appearance of grassroots agreement where none exists.
The paper arrives at a moment when the underlying technology is no longer theoretical. Documented cases of AI-generated propaganda, experimental proof that language models produce near-undetectable disinformation on demand, and lab evidence that AI conversations can durably shift human beliefs have all appeared in peer-reviewed literature over the past two years. Taken together, the researchers argue, these capabilities amount to a new class of information weapon, one that could distort democratic debate heading into the 2028 U.S. presidential election cycle and beyond.
From scripted bots to adaptive swarms
Older bot networks were blunt instruments. They relied on scripted messages, recycled talking points, and patterns that platform moderators learned to flag: unusual posting frequency, identical phrasing, clusters of accounts created on the same day. The new threat is qualitatively different. By pairing agentic AI (software that can plan, adapt, and act autonomously) with the conversational fluency of large language models, operators can spin up swarms of fake personas that argue, joke, hedge, and even disagree with one another in ways that look organic.
This is not a projection built on speculation alone. Before the term “AI swarm” had entered mainstream research vocabulary, investigators had already documented AI-powered social botnets using ChatGPT-era tools to generate clusters of convincing fake personas. Those botnets operated with enough sophistication to evade standard detection for extended periods. A peer-reviewed analysis published in PNAS Nexus went further, presenting empirical evidence that generative AI techniques were deployed in a state-backed disinformation campaign, with measurable patterns distinguishing machine-generated propaganda from human-written material.
What makes the swarm concept more dangerous than these earlier examples is integration. The consortium’s paper argues that agentic AI collapses the distance between content generation and coordinated deployment into a single automated loop. An operator no longer needs a team of human trolls managing dozens of accounts. A single system can generate thousands of distinct, contextually appropriate messages, assign them to synthetic personas with fabricated histories, and time their release to mimic the rhythm of genuine public discourse.
The production line is already running
A study published in PLOS ONE quantified just how easy it is to weaponize current language models. Researchers built DisElect, a dataset of 2,200 malicious prompts and 50 benign ones designed to test whether LLMs would comply with requests to produce election disinformation. The results were stark: models consistently generated high-quality disinformation that human evaluators struggled to tell apart from authentic political commentary. The cost per message is negligible. The volume is limited only by compute budget.
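To make the shape of such an audit concrete, here is a minimal sketch of how a compliance test over a prompt dataset might be structured. The generate callable and the keyword-based refusal check are illustrative assumptions for this sketch; the study’s actual evaluation methodology is more involved.

```python
# Minimal sketch of a prompt-compliance audit in the spirit of the DisElect
# study. The model client and refusal heuristic are illustrative assumptions,
# not the paper's actual methodology.
from typing import Callable, Iterable

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to assist")

def is_refusal(response: str) -> bool:
    """Crude keyword check; real audits use trained classifiers or human raters."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def compliance_rate(prompts: Iterable[str], generate: Callable[[str], str]) -> float:
    """Fraction of prompts the model answers rather than refuses."""
    results = [not is_refusal(generate(p)) for p in prompts]
    return sum(results) / len(results)
```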
That raw production capacity matters because persuasion at scale depends on saturation. A single fake post is easy to ignore. Hundreds of seemingly independent accounts echoing the same sentiment in slightly different words can reshape what observers perceive as the majority opinion, a distortion related to what social psychologists call the “false consensus effect.” When that effect is engineered deliberately, it can suppress dissent, shift fence-sitters, and make authentic voices feel outnumbered even when they are not.
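The arithmetic behind that perception shift is easy to sketch. The numbers below are hypothetical, but they show how a few hundred synthetic accounts turn an evenly split debate into an apparent landslide.

```python
# Hypothetical arithmetic of manufactured consensus: genuine opinion is
# evenly split, but synthetic accounts tilt the visible majority.
genuine_for, genuine_against = 100, 100   # real users, split 50/50
synthetic_for = 300                        # swarm accounts, all on one side

visible_for = genuine_for + synthetic_for
visible_total = visible_for + genuine_against
print(f"Apparent support: {visible_for / visible_total:.0%}")  # 80%, vs. 50% in reality
```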
An earlier framework paper mapped how generative language models fit into automated influence-operations pipelines, identifying intervention points across model building, access controls, content dissemination, and belief formation. That taxonomy gave the field its working vocabulary for discussing threats and defenses. The swarm paper builds directly on it, treating the pipeline not as a sequence of separate risks but as a system that agentic AI can now run end to end.
Evidence that AI persuasion actually works
Producing convincing text is one thing. Changing minds is another. A controlled experiment published in Science showed that AI-mediated conversations could durably reduce conspiracy beliefs in participants, with effects persisting weeks after the interaction. The study focused specifically on conspiracy beliefs rather than general political attitudes, but its implications are broad: if a single AI dialogue can move someone off a deeply held position, coordinated swarms delivering tailored messages across platforms could amplify that effect by orders of magnitude.
There are important caveats. The Science experiment took place in controlled lab conditions where participants engaged in sustained, one-on-one exchanges with an AI. Social media is chaotic. Users encounter competing messages, vary widely in trust levels, and rarely engage in the kind of extended back-and-forth the experiment required. Whether swarm-generated content can replicate that depth of persuasion in a noisy feed remains unproven. But the researchers behind the swarm warning argue that volume and repetition may compensate for shallow engagement: even brief, repeated exposure to a manufactured consensus can shift perception over time.
What nobody has proven yet
For all the documented building blocks, several critical gaps remain. No published evidence as of April 2026 confirms that a fully autonomous, self-coordinating AI swarm has operated at scale during a live election. The consortium’s paper models the threat and draws on prior case studies, but the specific scenario of integrated swarms swaying actual votes has not been forensically documented. Demonstrating that individual components work is different from proving the assembled weapon has been fired.
Platform responses are another blind spot. Neither Meta nor X has released detailed public statements about swarm-specific detection tools. Whether their existing systems for flagging coordinated inauthentic behavior can catch swarms that use generative models to vary language, timing, and persona behavior is unclear. Experts quoted in The Guardian’s January 2026 coverage flagged the 2028 U.S. presidential race as a likely pressure point, but no on-record responses from U.S. election officials addressing the swarm threat have surfaced publicly.
There is also the question of collateral damage. If swarms can simulate consensus, they can just as easily drown out genuine grassroots movements by flooding the same channels with synthetic noise. Real organizers could find their messages buried, potentially pushing them toward more extreme tactics simply to break through. That dynamic has not been studied empirically, but it follows logically from the mechanics the researchers describe, and it raises uncomfortable questions about whether the cure (aggressive content moderation) might itself suppress legitimate speech.
Adaptation is another wild card. Once platforms and regulators begin targeting swarm tactics, adversaries will shift. They might blend human operators with AI agents, migrate to smaller or semi-private communities, or fragment swarms into micro-networks that stay below detection thresholds. The research to date offers limited insight into how these cat-and-mouse dynamics will unfold across multiple election cycles.
Where policy and platforms need to move
The consortium’s warning points toward several near-term priorities for policymakers. Transparency requirements around political advertising and automated accounts could make it harder for swarms to pass as ordinary citizens. Election regulators may need updated guidance on AI-generated campaign material, including disclosure norms and record-keeping obligations for parties and candidates using generative tools. The European Union’s AI Act, which entered into force in 2024 and applies in stages, includes provisions on manipulation and deception, but whether its enforcement mechanisms can keep pace with swarm-speed deployment is an open question.
Platforms face pressure on two fronts. Detection systems need new signals beyond traditional bot markers like posting frequency or IP clustering; behavioral analysis tuned to the subtle coordination patterns of agentic swarms will likely be necessary. At the same time, platforms will need clearer policies on the acceptable use of AI for persuasion, including whether to label AI-generated content in political threads and how to handle accounts that blend human and automated activity.
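As one illustration of what such behavioral analysis might look for, the sketch below flags pairs of accounts that post semantically similar messages within a narrow time window. The TF-IDF similarity measure, the threshold, and the window are illustrative assumptions, not a description of any platform’s deployed system.

```python
# Sketch of one possible swarm-coordination signal: pairs of accounts that
# post semantically similar messages close together in time. Thresholds and
# features here are illustrative assumptions, not a deployed detector.
from dataclasses import dataclass
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

@dataclass
class Post:
    account: str
    text: str
    timestamp: float  # seconds since epoch

def coordination_pairs(posts, sim_threshold=0.8, window_seconds=3600):
    """Flag account pairs whose posts are near-duplicates in meaning and time."""
    matrix = TfidfVectorizer().fit_transform([p.text for p in posts])
    sims = cosine_similarity(matrix)
    flagged = set()
    for (i, a), (j, b) in combinations(enumerate(posts), 2):
        if a.account == b.account:
            continue
        close_in_time = abs(a.timestamp - b.timestamp) <= window_seconds
        if close_in_time and sims[i, j] >= sim_threshold:
            flagged.add(tuple(sorted((a.account, b.account))))
    return flagged
```

A real detector would combine many such signals, persona metadata, network structure, and timing distributions among them, precisely because a generative swarm can vary any single one of them.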
Independent auditing may be the most urgent structural need. External researchers currently have limited access to the platform data required to study emerging swarm tactics in real time. Without some form of privacy-preserving data sharing, proposed defenses cannot be tested, and new adversary strategies will go undetected until the damage is done.
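One frequently discussed approach is to let outside researchers query noisy aggregates rather than raw records. Below is a toy sketch of that idea using the standard Laplace mechanism from differential privacy; the epsilon budget is an arbitrary illustrative choice.

```python
# Toy sketch of privacy-preserving data sharing via the Laplace mechanism:
# outside researchers receive noisy aggregate counts, never raw user records.
# The default epsilon here is an illustrative assumption.
import random

def noisy_count(true_count: int, epsilon: float = 0.5, sensitivity: float = 1.0) -> float:
    """Add Laplace noise with scale sensitivity/epsilon to a single count query."""
    scale = sensitivity / epsilon
    # The difference of two i.i.d. exponentials with mean `scale` is Laplace(0, scale).
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise
```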
Why skepticism is a civic skill now
Structural responses from governments and platforms will take time. In the interim, individual users are not helpless, but they do need to update their instincts. The old heuristic, “If lots of people are saying it, there must be something to it,” no longer holds when a single operator can generate hundreds of distinct voices overnight. Treating apparent online consensus as a signal to investigate rather than a reason to agree is a practical first step.
Diversifying information sources helps too: relying less on any single platform’s algorithmic feed and more on direct subscriptions, established outlets, and offline networks. When emotionally charged political content surfaces from unfamiliar accounts, pausing to check provenance and cross-reference claims with independent reporting can blunt the impact of synthetic persuasion, whether the source is a bot swarm or a human troll farm.
None of this justifies fatalism. The same AI capabilities that enable swarm manipulation also power tools for fact-checking, content authentication, and public education. But the emerging research makes one thing clear: the technical and psychological prerequisites for manufacturing fake public consensus at scale now exist. Whether democratic institutions, platforms, and ordinary citizens can adapt fast enough is a question the next several election cycles will force into the open.
This article was researched with the help of AI, with human editors creating the final content.