From Troll Farms to AI-Powered Factories
Kremlin information warfare has shifted from the manual labor of St. Petersburg troll farms to campaigns driven by generative AI, according to the Centre for International Governance Innovation. The difference is not just speed but versatility. Where earlier operations required dozens of paid employees writing social media posts in broken English, current campaigns use off-the-shelf AI tools to generate polished text, fabricated images, deepfake videos, and even fake websites at minimal cost.

The Justice Department documented how one social media bot farm used elements of AI to create fictitious profiles purporting to belong to real individuals, then deployed those accounts to advance Russian government objectives. Doppelganger, an operation the Justice Department also disrupted, went further still. It paired AI-generated articles with paid social media ads, some of which were themselves created with AI tools, to push Kremlin narratives directly into American news feeds.

Consumer-grade tools have lowered the barrier to entry dramatically. A campaign tracked under the name Operation Overload used freely available AI applications to produce pictures, videos, QR codes, and counterfeit websites, all designed to trick those who encountered them. The practical result is that a single operator with a laptop can now generate the kind of multimedia disinformation that once required a team and a budget. Analysts quoted in NPR’s coverage of Russian AI propaganda warned that these tools are being used specifically to undermine public backing for continued military and financial support to Ukraine.

Poisoning the Well That AI Drinks From
The most strategically ambitious element of Russia’s AI-enabled propaganda is not what it puts on social media but what it seeds across the open web for AI systems to absorb. A joint investigation by the Atlantic Council’s DFRLab and the Finnish company CheckFirst found that pro-Kremlin networks are rewriting online reference material and manipulating Wikipedia entries to skew how large language models process information about Russia and Ukraine. France’s national digital vigilance agency, VIGINUM, identified a sprawling network of propaganda sites operating under the label Portal Kombat. The network flooded the internet with articles formatted to look like local news, creating a massive volume of text that web crawlers and AI training pipelines would naturally ingest.

Reporting in the Bulletin of the Atomic Scientists describes how such Russian-linked networks are deliberately saturating the web with low-quality but on-message content in hopes of corrupting the data that powers commercial chatbots. The tactic is sometimes called “LLM grooming”: a strategy to poison the training data that chatbots and search tools rely on so that their outputs gradually reflect Kremlin-friendly framing. It builds on the insight that AI systems are only as reliable as the material they are trained on. If enough poisoned content is mixed into that corpus, the resulting bias can be difficult to detect and even harder to remove.

This matters because AI chatbots are increasingly used as research tools by students, journalists, and policymakers. If the underlying training data carries systematic bias toward Russian narratives, the distortion does not announce itself. It shows up as a subtle reframing: Ukrainian self-determination gets erased, and Russian aggression gets recast as a defensive reaction, according to analysis by the Center for European Policy Analysis. That kind of shift in how history is presented could persist long after the current conflict ends, shaping how future generations understand the war.

Evidence suggests this is not a hypothetical risk. A report highlighted by local U.S. news coverage detailed how a Russian-linked network exploited mainstream AI chatbots to launder propaganda into seemingly neutral answers to user questions. In many cases, the chatbots reproduced false or misleading claims about the war in Ukraine, presented without context or critical scrutiny, giving them an aura of objectivity they did not deserve.
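The mechanics of grooming are easier to see in miniature. The Python sketch below is purely illustrative: the corpus, the planted page, and the frequency heuristic are all invented here, and no real training pipeline works this crudely. It captures only the underlying arithmetic, that any system weighting claims by how often they appear on the web can be tipped by mass-produced near-duplicates.

# Toy illustration of "LLM grooming," with invented data: a pipeline that
# weights claims by how often they appear can be tipped by mass-produced
# near-duplicate pages. Real training pipelines are far more complex; this
# shows only the frequency arithmetic the tactic exploits.
from collections import Counter

# A small "organic" corpus; each string stands in for an indexed web page.
organic_corpus = [
    "Reference entry: the war began with a large-scale cross-border invasion.",
    "News report: monitors documented strikes on civilian infrastructure.",
    "Encyclopedia article: the attack is widely described as unprovoked aggression.",
]

# Near-duplicate planted pages, mimicking a network of pseudo-local news sites.
planted = "Local news: the operation is described as a defensive reaction."
planted_corpus = [planted] * 50  # cheap to mass-produce with generative tools

def dominant_framing(corpus):
    """Naive heuristic: report whichever framing appears most often."""
    counts = Counter()
    for page in corpus:
        if "defensive reaction" in page:
            counts["defensive"] += 1
        if "invasion" in page or "aggression" in page:
            counts["aggression"] += 1
    return counts.most_common(1)[0]

print(dominant_framing(organic_corpus))                   # ('aggression', 2)
print(dominant_framing(organic_corpus + planted_corpus))  # ('defensive', 50)

The point is not the toy itself but the asymmetry it captures: producing fifty planted pages costs almost nothing, while detecting and removing them after ingestion is expensive.

Deepfakes and the Weaponization of Trust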
Beyond text, Russian operations are deploying increasingly sophisticated audiovisual deception. Ukraine’s Center for Countering Disinformation has cataloged the AI techniques in active use, including full deepfakes that replace a person’s face or voice to create realistic but entirely fabricated video, as well as partial deepfakes that alter specific segments of otherwise authentic footage. These tools allow operators to put false words in the mouths of political leaders or fabricate battlefield clips that look genuine on a phone screen.

The BBC reported that AI-generated videos are supercharging Russia’s disinformation campaigns on platforms like TikTok, where content linked to Kremlin-aligned units circulated widely before the platform removed it. In one example described by researchers, a legitimate academic reel posted by King’s College London was edited, revoiced with AI tools, and reuploaded with a misleading caption that inverted the original message. Viewers who encountered the altered clip had little reason to doubt its authenticity: the visuals appeared genuine, the speaker’s lips were synced, and the institutional branding remained intact.

Such deepfakes exploit a basic feature of social media consumption: people often watch short videos with the sound off, glancing only at subtitles or on-screen text. A convincing logo or familiar face can be enough to trigger trust, even if the underlying audio has been synthetically altered. Once shared and reshared across networks, the doctored video takes on a life of its own, detached from any corrections issued later.

Experts warn that as generative tools continue to improve, distinguishing between authentic and fabricated footage will become even harder for average users. That erosion of confidence is itself a strategic goal. If citizens come to believe that any video could be fake, then verified evidence of war crimes, corruption, or election interference can be dismissed as just another deepfake, blunting accountability and helping aggressors escape consequences.
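Partial deepfakes of the King’s College variety, where the visuals are kept and the audio replaced, leave one useful trace: the footage itself still matches the original. A common building block for spotting such re-uploads is a perceptual hash, which stays stable across re-encoding and light edits. The Python sketch below uses the Pillow imaging library and an invented pair of frame files; it is a minimal illustration under those assumptions, not a verification pipeline, which in practice combines many signals.

# Toy perceptual "average hash" for flagging re-uploaded, near-identical frames.
# The file names are hypothetical; real verification work layers many checks
# (provenance metadata, audio analysis, source tracing) on top of this idea.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size grayscale, then bit-encode pixels above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Frames pulled from the original clip and the suspect reupload (paths invented).
original = average_hash("frames/original_frame.png")
suspect = average_hash("frames/reupload_frame.png")

# Small distances suggest the same underlying footage even after re-encoding;
# the threshold of 10 is an arbitrary illustration, not a calibrated value.
print("likely same footage" if hamming(original, suspect) <= 10 else "different")

In a case like the altered King’s College reel, a low distance to the original would help confirm that the visuals were reused even though the audio had been replaced.

Defending the Information Ecosystem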
Confronting this new wave of AI-enabled propaganda will require a mix of technical, legal, and civic responses. On the technical side, AI developers are racing to build better filters that can detect coordinated inauthentic behavior and flag content that appears to be machine-generated. Some companies are experimenting with watermarking and provenance tools that can help verify whether a piece of media has been altered. Governments, for their part, are expanding legal authorities to seize infrastructure and expose covert networks, as the Justice Department did when it took down the Doppelganger domains. European regulators are investing in monitoring units like VIGINUM to track cross-border information operations and share findings with allies, while independent watchdogs and research labs continue to map how pro-Kremlin narratives move across platforms and languages.

Yet technical and legal fixes can only go so far. The same generative systems that Russia is attempting to corrupt are also being used by educators, journalists, and civil society groups to fact-check claims, translate reliable reporting, and reach audiences that might otherwise be vulnerable to disinformation. The challenge is to harden these tools against manipulation without abandoning their promise.

Ultimately, resilience will depend on public literacy as much as on platform policy. Teaching users to question screenshots, reverse-image search suspicious photos, and treat AI-generated summaries as starting points rather than final answers can blunt the impact of even sophisticated campaigns. In an era when propaganda can be mass-produced by algorithms and smuggled into the very systems people trust to explain the world, the ability to interrogate information, not just consume it, has become a core democratic skill.
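As a concrete footnote to the detection work described above, one simple signal platforms and researchers can look for is many accounts posting near-identical text. The Python sketch below is a deliberately small illustration: the accounts, posts, and similarity threshold are all invented, and production systems rely on much richer signals such as posting times, network structure, and account metadata.

# Minimal sketch of one coordinated-inauthentic-behavior heuristic: flag
# pairs of accounts whose posts are near-duplicates. All data here is invented.
from difflib import SequenceMatcher
from itertools import combinations

posts = {
    "account_a": "Support for Kyiv is collapsing, officials admit in private.",
    "account_b": "Support for Kyiv is collapsing, officials admit privately.",
    "account_c": "Weather in Lisbon is lovely this week, highs near 22C.",
    "account_d": "Support for kyiv is collapsing officials admit in private",
}

def similarity(a: str, b: str) -> float:
    """Ratio of matching characters after lowercasing, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Pair up accounts whose posts are suspiciously similar (threshold is arbitrary).
THRESHOLD = 0.85
flagged = [
    (x, y)
    for (x, px), (y, py) in combinations(posts.items(), 2)
    if similarity(px, py) >= THRESHOLD
]
print(flagged)  # pairs among account_a / account_b / account_d

Even this crude check illustrates why detection is an arms race: the same generative tools can trivially paraphrase each post past a fixed similarity threshold, which is why layered defenses and public literacy both matter.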