Morning Overview

Former security chief warns Iran uses AI to shape global narratives

A former U.S. security official’s warning that Iran is actively using artificial intelligence to shape global narratives has drawn fresh attention to a threat that multiple federal agencies and private-sector researchers have been tracking for more than a year. The concern is not hypothetical: U.S. intelligence agencies, the FBI, and major technology firms have each documented Iranian operations that use generative AI to manufacture disinformation, target elections, and distort wartime coverage. What makes the current moment distinct is the widening gap between Iran’s actual AI capabilities and the outsized influence its AI-assisted campaigns have already achieved.

U.S. Agencies Flag AI-Driven Election Interference

The clearest official accounting of Iran’s AI-enabled influence work came through a pair of federal statements tied to the 2024 U.S. general election. The FBI and the Cybersecurity and Infrastructure Security Agency issued a public warning, explicitly naming Iran and Russia, that foreign adversaries were using generative AI and related tools to craft and spread misleading election content. The agencies described tactics including deepfake videos and manipulated media designed to erode voter confidence.

That warning was soon reinforced by a joint statement from the Office of the Director of National Intelligence, the FBI, and CISA that provided additional details on Iran’s election influence efforts. The interagency update built on earlier public disclosures, signaling that intelligence officials were tracking an evolving and persistent campaign rather than a one-off incident. Taken together, the two statements established a clear U.S. government position: Iran had moved beyond traditional propaganda into AI-assisted information warfare aimed squarely at American democratic processes.

For ordinary voters, this matters in a direct way. AI-generated content can be produced at scale, tailored to specific audiences, and distributed through social media channels that most people use daily. The speed and volume of synthetic media make it harder for individuals and platforms alike to distinguish authentic reporting from state-manufactured fiction, especially during high-stakes election periods. Even when false stories are eventually debunked, they can leave a residue of doubt that weakens trust in institutions.

Microsoft Research Tracks a Sharp Increase

Private-sector findings have corroborated the government’s assessment. Microsoft and OpenAI disclosed that adversaries including Iran had begun using generative AI in cyber operations, a development that blurs the line between espionage, sabotage, and influence campaigns. The reporting described how state-linked actors leveraged large language models to draft phishing emails, generate persuasive propaganda, and assist with technical tasks that support hacking efforts.

Separately, Microsoft research found that Russia, China, Iran, and North Korea had sharply increased their use of AI to deceive people online and mount cyberattacks against the United States. While the specific tools and tradecraft varied, the pattern was consistent: generative systems lowered the barrier to producing convincing text, imagery, and video at industrial scale. For a government with limited economic resources, AI became a force multiplier, enabling a relatively small team of operators to produce content that could flood global information channels.

The significance of the Microsoft findings extends beyond the technical details. When a company with visibility into billions of user accounts and enterprise networks quantifies a rise in state-backed AI activity, it provides a baseline that government agencies alone cannot easily establish. The research showed that AI was not simply a theoretical risk for future election cycles but an operational tool already in use by multiple adversaries, with Iran among the most active.

Sanctions Target AI-Powered Disinformation Networks

Washington has responded with enforcement actions. The U.S. government has imposed sanctions on Iranian and Russian groups tied to disinformation targeting American voters, describing how these networks used AI to manufacture fake videos and operate sham news sites that mimicked legitimate outlets. By combining synthetic media with fabricated branding, operators could push state-aligned narratives while disguising their origins.

Sanctions are a blunt instrument, and their effectiveness against decentralized digital operations is debatable. Shutting down one fake news site does little when the underlying AI tools can spin up replacements within hours. Still, the enforcement actions serve a naming-and-shaming function, making it harder for the targeted entities to operate openly and raising the cost of future campaigns. They also create a legal framework that can be used to freeze assets and restrict financial flows tied to disinformation infrastructure, signaling that information warfare will be treated as a matter of national security rather than mere online misbehavior.

Rhetoric Versus Reality in Iran’s AI Ambitions

One of the less discussed dimensions of this story is the tension between what Iranian officials claim about their AI capabilities and what the country can actually deliver. A peer-reviewed study published in Iranian Studies, a Cambridge University Press journal, titled “Artificial Intelligence in Iran: National Narratives and Material Realities,” examines how AI is framed in Iranian national discourse against the country's material capability constraints. The study documents state rhetoric around AI-enabled warfare and assassinations, narratives that serve domestic political purposes but often outstrip Iran's actual technical infrastructure.

This gap between narrative and capability creates a paradox. Iran’s AI tools for disinformation, while effective enough to draw formal U.S. government warnings and sanctions, are not evidence of a world-class AI sector. They reflect a strategic choice to invest in low-cost, high-impact information operations rather than in the kind of foundational AI research that requires massive computing resources and open scientific collaboration. The approach is asymmetric by design: it costs relatively little to generate synthetic media that forces an adversary to spend far more on detection, moderation, and public education.

That asymmetry has implications for how Western governments and technology companies allocate resources. Over-indexing on Iran’s rhetorical claims about AI-powered military capabilities could divert attention from the more immediate and proven threat: the steady production of synthetic content designed to fracture public trust in elections, media, and institutions. At the same time, underestimating the sophistication of Iran’s information operators because of broader infrastructure limits would be an equally serious error.

Wartime Disinformation Adds a New Front

The problem extends beyond elections. In regional conflicts, Iran-linked media channels have circulated AI-altered images and videos that purport to show battlefield successes, civilian atrocities, or foreign conspiracies. These clips often appear first on fringe platforms before migrating into mainstream social feeds, where they can be picked up by partisan commentators and, occasionally, by traditional outlets under deadline pressure.

Wartime information environments are already chaotic, with genuine footage, miscaptioned images, and outright fabrications colliding in real time. Generative AI intensifies that confusion. It enables operators to fabricate “evidence” that aligns perfectly with preexisting narratives, whether portraying Iran as a besieged victim or a technologically advanced power capable of striking enemies with precision. Once such content is seeded into the ecosystem, it can shape perceptions long after fact-checkers have identified it as false or manipulated.

For military planners and diplomats, this creates a risk that public opinion (and even elite decision-making) may be influenced by AI-generated artifacts rather than verified intelligence. For civilians in conflict zones, it can distort their understanding of immediate threats and safe routes, with potentially life-or-death consequences.

Defensive Measures and the Role of Institutions

Confronting AI-enabled disinformation from Iran and other states will require a layered response that goes beyond sanctions and public advisories. Technology platforms are investing in detection tools that can flag synthetic media, but these systems are locked in an arms race with constantly improving generation models. Civil society groups and journalists, meanwhile, are developing verification protocols and training programs to help audiences recognize telltale signs of manipulation.

Academic publishers and research institutions also have a role to play in clarifying what is known about state-level AI capabilities. Peer-reviewed scholarship, such as the Iranian Studies article, offers a grounded corrective to both state propaganda and alarmist speculation. Ensuring that high-quality, peer-reviewed work is widely accessible is a quiet but crucial counterweight to the flood of synthetic misinformation.

Ultimately, the challenge posed by Iran’s AI-assisted influence campaigns is less about raw computational power and more about the vulnerabilities of human information systems. Democracies that rely on open debate and pluralistic media are, by design, exposed to manipulation. The task ahead is to harden those systems, through transparency, education, and resilient institutions, without sacrificing the openness that makes them worth defending.


*This article was researched with the help of AI, with human editors creating the final content.