Morning Overview

Disinformation on U.S.-Iran war is flooding the internet

Iranian-linked actors are waging an aggressive disinformation campaign across social media and state television as the U.S.-Iran conflict intensifies, according to multiple U.S. government agencies and media analysts. The effort blends AI-generated propaganda, cyber intrusions, and coordinated influence operations aimed at distorting the war’s narrative and destabilizing American public trust. Federal authorities have responded with sanctions, joint cybersecurity warnings, and public advisories, but the false content continues to spread faster than institutions can flag it.

U.S. Treasury Targets Iran’s Disinformation Network

The clearest sign that Washington views Iranian information warfare as a direct national security threat came when the Treasury Department’s sanctions arm, the Office of Foreign Assets Control (OFAC), formally designated a group of Iranian regime operatives for attempting to interfere in U.S. elections. In its public notice, OFAC described how these Iran-linked agents used online intimidation, spoofed identities, and coordinated messaging to spread falsehoods and threaten voters. The action, taken under Executive Order 13848, named specific individuals and entities tied to Iran’s government and froze any assets they held under U.S. jurisdiction, making clear that information operations can trigger the same financial penalties as physical attacks on American interests.

What makes these sanctions significant beyond the legal penalties is the signal they send about how seriously the U.S. government treats the overlap between wartime propaganda and election interference. Iran-linked operatives are not simply pushing battlefield narratives; they are running influence campaigns designed to fracture American domestic opinion during a period of active military conflict. The Treasury designations establish a formal record that these operations exist, that they are state-directed, and that Washington considers them serious enough to warrant economic punishment. That record matters because it gives journalists, researchers, and platform trust-and-safety teams a verified baseline for identifying coordinated inauthentic behavior tied to Iran, rather than relying on speculation about who is behind suspicious networks of accounts.

Federal Agencies Sound the Alarm on Cyber Threats

While Treasury focused on financial tools, U.S. security agencies have been equally blunt about the cyber dimension of Iran’s strategy. In a joint advisory, the National Security Agency (NSA), the Cybersecurity and Infrastructure Security Agency (CISA), the FBI, and the Department of Defense Cyber Crime Center (DC3) warned that Iranian hacking units are actively probing vulnerable U.S. networks. The accompanying fact sheet lays out specific techniques, such as exploitation of unpatched VPNs and web applications, credential harvesting, and the use of publicly available tools to maintain persistence inside compromised systems. A four-agency alert of this scope is unusual and underscores the concern that Iran’s digital operations are aimed far beyond traditional military targets.
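To make one of those techniques concrete: credential harvesting often shows up in logs as a single source attempting failed logins against many different accounts. The sketch below is a hypothetical illustration in Python, not taken from the advisory; the log data, function name, and threshold are all invented for the example, and a real defender would parse VPN or SSH logs rather than a hardcoded list.

```python
# Hypothetical auth events: (source_ip, username, outcome).
# Toy data standing in for parsed VPN or SSH logs.
EVENTS = [
    ("203.0.113.9", "alice", "fail"),
    ("203.0.113.9", "bob", "fail"),
    ("203.0.113.9", "carol", "fail"),
    ("203.0.113.9", "dave", "fail"),
    ("198.51.100.4", "alice", "ok"),
]

def flag_spray_sources(events, threshold=3):
    """Flag IPs with failed logins against many distinct accounts,
    a rough signature of password spraying / credential harvesting."""
    targets = {}
    for ip, user, outcome in events:
        if outcome == "fail":
            targets.setdefault(ip, set()).add(user)
    return sorted(ip for ip, users in targets.items() if len(users) >= threshold)

print(flag_spray_sources(EVENTS))  # the one spraying source in the toy data
```

Real intrusion detection layers many more signals on top of this, but the core idea, counting distinct failed targets per source, is what distinguishes a spray from an ordinary mistyped password.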

The timing and framing of this advisory suggest that U.S. intelligence officials see cyber operations and disinformation as tightly interwoven. A successful intrusion into a hospital, municipal network, or energy provider would not only disrupt services but also create fertile ground for panic-inducing rumors and fabricated narratives about government failures. Disinformation thrives when trust in institutions erodes, and a visible cyberattack can serve as both a technical operation and a psychological weapon. By treating Iranian hackers as a threat to civilian infrastructure and information systems that ordinary Americans rely on, the joint alert implicitly links network defense to information integrity, arguing that resilience in one domain reinforces stability in the other.

AI-Generated Propaganda Distorts the War

Iranian state media has taken the information battle a step further by deploying artificial intelligence to manufacture content that portrays its military in a favorable light and its adversaries as inept or malicious. Reporting from U.S. outlets describes how state television and affiliated social accounts have pushed AI-synthesized videos and stylized battle scenes that present a defiant, triumphant image of Iranian forces regardless of events on the ground. One analyst quoted in U.S. media coverage called it “stunning” how rapidly the Iranian cyber apparatus has scaled up AI-related content to boost the military’s image and seed doubt about Western reporting.

This AI-driven output is not limited to crude deepfakes or obviously doctored images. Instead, it spans a spectrum from subtly enhanced footage and polished infographics to fully synthetic clips that mimic news reports or eyewitness videos. These assets often circulate first on platforms where verification tools are weak or fragmented, then jump into larger networks as sympathetic influencers and anonymous accounts repost them. The sheer speed and volume of AI-generated content create an asymmetry that traditional fact-checking cannot resolve in real time. By the time a fabricated clip is debunked, it may have been shared thousands of times and woven into partisan narratives, where corrections reach only a fraction of the original audience. The result is a feedback loop in which AI-enhanced falsehoods reinforce each other across channels, gradually shifting perceptions even among users who never see the original state media broadcasts.
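The reposting pattern described above, where the same clip jumps across networks of accounts, is one of the few signals that can be checked mechanically. Here is a minimal, hypothetical Python sketch of the idea; the post data and function names are invented, and production systems would use perceptual hashes that survive re-encoding rather than the exact SHA-256 fingerprint used here for simplicity.

```python
import hashlib
from collections import defaultdict

# Hypothetical posts: (account, media_bytes). Real media would be
# fingerprinted with a perceptual hash; exact SHA-256 keeps this
# sketch dependency-free.
POSTS = [
    ("acct_a", b"clip-001"),
    ("acct_b", b"clip-001"),
    ("acct_c", b"clip-001"),
    ("acct_d", b"clip-002"),
]

def coordinated_clusters(posts, min_accounts=3):
    """Group posts by media fingerprint and return fingerprints pushed
    by at least `min_accounts` distinct accounts."""
    by_hash = defaultdict(set)
    for account, media in posts:
        by_hash[hashlib.sha256(media).hexdigest()].add(account)
    return {h: sorted(accounts) for h, accounts in by_hash.items()
            if len(accounts) >= min_accounts}

clusters = coordinated_clusters(POSTS)
print(len(clusters))  # one cluster of identical media across three accounts
```

Clustering alone does not prove coordination, since genuinely viral content also produces clusters; analysts combine it with account-creation dates, posting cadence, and network structure before calling behavior inauthentic.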

Why Social Media, Not AI Chatbots, Is the Real Vector

Public debate around “AI and disinformation” often fixates on chatbots, but current data indicates that conversational tools are still a niche source of news. A recent survey from the Pew Research Center found that only a relatively small share of Americans regularly turn to chat-based AI systems for information about current events. The study, whose methodology and questionnaire are public, suggests that the dominant channels for wartime narratives (both legitimate and false) remain familiar platforms such as X, Telegram, Instagram, and Facebook, along with encrypted messaging apps and partisan video sites.

This distinction has practical implications for how policymakers, platforms, and civil society groups prioritize defenses. If most people are not asking chatbots for war updates, then the biggest risk is not a malicious AI assistant but the rapid spread of AI-polished content in feeds that users already trust. Iranian-linked operators appear to understand this reality. Their strategy hinges on flooding social spaces with emotionally resonant images, short clips, and slogans that are easy to share and hard to contextualize, rather than trying to hijack the relatively limited audience for AI chat interfaces. Effective countermeasures therefore focus on strengthening platform moderation, investing in provenance technologies like cryptographic content signatures, and expanding media literacy so users can better interpret what they see. Efforts to regulate or restrict chatbots may be warranted for other reasons, but they do little to blunt the current wave of Iran-backed influence operations coursing through mainstream social networks.
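The “provenance technologies” mentioned above rest on a simple cryptographic idea: content is signed at creation, and any later alteration breaks the signature. The Python sketch below is a toy illustration only; real provenance standards such as C2PA’s Content Credentials bind public-key certificate chains into media metadata, whereas this example uses a symmetric HMAC with a made-up key purely to show the verification step.

```python
import hashlib
import hmac

SIGNING_KEY = b"example-newsroom-key"  # hypothetical key, for illustration only

def sign(content: bytes) -> str:
    """Produce a tamper-evident tag over the content bytes."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Check that the content has not been altered since signing."""
    return hmac.compare_digest(sign(content), signature)

clip = b"original broadcast frame data"
tag = sign(clip)
print(verify(clip, tag))         # True: untouched content checks out
print(verify(clip + b"!", tag))  # False: any alteration breaks the signature
```

The limitation, of course, is that a signature only proves where content came from and that it was not modified; it says nothing about whether the original footage was truthful, which is why provenance is framed as one layer of defense rather than a cure.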

The Structural Challenge Ahead

The combination of state-directed propaganda, aggressive cyber tactics, and AI-enabled content production presents a structural challenge that goes beyond any single platform or election cycle. Iranian-linked campaigns operate in an environment already saturated with polarized commentary, conspiracy theories, and low-cost misinformation. Live coverage from international outlets has documented how online narratives around the conflict can shift by the hour as new claims, images, and casualty figures circulate in real time, with audiences struggling to separate verified facts from rumor and spin. In this environment, Iran’s operators do not need to invent every story from scratch. They can amplify existing doubts, selectively highlight real but unrepresentative incidents, and insert fabricated details into ongoing conversations tracked by rolling news feeds and social threads.

Responding to this threat requires more than reactive fact-checks or periodic sanctions. Governments will need sustained coordination between intelligence agencies, financial regulators, and cyber defenders to map and disrupt foreign information networks before they reach scale. Platforms face pressure to redesign recommendation systems that reward outrage and virality, while also providing clearer signals about the provenance of high-risk content. Civil society organizations, educators, and newsrooms can play a role by teaching audiences how to recognize hallmarks of coordinated influence operations and by building habits of verification, such as checking multiple reputable outlets or consulting original government documents, before sharing sensational claims. None of these steps will eliminate Iranian disinformation, but together they can raise the cost of running such campaigns and reduce their impact on public trust. In an era where AI tools allow adversaries to manufacture persuasive fictions at industrial scale, the resilience of democratic societies will hinge on how quickly they adapt their defenses to match the evolving tactics of foreign information warriors.


*This article was researched with the help of AI, with human editors creating the final content.