Cybersecurity experts sound alarm on new scam targeting millions of Americans

The FBI’s Internet Crime Complaint Center issued a public service announcement warning that scammers have been impersonating senior U.S. officials, including Cabinet-level figures, state government leaders, and members of Congress, in a messaging campaign active since 2023. The scheme arrives amid record-setting fraud losses across the country, with Americans reporting more than $12.5 billion stolen in 2024 alone. What makes this threat stand out is its blend of government impersonation with AI-powered tools that make fake messages, calls, and even video appearances increasingly difficult to distinguish from the real thing.

Scammers Posing as Senior U.S. Officials

The campaign described in the FBI’s recent advisory targets individuals through messages that appear to come from high-ranking government figures. Victims receive urgent communications, often via text or messaging apps, designed to build trust before the scammers extract personal information or money. The operational pattern typically involves an initial contact that references a real official’s name and title, followed by requests that escalate in urgency and financial stakes, such as supposed emergency funds, bogus fines, or “security deposits” to unlock promised benefits.

This is not an isolated tactic. The FBI separately warned that criminals are also pretending to be IC3 staff to revictimize people who already lost money in earlier scams. In those cases, fraudsters contact prior victims and claim to have recovered their stolen funds, then demand fees or sensitive data to release the supposed recovery. The FBI received more than 100 reports of this specific revictimization scheme between December 2023 and February 2025. The layered nature of these operations, where victims of one scam become targets for a second, points to an organized and persistent criminal infrastructure, rather than opportunistic one-off attempts.

Fraud Losses Hit $12.5 Billion in 2024

The impersonation campaigns sit within a broader surge in financial fraud. The Federal Trade Commission reported that consumer losses to fraud topped $12.5 billion in 2024, a sharp increase over prior years. Investment scams led all categories at $5.7 billion in losses, while imposter scams accounted for $2.95 billion. Losses tied to government imposter scams specifically grew during the same period, and reports and losses from job and employment scams also climbed. These figures come from the FTC’s Consumer Sentinel Network, which aggregates complaints submitted directly to the agency and through partner organizations nationwide.

The FTC cautions that the Sentinel database is built from unverified consumer submissions, meaning actual losses could be higher or lower than reported. Even with that caveat, the trend lines are unmistakable. The FBI’s own annual internet crime report echoes these patterns, identifying phishing and spoofing, extortion, and personal data breaches as the top crime categories by complaint count, with investment fraud, including cryptocurrency schemes, driving the largest dollar losses. The consistency between the FTC and FBI data sets suggests the problem is not a statistical artifact of one agency’s reporting pipeline but a genuine acceleration in the volume and sophistication of fraud targeting Americans across multiple channels.

AI Tools Are Supercharging the Playbook

A key reason these scams are growing harder to detect is the adoption of generative artificial intelligence by criminal networks. The IC3 has warned that criminals now use AI-written messages free of the grammatical errors that once served as red flags, along with synthetic images, forged identification documents, voice cloning, and deepfake video. For the average person receiving a voice message that sounds exactly like a known government official or joining a video call that looks authentic, the traditional advice to “watch for typos” or “trust your instincts” no longer holds up. The psychological pressure of hearing a familiar voice or seeing a convincing likeness on screen can override skepticism, especially when combined with threats of legal trouble or promises of financial windfalls.

The financial sector is also feeling the pressure. The Financial Crimes Enforcement Network, part of the U.S. Department of the Treasury, issued an alert on deepfake-enabled fraud targeting financial institutions, reminding banks of their Bank Secrecy Act reporting obligations when they encounter suspicious activity involving manipulated media. That a Treasury bureau felt the need to issue specific guidance to the banking industry signals that deepfakes have moved well beyond novelty status and into active criminal toolkits. Similar concerns are surfacing abroad: Hong Kong’s anti-deception authorities recently warned of investment rings that pose as experts in unsolicited messages and steer victims into fraudulent trading apps, illustrating how digital impersonation and high-pressure sales tactics now span jurisdictions and markets.

Regulatory Response and Practical Defenses

Federal regulators have started to push back. The FTC’s Government and Business Impersonation Rule, which took effect in April 2024, gives the agency clearer authority to pursue actors who mimic government agencies or well-known companies to extract money or data. In its first year of enforcement, the FTC has highlighted actions to shut down websites and services that misused official logos, spoofed email domains, or deployed robocalls to create the illusion of government contact. Still, enforcement alone cannot keep pace with the volume of scams. Many impersonation attempts originate overseas or rely on anonymizing technologies, complicating jurisdiction and slowing efforts to identify ringleaders even when individual domains or phone numbers are disabled.

In parallel, consumer-protection tools are being updated to reflect evolving threats. The National Do Not Call Registry remains a baseline defense against unwanted telemarketing, but it does not block all scam calls, particularly those initiated from abroad or through internet-based dialing systems. Security experts recommend layering defenses: using call-screening features on smartphones, enabling multifactor authentication on financial and email accounts, and treating any unsolicited request for payment or personal data as suspicious, even if it appears to come from a recognizable official. Consumers are also urged to independently verify claims by using known contact information from official government websites rather than phone numbers or links provided in unexpected messages.

Staying Ahead of Impersonation Scams

For individuals, the most effective protection is to slow down interactions that feel urgent. Government agencies and legitimate financial institutions rarely demand immediate payment over text, messaging apps, or gift cards, and they do not threaten arrest or deportation over the phone. If someone claiming to be a senior official reaches out unexpectedly, experts advise hanging up or ignoring the message and then checking through alternate channels, such as official agency hotlines or secure online portals. Consumers should also monitor financial statements and credit reports regularly, since early detection of unauthorized activity can limit losses and simplify recovery.

For policymakers and industry leaders, the challenge is to keep pace with adversaries who rapidly adopt new tools. That means investing in authentication technologies that make it easier to verify when a message or call truly originates from a government entity, supporting research into AI-based detection of deepfakes, and ensuring that reporting channels like the IC3 and the FTC’s complaint systems remain easy to access and widely publicized. As impersonation scams grow more sophisticated, the line between cybercrime and traditional fraud continues to blur. The latest wave of schemes leveraging senior-official personas and AI-generated content underscores a broader reality: without coordinated action across law enforcement, regulators, technology providers, and the public, the cost of online deception is likely to keep rising.

*This article was researched with the help of AI, with human editors creating the final content.