A phone call from your daughter, panicked and begging for bail money. A Google ad for a financial service that looks legitimate down to the fine print. A voicemail from the IRS threatening arrest. Each of these scams has been supercharged by artificial intelligence, and the companies and regulators trying to stop them are now locked in a high-speed contest against criminals wielding the same technology.
Google confirmed in its most recent Ads Safety Report that it blocked 5.5 billion ads and suspended more than 12.7 million advertiser accounts in 2024 alone, with a significant share of those suspensions tied to suspected scam operations. The company said machine-learning systems now handle the bulk of that enforcement, scanning enormous volumes of ad activity to catch misleading pitches, fake financial products, and other policy violations before they reach users.
That scale of automated policing reflects a blunt reality: manual review cannot keep up. Scam networks now use generative AI to produce convincing fake websites, realistic product images, and finely targeted ad copy in minutes. Google representatives have said the company treats AI not just as a tool for improving ad performance for legitimate businesses but as a necessary weapon against adversaries deploying the same underlying technology.
Voice cloning and the new generation of phone scams
The threat extends well beyond fraudulent ads. In a formal comment submitted to the Federal Communications Commission in July 2024, the Federal Trade Commission singled out AI-powered voice cloning as one of the most urgent consumer risks on its radar. Modern cloning tools can learn a person’s vocal patterns from a short audio clip and then generate speech that sounds like that individual saying anything a scammer scripts. The result: calls from a “grandchild” in trouble, a “boss” authorizing a wire transfer, or a “government official” demanding immediate payment.
To get ahead of the problem, the FTC launched its Voice Cloning Challenge in early 2024, calling on researchers and companies to develop detection tools, audio watermarks, and other safeguards that could limit misuse of synthetic voices. The agency framed the initiative as an attempt to seed a defensive ecosystem before AI-enabled voice fraud becomes entrenched, rather than reacting after a wave of high-profile losses.
The financial stakes are enormous. The FTC reported that U.S. consumers lost more than $10 billion to fraud in 2023, a record at the time. While the agency has not published a precise breakdown of how much of that total involved AI-generated content, its public warnings make clear that synthetic media is accelerating both the volume and the persuasiveness of scam attempts.
Google expands defenses beyond advertising
Google’s efforts have also moved beyond its ad platform. In late 2025, the company announced expanded AI-powered scam detection features across Chrome, Google Search, and Android devices. Those protections use on-device machine learning to flag suspicious websites and notifications in real time, aiming to catch threats that slip past ad-level enforcement or arrive through other channels entirely.
That expansion matters because scammers routinely migrate to the weakest link. If ad-side enforcement tightens, fraudsters shift to phishing emails, social media messages, or phone calls that bypass the ad platform altogether. Without cross-product protections, blocking bad ads risks simply pushing the problem into Gmail inboxes or Google Voice calls. Google has signaled it understands this dynamic, though the company has not published the same level of granular enforcement data for non-advertising products as it has for its ad network.
What consumers can do now
The FTC has directed consumers to three specific resources. People who believe they have been targeted by a scam can submit detailed reports at reportfraud.ftc.gov, which aggregates complaints and helps investigators identify patterns. Those whose personal information has already been compromised can find step-by-step recovery plans at identitytheft.gov, including guidance on contacting creditors, placing fraud alerts, and documenting losses. And anyone trying to reduce unwanted calls, a frequent entry point for AI-powered robocalls, can register with the National Do Not Call Registry at donotcall.gov.
Security researchers also recommend a few practical habits that cost nothing:
- If you receive a distress call from a family member, hang up and call them back on a number you already have saved. A scammer using a cloned voice cannot intercept a call you place yourself.
- Treat any unsolicited request for payment or personal data with suspicion, even if the caller or message appears to come from a known contact or institution.
- Enable scam and spam detection features on your phone and browser. Both Android and iOS now offer built-in call screening, and Chrome flags known malicious sites automatically.
- Report suspicious ads, emails, or calls promptly. Aggregated reports are one of the primary ways the FTC and platforms identify emerging scam campaigns.
The arms race is far from settled
Nearly two years after the FTC’s 2024 warning, the contest between AI-powered fraud and AI-powered defense remains fluid. Google’s enforcement numbers demonstrate that automated systems can operate at a scale no human review team could match, but blocked ads do not always equal prevented losses. A suspended account can reappear under a new identity within hours. And the public record still lacks a shared, transparent dataset measuring whether the combined impact of platform enforcement and regulatory action is actually reducing successful scams or whether attackers are adapting just as fast.
What is clear, as of spring 2026, is that both sides are investing heavily. Criminals are using generative AI to produce more convincing lures at lower cost. Google, the FTC, and other institutions are responding with their own AI-driven tools and new reporting infrastructure. For ordinary internet users, the practical takeaway has not changed as much as the technology has: skepticism remains the best first line of defense whenever a convincing ad, email, or voice on the line asks for money or sensitive information.
*This article was researched with the help of AI, with human editors creating the final content.*