
American households are entering 2026 with a new kind of financial threat: industrial‑scale scams powered by artificial intelligence. Consumers lost $12.5B to fraud in a single year, and experts now expect AI tools to supercharge everything from fake investments to synthetic identities. I see a widening gap between what criminals can automate and what ordinary people can realistically spot with the naked eye.

The stakes are not abstract. Behind that $12.5B figure are drained retirement accounts, hijacked bank logins and job seekers tricked into handing over their identities. As agentic AI systems, deepfakes and automated social engineering mature, the fraud problem is shifting from isolated cons to a persistent, AI‑driven drag on household wealth and trust in digital life.

The $12.5B wake‑up call

The starting point for understanding the new fraud era is the sheer scale of recent losses. According to multiple analyses of Federal Trade Commission data, Americans lost a staggering $12.5 billion to fraud in 2024, a sum that dwarfs the $1.48 billion recorded a decade earlier. One breakdown notes that losses from investment scams alone accounted for nearly half of that amount, underscoring how convincingly criminals can mimic legitimate brokers and platforms when they target consumers online, a trend highlighted in reporting by Bree Fowler. I read those numbers as a sign that fraud is no longer a side effect of the digital economy; it is becoming one of its core business models.

Regulators are sounding similar alarms. One analysis of the newest Federal Trade Commission figures notes that billions of dollars are being lost to fraud and that the most expensive scams can mean losing thousands of dollars at a time. A separate review of payment security trends notes that Americans lost over $12.5 billion in 2024, describing the figure as a marker of the growing sophistication and scale of financial fraud. Taken together, these snapshots show a fraud economy that is already enormous before the next wave of AI tools fully hits.

How AI is supercharging scams

What changes in 2026 is not that criminals suddenly discover AI, but that AI becomes the default engine behind their operations. One early warning came from security researchers at Trend Micro, who predicted that 2026 would be the year scams become AI‑driven, scaled and emotion‑engineered, with AI‑powered phishing and social engineering expected to become the most common global threat. I see that prediction as less about science fiction and more about automation: once a model can generate thousands of personalized scam messages or voice calls, the marginal cost of each new victim approaches zero.

Credit and identity specialists are seeing the same pattern. In a recent fraud forecast, Experian warns that AI‑powered fraud is set to explode after $12.5 billion in losses, noting that consumers lost that amount in the prior year and that AI is now being used to evade detection in shopping and hiring. The company’s own annual report, issued from Costa Mesa, Calif., highlights how technology adoption is fueling new levels of digital fraud, with agentic AI systems able to carry out complex, multi‑step scams with minimal human oversight. When I look at those warnings, I see a pivot from one‑off deepfake clips to persistent AI agents that can chat, negotiate and adapt in real time to keep a victim on the hook.

Agentic AI, deepfake candidates and machine‑to‑machine crime

The most unsettling shift is the move from static AI outputs to agentic AI: systems that can act on goals across multiple platforms. In its latest fraud outlook, Experian flags agentic AI, deepfake job candidates and cyber break‑ins as top threats for 2026, with machine‑to‑machine interactions creating new blind spots. A related briefing drills into what it calls machine‑to‑machine mayhem, describing how automated systems can flood banks and retailers with synthetic applications, probe defenses and adapt faster than human fraud teams can respond. From my vantage point, that is the real break with the past: fraud no longer scales only with the number of human scammers; it scales with compute.
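To make that asymmetry concrete, here is a minimal, hypothetical sketch of the kind of velocity check a fraud team might use as a first line of defense against automated application floods. It is not Experian’s method or any bank’s production logic; the field names (device_fingerprint, identity_hash), the time window and the thresholds are all assumptions chosen for illustration.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical thresholds: a real system would tune these against its own traffic.
WINDOW = timedelta(minutes=10)
MAX_APPS_PER_DEVICE = 3    # more than this from one device fingerprint looks automated
MAX_APPS_PER_IDENTITY = 2  # one identity applying repeatedly within minutes is suspicious


def flag_bursts(applications):
    """Return the IDs of applications that look like part of an automated flood.

    Each application is assumed to be a dict with 'id', 'timestamp' (datetime),
    'device_fingerprint' and 'identity_hash' keys.
    """
    flagged = set()
    by_device = defaultdict(list)
    by_identity = defaultdict(list)

    for app in sorted(applications, key=lambda a: a["timestamp"]):
        now = app["timestamp"]
        for bucket, key, limit in (
            (by_device, app["device_fingerprint"], MAX_APPS_PER_DEVICE),
            (by_identity, app["identity_hash"], MAX_APPS_PER_IDENTITY),
        ):
            # Keep only events inside the sliding window, then count this one.
            bucket[key] = [t for t in bucket[key] if now - t <= WINDOW]
            bucket[key].append(now)
            if len(bucket[key]) > limit:
                flagged.add(app["id"])
    return flagged


if __name__ == "__main__":
    t0 = datetime(2026, 1, 15, 9, 0)
    # Six applications in two minutes, all from the same device fingerprint.
    apps = [{"id": i, "timestamp": t0 + timedelta(seconds=20 * i),
             "device_fingerprint": "dev-A", "identity_hash": f"id-{i}"} for i in range(6)]
    print(flag_bursts(apps))  # {3, 4, 5}
```

The trade-off is visible in the sketch itself: a fixed threshold catches crude bursts no human applicant would produce, but an adaptive attacker can throttle itself just under the limit, which is one reason analysts expect defenders to lean on AI‑based anomaly detection rather than static rules alone.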

Those same reports warn that deepfake job candidates are emerging as a new attack vector, with fake applicants using AI‑generated video and audio to slip through remote hiring processes and gain access to corporate systems. Experian’s annual report notes that deepfake job candidates and cyber break‑ins are converging into a single threat, as criminals use stolen or fabricated identities to land roles that give them direct access to sensitive data. I see this as a warning that traditional background checks and video interviews are no longer enough, because the person on screen may not exist at all.

Deepfake voices, banking risk and “authorized” payments

Nowhere is the AI shift more visible than in banking. Analysts at Deloitte have documented how deepfake audio and video are being used to impersonate customers and executives, tricking staff into approving transfers or resetting account credentials. Research manager Joshua Henderson outlines scenarios in which aggressive adoption of generative tools by criminals could sharply increase losses, particularly if banks rely too heavily on voice biometrics or video calls as proof of identity. From my perspective, the lesson is blunt: if a bank can be convinced by a voice on the phone, it can be convinced by a fraudster armed with a good model and a few minutes of training data.

On the consumer side, the line between being tricked and “authorizing” a payment is getting blurrier. Fraud specialists warn that Authorized Push Payment scams, in which victims are persuaded to send money themselves, are surging, with one expert quoted as saying that once the funds are gone they are often lost to bad actors forever. A broader fraud forecast for 2026 notes that as AI makes scam messages more convincing and personalized, more people will be nudged into clicking “send” on transfers that banks then treat as voluntary. I see that as a looming consumer protection battle: the more AI shapes the conversation, the harder it becomes to argue that a payment was truly informed consent.

How businesses and consumers can fight back

The good news is that defenders are also arming themselves with AI, though they face a delicate balance between security and user experience. One strategic analysis aimed at insurers argues that any strategy that keeps them ahead of the next wave must use AI to spot anomalies while avoiding so much friction that customers walk away, with analysts expecting fraud losses to grow at a faster pace as generative tools spread. A separate advisory on the top fraud trends to watch in 2026 highlights the rise of realistic AI‑driven fraud and deepfakes, and urges organizations to harden identity verification so that synthetic media cannot easily bypass traditional security measures. In my view, that means moving beyond passwords and one‑time codes toward layered checks that look at device behavior, network patterns and biometric signals together.
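As a rough illustration of what “layered checks” can mean in practice, the sketch below combines several hypothetical signals into a single risk score that decides whether to allow a login, ask for extra verification or block it. The signal names, weights and thresholds are invented for illustration only; they are not drawn from any bank, insurer or vendor mentioned above.

```python
from dataclasses import dataclass


@dataclass
class AttemptSignals:
    """Hypothetical signals collected around a login or payment attempt."""
    new_device: bool          # device fingerprint never seen on this account
    geo_mismatch: bool        # network location far from the customer's usual region
    impossible_travel: bool   # activity from two distant locations within minutes
    voice_match: float        # 0.0-1.0 score from a voice biometric model, if one is used
    typing_anomaly: float     # 0.0-1.0 deviation from the user's usual typing cadence


def risk_score(s: AttemptSignals) -> float:
    """Blend independent signals into one score (0 = low risk, 1 = high risk).

    The weights are illustrative; a production system would calibrate them on
    labeled fraud data instead of hard-coding them.
    """
    score = 0.0
    score += 0.25 if s.new_device else 0.0
    score += 0.20 if s.geo_mismatch else 0.0
    score += 0.30 if s.impossible_travel else 0.0
    # A strong voice match lowers risk only slightly: deepfake audio means it
    # should never be treated as proof of identity on its own.
    score += 0.15 * (1.0 - s.voice_match)
    score += 0.10 * s.typing_anomaly
    return min(score, 1.0)


def decide(s: AttemptSignals, step_up: float = 0.4, block: float = 0.7) -> str:
    """Map the combined score to an action instead of trusting any single check."""
    r = risk_score(s)
    if r >= block:
        return "block_and_review"
    if r >= step_up:
        return "step_up_verification"  # e.g. confirm through a second, trusted channel
    return "allow"


if __name__ == "__main__":
    attempt = AttemptSignals(new_device=True, geo_mismatch=True,
                             impossible_travel=False, voice_match=0.9,
                             typing_anomaly=0.6)
    print(decide(attempt))  # step_up_verification: the voice sounds right, but the context does not
```

The design point is that no single signal, least of all a voice that sounds right, is treated as decisive; the decision comes from the combination, which is far harder for a deepfake to satisfy end to end.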

For individuals, the defenses are more behavioral than technical, but they matter. Analysts who track how much Americans lose to fraud each year stress that education about red flags can still prevent many losses, even as AI makes scams slicker. Consumer advocates who parse scams and fraud trends emphasize that knowing how to spot a scam, from pressure tactics to payment requests via crypto or gift cards, remains one of the most effective shields. I would add one more habit for the AI age: treat any unexpected digital interaction, whether a video call from a “relative” or a job offer that arrives out of the blue, as potentially synthetic until you have verified it through a second, trusted channel.
