Morning Overview

Facebook scammers hack the algorithm to prey on your grandparents

Scammers on Facebook are gaming the platform’s recommendation engine to push fraudulent content directly into the feeds of older adults, turning a social network built for family updates into a pipeline for financial exploitation. The Federal Trade Commission has tied billions in reported fraud losses to scams originating on social media, with Facebook and Instagram repeatedly named in categories like romance fraud. As AI-generated content makes these schemes harder to spot, a growing body of federal, academic, and congressional evidence points to a system where the algorithm itself becomes the scammer’s most effective tool.

How the Algorithm Becomes the Weapon

Facebook’s recommendation system is designed to surface content that drives engagement. Scammers have learned to exploit that logic. By creating pages that post high volumes of emotionally charged, AI-generated images, they attract likes, shares, and comments from users who do not realize the content is fabricated. That engagement signals the algorithm to push the content further, especially into the feeds of users whose profiles suggest vulnerability. Researchers at Stanford’s Internet Observatory studied 120 Facebook pages that each posted at least 50 AI-generated images, classifying them into spam and scam categories. The pattern is clear: fabricated content earns algorithmic promotion, and that promotion funnels real people toward fraud.

The result is a self-reinforcing cycle. Profit- and clout-motivated page operators use AI-generated images to build fake followings on Facebook, which in turn increase their visibility. Seniors are disproportionately caught in this loop because the content often mimics the sentimental, family-oriented posts they already engage with. A Facebook feed that promises family updates instead delivers sophisticated scams designed to exploit loneliness and trust, according to reporting based on complaint data. The algorithm does not distinguish between a grandchild’s birthday photo and a deepfake designed to extract money. It only sees engagement.

Why Older Adults Bear the Heaviest Losses

The FTC’s analysis of fraud originating on social media identified platforms like Facebook and Instagram as lucrative funnels for scams, with romance fraud among the costliest categories. The agency’s annual report to Congress on protecting older adults confirmed that seniors report major losses from scams and that scammers often target them on social media. Romance scams are especially effective against older users because they prey on people seeking new relationships on social media or dating sites, where fraudsters cultivate fake romantic connections over weeks or months before requesting money, often through cryptocurrency or gift cards that are hard to recover.

The targeting is not random. Scammers impersonate utility companies, government agencies, and even family members. The National Council on Aging has documented schemes where fraudsters pose as electric or gas providers threatening to cut off service unless immediate payment is made. AI has supercharged these tactics: with just a short audio clip pulled from social media, artificial intelligence can now clone a loved one’s voice, making the classic grandparent scam far more convincing. The FTC has issued specific alerts on these AI-enhanced grandparent scams. Traditional fraud methods have not disappeared, either. As Ihar Kliashchou, Chief Technology Officer of Regula, has noted, the key shift is that fraudsters are combining old techniques with new AI tools. The threat is compounding, not replacing itself.

Inside the New AI-Driven Scam Playbook

Today’s scam operations resemble small marketing agencies, except their product is fraud. Operators set up clusters of pages and profiles that cross-promote one another, seeding each with AI-generated photos of attractive people, heartwarming scenes, or fabricated charity appeals. These images are tuned for virality: cute animals, military veterans, sick children, or inspirational quotes over scenic backdrops. Once a page amasses enough engagement, it can be repurposed overnight, shifting from posting sentimental content to pushing investment schemes, fake sweepstakes, or links to off-platform phishing sites. The same engagement that convinced Facebook’s algorithm a page was popular now guarantees a large audience for whatever scam is plugged in next.

Some schemes are tailored specifically to older adults. A detailed breakdown from consumer protection reporting describes how scammers watch for public comments that reveal age, health issues, or bereavement, then use those signals to initiate one-on-one contact. A widowed user who shares a grief-related meme may receive a friend request from a fake profile that appears to share similar experiences, complete with AI-generated photos and a backstory crafted by chatbots. Over time, that relationship can be steered toward requests for emergency loans, bogus investment opportunities, or supposed medical bills, all framed as tests of trust and affection.

Regulators, Lawsuits, and the Limits of Meta’s Fixes

Federal scrutiny of Meta’s role in enabling fraud has intensified. In November 2025, Senators Richard Blumenthal and Josh Hawley called for a federal investigation into Meta’s profiting from scams and fraud, citing alleged internal metrics including the volume of “higher risk” ads running on the platform. Separately, the Justice Department previously secured a settlement with Meta over allegations that its ad delivery system can create discriminatory outcomes, a finding that, while focused on housing, established an official record that Meta’s algorithms can produce biased exposure patterns. That precedent matters because it shows regulators already recognize the platform’s delivery mechanics as capable of steering harmful content toward specific demographic groups, including older adults who may be more susceptible to financial exploitation.

Meta has responded by filing lawsuits against scam advertisers, announcing legal action against operators who use tactics like impersonation, cloaking, and subscription fraud. But enforcement through litigation is reactive by design. It targets individual bad actors after the damage is done, not the algorithmic infrastructure that amplifies their reach in the first place. Meanwhile, darknet AI tools like DIG AI, which are built to evade content rules and filtering mechanisms in modern AI systems, continue to lower the barrier for producing convincing scam content at scale. As long as Meta’s core incentives reward engagement above all else, regulators warn that even well-publicized enforcement actions will struggle to keep up with the speed and adaptability of AI-enabled fraud.

What Older Users and Families Can Do Now

While regulators debate systemic fixes, older adults and their families are left to manage the immediate risk. Advocates recommend a mix of technical settings, behavioral habits, and family coordination. On Facebook, that includes locking down friend lists, limiting who can comment on public posts, and turning off location tagging that can reveal when someone is traveling or alone. Families can also create “safe words” to verify emergencies: if a supposed grandchild calls or messages asking for money, the older adult can request the agreed phrase before acting. Because scammers often pressure victims to keep conversations secret, relatives should normalize regular check-ins about new online relationships, investment offers, or sudden requests for financial help.

Experts further advise that older adults treat unsolicited messages, even those that appear to come from known contacts, with caution, especially when they involve money, prizes, or urgent threats. Phone numbers and email addresses can be spoofed, and AI-generated photos and voices can make imposters sound and look real. Consumers can reduce some exposure to phone-based fraud by registering their numbers with the National Do Not Call Registry, although scammers frequently ignore those rules. Ultimately, the most effective defense is often social rather than technical: families and caregivers who talk openly about scams, review privacy settings together, and encourage “pause and verify” habits can blunt the impact of an algorithmic system that currently treats every emotional click as just another opportunity for engagement, no matter who gets hurt in the process.


*This article was researched with the help of AI, with human editors creating the final content.*