Fraudsters in China are quietly turning generative AI into a new kind of crowbar, prying open e-commerce refund systems that were built on trust and quick resolution. By fabricating hyper-realistic images of supposedly damaged goods, they are securing refunds and replacements for damage that legitimate sellers say is pure fiction.

What began as a fringe tactic has grown into a visible pattern across major platforms, with merchants reporting a surge of bogus complaints backed by AI-altered photos and videos. I see in this trend not just a new scam, but a stress test for how online marketplaces, regulators and ordinary shoppers will live with artificial intelligence embedded in everyday transactions.

The new playbook: AI “damage” on demand

The core of the scam is simple: buyers receive a perfectly fine product, then use image generators or editing tools to make it look ruined, spoiled or broken, and submit those visuals as proof for a refund. Reports from China describe food items digitally turned moldy, cosmetics made to appear contaminated and electronics shown with cracked screens that never existed in real life. The fraud works because customer service teams are trained to trust photographic evidence and resolve complaints quickly, especially during high-volume shopping festivals.

In detailed accounts of this trend, investigators describe scammers in China using AI tools to generate images of dead crabs, shredded bed sheets and dented home goods that never left the warehouse in that condition. The same reporting notes that these visuals are often convincing enough to pass internal checks, especially when they depict plausible defects, such as a crack in a ceramic cup rather than in a cardboard one, or subtle dents in packaging. The scammer’s advantage is not just creativity but the speed and low cost of producing dozens of fake “damage” scenarios in minutes.

From niche trick to rampant return fraud

What might once have been a one-off hustle has, according to merchants, turned into a widespread headache. During major online promotions, sellers say they now brace for a wave of suspicious complaints that arrive almost as soon as parcels are delivered. The pattern is consistent: a buyer claims the item arrived defective, sends a few polished photos, and demands either a full refund or a replacement without returning the original product.

One analysis describes return fraud driven by AI-generated images as rampant in China, with online stores struggling to find any scalable way to verify what is real. The same account notes that during the Double 11 shopping festival, merchants saw a spike in claims supported by suspiciously uniform photos of damage, suggesting that templates or shared AI prompts were being reused. For smaller shops that operate on thin margins, even a modest percentage of fraudulent returns can wipe out profits from a big sales event.

How scammers weaponise consumer-friendly policies

China’s e-commerce boom was built on generous buyer protections, including no-questions-asked returns and fast refunds that arrive before a seller has time to inspect the item. Scammers are now exploiting those protections, using AI to fabricate evidence that fits neatly into existing complaint workflows. The result is a perverse inversion of consumer rights: policies designed to shield honest shoppers from bad actors on the seller side are being turned against legitimate merchants.

Analysts tracking the trend describe what has happened as an “AI oversight gap,” where platforms have sophisticated tools to police sellers but far fewer mechanisms to scrutinise buyers. That imbalance, combined with automated refund systems, has created what some merchants see as an open invitation to abuse. When a platform’s default is to side with the customer to preserve its reputation, even a small number of AI-assisted fraudsters can tilt the economics of doing business online.

Inside the image factory: tools, templates and tactics

Behind each fake complaint is a small production line of digital manipulation. Scammers start with a genuine photo of the product they received, then feed it into an AI editor that can add cracks, stains, mold or tears while preserving the original lighting and background. Others skip the real photo entirely and generate a synthetic image that merely resembles the item, adjusting textures and defects until it looks like a plausible snapshot from a smartphone.

Reports on these schemes describe how Chinese consumers use AI to alter product photos so that fresh seafood appears rotten, packaged snacks look infested and clothing seems ripped or stained. In some cases, scammers share prompts and presets in private chat groups, trading tips on how to make damage look random enough to evade automated filters. The same techniques are now being applied beyond goods: employees are using AI image generators from OpenAI and Google to create fake receipts that pass corporate audits, suggesting the skills honed in consumer scams are bleeding into workplace fraud.

Real-world victims: merchants on the front line

For sellers, the impact is not abstract. Each fraudulent refund is a direct hit to revenue, often compounded by penalties from platforms that treat high complaint rates as a sign of poor service. Chinese merchants have described how a single viral complaint, even if based on AI fakery, can drag down a store’s rating and push it lower in search results, making it harder to attract new customers.

Accounts from the ground detail how Chinese sellers reported buyers using AI-edited photos to fake damage during major online sales, then refusing to return the goods while insisting on full refunds. One seller quoted in that coverage complained that platform dispute teams seemed to ignore “common sense,” accepting obviously inconsistent images as proof. Another case, involving a merchant named Gao, ended only after police determined that videos of damaged items were fabricated and detained the buyer, but by then the seller had already spent time and money fighting a claim that should never have passed initial review.

Platforms and regulators scramble to respond

China’s major marketplaces are not blind to the problem, but their responses so far show how hard it is to retrofit fraud controls onto systems optimised for speed. Some platforms have begun flagging accounts that file an unusually high number of damage complaints, while others are experimenting with requiring returns before issuing refunds in high-risk categories like luxury goods and fresh food. These measures, however, risk slowing service for honest customers and undermining the convenience that made online shopping so attractive in the first place.

One prominent example is how Taobao and Tmall Group, owned by Alibaba Group, have set up dedicated teams to review suspicious refund requests and are testing AI tools of their own to spot manipulated images. At the regulatory level, China implemented new regulations on the identification of AI-generated content from September 1, requiring clearer labelling and technical safeguards. Those rules are aimed broadly at deepfakes and synthetic media, but they also provide a legal basis for punishing buyers who knowingly submit fabricated images in commercial disputes.

The legal and ethical grey zone

On paper, using AI to fake damage is straightforward fraud. In practice, enforcement is messy, because not every altered image is malicious and not every complaint is clearly false. Some buyers do lightly edit photos to improve clarity or highlight defects, and platforms are wary of criminalising ordinary behaviour. That ambiguity gives scammers cover, allowing them to claim they simply “enhanced” images rather than invented damage from scratch.

Legal experts quoted in coverage of similar scams note that cases often hinge on technical forensics and intent, which are hard to prove at scale. In one incident reported by Legal Daily, an online toy store owner from Guangxi reported a buyer who repeatedly sent suspicious photos, prompting a police investigation. That case underscores a broader dilemma: if every disputed image requires law enforcement and expert analysis, the system will grind to a halt. Yet if platforms treat AI fakery as a minor infraction, they risk normalising a culture where lying with pixels is just part of the game.

Trust on the line: what this means for shoppers

For ordinary consumers, the rise of AI refund scams might sound like someone else’s problem, but the fallout is already creeping into everyday shopping. As merchants and platforms tighten verification, buyers are encountering more hoops: requests for additional photos, demands to return low-value items that used to be refunded instantly, and longer waits while complaints are manually reviewed. The friction is a direct response to a minority of bad actors, yet it affects everyone.

WIRED reports that fraudsters in China are driving policy shifts that could unfairly penalise honest customers, especially those who lack the time or technical skills to navigate more complex dispute processes. At the same time, viral posts with headlines like “Scammers Are Using AI Images to Fake Refunds, and This Is Exactly the Future We Were Warned About” capture a broader unease: the sense that every photo, every receipt and every complaint could now be synthetic. That erosion of confidence does not just hurt sellers; it chips away at the social contract that makes online commerce feel safe and predictable.

Where AI arms races lead next

As scammers refine their techniques, platforms are racing to deploy their own AI to detect subtle artifacts in images, cross-check complaint histories and flag improbable patterns of damage. This emerging arms race will likely define the next phase of e-commerce security, with machine learning models judging not only what is in a photo but how that photo was created. The risk is that in trying to outsmart fraudsters, companies end up building opaque systems that are hard to challenge when they make mistakes.

Some observers see the current wave of refund scams as a preview of a broader shift, where AI-generated content becomes a routine tool in disputes over everything from insurance claims to workplace expenses. The same dynamics already appear in corporate settings, where AI-generated fake receipts show how quickly these techniques can migrate. If there is a lesson in China’s experience, it is that systems built on photographic proof alone are no longer enough. I find myself returning to a simple conclusion: in an era when any image can be fabricated, trust will depend less on what we see and more on how well we can verify who is behind it and what incentives they face.
