Image by Freepik

AI photo generators are now good enough to fool most people at a glance, which makes knowing how to spot an AI image a basic digital survival skill. I focus on four dead giveaways that consistently show up in fakes and pair them with free detection tools I actually trust. Learn these patterns and you can move from gut feeling to evidence when deciding whether to share or challenge a suspicious picture.

Garbled or impossible text in the scene

Garbled text is still one of the most reliable tells that an image is synthetic. Guides to AI giveaways consistently list garbled text as a core warning sign: models often mangle letters on storefronts, protest signs, T-shirts, and license plates. You might see half-formed alphabets, repeated characters, or words that look like English until you try to read them closely and they collapse into nonsense.

This happens because image generators learn visual patterns of letters rather than language rules, so they struggle to keep text consistent across a scene. For newsrooms, brand teams, and election officials, spotting this kind of glitch is critical, since fake posters or documents can be used to impersonate campaigns or companies. When I suspect a viral “leaked memo” or protest banner is AI, the first thing I do is zoom in on every bit of writing and check whether the typography behaves like real print.
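If you want a programmatic first pass on the writing in a scene, OCR confidence can serve as a crude proxy for "typography that behaves like real print." Below is a minimal sketch, assuming the Tesseract binary plus the pytesseract and Pillow packages are installed; the confidence threshold and the repeated-character rule are illustrative heuristics of my own, not something from the article's sources.

```python
import re

import pytesseract
from PIL import Image

def flag_garbled_text(path, min_conf=60):
    """Return (word, confidence) pairs that look like mangled text."""
    data = pytesseract.image_to_data(
        Image.open(path), output_type=pytesseract.Output.DICT
    )
    suspects = []
    for word, conf in zip(data["text"], data["conf"]):
        word = word.strip()
        if not word:
            continue
        conf = int(float(conf))  # Tesseract reports -1 for non-word boxes
        # Low OCR confidence or long repeated-character runs often mark AI text.
        if 0 <= conf < min_conf or re.search(r"(.)\1{3,}", word):
            suspects.append((word, conf))
    return suspects

for word, conf in flag_garbled_text("viral_poster.jpg"):
    print(f"suspicious text: {word!r} (OCR confidence {conf})")
```

Low OCR confidence alone does not prove an image is synthetic, since blurry real photos also score poorly; treat the flagged words as pointers for where to zoom in manually.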

Lighting and shadow “shenanigans” that break physics

Lighting and shadow shenanigans are another dead giveaway. Cybersecurity expert Theresa Payton calls this the "Lighting/Shadow Shenanigans Check" and urges people to inspect whether a face is lit differently from the background or whether shadows flip direction inconsistently. If the sun appears to hit a subject from the left but the building behind them is lit from the right, you are probably looking at an AI composite rather than a single captured moment.

These inconsistencies matter because they reveal that no real camera and light source could have produced the scene as shown. For platforms moderating political content, or for lawyers evaluating alleged evidence, a simple shadow check can prevent manipulated images from shaping public opinion or court cases. I recommend mentally tracing where the light should fall on noses, ears, and objects on the ground, then asking whether every shadow in the frame obeys the same rule.
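To make that mental tracing slightly more concrete, you can estimate the dominant brightness-gradient direction in different regions of the picture and ask whether they roughly agree. The sketch below is a deliberately crude illustration of the idea, assuming numpy and Pillow are available; the quadrant split and the angle comparison are my own simplification, not Payton's check, and cluttered scenes will produce noisy results.

```python
import numpy as np
from PIL import Image

def quadrant_light_angles(path):
    """Rough per-quadrant estimate of which way the light falls, in degrees."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)
    h, w = gray.shape
    angles = {}
    for name, patch in {
        "top-left": gray[: h // 2, : w // 2],
        "top-right": gray[: h // 2, w // 2:],
        "bottom-left": gray[h // 2:, : w // 2],
        "bottom-right": gray[h // 2:, w // 2:],
    }.items():
        gy, gx = np.gradient(patch)
        # The mean gradient points from dark toward bright, i.e. toward the light.
        angles[name] = float(np.degrees(np.arctan2(gy.mean(), gx.mean())))
    return angles

# In a single photo lit by one source, these angles tend to roughly agree;
# wildly different directions across quadrants can hint at a composite.
print(quadrant_light_angles("suspect_scene.jpg"))
```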

Uncanny faces, hands, and “too perfect” people

Uncanny faces and hands remain a classic sign of AI fakery. A widely cited guide drawing on research by Hany Farid, Florian Groh, and colleagues notes that "other telltale stylistic artifacts" include skin that looks airbrushed to plastic, hair that dissolves into the background, and jewelry or glasses that merge with skin. Groh and his team also highlight how AI tends to produce faces that are a little too symmetrical and attractive, which can feel subtly unreal even when you cannot pinpoint why.

Hands and ears are still weak spots, with extra fingers, fused rings, or earrings that do not pierce the ear at all. For advertisers and political campaigns, these glitches are more than curiosities; they can expose undisclosed synthetic models or staged "supporters." When I evaluate a suspicious portrait, I scan fingers, teeth, and ears in that order, then look for pores, flyaway hairs, and minor blemishes that real lenses capture but AI often smooths away.
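One coarse way to quantify the "airbrushed to plastic" tell is to measure how much fine detail survives in a face crop. The variance of a Laplacian (second-derivative) filter response is a standard sharpness metric; the sketch below applies it with numpy and Pillow. Any threshold is an assumption that depends on resolution, so compare scores against known-real photos from the same source rather than trusting absolute numbers.

```python
import numpy as np
from PIL import Image

# Standard 3x3 Laplacian kernel: responds strongly to fine texture
# (pores, flyaway hairs) and weakly to smooth, airbrushed regions.
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def detail_score(path):
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)
    # 2-D convolution via sliding windows (the kernel is symmetric,
    # so correlation and convolution coincide).
    windows = np.lib.stride_tricks.sliding_window_view(gray, (3, 3))
    response = (windows * LAPLACIAN).sum(axis=(-2, -1))
    return float(response.var())

print(f"detail score: {detail_score('portrait_crop.jpg'):.1f}")
```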

Backgrounds that don’t add up on closer inspection

Backgrounds that do not add up are another strong indicator. One detailed guide on AI giveaways singles out "Backgrounds That Don't Add Up," noting that the foreground may look convincing at first glance until you notice impossible architecture, repeating window patterns, or crowds that blur into each other when you look closely. Street signs may float without poles, and reflections in water and glass may fail to match the objects they supposedly mirror.

These errors arise because generative models prioritize the main subject and treat the rest as filler texture. For journalists and fact-checkers, that filler can reveal whether a dramatic protest scene or disaster photo ever existed. I advise scanning the edges of the frame, where AI often gets lazy, and checking whether background text, logos, and reflections tell the same story as the central subject or quietly contradict it.
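For backgrounds and frame edges specifically, a classic forensic heuristic is error level analysis (ELA): resave the JPEG at a fixed quality and see which regions differ most from the resaved copy, since pasted or generated patches often recompress differently from their surroundings. The sketch below uses only Pillow; the quality setting and the contrast stretch are my assumptions, and the technique is only meaningful on JPEG sources.

```python
import io

from PIL import Image, ImageChops

def error_level_map(path, quality=90):
    """Return an image highlighting regions with unusual JPEG error levels."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    # Stretch the (usually faint) differences so hot spots become visible.
    peak = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda px: min(255, px * 255 // peak))

error_level_map("disaster_photo.jpg").save("ela_overlay.png")
```

Bright, blocky regions in the output that do not line up with edges in the original are worth a second look, though recompression by social platforms can also wash the signal out.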

Free detectors with transparent scores and samples

Once visual clues raise suspicion, I turn to free detectors that publish transparent benchmarks. A detailed review of current tools includes a "Scores at a Glance" section that compares scores across multiple sample sets, such as "Test 1 (CG)" and "Test 4 (AL)," and even labels sets like "Human‑True" and "Human‑50." That kind of disclosure helps me gauge whether a detector is conservative or aggressive before I trust its verdict on a borderline image.

Separate research compiles a full comparison table of 14 AI content detectors, listing each detector, its rank, and its reported accuracy. I treat these tools as decision support rather than oracles, running a suspicious image through at least two services and comparing their confidence scores. For newsroom editors, brand safety teams, and educators, combining human pattern recognition with quantified detector output is the safest way to flag fakes without overcorrecting and mislabeling genuine photos.
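The two-detector workflow is easy to script. The sketch below shows the pattern using the requests library, but the endpoint URLs, response field, and the 0.8 agreement threshold are placeholders I invented for illustration; substitute the documented API of whichever detectors you actually use.

```python
import requests

# Hypothetical endpoints and response fields, for illustration only.
DETECTORS = {
    "detector_a": ("https://example-detector-a.test/v1/check", "ai_probability"),
    "detector_b": ("https://example-detector-b.test/v1/check", "ai_probability"),
}

def second_opinions(image_path):
    """Submit one image to every configured detector and collect its scores."""
    with open(image_path, "rb") as fh:
        payload = fh.read()
    scores = {}
    for name, (url, field) in DETECTORS.items():
        resp = requests.post(url, files={"image": payload}, timeout=30)
        resp.raise_for_status()
        scores[name] = resp.json()[field]
    return scores

scores = second_opinions("borderline.jpg")
print(scores)
# Only escalate when the detectors agree, to avoid mislabeling real photos.
if all(s >= 0.8 for s in scores.values()):
    print("both detectors flag this image; seek corroborating evidence")
```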

Purpose-built image checkers like WasItAI

For everyday users who just want a quick answer, purpose-built image checkers are invaluable. One example is WasItAI, a checker that flags AI-generated pictures by analyzing artifacts and patterns within the image itself. Unlike generic content scanners, it focuses specifically on visual cues, which can make it more sensitive to the subtle textures and edge cases that text-centric detectors miss.

Tools like this matter because they lower the barrier to verification for people who are not professional investigators but still face real stakes, from voters evaluating campaign imagery to parents checking suspicious “school incident” photos. I recommend using a checker like WasItAI after you have already spotted one or two of the giveaways above, then treating a strong AI score as a prompt to seek corroborating evidence before you share or act on the image.
