Image Credit: Jernej Furman from Slovenia – CC BY 2.0/Wiki Commons

Google is turning the mirror on your phone into a fitting room, letting shoppers see clothes on a realistic digital version of themselves using nothing more than a selfie. The company’s upgraded AI try-on tool replaces awkward full-body uploads with a faster, more natural flow that mirrors how people already take photos of themselves. As virtual fitting rooms move from novelty to default, the change signals how central computer vision is becoming to everyday shopping.

From full-body uploads to a simple selfie

The most striking shift in Google’s virtual fitting room is how little effort it now asks from shoppers. Instead of stepping back, framing a head-to-toe shot and hoping the lighting cooperates, I can now start with the kind of selfie I already take every day and let the system handle the rest. That small change in input lowers the friction that kept many people from trying AI-powered fitting tools in the first place, and it turns a once fussy feature into something that feels like a natural extension of mobile search.

Earlier versions of the feature required a full-body photo so the system could map garments onto a detailed avatar, which made the experience feel closer to a mini photo shoot than a quick style check. The latest update, as described by Google, keeps the same AI core but now builds that avatar from a single selfie, then uses it to preview tops, bottoms and dresses from supported brands. The result is a try-on flow that feels less like a separate app and more like a natural step in the shopping journey that starts in Google Search.

How Google’s AI try-on actually works

Under the hood, the try-on system is doing more than just pasting clothes onto a flat image. It analyzes the selfie to infer body shape, pose and proportions, then combines that with product images so the garment appears to drape and fold in a way that looks physically plausible. I am not just seeing a shirt stickered onto my photo; I am seeing a simulated version of how that shirt might hang on my shoulders or bunch at my waist, which is what makes the feature feel more like a fitting room than a filter.

On the merchant side, Google explains the process in two parts: the system first analyzes a retailer’s product images to understand how each item looks from different angles, then uses AI to combine those images with the user’s photo so the clothing appears directly on the shopper’s body. That same documentation notes that the tool is designed to work specifically with tops, bottoms and dresses, which aligns with the consumer-facing guidance that lingerie, bathing suits and accessories are not yet supported.
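To make that two-part pipeline concrete, here is a minimal sketch in Python. Nothing in it comes from Google: every class, function and category name is a hypothetical stand-in, and the analysis steps are placeholders for the vision models the documentation describes.

```python
# Illustrative sketch only; these types and functions are hypothetical
# stand-ins for the two-part process described above, not Google's API.
from dataclasses import dataclass


@dataclass
class GarmentModel:
    """What stage one might learn from a retailer's product shots."""
    category: str            # "top", "bottom" or "dress" only
    views: dict[str, bytes]  # angle name -> raw product image


@dataclass
class BodyEstimate:
    """Rough attributes inferred from a single selfie."""
    pose: str
    proportions: dict[str, float]


def analyze_product_images(images: dict[str, bytes], category: str) -> GarmentModel:
    # Stage 1 (assumed): learn how the item looks from different angles.
    if category not in {"top", "bottom", "dress"}:
        raise ValueError("category not supported by try-on")
    return GarmentModel(category=category, views=images)


def estimate_body_from_selfie(selfie: bytes) -> BodyEstimate:
    # Placeholder: a real system would run a vision model here to infer
    # body shape, pose and proportions from the selfie.
    return BodyEstimate(pose="standing", proportions={"shoulder_to_waist": 1.0})


def render_try_on(garment: GarmentModel, body: BodyEstimate, selfie: bytes) -> bytes:
    # Stage 2 (assumed): composite the garment onto the shopper's photo so the
    # drape matches the estimated pose and proportions. The returned bytes
    # stand in for the generated preview image.
    return selfie


if __name__ == "__main__":
    garment = analyze_product_images({"front": b"...", "side": b"..."}, "top")
    body = estimate_body_from_selfie(b"selfie-bytes")
    preview = render_try_on(garment, body, b"selfie-bytes")
```

The split mirrors the documentation’s framing: product understanding can happen once per catalog item, while the selfie-specific work happens per shopper.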

Where to find the feature and what you can try on

For shoppers, the try-on experience starts in the same place as most online browsing: a search box. I can search for a top, bottom or dress, tap into the shopping results and, when supported, see an option to preview how a piece might look on my own body. That keeps the feature tightly integrated with the discovery process instead of hiding it behind a separate app or experimental tab, which is critical if Google wants it to become a default part of how people evaluate clothes online.

The consumer help pages spell out the boundaries clearly. When I search for a top, bottom or dress, I can access the try-on tool, but lingerie, bathing suits and accessories are excluded from the experience. That limitation reflects both technical constraints and privacy sensitivities, and it also hints at where the system might expand next as Google refines how its models handle more complex or sensitive categories.

Studio-quality avatars without the studio

The promise behind the selfie-based upgrade is not just convenience; it is realism. Google is pitching the feature as a way to generate a studio-quality digital version of myself that can stand in for a physical fitting room, with its harsh lighting and rushed decisions. Instead of squinting at a mirror under fluorescent bulbs, I can see a consistent, well-lit representation of my body wearing different outfits, all from my phone.

In its own description of the experience, Google frames the tool as a way to “say goodbye to bad dressing room lighting” by letting shoppers generate a digital version of themselves to virtually try on clothes. That positioning matters because it sets the bar higher than a playful filter. The company is not just offering a fun effect; it is promising a level of visual fidelity that could meaningfully influence what people decide to buy, keep or return.

Why ditching the full-body requirement changes adoption

Requiring a full-body photo created a subtle but real barrier to entry. Many people are comfortable snapping a quick selfie, but asking them to step back, clear space, adjust the camera and capture their entire body can feel intrusive or awkward, especially in shared spaces. By shifting to a selfie-first design, Google is aligning the try-on flow with the most common type of personal photo people already take, which lowers both the technical and emotional friction.

The change also reflects how the feature is being talked about publicly. In one widely shared post, @google is described as making its virtual try-on feature “way easier” by finally dropping the full-body photo requirement, with the update framed as available now. That kind of messaging underscores the strategic goal: turn virtual try-on from a niche experiment into something casual enough that people will use it while scrolling through outfits on the couch.

What the upgraded experience feels like in practice

From a user’s perspective, the new flow is designed to feel almost as simple as applying a filter in a social app, but with more practical stakes. I choose a supported item, upload or take a selfie when prompted, and then watch as the system renders a version of me wearing the piece. I can swap sizes or colors, compare different items and get a sense of how the fabric might sit on my frame, all without leaving the product page.
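As a rough illustration of that flow, the sketch below orders the same steps in Python: check that the item is a supported category, take a selfie, then regenerate the preview for each size. The function and variable names are assumptions made for the example, not part of any Google interface.

```python
# Hypothetical client-side flow, not Google's real interface; it only mirrors
# the steps described above in order.
SUPPORTED_CATEGORIES = {"top", "bottom", "dress"}


def try_on_flow(category: str, selfie: bytes, sizes: list[str]) -> dict[str, bytes]:
    if category not in SUPPORTED_CATEGORIES:
        raise ValueError("lingerie, bathing suits and accessories are not supported")
    previews = {}
    for size in sizes:
        # A real system would call the try-on renderer once per variant; here
        # the selfie bytes simply stand in for each generated preview.
        previews[size] = selfie
    return previews


previews = try_on_flow("dress", b"selfie-bytes", ["S", "M", "L"])
```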

Guides to the feature emphasize that the upgrade lets shoppers strike a pose and use Google’s AI-powered try-on without building a full-body avatar from scratch each time. One walkthrough explains that the company is upgrading its existing tool so users can rely on a selfie instead of constructing a separate avatar, effectively letting the system handle the heavy lifting of mapping clothes onto the body image in the background. That experience is captured in a how-to that invites people to “strike a pose” and see the upgraded virtual try-on in action.

Privacy, data handling and what happens to your selfie

Any feature that asks for a photo of your body raises immediate questions about privacy, and Google is clearly aware that trust will determine how widely this tool is used. I might be willing to upload a selfie to see how a jacket fits, but only if I know that image is not being turned into a biometric profile or quietly stored for unrelated purposes. The company’s documentation leans heavily on assurances that the system is designed to minimize long-term data exposure.

In its merchant guidance, the company spells out that privacy and data handling are central to the experience, stating that Google Search does not collect or store any biometric data during the try-on process and that personal identifiers are not linked to the images or generated looks from this feature. That kind of explicit language is meant to reassure both shoppers and brands that the AI is focused on rendering clothes, not building a deeper profile of the person wearing them.
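The sketch below, again purely hypothetical, shows the shape of that promise rather than any real implementation: the selfie is used in memory to produce a preview, nothing is written to storage, and no account identifier travels with the result.

```python
# Hypothetical handler illustrating the stated data-handling policy; the
# renderer is a placeholder, not Google's system.
def render_preview(selfie: bytes, product_image: bytes) -> bytes:
    # Placeholder renderer: combine the two inputs entirely in memory.
    return product_image + selfie


def handle_try_on_request(selfie: bytes, product_image: bytes) -> bytes:
    preview = render_preview(selfie, product_image)
    # Deliberately no persistence step and no user ID in the return value,
    # so the generated look cannot be joined back to a person later.
    return preview


look = handle_try_on_request(b"selfie-bytes", b"product-bytes")
```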

What this means for brands and online returns

For retailers, the appeal of a more accurate virtual fitting room is straightforward: fewer surprises after checkout. When shoppers can see a realistic preview of how a garment might look on their own body, they are less likely to order multiple sizes “just in case” or return items that do not match expectations. I see this as part of a broader push to use AI not only to recommend products but to reduce the costly churn of shipping and restocking.

The merchant documentation notes that the try-on tool uses AI to analyze product images and render them on the user’s photo, which gives brands an incentive to invest in high-quality, consistent photography that the system can interpret. As using AI to bridge the gap between flat catalog shots and three-dimensional bodies becomes more common, I expect more retailers to treat compatibility with tools like this as table stakes, much like they once did with mobile-responsive sites or product videos.

The broader shift toward AI-native shopping

Google’s selfie-based try-on is arriving at a moment when shopping interfaces are being quietly rebuilt around AI. Recommendation feeds, visual search and conversational assistants are already shaping what people see when they look for clothes, and a realistic virtual fitting room slots neatly into that stack. Instead of just surfacing more options, the system is starting to help answer the more personal question of whether a specific item feels right on a specific body.

In that sense, the move from full-body uploads to a simple selfie is not just a usability tweak; it is a signal that AI-native shopping is maturing. By letting people generate a digital version of themselves, preview outfits in studio-quality lighting and rely on clear privacy commitments, Google is betting that virtual try-on will become a standard part of how we decide what to wear, not a side experiment tucked away in a lab. As the feature expands across more categories and brands, the line between browsing and fitting will only get thinner, one selfie at a time.
