
Corporate leaders are scrambling to keep up with a flood of AI-generated content, but one executive at EY says the most reliable detector is not a fancy tool or a secret watermark. It is a simple gut check for when something feels a little too flawless. His rule of thumb turns the usual obsession with polish on its head, and it is reshaping how I think about both AI risk and AI opportunity inside large organizations.
Instead of hunting for obvious glitches, he argues that the real giveaway is the absence of friction: language that glides past the eye, visuals that never quite surprise, presentations that sound like they were optimized for a rubric rather than a room full of humans. That single tell, he says, is becoming a practical radar for spotting synthetic work and for coaching teams to use AI without draining the life out of their ideas.
The EY radar: when “too perfect” is the problem
EY’s Chief Innovation Officer, Depa, has boiled his detection strategy down to one core signal: if a piece of content is too smooth, too perfect or too predictable, he treats it as likely AI-generated. In his view, authentic human work tends to carry small bumps, asymmetries and idiosyncrasies, whether that is an unexpected turn of phrase in a client memo or a slightly awkward chart in a pitch deck. When those quirks vanish and everything lines up with textbook clarity, his instinct is that a model has done the heavy lifting, a view he has articulated in internal guidance and in public remarks about AI fakes.
That focus on “too perfect” is not nostalgia for messiness; it is pattern recognition. Large language models are trained to predict the most statistically likely next word, so they naturally gravitate toward safe, median phrasing and familiar structures. Depa’s point is that this statistical smoothness shows up in the wild as content that feels oddly generic even when it is technically correct. In a world where executives are inundated with decks, emails and strategy papers, he uses that sensation of over-optimized polish as a practical filter: a way to decide when to probe deeper, ask for working notes or push a team to inject more of its own judgment.
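Depa frames the signal in qualitative terms, but the underlying intuition, that model-generated prose regresses toward uniform rhythm and safe word choice, can be made roughly measurable. The sketch below is a minimal illustration of that idea only; it is not EY’s method, not a reliable detector, and every name and metric in it is my own assumption. It computes two crude proxies: variation in sentence length (human writing tends to mix short and long sentences) and lexical variety (repetitive, median phrasing reuses the same words).

```python
import re
from statistics import mean, pstdev

def smoothness_signals(text: str) -> dict:
    """Crude, illustrative proxies for 'too smooth' prose.

    These heuristics only illustrate the intuition described above;
    they are not an AI detector and not a method attributed to EY.
    """
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-zA-Z']+", text.lower())

    # Sentence-length variation: very uniform sentence lengths are one
    # hint of over-optimized, machine-smooth prose.
    variation = pstdev(lengths) / mean(lengths) if len(lengths) > 1 else 0.0

    # Lexical variety: a low ratio of unique words to total words can
    # indicate safe, repetitive phrasing.
    variety = len(set(words)) / len(words) if words else 0.0

    return {
        "sentences": len(sentences),
        "sentence_length_variation": round(variation, 3),
        "lexical_variety": round(variety, 3),
    }

if __name__ == "__main__":
    sample = (
        "Our strategy leverages synergies. Our roadmap drives value. "
        "Our teams deliver outcomes. Our clients see measurable results."
    )
    print(smoothness_signals(sample))
```

On the repetitive sample above, the sentence-length variation comes out very low, which is the kind of flatness the “too perfect” test is pointing at; real decisions would of course rest on judgment, not on scores like these.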
Why AI-generated polish erodes authenticity and impact
Depa’s warning is not just about catching synthetic text for the sake of compliance. He argues that when everything starts to sound the same, organizations lose the edge that comes from distinctive voices and lived experience. If a manager leans too heavily on a model to draft performance feedback, for example, the result might be impeccably phrased but emotionally hollow, which can undermine trust even if the ratings are fair. The same risk shows up in investor updates, sales outreach and internal strategy notes that read like they were assembled from a template rather than grounded in the specifics of a business.
He has described a pattern in which teams that overuse AI end up converging on similar language, metaphors and argument structures, which in turn makes their work easier to ignore. When every proposal sounds like a polished case study, none of them stand out. That is why he frames “too smooth, too perfect, too predictable” as a red flag not only for authenticity but for effectiveness, a sign that the content may have been optimized for form at the expense of insight, a concern he has tied directly to the way AI can make everything feel the same.
Training employees to use AI without losing their voice
Depa’s solution is not to ban generative tools but to teach people to treat them as collaborators rather than ghostwriters. He advises employees to start with their own outline, argument and examples, then bring AI in to refine structure, stress-test logic or surface counterpoints. In practice, that might mean a consultant drafts a rough problem statement and recommendation, then asks a model to suggest alternative framings or to pressure-test assumptions, instead of asking the model to write the entire client memo from scratch. The goal is to keep the human perspective in the driver’s seat while still benefiting from the model’s speed and breadth.
He also pushes teams to leave deliberate fingerprints in their work, such as specific anecdotes from a client engagement, references to internal data sets or phrasing that reflects a particular regional or sector nuance. Those details are hard for generic models to invent credibly, and they signal to readers that a real person with context is behind the document. In workshops, he has framed this as a discipline: if a draft could have been written by anyone with access to the same tool, it is not finished. Only when the content clearly reflects the team’s own experience and judgment does he consider it ready to ship.
Building organizational guardrails around the “too smooth” test
At the organizational level, Depa’s high-sensitivity radar is evolving into a set of practical guardrails. One is process-based: leaders are encouraged to ask for the steps behind a polished output, whether that is a chain of prompts, a summary of source documents or a quick explanation of which parts were human written and which were AI assisted. This does not require sophisticated detection software; it simply formalizes the expectation that employees can explain their work. When someone cannot walk through how they arrived at a conclusion, or when every answer sounds like a generic best practice, that is when his “too perfect” alarm starts to ring.
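The “show your steps” expectation does not need tooling, but it helps to picture what such a disclosure might capture. The sketch below is a hypothetical illustration only; the record name, fields and example values are all my own assumptions, not an EY template or policy.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIAssistanceRecord:
    """Hypothetical disclosure attached to a deliverable.

    An illustrative sketch of the 'show your steps' expectation
    described above, not an EY template or policy.
    """
    author: str
    deliverable: str
    prompts_used: List[str] = field(default_factory=list)       # chain of prompts, if any
    source_documents: List[str] = field(default_factory=list)   # material the draft relied on
    ai_assisted_sections: List[str] = field(default_factory=list)
    human_written_sections: List[str] = field(default_factory=list)
    reviewer_notes: str = ""

# Example of what a filled-in record might look like (all values invented).
record = AIAssistanceRecord(
    author="A. Consultant",
    deliverable="Client readiness memo",
    prompts_used=["Summarize interview notes", "Suggest counterarguments"],
    source_documents=["interview transcripts", "internal benchmarking data"],
    ai_assisted_sections=["first draft of the executive summary"],
    human_written_sections=["recommendations", "client-specific risks"],
    reviewer_notes="Author walked through the reasoning in review.",
)
print(record)
```

The point of a record like this is not surveillance; it simply makes the conversation Depa describes, who wrote what and how the conclusion was reached, easy to have when the “too perfect” alarm goes off.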
Another guardrail is cultural. By openly talking about the risks of over-polished AI content, Depa is trying to normalize a preference for clarity over gloss. That might mean rewarding a slightly rougher but more insightful analysis in a quarterly review, or praising a manager who adds personal context to a model-generated draft instead of submitting the first output. Over time, those signals shape behavior. People learn that leadership is not impressed by AI sheen alone, and that the safest path is to blend machine efficiency with human specificity rather than hiding behind a flawless surface.
How I apply Depa’s “too perfect” tell in everyday work
Depa’s framework has changed how I read almost everything that crosses my screen. When I see a project update that hits every buzzword but never mentions a concrete obstacle, I now treat that as a prompt to dig deeper, not as a sign of excellence. In my own drafting, I pay attention to when a paragraph starts to sound like a composite of a hundred other documents. That is usually my cue that I have let the model oversteer, and I need to reintroduce specifics, whether that is a named customer, a precise metric or a detail from a meeting that only someone in the room would know.
I also find his test useful for calibrating how much AI to use in different contexts. For internal notes where speed matters more than style, I am comfortable leaning on a model for structure and cleanup, then adding a few personal markers. For high-stakes communication, such as a board memo or a sensitive HR announcement, I reverse the ratio: I write the core narrative myself, then use AI sparingly to check clarity or suggest alternative phrasings. In both cases, I keep Depa’s warning in mind: if the final product feels frictionless in a way that blurs my own voice, I know I have crossed the line from augmentation into imitation, and that is when I start cutting back the polish until the work sounds like it could only have come from me.