
Large language models are very good at telling us we are right. They smooth the rough edges off half-baked ideas, turn vague thoughts into polished paragraphs, and, in the process, can quietly reinforce our blind spots. I rely on a single word, “potato,” as a daily prompt to flip that dynamic, forcing ChatGPT to stop flattering me and start stress testing my logic instead.
By treating “potato” as a switch that turns the model into a hostile editor, I can expose weak assumptions in everything from strategy memos to personal decisions. Used consistently, this simple trick has become a practical way to fight confirmation bias, sharpen my thinking, and turn AI from a cheerleader into a sparring partner.
What the “potato” prompt actually does
At its core, the “potato” prompt is a compact set of instructions that rewires how the model responds to my ideas. I tell ChatGPT that whenever I type the word “Potato” followed by an argument, it should drop its usual helpful persona and act as a blunt, unsympathetic critic. I explicitly ask it not to be polite but to be precise, so the model is primed to hunt for gaps, contradictions, and missing evidence rather than trying to make my draft sound nicer. The wording matters: “Whenever” and “Instead” work as literal switches that define the rules of engagement each time I invoke the prompt. I also name the persona “Jan” so I can refer back to it in later chats without restating the full instructions, as described in the original potato setup.
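The exact wording is less important than the structure, but a minimal version of that setup reads something like: “Whenever I type the word ‘Potato’ followed by an argument, ignore your usual helpful persona. Instead, respond as Jan, a blunt, unsympathetic critic. Do not be polite; be precise. Point out every gap, contradiction, and piece of missing evidence in the argument.” Everything after the trigger word gets treated as material to attack, not text to improve.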
In practice, that means I might write “Potato: I think our team should ship this feature in two weeks,” and then watch as the model lists every reason that timeline is unrealistic. Instead of smoothing over my optimism, it interrogates my assumptions about staffing, dependencies, and risk. Over time, that pattern has trained me to expect resistance whenever I type the trigger word, which changes how I frame my own arguments. I start anticipating the counterpoints before they arrive, and the model’s role shifts from ghostwriter to a structured antagonist that keeps me honest.
How it crushes confirmation bias and triggers “editing mode”
The biggest value of the potato prompt is how aggressively it attacks my confirmation bias. Left to its default settings, ChatGPT tends to validate my starting premise and then elaborate on it, which feels good but rarely challenges my thinking. By contrast, when I invoke the potato persona, I am explicitly asking the model to find flaws, not support. That shift turns the interaction into a kind of adversarial review, where the goal is to surface every way my idea could fail. Reporting on the technique notes that this approach “crushes confirmation bias” by forcing the model to search across massive amounts of data for counterexamples and edge cases rather than simply reinforcing my initial view. That pattern matches how I now use it in my own daily sessions.
That same switch also drops me into what I think of as “editing mode.” When I see the word “Potato” at the top of a prompt, I know I am not drafting; I am revising. The model’s job is to poke holes, not to generate prose, which changes my mindset from creator to editor. Instead of asking for more ideas, I ask for better objections. That distinction matters, because it keeps me from outsourcing the hard part of thinking. I still own the argument, but I use the AI as a relentless red pen that highlights weak logic, unsupported claims, and fuzzy definitions before those flaws reach anyone else.
The “Potato Stress Test” for high‑stakes decisions
Over time, I have turned the potato prompt into a formal checkpoint I run before sending anything important. I treat it as a “Potato Stress Test,” a final pass where I paste in an email, proposal, or plan and ask the model to attack it from multiple angles. I want it to read like a skeptical colleague, not a supportive assistant. The reporting that first popularized this approach describes the Potato Stress Test as the last step before shipping any major email or project proposal, and notes that once the model has finished tearing into the draft, the feedback often turns into genuine forward progress rather than nitpicking. I have seen the same pattern in my own stress tests.
In my workflow, that stress test has become a gate for anything that could affect other people’s time, money, or trust. Before I propose a new product roadmap, I run a potato pass asking the model to identify every stakeholder I have overlooked, every dependency I have underestimated, and every metric I have failed to define. Before I send a sensitive email, I ask it to read my tone as if it were the most defensive person on the thread. The goal is not to let the AI decide for me, but to make sure I have confronted the hardest objections while I still have time to adjust. When the model flags a blind spot that I had not even considered, that is usually a sign the idea is not ready yet.
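Condensed, a typical pass looks something like: “Potato: here is my draft roadmap proposal. Attack it as a skeptical colleague would. Name every stakeholder I have overlooked, every dependency I have underestimated, and every metric I have failed to define, then read the tone as the most defensive person on the thread would.” The specifics change with the document, but the shape of the request stays the same.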
Why antagonistic prompts sharpen leadership thinking
The potato prompt sits within a broader family of antagonistic instructions that are designed to strengthen critical thinking rather than replace it. One leadership exercise I use in parallel asks the model to respond to a statement that begins “I strongly believe” and then forces it to argue that my belief is wrong. The structure of that prompt is simple: I state my conviction, then ask the AI to list the strongest counterarguments and explain why my reasons are flawed. Guidance for leaders explicitly recommends this pattern, encouraging people to examine whether AI brings up points they had not considered and to notice where they feel defensive, a process laid out in detail in a critical thinking prompt for leaders.
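The whole structure fits in two sentences: “I strongly believe [state your conviction]. Argue that this belief is wrong: list the strongest counterarguments and explain why my reasons are flawed.” From there, the useful work is reading the response for points I had not considered and noticing where I feel defensive.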
Used together, these antagonistic prompts help me separate my identity from my ideas. When I ask the model to attack a belief I hold strongly, I can watch my own reactions in real time: where I feel annoyed, where I rush to defend, where I quietly realize the AI has a point. That self‑observation is part of the value. It turns each session into a kind of cognitive mirror, showing me not just where my logic is weak, but where my ego is attached to being right. For anyone in a leadership role, that awareness is essential, because the cost of unexamined assumptions scales with the number of people affected by your decisions.
From YouTube hacks to breaking the “sucking up” loop
The potato idea did not emerge in a vacuum. Early adopters shared it as a practical hack for cutting through the model’s tendency to waffle, especially in data work. One walkthrough shows a user applying the potato prompt to data analysis in GPT, using it to strip away hedging language and force the model to deliver concise, pointed insights instead of long, meandering explanations, a technique demonstrated in a November analysis video. Another creator described the same approach as a “game changer” and a life hack that could revolutionize how people use AI models, especially when they are tired of generic, over‑polite answers, in a separate August potato tutorial.