
There is a simple way to push ChatGPT into a far more rigorous, almost “genius” style of reasoning, and it does not require any hidden settings or paid upgrades. By treating the model like a researcher that must audit its own work, I have turned a single prompt into a daily habit that consistently produces clearer, more accurate answers.
This approach, often described as a glitch-like trick, forces the system to slow down, check itself, and then respond at its highest level of reasoning. Used properly, it can transform everything from coding help to business planning into a more collaborative, self-correcting process.
How the self-audit “glitch” actually works
The core of this method is a self-audit command that tells ChatGPT to answer a question, then step back and critique its own response before presenting a final version. Instead of accepting the first draft the model generates, the prompt instructs it to look for logical gaps, contradictions, or hallucinated details, then rewrite the answer with those issues fixed. Reporting on this approach describes it explicitly as a research-style audit that flips the chatbot into a more careful mode.
In practice, I frame it in two stages: first, “Give your best answer,” and second, “Now act as a separate reviewer, list any errors or weak reasoning, and then produce a corrected version.” That second step is where the so-called glitch effect appears, because the model is effectively prompted to become its own editor. Instead of a single pass, I get a draft, a critique, and a refined response, all inside one exchange, which is why I rely on this pattern every day for complex tasks.
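The two-stage pattern can be captured as a reusable template. A minimal sketch follows; the wording of the prompts is illustrative rather than a verbatim quote of the original, and `build_self_audit_messages` is a hypothetical helper, not part of any library:

```python
# Sketch of the two-stage self-audit pattern as a reusable prompt
# template. The message dicts follow the common chat-API convention
# of {"role": ..., "content": ...}; exact prompt wording is illustrative.

AUDIT_INSTRUCTION = (
    "Now act as a separate reviewer. List any errors, logical gaps, "
    "contradictions, or unsupported claims in your previous answer, "
    "then produce a corrected final version."
)

def build_self_audit_messages(question: str) -> list[dict]:
    """Return the user-side messages for the draft -> audit sequence.

    In a live chat, the model's draft reply would be appended between
    these two messages before the audit instruction is sent.
    """
    return [
        {"role": "user", "content": f"Give your best answer: {question}"},
        {"role": "user", "content": AUDIT_INSTRUCTION},
    ]
```

In use, you would send the first message, wait for the draft, then follow up with the audit instruction so the model critiques and rewrites its own work inside the same exchange.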
Inside the viral “genius mode” scripts
Alongside the written self-audit prompt, short video clips have popularized a more theatrical version that explicitly labels the state as “genius mode.” In one clip, the creator named Jan shows a prompt that tells the system, “you are now operating in genius mode, your highest level of reasoning,” and then applies it to a chatbot instance he calls Chad. The on-screen text for Jan and Chad is simple, but the idea is clear: by naming the mode and explicitly asking for maximum reasoning and creativity, the user is nudging the model to prioritize depth over speed.
A follow-up short extends the same idea to a bot labeled Chad GBT, again with Jan instructing it that it is “now operating in genius mode” and should use its highest level of reasoning and creative problem solving. In that clip, the phrase “genius mode here says for me, this is for Chad GBT” is part of the on-screen script, reinforcing the notion that a carefully worded system message can shift how the model behaves. The specifics of Chad GBT are less important than the pattern: define a role, set an explicit expectation for advanced reasoning, and then ask for detailed, step-by-step thinking instead of a quick summary.
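The pattern in these clips amounts to a role-setting system message placed ahead of the actual question. A hedged sketch, with wording adapted from the phrases quoted above rather than copied from the videos:

```python
# Illustrative "genius mode" role-setting message in the style of the
# clips described above; the exact on-screen wording may differ.

GENIUS_MODE_SYSTEM = (
    "You are now operating in genius mode, your highest level of "
    "reasoning and creative problem solving. Think step by step, "
    "state your assumptions, and prioritize depth over speed."
)

def with_genius_mode(user_prompt: str) -> list[dict]:
    """Prepend the role-setting system message to a user prompt."""
    return [
        {"role": "system", "content": GENIUS_MODE_SYSTEM},
        {"role": "user", "content": user_prompt},
    ]
```

The system message does the three things the clips emphasize: it names the mode, sets an explicit expectation for advanced reasoning, and asks for step-by-step depth instead of a quick summary.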
Why the “glitch” feels smarter than default ChatGPT
What makes this technique feel like a hack is not secret access to new capabilities, but the way it changes the structure of the conversation. When I ask ChatGPT to audit itself, I am forcing it to surface its own uncertainty, which reduces the risk of confident but incorrect answers. The self-critique step encourages the model to flag hallucinated details, exactly the kinds of issues the audit-style prompt is designed to catch, before it presents a final response.
There is also a psychological effect on the user. Labeling the state as “genius mode,” as Jan does with Chad and Chad GBT, nudges me to ask more ambitious questions and to demand better reasoning. Instead of treating the chatbot like a search box, I treat it like a junior analyst who must show its work, justify its assumptions, and then revise. That shift in expectations, combined with the explicit instruction to operate at the “highest level of reasoning,” consistently produces answers that are more structured, transparent, and useful.
Pairing self-audit with smarter prompt patterns
The self-audit command becomes even more powerful when it is combined with other structured prompt patterns that encourage back-and-forth. One widely shared technique, described as Hack #1, is called The Reverse Interview. Instead of dumping a long, messy question into the chat, I ask the model to interview me first, posing clarifying questions before it attempts an answer. This prevents it from jumping straight to a generic response and gives it a richer context to work with.
When I combine The Reverse Interview with the self-audit “glitch,” the workflow looks like this: ChatGPT interviews me about the task, drafts an answer, then switches into reviewer mode to critique and refine its own work. For example, if I am planning a marketing campaign for a 2024 Toyota Corolla launch in a specific city, I let the model ask about target demographics, budget, and channels before it proposes a plan. Then I trigger the audit step, which forces it to check whether its recommendations actually align with the constraints it just collected. The result is a plan that is not only more tailored, but also more internally consistent.
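The combined workflow can be written down as an ordered list of user-side prompts. This is a sketch of the sequence described above, with illustrative wording and a hypothetical helper name:

```python
# Sketch of the combined Reverse Interview + self-audit workflow.
# Each string is a user-side prompt sent in order; between the first
# and second prompts, the user answers the model's clarifying questions.

def reverse_interview_workflow(task: str) -> list[str]:
    """Return the user prompts for interview -> draft -> audit."""
    return [
        # Stage 1: the model interviews the user before answering.
        f"Before answering, interview me first: ask the clarifying "
        f"questions you need about this task: {task}",
        # Stage 2: draft an answer using the collected context.
        "Using my answers, draft your best plan.",
        # Stage 3: the audit step, checking the draft against the
        # constraints the model itself just collected.
        "Now act as a separate reviewer: check the plan against the "
        "constraints I gave you, list any mismatches or weak reasoning, "
        "and produce a corrected final version.",
    ]
```

For the marketing-campaign example, `task` would be something like "a launch campaign for a 2024 Toyota Corolla in a specific city", and the interview stage is where budget, demographics, and channels get pinned down before any plan is drafted.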
How I use this “genius” pattern in daily work
Over time, I have turned this into a reusable template that I paste into new chats whenever I need serious help. For coding, I ask ChatGPT to write a function, then audit its own solution for edge cases, performance issues, and security concerns before giving me the final version. When I am drafting a report or column, I have it outline the structure, critique the logic of that outline, and then suggest a revised version that addresses any gaps it identified in its own audit.
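A reusable template like this can be parameterized by a task-specific checklist, so the same audit step works for code, reports, or plans. A minimal sketch, with a hypothetical helper name and assumed checklist categories drawn from the coding example above:

```python
# Hypothetical builder for the reviewer-stage prompt, parameterized by
# whatever the current task needs checked. The checklist items for the
# coding case come from the use described above; the wording is illustrative.

def audit_prompt(focus_areas: list[str]) -> str:
    """Build the audit-step prompt from a numbered checklist."""
    checklist = "; ".join(
        f"{i}) {area}" for i, area in enumerate(focus_areas, start=1)
    )
    return (
        "Audit your previous answer. Check for: " + checklist + ". "
        "List any problems you find, then give a corrected final version."
    )

# For a coding task:
coding_audit = audit_prompt(
    ["unhandled edge cases", "performance issues", "security concerns"]
)
```

Pasting a template like this into a new chat means the audit step is consistent every time, rather than improvised.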
The same pattern works for personal productivity. If I am designing a study plan for a certification exam, I first let the model interview me about my schedule, prior knowledge, and target date, borrowing the structure of The Reverse Interview. Then I apply the self-audit command so it can check whether the schedule is realistic and whether the sequence of topics builds logically. By treating the chatbot as both planner and reviewer, I get a study roadmap that has already been stress-tested by the system that created it.