
Artificial intelligence is now woven into everyday life, from search results to workplace tools, and it is very good at making hard tasks feel easy. That convenience comes with a quieter risk: if I let the software handle every step, my own ability to reason, remember, and judge can start to atrophy. A Nobel Prize winner has offered a deceptively simple way to keep that from happening, turning AI into a partner that sharpens thinking instead of replacing it.

At the core of that method is a discipline that scientists have used for decades: do the mental work first, then use the machine to test, stress, and refine it. Used this way, AI becomes less like an answer vending machine and more like a lab instrument, something I can point at my own ideas to see where they hold up and where they fall apart.

The Nobel physicist’s rule: think first, then query

The Nobel laureate’s starting point is blunt: if I let AI generate answers before I have tried to reason through a problem myself, I am training my brain to sit on the sidelines. In interviews about how to use AI without letting it do my thinking, the Nobel Prize-winning physicist has warned that the technology can create a powerful illusion of understanding, especially when it responds in fluent, confident prose. The simple rule he follows is to sketch his own explanation, calculation, or plan first, even if it is rough, and only then ask the system to critique or extend it.

That sequence matters because it preserves the struggle that actually builds expertise. When I attempt a proof, outline a memo, or draft code before opening a chat window, I am forcing myself to retrieve knowledge, connect concepts, and make choices. Only after that do I bring in AI to check for gaps, errors, or alternative approaches, treating it like a second pair of eyes rather than a first brain. The Nobel laureate has framed this as a way to keep skepticism and constant error checking at the center of my process, instead of outsourcing judgment to a tool that, as he notes in his warnings about AI, can be wrong in ways that are hard to spot if I am not already engaged.

Why AI feels like learning when it is not

The same physicist has pointed out that the most dangerous thing about modern AI is not its raw power but its polish. When a system explains quantum mechanics or balance sheet analysis in clean, structured paragraphs, it can feel as if I have mastered the basics after a few prompts. He has described this as the “tricky” part of AI, because the experience of reading a clear explanation is not the same as being able to reconstruct that reasoning on my own. The brain confuses recognition with recall, and the gap only shows up when I try to solve a fresh problem without the model’s help.

That illusion is amplified when I lean on AI to fill in every missing step. If I ask for a full solution to a statistics problem, then immediately move on, I never test whether I can reproduce the logic or adapt it to a slightly different dataset. The physicist has stressed in his comments on critical thinking that real learning shows up when I can explain a concept in my own words, derive a formula from first principles, or spot when an answer “looks wrong” even before checking. AI can support that, but only if I resist the temptation to let it carry the entire cognitive load.

Turning AI into a sparring partner, not an answer engine

To keep my own reasoning active, I have to change the way I frame prompts. Instead of asking, “What is the answer to this?” I can ask, “Here is my answer, where are the flaws?” That small shift turns AI into a sparring partner. The Nobel Prize-winning physicist has described using models to probe his own arguments, asking them to generate counterexamples, alternative derivations, or edge cases that might break his preferred explanation. The key is that the first move is mine, and the system’s role is to attack or refine, not to originate the entire chain of thought.

This approach works outside physics too. A product manager can draft a roadmap, then ask AI to identify risks she has missed. A lawyer can outline a brief, then request opposing arguments that might appear in court. A student can write a proof, then ask the model to find logical gaps. In each case, the human sets the frame and the AI supplies friction. The physicist has emphasized in his public guidance that this kind of structured pushback is what keeps analytical muscles from going slack, because I am still responsible for weighing the arguments and deciding what to accept.
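None of this requires special tooling, but it helps to see how small the change in framing is. The sketch below is purely illustrative, not anything the physicist has published: critique_my_draft and call_model are hypothetical names, with call_model standing in for whatever chat interface I actually use.

```python
# Illustrative sketch of a "critique-first" prompt wrapper.
# call_model() is a placeholder for whatever chat API is in use;
# it is assumed to take a prompt string and return the model's text.

def critique_my_draft(my_draft: str, task: str, call_model) -> str:
    """Ask the model to attack a human-written draft instead of writing one."""
    prompt = (
        f"Task: {task}\n\n"
        f"Here is my own attempt:\n{my_draft}\n\n"
        "Do not rewrite it or give a final answer. "
        "List the flaws, missing edge cases, and counterarguments, "
        "ranked from most to least serious."
    )
    return call_model(prompt)


# Example: the first move is mine, the model only supplies friction.
# feedback = critique_my_draft(
#     my_draft="Q3 roadmap: ship onboarding revamp, defer billing migration...",
#     task="Review this product roadmap",
#     call_model=my_chat_client,  # hypothetical client function
# )
```

The only structural guarantee the wrapper provides is that my draft appears in the prompt before any request for output, which is the whole point of the exercise.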

The “no autopilot” rule from medicine and policy

Outside physics, professionals who work with high-stakes data have been sounding similar alarms. Hesha Duggirala, an epidemiologist and policy coordinator at the FDA, has argued that every time I hand off thinking to AI without first forming my own view, I am weakening my critical faculties. In her work on health and regulation, she has seen how tempting it is to let automated systems summarize studies or flag patterns, and how easy it becomes to stop reading the underlying evidence with the same intensity.

Her advice is to treat AI as a first draft generator or a pattern spotter, never as the final arbiter. That means reading the full paper even after an AI summary, checking the raw numbers behind a chart, and asking whether the model might be missing confounders or biases. Hesha Duggirala has framed this as a discipline of “no autopilot” in her discussion of how to protect critical thinking skills, a rule that lines up closely with the Nobel laureate’s insistence on skepticism and manual error checking before accepting any AI output.

A practical routine for students and self‑learners

For people using AI to study, the risk of mental laziness is especially acute, because the line between “help” and “substitution” is thin. One practical routine that aligns with the Nobel method starts with a strict ban on direct answers. When I am learning calculus or organic chemistry, I can first attempt problems on paper, then ask AI to show me only the next step, not the full solution. This keeps me in the driver’s seat, using the system as a hint engine rather than a cheat sheet.

Technology writers who have experimented with this approach describe setting rules for themselves: do not let the model give final answers, only explanations of concepts I have already tried to apply, and ask it to quiz me with new questions where I am still unclear. One account of studying with AI without becoming lazy describes exactly this pattern, with the writer instructing the system to avoid direct solutions and instead generate practice questions and partial guidance, a strategy that mirrors the Nobel laureate’s insistence that the hard part of learning is the struggle, not the explanation I read afterward.
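One way to hold myself to those rules is to hard-code them into the prompt rather than trusting willpower mid-session. The sketch below is a minimal illustration of that idea, not a routine taken from the writers mentioned above; next_hint, quiz_me, and call_model are all hypothetical names, with call_model again standing in for whatever chat interface is in use.

```python
# Minimal sketch of a "hint engine" study prompt: the system is told up
# front that it may only give the next step or a practice question,
# never the full solution. call_model() is a placeholder chat function.

def next_hint(problem: str, my_work_so_far: str, call_model) -> str:
    prompt = (
        f"Problem: {problem}\n"
        f"My work so far: {my_work_so_far}\n\n"
        "Rules: do NOT give the final answer or the remaining steps. "
        "Point out at most one error in my work, or suggest only the "
        "single next step, then stop."
    )
    return call_model(prompt)


def quiz_me(topic: str, call_model) -> str:
    # Ask for fresh practice questions instead of worked answers.
    prompt = (
        f"Write three new practice problems on {topic}, without solutions. "
        "Make them slightly different from standard textbook examples."
    )
    return call_model(prompt)
```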

Building “manual mode” into everyday work

In the workplace, the same principles can be turned into concrete habits. Before I ask a chatbot to draft an email to a client, I can jot down three bullet points of what I want to say and the tone I need. Before I let AI summarize a 40-page report, I can skim it myself and write a one-paragraph takeaway, then compare my version with the model’s. This “manual mode first” routine keeps my judgment active and gives me a baseline to evaluate whether the AI is missing nuance or is overconfident about weak evidence.

Teams can formalize this by setting rules for when AI is allowed to generate content and when it is limited to critique. For example, a marketing group might require that campaign concepts originate in a human brainstorming session, with AI used only to expand or stress-test the ideas. A software team might insist that engineers write function signatures and comments before asking a model to fill in boilerplate code. These practices echo the Nobel Prize winner’s method of doing the conceptual work before inviting the machine in, and they align with Hesha Duggirala’s warning that each unexamined delegation to AI chips away at the habit of independent analysis.
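For the software example, that division of labor can be made literal: the engineer commits the signature, docstring, and constraints before any generated code is allowed in. The function below is a hypothetical illustration rather than any specific team’s policy; only the marked body would be delegated to a model and then reviewed against the contract above it.

```python
# Hypothetical illustration: the engineer writes the signature, docstring,
# and constraints first; only the marked body is delegated to a model.

def dedupe_orders(orders: list[dict], key: str = "order_id") -> list[dict]:
    """Return orders with duplicates removed, keeping the first occurrence.

    Constraints decided by the human before any generation:
    - Preserve the original ordering of the surviving records.
    - Rows that lack the key are kept rather than dropped.
    """
    # --- body below is what a model would be asked to fill in, then reviewed ---
    seen: set = set()
    result: list[dict] = []
    for order in orders:
        if key not in order:
            result.append(order)  # nothing to compare on, keep the row
            continue
        if order[key] in seen:
            continue
        seen.add(order[key])
        result.append(order)
    return result
```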

Training yourself to doubt fluent answers

One of the Nobel laureate’s most pointed observations is that AI’s fluency is itself a cognitive trap. When a system writes in polished, confident language, it feels authoritative even when it is fabricating details or glossing over uncertainty. To counter that, I have to train myself to treat every AI answer as a hypothesis, not a verdict. That means asking, “What would I need to see to believe this?” and then checking those specifics against other sources, my own calculations, or domain experts.

Over time, this becomes a habit. I start to notice when an explanation is too neat, when a statistical claim lacks a denominator, or when a legal summary omits key jurisdictions. The Nobel physicist has urged users to build this reflex of doubt into everyday decisions, warning in his comments on how AI can erode critical habits that the real danger is not a single bad answer but the gradual erosion of our willingness to question any answer that arrives quickly and cleanly.

Making the Nobel method a daily checklist

To keep AI from making me mentally lazy, I can turn the Nobel winner’s approach into a short checklist I run through whenever I open a model. First, I ask myself what I already know about the problem and write that down before typing a prompt. Second, I decide whether I want the system to critique, expand, or simulate, rather than simply “solve.” Third, I plan how I will verify whatever it gives me, whether by doing a back-of-the-envelope calculation, checking a primary document, or running a small experiment.
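If I want to make that checklist hard to skip, I can even encode it, refusing to build a prompt until all three pre-commitments are written down. The sketch below is one possible phrasing of the idea, not an official formulation; PromptPlan and its fields are hypothetical names.

```python
# One possible way to make the checklist un-skippable: refuse to build a
# prompt until the three pre-commitments are filled in. Illustrative only.

from dataclasses import dataclass

@dataclass
class PromptPlan:
    what_i_already_know: str   # step 1: my own view, written before prompting
    mode: str                  # step 2: "critique", "expand", or "simulate"
    verification_plan: str     # step 3: how I will check the output

    def to_prompt(self, question: str) -> str:
        allowed = {"critique", "expand", "simulate"}
        if not self.what_i_already_know.strip():
            raise ValueError("Write down your own view before prompting.")
        if self.mode not in allowed:
            raise ValueError(f"Mode must be one of {allowed}, not 'solve'.")
        if not self.verification_plan.strip():
            raise ValueError("Decide how you will verify the answer first.")
        return (
            f"My current understanding: {self.what_i_already_know}\n"
            f"Your role: {self.mode} my thinking, do not simply solve it.\n"
            f"Question: {question}"
        )
```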

Layered on top of that, I can borrow from Hesha Duggirala’s “no autopilot” stance and the learning routines described by technology writers, committing not to accept any AI-generated output that I could not at least partially reconstruct on my own. Used this way, AI becomes a force multiplier for curiosity and rigor instead of a shortcut around them, which is exactly what the Nobel Prize-winning physicist has been urging in his public comments about how to use these tools without letting them do my thinking for me.
