
Try an AI pre-mortem prompt before big decisions to spot risks

Teams racing toward a product launch or a major investment rarely pause to ask what could go wrong. The pre-mortem, a structured exercise that imagines failure before it happens, has been a staple of decision science for nearly two decades. Now, pairing that exercise with a well-crafted AI prompt can surface blind spots faster, but only if the output is treated as a starting point rather than an answer.

What a Pre-Mortem Actually Is

Psychologist Gary Klein first formalized the technique in a 2007 piece for Harvard Business Review. The procedure is simple: a team assumes a project has already failed, then each member independently writes down reasons that could explain the failure. By treating the disaster as a certainty rather than a possibility, participants generate a wider range of explanations than conventional brainstorming typically produces.

Daniel Kahneman, the Nobel laureate who popularized the concept in his book “Thinking, Fast and Slow,” framed the core prompt this way: “The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster.” In a conversation hosted by the Council on Foreign Relations, Kahneman described the pre-mortem as a way to legitimize doubt and make dissent feel “loyal” again near a decision point. That framing matters because most organizations punish skepticism once momentum builds around a plan.

Research on prospective hindsight supports the approach. A study published in the Journal of Behavioral Decision Making tested how certainty and temporal perspective affect the number and type of reasons people generate for an outcome. The widely cited finding is that imagining an event as already having occurred leads people to identify more, and more varied, reasons for it than simply asking what might happen.

Why Confirmation Bias Needs a Structural Fix

The pre-mortem works because it targets specific cognitive failures rather than relying on general caution. A narrative review in a clinical decision-making journal found that the exercise helps teams explicitly list reasons a plan could go badly wrong, which in turn helps fight confirmation bias and groupthink. Telling people to “think critically” does not counter these biases; giving them a concrete protocol does. For critical decisions, organizations can implement the pre-mortem as a formal step in their approval process. The SQ Centre, which develops team performance tools, recommends the technique specifically because it asks team members to imagine a future project failure and then work backward to identify causes; that backward reasoning helps counter optimism bias and encourages broader thinking than forward-looking risk assessments tend to produce.

These structured checks are increasingly discussed alongside broader governance frameworks for AI and analytics. Guidance from the U.S. National Institute of Standards and Technology, including its AI risk management framework, emphasizes that organizations should embed explicit risk identification and mitigation steps into the lifecycle of high-impact systems, rather than relying on ad hoc judgment at the end.

Adding AI to the Exercise

The twist that makes this technique newly relevant is the ability to run a pre-mortem prompt through a large language model. Instead of relying solely on a team’s existing knowledge, a decision-maker can describe a planned initiative to an AI system and ask it to generate a detailed failure narrative. The model draws on patterns from vast training data, which means it can surface risk categories that a small team might overlook, from supply chain disruptions to regulatory changes in unfamiliar markets.

Research on AI-powered scenario planning supports this use case. A study on cognitive bias mitigation in executive decision-making, published in the journal Electronics, found that AI-powered tools can simulate multiple market entry scenarios, including options designed to counter preconceived notions about consumer behavior or competition. When an AI generates failure scenarios that contradict a team’s assumptions, it functions as a structured devil’s advocate, one that does not worry about office politics or career consequences for speaking up.

A practical AI pre-mortem prompt might look like this: “Our company plans to launch [product] in [market] by [date]. Assume the launch failed badly within six months. Write a detailed account of what went wrong, including at least three causes the leadership team likely did not anticipate.” The specificity of the prompt matters. Vague inputs produce vague outputs; detailed context about the team’s plan, budget constraints, and competitive position yields sharper failure scenarios.

Teams can iterate on the prompt to explore different angles: one run focused on operational breakdowns, another on reputational risk, another on regulatory exposure. Over multiple iterations, patterns emerge that can be translated into concrete mitigation steps, such as additional user testing, contingency budgets, or revised rollout sequencing.
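To make the iteration concrete, here is a minimal Python sketch of how a team might template the pre-mortem prompt and vary it across risk angles. Everything in it is illustrative rather than prescriptive: the function name, the risk-angle list, and the commented-out call_llm hook are assumptions for this article, not part of any vendor’s API. The sketch only builds and prints the prompts; sending them goes through whatever model the team already uses.

```python
# Minimal sketch: build AI pre-mortem prompts across several risk angles.
# The prompt wording follows the template quoted above; the names below
# (build_premortem_prompts, RISK_ANGLES, call_llm) are illustrative.

PROMPT_TEMPLATE = (
    "Our company plans to launch {product} in {market} by {date}. "
    "Assume the launch failed badly within six months. "
    "Write a detailed account of what went wrong, focusing on {angle}, "
    "including at least three causes the leadership team likely did not anticipate."
)

RISK_ANGLES = [
    "operational breakdowns",
    "reputational risk",
    "regulatory exposure",
]


def build_premortem_prompts(product: str, market: str, date: str) -> list[str]:
    """Return one pre-mortem prompt per risk angle."""
    return [
        PROMPT_TEMPLATE.format(product=product, market=market, date=date, angle=angle)
        for angle in RISK_ANGLES
    ]


if __name__ == "__main__":
    for prompt in build_premortem_prompts(
        product="a subscription analytics dashboard",
        market="the German mid-market segment",
        date="Q3 next year",
    ):
        print(prompt, end="\n\n")
        # In practice each prompt would be sent to the team's chosen model,
        # e.g. narrative = call_llm(prompt)  # hypothetical hook, not a real API
```

Filling in more context per run, such as budget constraints or competitive position, tends to sharpen the resulting failure narratives, in line with the point about specificity above.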

The Hallucination Problem

Here is where much coverage of AI-assisted decision-making stops too soon. Language models do not reason about risk the way experienced executives do. They predict plausible text sequences, which means they can generate failure scenarios that sound convincing but rest on fabricated facts. OpenAI has described hallucination as a persistent failure mode: the same mechanisms that make language models fluent also make them prone to inventing details when information is missing or ambiguous. An AI pre-mortem prompt can therefore produce risks that do not actually apply to a given business, cite nonexistent regulations, or misstate how a particular technology works. If a leadership team treats those outputs as authoritative, the exercise can waste time or even push decisions in the wrong direction. The danger is not just false information; it is misplaced confidence in a narrative that was never checked against reality.

Regulators and standards bodies have begun to emphasize this distinction between assistive use and automated decision-making. The National Institute of Standards and Technology’s broader work on information security and trustworthy systems, accessible through its computer security resources, underscores the need for human oversight, documentation, and validation whenever automated tools influence high-stakes outcomes.

Designing a Safe AI Pre-Mortem

To get the benefits of AI-augmented pre-mortems without importing new risks, organizations can follow a few design principles.

First, separate generation from evaluation. One subgroup can craft the prompt and collect AI-generated failure narratives; another, ideally including domain experts, can vet each scenario. The second group’s job is to tag items as “plausible and important,” “plausible but already mitigated,” or “implausible or factually wrong.” This preserves the creative breadth of the model while filtering out hallucinations.

Second, require evidence for specific claims. If the AI narrative asserts that a new data privacy regulation will block a launch, the team should ask for citations or independently verify the claim. Treat unreferenced specifics as hypotheses to investigate, not facts to act on.

Third, bake the AI pre-mortem into existing governance rather than running it as a novelty workshop. For example, a product council might require a documented pre-mortem, including any AI-assisted scenarios and the team’s responses, before approving major funding. That record becomes part of the project’s risk file, available for later audits or post-mortems if things do go wrong.

Finally, keep humans in the loop for value judgments. An AI system can suggest that a particular failure mode is possible, but only leaders can decide whether that risk is acceptable, which trade-offs to make, and how to communicate them. The goal is not to outsource judgment; it is to widen the field of view before judgment is exercised.
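As a rough illustration of the generation-versus-evaluation split, the sketch below tags AI-generated failure scenarios with the three labels described above and keeps only those that are judged important and backed by verified evidence. The data structures and names are hypothetical conveniences for this article, not a prescribed tool or standard.

```python
# Illustrative sketch of the vetting step: domain experts tag each
# AI-generated scenario and record whether its specific claims were verified.
from dataclasses import dataclass, field
from enum import Enum


class Tag(Enum):
    PLAUSIBLE_IMPORTANT = "plausible and important"
    PLAUSIBLE_MITIGATED = "plausible but already mitigated"
    IMPLAUSIBLE_OR_WRONG = "implausible or factually wrong"


@dataclass
class Scenario:
    summary: str                                        # one-line failure mode
    tag: Tag                                            # expert judgment from review
    evidence: list[str] = field(default_factory=list)   # citations or verification notes


def actionable(scenarios: list[Scenario]) -> list[Scenario]:
    """Keep scenarios worth mitigating: judged important and backed by evidence."""
    return [s for s in scenarios if s.tag is Tag.PLAUSIBLE_IMPORTANT and s.evidence]


if __name__ == "__main__":
    reviewed = [
        Scenario("Key supplier cannot meet launch volumes",
                 Tag.PLAUSIBLE_IMPORTANT,
                 ["supplier capacity report, checked by ops lead"]),
        Scenario("New privacy regulation blocks the launch",
                 Tag.IMPLAUSIBLE_OR_WRONG),  # cited regulation could not be verified
    ]
    for s in actionable(reviewed):
        print(s.summary)
```

The filter encodes the discipline more than the mechanics: unreferenced specifics stay out of the project’s risk file until someone has checked them, which mirrors the “require evidence” principle above.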

From Thought Experiment to Operating Norm

Used well, the pre-mortem is more than a clever facilitation trick. It is a structural counterweight to the optimism, social pressure, and narrow framing that often accompany major bets. Adding AI to the exercise can amplify those benefits by surfacing a broader menu of ways things might break, especially in complex or unfamiliar environments. But the same capabilities that make language models powerful brainstorming partners also make them unreliable narrators. Organizations that embrace AI-augmented pre-mortems need to pair them with disciplined evaluation, clear governance, and a culture that treats dissent as a contribution rather than a threat. When those elements are in place, imagining failure in advance becomes not an exercise in pessimism, but a practical way to give ambitious plans a better chance of succeeding.

*This article was researched with the help of AI, with human editors creating the final content.