Steve Jobs built Apple’s product culture around a deceptively simple formula: spend the first 10 percent of any project setting the vision, let a talented team handle the middle 80 percent of execution, then return for the final 10 percent to refine and approve. Generative AI was a distant prospect when Jobs ran that playbook, but recent field research on AI tools suggests the rule is more relevant now than when he first practiced it. As one analysis of the 10-80-10 pattern argues, the reason is straightforward: as AI absorbs more of the execution layer, the human bookends of direction-setting and quality judgment become the only reliable source of competitive advantage.
How Jobs Actually Used the Rule
The clearest primary account of Jobs’s working method comes from David Kelley, founder of the design firm IDEO, who collaborated with Apple on early hardware projects. In an oral history at Stanford, Kelley recalled that “Jobs was involved in the early stage… then you’d bring him something at the end.” Between those two phases, the team operated with significant autonomy. Jobs set the emotional and functional target, disappeared during production, and came back to stress-test the result against his original intent.
This was not casual delegation. In a separate Smithsonian interview, Jobs explained why he obsessed over talent selection before handing off execution. He described the dynamic range between a good software person and a great one as enormous, far wider than in most other fields. The logic was that if you staffed the middle 80 percent with exceptional people and gave them a clear target, you could trust the execution phase. The leader’s job was to own the boundaries: the opening vision and the closing standard.
Jobs’s pattern also depended on him being unusually clear about what he wanted. Former collaborators have described him sketching product experiences in emotional terms (how a device should feel in the hand, or what a customer should sense in the first 30 seconds of use), rather than dictating technical specifications. That left room for the team to experiment, but it also created a sharp test at the end: did the thing in front of him evoke the feeling he had described at the beginning?
AI Now Occupies the Middle 80 Percent
What Jobs delegated to elite engineers, many organizations now delegate, at least in part, to generative AI. Research by Eloundou, Manning, Mishkin, and Rock estimated that roughly 80 percent of the U.S. workforce could see at least 10 percent of their work tasks affected by large language models, with about one in five workers exposed on half or more of their tasks. Their analysis mapped the kinds of work that LLMs can accelerate or replace, and the bulk of it falls squarely into routine execution: drafting, coding, summarizing, analyzing data, and generating first versions of creative work. That is the middle 80 percent in Jobs’s framework.
Field evidence from call-center work points in the same direction. A study by Brynjolfsson, Li, and Raymond titled “Generative AI at Work” found that AI assistance raised the number of customer issues resolved per hour by about 14 percent on average, with strong heterogeneity across skill levels. Less-experienced workers saw the biggest improvements, on the order of 34 percent. A related summary from MIT Sloan noted that AI disproportionately helps novice and lower-skill workers, effectively raising the floor of execution quality without much input from senior leaders.
Newer experiments extend this picture beyond customer support. Recent work on AI-augmented knowledge tasks finds that generative tools speed up routine analysis, document drafting, and coding, while leaving the highest-level problem formulation largely in human hands. Across these studies, AI shows up as a force multiplier in the middle of the workflow: it makes it faster and cheaper to turn a clear brief into a decent draft, but it does not decide which brief to write.
That shift carries a direct implication for managers. If AI can bring a junior employee’s output closer to a senior employee’s baseline, then the execution gap that once required hands-on mentorship shrinks. The scarce resource moves from “who can do the work” to “who can define what the work should accomplish and whether the finished product meets that standard.”
The Risk of Skipping the Bookends
The temptation with any powerful tool is to hand over more than just execution. Some technology leaders have warned that overreliance on AI could erode the very skills that make people valuable. Their proposed safeguard echoes Jobs’s rhythm: start with 10 percent human thinking, let AI handle the middle 80 percent, and finish with 10 percent human judgment. Workers who skip the first and last steps (letting AI define the problem and accepting its output without scrutiny) risk gradually losing the critical thinking that differentiates their contributions.
Controlled experimental evidence supports this worry. A study in Science examined how access to ChatGPT affected performance on professional writing tasks. Participants with the tool produced higher-quality work and completed it faster, but the gains depended on how they engaged with the system. When people treated the model as an infallible oracle, they were more likely to accept subtle errors and less likely to practice the underlying skills themselves. The pattern resembles what happens when GPS navigation erodes a driver’s sense of direction: the tool works perfectly until the moment it does not, and by then the human competence has atrophied.
In organizational settings, that erosion shows up as a loss of institutional judgment. Teams that default to AI for brainstorming risk converging on the same generic ideas as everyone else using similar tools. Teams that rely on AI to decide which metrics to optimize may end up chasing what is easy to measure rather than what actually matters to customers. In both cases, skipping the bookends (vision and evaluation) turns a powerful amplifier into a homogenizing force.
Why Vision-Setting Cannot Be Automated
Most coverage of AI and productivity focuses on speed gains and cost savings. That framing misses the structural point Jobs identified decades ago. The first 10 percent of any project, the phase where someone defines what “done” looks like and communicates the emotional and functional target, requires taste, context, and judgment that generative models do not possess. AI can generate a hundred variations of a product description, but it cannot decide which product to build or why it matters to a specific audience at a specific moment.
Entrepreneur Dan Martell has described the first 10 percent as the phase where leaders set the vision and define “done” in language that teams can act on. That description maps directly onto Jobs’s behavior at Apple, where he would sketch the experience he wanted a product to deliver and then let the team figure out how to get there. The “how” is increasingly AI-assisted. The “what” and “why” remain stubbornly human.
Vision-setting also involves trade-offs across time horizons and stakeholders that current models are not equipped to own. Choosing to prioritize long-term brand trust over short-term click-through, for example, requires a view of relationships, reputation, and risk that goes beyond pattern prediction. Even when AI can simulate arguments for different options, someone has to choose, and be accountable for, the path.
Reclaiming the Final 10 Percent
If the first 10 percent belongs to human vision and the middle 80 percent is increasingly automated, the last 10 percent (the phase Jobs reserved for himself) becomes the critical leverage point. In an AI-heavy workflow, that closing segment should include three disciplines: ruthless editing, explicit learning, and feedback into the next brief.
Ruthless editing means treating AI output as raw material, not a finished product. That involves checking facts, aligning tone with the original intent, and pruning anything that feels generic. Explicit learning means noticing where the model’s suggestions surprised you, where they fell flat, and what that says about your own assumptions. Feeding those insights back into the next project’s opening brief tightens the loop between human intent and machine execution.
Jobs’s 10-80-10 rule was never just a time allocation; it was a philosophy about where human attention matters most. In an era when AI can increasingly fill the middle, the leaders and teams who win will be those who double down on the bookends, crafting sharper visions at the start and applying more discerning judgment at the end. The machines will get better at doing what they are told. The hard, irreplaceable work will be deciding what is worth doing in the first place, and knowing, with conviction, when it is finally good enough to ship.
*This article was researched with the help of AI, with human editors creating the final content.