Software developers who spent years mastering their craft are now watching AI coding assistants rewrite the rules of their profession, and the evidence on whether those tools actually help is far from settled. A randomized controlled trial with 96 Google engineers found AI features cut task time by roughly 21%, while a separate experiment with experienced open-source developers found AI access actually slowed them down by 19%. That contradiction sits at the heart of a growing professional anxiety: if the tools are inconsistent, how should a coder plan a career around them?
The Speed Gains That Started the Panic
The fear that coding skills are losing value did not emerge from speculation. It grew from real productivity numbers that made AI assistants look like they could replace years of training overnight. In a controlled experiment where developers were asked to build an HTTP server in JavaScript, those using GitHub Copilot completed the task 55.8% faster than the control group. That kind of result, on a straightforward greenfield task, is exactly the type of headline number that makes a mid-career programmer question whether their debugging instincts and architectural knowledge still carry weight.
A separate enterprise-scale experiment reinforced the narrative, though with less dramatic results. A randomized controlled trial involving 96 full-time Google software engineers estimated that AI features shortened time-on-task by approximately 21% on a complex enterprise-grade assignment. The confidence interval was wide, meaning the true average benefit could be substantially larger or smaller than 21%. Still, even a modest average gain across a workforce of that size translates into real pressure on hiring decisions, promotion criteria, and how companies value deep technical expertise versus the ability to prompt an AI tool effectively.
When AI Made Experienced Developers Slower
The identity crisis sharpens when the same class of tools produces the opposite outcome under different conditions. A randomized controlled trial tracked 16 experienced open-source developers across 246 real tasks, randomly assigning each task to either an AI-allowed or AI-disallowed condition. The developers used Cursor Pro with Claude 3.5 Sonnet and Claude 3.7 Sonnet, tools widely considered among the best available. Despite predicting beforehand that AI would speed them up, and even believing afterward that it had, these developers took 19% longer to finish tasks when AI was permitted.
That result deserves careful interpretation rather than dismissal. These were not novices struggling with unfamiliar tools. They were experienced contributors working on their own open-source projects, codebases they already knew well. The slowdown suggests that for developers with deep familiarity with a project, the overhead of reviewing, correcting, and integrating AI-generated code can outweigh any drafting speed advantage. The finding directly challenges the assumption that AI tools deliver universal productivity gains regardless of context, skill level, or task complexity, and the study's detailed breakdown underscores how consistently the slowdown appeared across varied tasks.
Why the Data Keeps Contradicting Itself
METR, the research organization behind the open-source developer experiment, published an update acknowledging several design complications that affected its results. Selection effects played a role: some developers were unwilling to participate if it meant working without AI, which may have skewed the sample toward people already dependent on these tools. A pay-rate change during the study and measurement issues tied to concurrent AI-agent usage further muddied the picture. These are not minor footnotes. They point to a structural problem in AI productivity research: the people most eager to use the tools may not be representative of the broader developer population, and measuring “time on task” when an AI agent runs in the background raises questions about what counts as work.
The Google trial and the Copilot experiment, by contrast, tested more contained scenarios. Building an HTTP server from scratch or completing a defined enterprise task is a bounded problem with clear start and end points. Real-world software development rarely works that way. Codebases accumulate years of decisions, edge cases, and undocumented behavior. When an AI assistant generates plausible-looking code that misses a subtle project-specific constraint, the developer spends time diagnosing a problem that would not have existed without the tool. The gap between lab-condition speed gains and messy real-world outcomes is where the identity crisis lives, and it explains why some engineers feel faster with AI on toy problems yet slower when the stakes and complexity rise.
What Developers Actually Feel About AI
Population-level sentiment data adds texture to these experimental results. The 2025 Stack Overflow Developer Survey, whose published methodology details its sampling and analysis approach, captured shifting attitudes toward AI adoption across a broad developer population. The survey provides quantified baselines on who uses AI tools, how much developers trust their output, and how the workforce feels about its own future. The signals point to widespread adoption paired with persistent skepticism about accuracy, a combination that mirrors the contradictory experimental findings.
That mix of adoption and doubt is itself a defining feature of the current moment. Developers are not rejecting AI tools. They are using them while quietly worrying that the tools are training their replacements, or that management will interpret AI-assisted speed gains as evidence that fewer engineers are needed. The survey data suggests the anxiety is not concentrated among junior developers who fear being outpaced. It runs through the profession, touching experienced practitioners who see their judgment and taste being reduced to a prompt-engineering exercise. For many, the concern is less “Will I be replaced by AI?” and more “Will my role be reshaped into something narrower, more surveilled, and less creative than the craft I signed up for?”
Skills Are Not Obsolete, but the Job Is Changing
The most defensible reading of the available evidence is that AI accelerates routine coding tasks, particularly greenfield work and well-defined assignments, while introducing friction in complex, context-heavy projects where deep system knowledge matters most. The 55.8% speed gain on a standalone JavaScript task and the 21% improvement on enterprise work both involved relatively bounded problems. The 19% slowdown among experienced open-source contributors occurred on real tasks within codebases those developers already understood. The pattern suggests that AI tools are most useful precisely where developer expertise matters least, and least useful where it matters most.
That distinction has direct career implications. Developers whose primary value comes from writing boilerplate or stitching together standard components are closest to the tasks where AI shines. Their work is the easiest to automate, or at least to compress into fewer roles. By contrast, engineers who specialize in understanding legacy systems, navigating cross-team dependencies, or making judgment calls under uncertainty are operating in the zones where AI is most likely to hallucinate, miss edge cases, or overlook social and organizational context. For them, AI becomes less a replacement and more a fallible collaborator whose suggestions must be filtered through hard-won experience.
Planning a career around that reality means treating AI fluency as necessary but not sufficient. The evidence does not support the idea that traditional skills (debugging, architecture, careful code review) are obsolete. It suggests those skills are being redeployed. Instead of hand-writing every function, senior developers may spend more time designing interfaces, specifying constraints, and stress-testing AI-generated patches against nonobvious failure modes. Juniors, meanwhile, may be expected to ramp up faster by leaning on assistants but will still need to learn how to detect subtle bugs and misleading suggestions those tools introduce. The contradiction in the research is not a sign that software development is disappearing. It is a sign that the profession is being reorganized around a new division of labor between humans and machines.
*This article was researched with the help of AI, with human editors creating the final content.