
Corporate leaders are racing to showcase artificial intelligence as a productivity engine, promising faster workflows, leaner teams, and new revenue streams. Inside many of those same companies, however, workers are discovering that the tools meant to “augment” their roles can also strip away judgment, autonomy, and craft, leaving them as supervisors of systems they barely control.
I see a widening gap between the upbeat language in earnings calls and the more ambivalent reality on the ground, where some employees are being nudged into lower-skill, more tightly monitored tasks even as their employers celebrate AI-driven gains. That tension, and the risk that automation quietly hollows out expertise instead of elevating it, now sits at the center of the workplace AI boom.
CEOs promise an AI productivity revolution
Across sectors, chief executives are presenting AI as a straightforward efficiency upgrade, a way to do more with the same or smaller headcount. In earnings briefings and investor presentations, they highlight copilots that summarize documents, chatbots that handle customer queries, and recommendation engines that fine-tune pricing or logistics, all framed as levers for higher margins and faster growth. The message is consistent: AI is not just a technology experiment but a core business strategy meant to reshape how work gets done and how value is extracted from data.
Those claims are backed by concrete deployments, from generative tools embedded in office suites to specialized models tuned for finance, retail, and healthcare, where executives describe measurable gains in throughput and response times. Many leaders emphasize that these systems are already influencing staffing and investment decisions, with AI pilots moving quickly into production once they show even modest improvements in key metrics such as ticket resolution time or sales conversion, a pattern reflected in multiple enterprise AI rollouts and adoption surveys.
Inside the new AI-powered workflows
Behind the upbeat forecasts, the texture of day-to-day work is changing in ways that are more complicated than a simple productivity boost. In many offices, AI tools now sit between workers and the tasks they used to perform directly, generating draft emails, code snippets, marketing copy, or support responses that employees are expected to review and approve. That shift turns a growing share of knowledge work into a kind of quality control, where the human role is to correct or lightly edit machine output rather than originate it from scratch.
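In code terms, the structural change is easy to see: a task a worker once performed directly becomes a review gate wrapped around a model call. A minimal sketch in Python, with hypothetical function and type names standing in for whatever drafting model and review interface a given company actually deploys:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ReviewDecision:
    approved: bool
    edited_text: str  # the worker's correction if the draft is not sent as-is

def handle_task(
    task: str,
    draft_with_model: Callable[[str], str],         # the model originates the text
    review_draft: Callable[[str], ReviewDecision],  # the human performs quality control
) -> str:
    """AI-mediated workflow: the worker no longer writes the output,
    only approves or lightly edits what the model produced."""
    draft = draft_with_model(task)
    decision = review_draft(draft)
    return draft if decision.approved else decision.edited_text
```

The human contribution is confined to the final branch, which is precisely the quality-control role workers describe.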
Early field studies of generative systems in customer support and software development show that these tools can speed up routine tasks, especially for less experienced staff, but they also encourage standardization and reliance on templates. Workers describe spending more time checking AI suggestions for errors or bias and less time exercising their own judgment, a pattern that appears in research on AI-assisted call centers and code generation tools, where performance gains are often largest for novices while experts see smaller benefits and sometimes even friction.
Why some experts call this “deskilling”
For critics, the risk is not simply that AI will replace jobs outright, but that it will erode the skills embedded in the jobs that remain. When complex tasks are broken into smaller, more scripted steps that can be partially automated, the people left in the loop may handle narrower slices of the work, with less opportunity to practice the full craft. Over time, that can weaken both individual expertise and the organization’s capacity to handle unusual or high-stakes situations that fall outside the patterns the system has learned.
Scholars of labor and technology have long used the term “deskilling” to describe this process, and they see echoes of earlier automation waves in the way AI is being deployed today. Studies of algorithmic management in warehouses, ride-hailing, and content moderation show how software can centralize decision-making while pushing workers into more repetitive, tightly monitored roles, a dynamic that newer research on platform work and algorithmic management now connects to AI-driven tools that script and score white-collar tasks as well.
Customer service: faster responses, thinner skills
Customer support is one of the clearest test beds for this tension between efficiency and expertise. Many companies now route incoming queries through AI systems that suggest answers, surface relevant documentation, or even draft full replies that agents can send with minimal edits. On paper, that setup boosts productivity, since a single worker can handle more tickets per hour and maintain consistent tone and policy adherence across interactions.
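A simplified sketch of how such a pipeline is typically wired, using hypothetical names for the retrieval and drafting components rather than any specific vendor's API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAssist:
    suggested_reply: str
    supporting_docs: list[str]

def assist_agent(
    query: str,
    search_docs: Callable[..., list[str]],  # stand-in for a documentation retrieval system
    draft_reply: Callable[..., str],        # stand-in for a language model
) -> AgentAssist:
    """Typical AI-guided support flow: surface relevant documentation,
    draft a sendable reply, and hand both to the agent for a quick review."""
    docs = search_docs(query, limit=3)
    reply = draft_reply(query, context=docs)
    return AgentAssist(suggested_reply=reply, supporting_docs=docs)
```

What remains for the agent is often a single accept-or-edit decision per ticket, which is where the per-hour productivity gains come from.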
Yet the same systems can narrow the skill set required for the job, especially when performance metrics reward speed and adherence to suggested scripts. Research on AI-guided call centers finds that while less experienced agents often see large jumps in measured productivity, they also lean heavily on the model’s recommendations, which can reduce the incentive to deeply understand the product or develop nuanced problem-solving skills. Field experiments document the same dynamic, showing how AI guidance reshaped both how agents approached calls and how managers evaluated their performance.
Software development: copilots and code comprehension
In software engineering, AI coding assistants promise to eliminate boilerplate and accelerate routine tasks, letting developers focus on architecture and design. Tools that autocomplete functions, generate tests, or translate between languages are now woven into popular integrated development environments, and many engineering leaders report that teams feel faster and more responsive when they adopt these systems. The narrative is that AI will handle the grunt work while humans tackle the hard problems.
However, there is growing concern that heavy reliance on generated code can weaken developers’ understanding of the systems they maintain, especially for junior engineers who are still building foundational skills. Studies of AI-assisted programming show that while these tools can increase output, they also encourage copy-paste patterns and can introduce subtle bugs or security issues that are hard to catch without deep comprehension. Evaluations of code generation models and analyses of developer behavior with copilots underline the risk, tracking how often suggestions are accepted without thorough review.
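The worry is concrete: generated code frequently looks idiomatic while hiding defects that a reviewer skimming for style will miss. A constructed Python example, not drawn from any specific tool's output, of the kind of subtle bug researchers describe:

```python
# Plausible-looking code with a classic subtle defect: the mutable default
# argument is created once and shared across every call, so state leaks
# between records that should be independent.
def collect_tags(record: dict, tags: list = []):  # bug: one shared list for all calls
    tags.extend(record.get("tags", []))
    return tags

# Correct version: create a fresh list on each call.
def collect_tags_fixed(record: dict, tags: list | None = None):
    tags = list(tags) if tags is not None else []
    tags.extend(record.get("tags", []))
    return tags

print(collect_tags({"tags": ["a"]}))        # ['a']       looks fine on first use
print(collect_tags({"tags": ["b"]}))        # ['a', 'b']  state from the first call leaked
print(collect_tags_fixed({"tags": ["b"]}))  # ['b']       independent, as intended
```

The buggy version passes a casual one-off test, which is exactly why fast, unexamined acceptance of suggestions worries researchers.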
Data work and the hidden labor behind AI
Far from the spotlight of executive keynotes, a large share of AI’s apparent intelligence depends on people performing repetitive labeling, moderation, and review tasks. These workers tag images, transcribe audio, rate chatbot responses, and filter harmful content so that models can be trained and refined. Their jobs are often fragmented into micro-tasks, with pay tied to piecework and little room for skill development beyond learning platform quirks and speed-optimizing tricks.
Researchers who study this “ghost work” argue that it represents a form of extreme deskilling, where human judgment is sliced into tiny, standardized decisions that can be priced and managed at scale. Investigations into crowdwork platforms and content moderation pipelines show how these roles are essential to AI performance yet structurally designed to be interchangeable, with limited training, high turnover, and few pathways into more stable, higher-skill positions inside the companies that benefit from their labor.
AI as a management tool, not just a co-worker
Even when AI is framed as a digital assistant, it often doubles as a management system that tracks and scores how people use it. In many workplaces, the same tools that generate suggestions also log keystrokes, measure response times, and feed dashboards that supervisors use to compare employees against one another. That data can shape performance reviews, promotion decisions, and even terminations, tightening managerial control over how tasks are performed.
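That dual role is visible in how the systems are plumbed: the same event stream that powers suggestions also feeds comparison dashboards. A schematic sketch, with invented field names, of how assistant telemetry commonly becomes a supervisor-facing score:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical telemetry emitted by an AI assistant: every event doubles
# as product data and as per-employee monitoring data.
events = [
    {"worker": "a01", "seconds_to_respond": 42, "followed_suggestion": True},
    {"worker": "a01", "seconds_to_respond": 95, "followed_suggestion": False},
    {"worker": "b02", "seconds_to_respond": 31, "followed_suggestion": True},
]

def dashboard(rows: list[dict]) -> dict:
    """Aggregate per-worker speed and script adherence, the two numbers
    that most often end up on a supervisor's comparison screen."""
    by_worker = defaultdict(list)
    for row in rows:
        by_worker[row["worker"]].append(row)
    return {
        worker: {
            "avg_response_s": mean(r["seconds_to_respond"] for r in rs),
            "adherence_rate": mean(r["followed_suggestion"] for r in rs),
        }
        for worker, rs in by_worker.items()
    }

print(dashboard(events))  # workers ranked against metrics they did not help design
```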
Studies of algorithmic management in logistics, delivery, and office environments show that these systems can reduce discretion on the front lines, since workers know their every action is being recorded and evaluated by metrics they did not help design. Analyses of AI-driven monitoring and workplace analytics describe how this shift can make jobs feel more like executing instructions for a machine than collaborating with a tool, reinforcing the sense that expertise matters less than compliance with the system’s expectations.
Who benefits most from AI augmentation
One of the more nuanced findings in early workplace AI research is that the benefits of augmentation are unevenly distributed. In several field experiments, less experienced workers see the largest productivity gains when given AI tools, because the systems effectively encode the tacit knowledge of top performers and make it available on demand. That can narrow performance gaps and help newer employees ramp up faster, which is part of what excites executives about deploying these tools at scale.
At the same time, those same studies suggest that highly skilled workers may gain less and sometimes even feel constrained by AI systems that push them toward standardized approaches. In the call center experiment on AI guidance, for example, the biggest improvements came from agents with less tenure, while veterans saw smaller changes, raising questions about whether the technology is primarily a ladder for novices or a ceiling on expert autonomy. Similar patterns appear in research on AI coding tools, where junior developers often lean heavily on suggestions while senior engineers spend more time correcting or working around them.
Safeguards that can protect skills, not just jobs
If AI is going to reshape work without hollowing out expertise, companies will need to design safeguards that focus on preserving and growing skills, not only on preventing layoffs. That starts with how tools are integrated into workflows: instead of routing every task through an automated system by default, organizations can deliberately reserve complex or ambiguous cases for human-led handling, using AI as a reference rather than a gatekeeper. Training programs can also be reoriented so that workers learn not just how to operate the tools, but how to question their outputs and understand their limits.
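One way to express that safeguard in system design is a routing rule that inverts the default: instead of sending everything through the model and asking humans to catch failures, ambiguous or high-stakes cases stay human-led from the start. A minimal sketch, assuming the model exposes some confidence signal (many deployed systems do, though its reliability varies) and using purely illustrative thresholds:

```python
from enum import Enum

class Route(Enum):
    HUMAN_LED = "human_led"  # worker originates; AI available as a reference
    AI_DRAFT = "ai_draft"    # AI drafts; worker reviews before sending

# Illustrative values only; real thresholds would be set and audited per task.
CONFIDENCE_FLOOR = 0.85
HIGH_STAKES_TOPICS = {"legal", "medical", "account_closure"}

def route_case(topic: str, model_confidence: float) -> Route:
    """Reserve complex or ambiguous work for human-led handling rather
    than making the automated path the default for everything."""
    if topic in HIGH_STAKES_TOPICS or model_confidence < CONFIDENCE_FLOOR:
        return Route.HUMAN_LED
    return Route.AI_DRAFT
```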
Policy debates are beginning to reflect this shift from job counts to job quality, with proposals that would require transparency about algorithmic management, give workers a say in how monitoring data is used, and support continuous learning as roles evolve. Reports on AI at work and algorithmic oversight argue that preserving human judgment in the loop is not only a fairness issue but also a resilience strategy, since organizations that deskill too aggressively may find themselves vulnerable when systems fail or conditions change in ways the models did not anticipate.
Rethinking what “AI success” looks like
For now, many executives still measure AI success primarily in terms of speed, volume, and cost savings, metrics that naturally favor automation and standardization. If those are the only numbers that matter, the quiet erosion of skills inside AI-mediated jobs can look like progress, at least on a spreadsheet. The challenge is that such gains may be fragile, especially in environments where trust, creativity, and deep domain knowledge are essential to long-term performance.
A more durable approach would treat AI as a tool for amplifying human capability rather than compressing it, with success metrics that track not just output but also learning, autonomy, and the ability to handle edge cases. Emerging research on AI adoption in offices and enterprise strategies suggests that organizations that invest in upskilling, involve workers in tool design, and keep humans visibly responsible for key decisions are better positioned to capture the upside of AI without hollowing out the very expertise they depend on.
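Concretely, that means widening the scorecard. A sketch of what a broader set of success metrics could look like as a simple data structure, with illustrative fields rather than any established standard:

```python
from dataclasses import dataclass

@dataclass
class AISuccessMetrics:
    # Conventional efficiency measures that dominate today's dashboards.
    tasks_per_hour: float
    cost_per_task: float
    # Skill-preserving measures to track alongside them.
    edge_cases_resolved_by_humans: int  # is hard-case expertise still being exercised?
    suggestion_override_rate: float     # are workers still applying their own judgment?
    training_hours_per_worker: float    # is the organization reinvesting in skills?
```

None of these fields is hard to log; the point of the sketch is that what gets measured is a design choice, not a technical constraint.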