Image Credit: TechCrunch - CC BY 2.0/Wiki Commons

Engineers inside one of the world’s most closely watched AI labs are quietly warning that the systems they are building could hollow out the very jobs that launched their own careers. Their internal research, paired with increasingly blunt public comments from Anthropic’s leadership, sketches a future in which entry-level white-collar work is radically reshaped, and in many cases simply removed, by tools like Claude.

I see a widening gap between the upbeat marketing of productivity gains and the more anxious reality inside the labs, where Anthropic’s own staff describe delegating chunks of their work to AI while their executives talk openly about eliminating a large share of junior roles. That tension, between efficiency and displacement, is now at the center of the company’s story and of the broader debate over how far and how fast AI should be allowed to transform work.

Inside Anthropic’s experiment on its own engineers

The most revealing window into this shift comes from Anthropic’s decision to turn its workforce into a kind of living laboratory. The company ran a structured survey of its own technical staff, asking how AI was changing their day-to-day work and how much of their output they could already hand off to Claude. By treating its engineers as research subjects rather than just builders, Anthropic effectively turned internal anxiety about automation into data.

According to that internal research, the startup gathered survey responses from 132 engineers and conducted 53 detailed interviews, then cross-checked those views against usage logs for Claude to see how often staff were actually leaning on the system. The Anthropic employees reported that they could “fully delegate” between 0 and 20 percent of their work to Claude, especially the more routine or easily specified tasks that once filled junior engineers’ days, a pattern that hints at how quickly entry-level responsibilities can be carved away when a capable assistant is always on call.

Engineers say Claude is already taking over the easy work

What stands out in the internal findings is not just that engineers are using AI, but which parts of their jobs they are most comfortable handing off. The Anthropic staff described Claude as particularly effective at boilerplate coding, documentation, and first-draft analysis, the kinds of assignments that traditionally go to new hires who are still learning the ropes. When those tasks are automated, the ladder into the profession starts to look much shorter and steeper.

In the report, the Anthropic engineers said they could fully delegate 0-20% of their workload to Claude, especially “easily specified” tasks that do not require much context or creativity. They also described the system as a kind of always-available colleague, one that never gets tired of reviewing code or drafting variations on the same memo, which makes it an obvious substitute for the repetitive assignments that once justified hiring extra junior staff in the first place.

The CEO who keeps warning about entry-level jobs

Inside the company, those findings are not being treated as a curiosity; they are feeding into a much broader warning campaign from the top. The CEO of Anthropic has been unusually direct about the risk that AI will gut the bottom rungs of white-collar work, arguing that the same tools that make his engineers more productive will also make it easier for employers to do more with fewer people. His message is that this is not a distant scenario; it is already starting to unfold.

Earlier this year, Anthropic CEO Dario Amodei warned that AI could cut entry-level jobs in law, finance, and consulting, and that the impact would fall heavily on US white-collar workers who rely on those first roles to build careers. In a separate set of remarks, he went further, saying AI could eliminate half of all entry-level white-collar jobs and urging policymakers to act now to protect the nation from a shock to its professional class.

A $183 billion startup that says it must “warn the world”

Those comments carry extra weight because of the scale of the company making them. Anthropic is no longer a scrappy research outfit; it is one of the most valuable AI startups on the planet, with its flagship Claude models embedded in everything from customer service chatbots to office productivity suites. When a firm of that size says its own products are likely to replace jobs, it is effectively sounding an alarm about the trajectory of the entire industry.

In a widely discussed interview, the CEO of the $183 billion AI startup said there is a need to warn the world about AI taking jobs, framing job loss not as a side effect but as a central risk of the technology. He stressed that as systems become more capable, they will not just assist workers but in many cases substitute for them, especially in roles that revolve around predictable analysis, drafting, or pattern recognition, which is exactly the kind of work Claude is already handling for Anthropic’s own teams.

Anthropic’s research shows productivity gains and rising anxiety

From my perspective, what makes Anthropic’s internal study so striking is the mix of enthusiasm and unease it captures. Engineers report that AI tools make them faster and more effective, yet they also worry that the same tools could erode their skills or make their roles redundant. That duality, excitement about what Claude can do and fear about what it might replace, runs through the findings.

The company’s survey of 132 engineers and 53 interviews found that staff were not only using Claude heavily but also voicing concern about losing skills and jobs as they leaned on the system more. Many described a subtle shift in their work, where they now orchestrate and review AI-generated output instead of crafting everything from scratch, a change that boosts short-term productivity but raises longer-term questions about how new engineers will ever get the practice they need to become experts.

Why entry-level white-collar roles are in the crosshairs

When I look across these data points and executive warnings, a clear pattern emerges: entry-level white-collar roles are structurally exposed to automation in a way that senior positions are not. Junior staff in law, finance, and consulting spend much of their time on research, drafting, and standardized analysis, all of which can be broken into discrete tasks that Claude-style systems handle well. As those tasks are automated, the economic case for hiring large cohorts of new graduates weakens.

That is why Anthropic CEO Dario Amodei’s warning that AI could eliminate half of all entry-level white-collar jobs lands with such force. It is not a generic fear about “the future of work”; it is a targeted prediction about the first rung on the professional ladder. If that rung is sawed off, the impact will ripple outward, affecting who gets trained, who advances, and how companies think about building talent pipelines in the first place.

Not everyone agrees AI job loss will be so severe

There is, however, a sharp debate inside the tech industry about how dire these forecasts really are. Some leaders argue that focusing on job destruction misses the potential for AI to create new categories of work and to amplify human capability rather than replace it. They see tools like Claude as the next generation of calculators or spreadsheets, disruptive but ultimately additive.

That tension surfaced publicly when Nvidia CEO Jensen Huang criticized Anthropic over its grim job loss predictions, suggesting that its leadership “thinks AI is so scary” and pushing back on the idea that widespread displacement is inevitable. His argument reflects a more optimistic camp that expects AI to unlock demand for new products and services, which in turn could generate roles that do not exist yet, even if the path from here to there is far from clear.

The 2030 crossroads Anthropic’s chief scientist sees coming

Inside Anthropic, the concern about jobs is tied to a broader unease about where rapidly advancing AI systems might lead by the end of the decade. The company’s chief scientist has described a looming decision point, arguing that by around 2030, societies will need to choose how far they are willing to go in deploying AI that is “much smarter” than today’s models. That choice will shape not just employment but the balance of power between humans and the systems they build.

In a recent warning, Anthropic’s chief scientist, Jared Kaplan, said that by 2030 humans will have to decide how to handle AI that is much smarter, calling it “the biggest decision.” Kaplan described the process as “a kind of scary” journey where you do not know exactly where you end up, a framing that links the immediate disruption of entry-level jobs to a much larger question about how deeply AI should be woven into economic and social life.

Anthropic’s public mission versus its internal realities

All of this plays out against the backdrop of Anthropic’s carefully crafted public mission. On its own site, the company presents itself as an AI safety–focused lab, emphasizing its commitment to building systems that are helpful, honest, and harmless. That branding is designed to reassure regulators, customers, and the broader public that its technology will be deployed responsibly.

The contrast between that mission and the internal findings is striking. While Anthropic describes its work as centered on safety and alignment, its own engineers are already delegating up to a fifth of their tasks to Claude and its CEO is warning that half of all entry-level white-collar jobs could disappear. From my vantage point, that combination suggests a company that understands the disruptive power of what it is building but has not yet fully reconciled its promise to protect society with the economic shock its tools may unleash.

What it means when AI builders fear for their own jobs

When the people closest to a technology start to worry that it could erase the kind of work they do, the rest of us should pay attention. Anthropic’s engineers are not speculating about some distant automation wave; they are already using Claude to offload the same kinds of tasks that once justified hiring more junior colleagues. Their leaders, in turn, are telling anyone who will listen that entry-level white-collar roles are in the firing line.

I see that as a signal that the AI transition is moving faster than most institutions are prepared for. The combination of internal surveys, blunt executive warnings, and public pushback from figures like Nvidia CEO Jensen Huang shows an industry that is divided about the destination but united on one point: the status quo in professional work is not going to hold. Whether that shift ends up empowering workers or sidelining them will depend on choices made now, long before 2030 arrives and the “biggest decision” that Kaplan describes can no longer be postponed.
