
Inside the companies racing to build ever more powerful artificial intelligence, the loudest warnings are no longer coming from outside critics. They are coming from the engineers, researchers, and executives who helped create the systems and now say they are afraid of where the technology is heading. Their concern is not abstract unease about science fiction scenarios, but a growing conviction that the incentives driving AI are out of sync with the risks.
As investment surges and political leaders celebrate AI as a strategic asset, these insiders describe a culture that rewards speed over safety and scale over control. I see a widening gap between the public story of innovation and the private reality of people who say they are building tools that could destabilize economies, supercharge propaganda, and, in the most extreme scenarios, threaten human survival.
The insiders who broke ranks
The most striking shift in the AI debate is that some of the sharpest alarms now come from people who helped design the systems they fear. A group of current and former employees from OpenAI and Google DeepMind has publicly warned that their own industry is building artificial general intelligence that could pose “serious risks” to society, and that internal channels are not enough to address those dangers. These insiders describe a pattern in which concerns about safety, misuse, and long-term impacts are acknowledged in theory but sidelined when they collide with product timelines and competitive pressure from rivals.
Several of these employees say they felt compelled to speak out only after realizing how little leverage they had inside their own organizations. According to their account, some workers who tried to raise red flags about advanced models were constrained by strict confidentiality rules and non-disparagement agreements that limited what they could tell regulators or the public. The open letter they backed framed their intervention as a last resort, a way to “warn the public of potential dangers” from systems that are being deployed at scale even as their creators admit they do not fully understand their behavior.
From quiet unease to explicit fear
What stands out in these testimonies is not just caution, but explicit fear. Some insiders now attach concrete probabilities to catastrophic outcomes, arguing that the chance of AI systems causing societal collapse or worse is not a remote outlier but a scenario that must be taken seriously. One OpenAI insider has gone so far as to estimate a 70 percent likelihood that advanced AI will “destroy or severely harm” humanity, a figure that would sound outlandish if it were not coming from someone close to the work. That estimate surfaced after former and current OpenAI employees released an open letter saying they were being silenced about safety issues, even as the company and its competitors continued to scale up more capable models.
In that account, the insider describes a culture in which executives are fully aware of the potential for extreme downside, yet still “barreling ahead” with ambitious deployment plans. Internal warnings, in this telling, are treated as a communications problem rather than a reason to slow down or change course. When someone inside a leading lab says there is a 70 percent chance of doom and the response is to keep shipping products, it crystallizes the disconnect between the rhetoric of responsibility and the reality of a race dynamic.
“Move fast and fix later” as a safety strategy
Behind these fears is a familiar Silicon Valley playbook that treats real world harms as bugs to be patched after launch. William Saunders, a research engineer who left OpenAI in February, has described a culture in which the company was willing to release powerful systems, see what breaks, and then try to fix the damage afterward. That mindset, which helped social media platforms grow at breakneck speed, is now being applied to technologies that can write code, generate persuasive propaganda, and potentially act autonomously in digital environments. When someone like William Saunders tells the New York Times that the approach is essentially to “see what happens and fix them afterward,” it signals that the industry’s default setting is still experimentation first, governance later.
That attitude is not confined to one lab. The broader AI ecosystem is shaped by a handful of tech giants and well-funded startups locked in a race to dominate the next computing platform. In that context, the idea of pausing to fully understand emergent behaviors or long-term impacts can feel like a luxury competitors cannot afford. The result, as insiders like Saunders suggest, is a system where the people closest to the technology are asked to trust that any problems can be patched on the fly, even when they suspect that some failures, once unleashed, will not be reversible.
Apocalypse talk versus lived anxiety
Public debate about AI safety is often caricatured as a clash between doomsayers and cheerleaders, but insiders describe something more complicated. Some argue that apocalyptic rhetoric can be counterproductive, turning legitimate concerns into punchlines and distracting from the concrete ways AI is already reshaping power and risk. One analysis notes that the discourse is “often dominated by apocalyptic rhetoric” that is “peddled” by certain voices, even as the real story is that AI is quietly embedding itself into critical infrastructure, finance, and information systems. The people building these tools may roll their eyes at the most theatrical predictions, yet still admit in private that they are “living in fear” of what they are creating.
That tension is captured in reporting that describes AI industry insiders who are both skeptical of the loudest catastrophists and deeply uneasy about their own work. They worry less about a single Hollywood-style superintelligence and more about a messy accumulation of failures, from biased decision systems to runaway automation that hollows out middle-class jobs. The phrase “Either we panic or shrug” captures how the public conversation often swings between extremes, while the “urgent, messy reality” is that AI is already destabilizing parts of the world we take for granted.
Jobs, livelihoods, and the 50% warning
For many workers, the most immediate fear is not extinction but unemployment. Some of the people leading AI companies now openly predict that their products will wipe out a large share of entry-level roles, especially in white-collar fields that once felt insulated from automation. In one widely discussed conversation, The AI Show Episode 151 highlighted the claim that Anthropic’s CEO expects AI to destroy 50% of entry-level jobs, a figure that would represent a seismic shock to labor markets if it bears out. That same discussion pointed to tools like Veo 3, which can generate “scary lifelike videos,” and to Meta’s plans to “fully automate ads,” illustrating how quickly creative and marketing work is being handed to algorithms.
When I talk to people in advertising, customer support, and software testing, they describe a creeping sense that the ladder they climbed is being pulled up behind them. Entry-level copywriters now compete with text generators, junior video editors with automated editing suites, and new graduates in data analysis with tools that promise to do their work in seconds. The prediction that AI will destroy 50% of entry-level jobs is not just a headline; for companies that see labor costs as a line item to be minimized, it is a business plan.
Inside the labs: fear, burnout, and moral injury
Behind the polished demos and keynote presentations, many AI workers describe a more fraught emotional landscape. Some say they are proud of the breakthroughs they help deliver, yet also feel a gnawing dread about how those capabilities might be misused or spiral beyond human control. Reporting on AI industry insiders “living in fear of what they are creating” captures this duality, noting that the people closest to the models are often the ones most aware of their failure modes, from hallucinated facts to subtle biases that can skew decisions in policing, lending, or hiring. The result is a kind of moral injury, where engineers feel responsible for harms they cannot fully prevent.
Others talk about burnout driven by the pace of change and the pressure to keep up with competitors. When every quarter brings a new model release and every product team is told to “AI everything,” there is little time to step back and ask whether the direction of travel is wise. Some insiders say they stay in these roles partly out of fear that if they leave, their seat will be filled by someone less cautious. That logic traps them in a system they privately distrust, even as they continue to build it.
Big Tech’s superintelligence race
The corporate backdrop to all of this is a race among tech giants to build what they openly describe as “superintelligence.” Meta, for example, has launched an aggressive push to develop systems that go far beyond today’s chatbots, a strategy that has sparked internal tension and even threats of desertion inside its sprawling AI operations. Some employees reportedly worry that the company is prioritizing scale and ambition over careful evaluation of risks, especially as it pours resources into models that could be deployed across billions of users on Facebook, Instagram, and WhatsApp. The phrase “superintelligence push” is not a marketing slogan but an internal rallying cry that raises the stakes of any safety missteps.
At the same time, Meta’s leadership, including Mark Zuckerberg, has signaled a willingness to invest staggering sums in AI infrastructure. Since the company rebranded and pivoted toward the metaverse and then AI, Meta and its rivals have ramped up spending on data centers and specialized chips, with Zuckerberg pledging to invest “hundreds of billions” of dollars in the hardware needed to train and run these models. That level of financial commitment creates its own momentum, making it harder for executives to slow down or change course even when internal critics raise alarms.
The trillion dollar bubble and political power
All of this unfolds against a financial backdrop that looks increasingly like a bubble. Investors are pouring capital into AI startups and infrastructure on the assumption that these systems will unlock a new wave of productivity and profit. Analysts now warn of a potential “trillion dollar AI bubble,” pointing to sky-high valuations and a scramble to secure access to chips, data centers, and talent, with Meta’s pledge of hundreds of billions for data centers only the most visible example. That kind of capital inflow can distort priorities, rewarding companies that promise the most aggressive growth rather than the most responsible deployment.
The political context is just as fraught. Donald Trump, now in the White House, has embraced AI as a strategic asset in competition with China and as a driver of domestic economic growth. At the same time, tech billionaires who bankroll AI labs wield enormous influence over how the technology is regulated, often arguing that heavy handed rules would cede advantage to foreign rivals. Critics warn that this combination of presidential power, concentrated wealth, and lightly regulated AI development is “a recipe for disaster,” especially when some of the people inside these companies are already sounding the alarm. The concern is that the same forces inflating a potential trillion dollar AI bubble are also shaping the rules meant to keep the technology in check.
Warnings from veterans and the specter of 2027
Some of the starkest warnings now come from veterans who helped build the early internet and then watched it morph into something far more extractive and destabilizing than they expected. An ex-Google insider, speaking in a widely shared video, has told audiences bluntly, “You are not prepared for 2027,” sketching a near future in which AI-driven automation reshapes the price of food, manufacturing, and even the value of human labor. In that vision, the cost of producing goods drops sharply, but so does the bargaining power of workers whose skills can be replicated by machines, creating a world where abundance and precarity coexist.
When I listen to those warnings, what stands out is not just the technical forecast but the moral urgency. The ex-Google insider is not simply predicting cheaper products, but asking what happens to societies when the “price of my life generally drops” in economic terms. That framing echoes the fears of many AI workers who worry that their creations will be used primarily to cut costs and concentrate power, rather than to expand opportunity. It also aligns with the anxiety of people outside the industry who sense that something big is coming by 2027 but feel they have little say in how it unfolds.
“AI Hiroshima” and the search for a wake up call
Even some of the biggest names in tech now admit they are afraid of what AI could unleash if it is not handled carefully. In one televised segment, industry leaders spoke candidly about their fears, with one voice saying they did not want to “see an AI Hiroshima.” That phrase, aired in a report by Scott Budman and carried by NBC Universal, captures a grim hope that the world will not need a singular, devastating disaster to take AI safety seriously. The reference to Hiroshima is not a literal prediction of nuclear-scale destruction, but a metaphor for an irreversible event that forces a reckoning too late.
For insiders, that metaphor lands differently. They know how often complex systems fail in unexpected ways, from algorithmic trading glitches to content recommendation engines that radicalize users. When someone compares AI to the risk of an “AI Hiroshima,” they are pointing to the possibility that a misaligned or misused system could trigger cascading harms before anyone fully understands what has gone wrong. The fact that such language now appears in mainstream coverage of tech leaders’ own fears shows how far the conversation has shifted from uncritical hype to uneasy anticipation.
Living with the technology its creators fear
For the rest of us, the unsettling reality is that we are already living with systems that many of their creators do not fully trust. AI models now draft legal documents, screen job applications, recommend medical treatments, and generate news like this, even as the people who build them warn that they can hallucinate, embed bias, and be repurposed for surveillance or manipulation. Some insiders argue that the real danger is not a single catastrophic failure, but a slow erosion of human agency as decisions that shape our lives are outsourced to opaque algorithms. That concern is echoed in commentary that describes AI as “already destabilising our world,” not in a distant future but in the choices being made today.
At the same time, there is a risk that constant alarm will numb the public rather than mobilize it. If every new model is framed as either a miracle or a menace, people may tune out and leave the decisions to the very companies and political leaders whose incentives are under scrutiny. The insiders who have stepped forward, from William Saunders to the ex-Google veteran warning about 2027, are effectively asking for a different kind of engagement, one that treats AI as a political and economic choice rather than an inevitable force of nature. Their fear is not just about what the technology can do, but about what happens if the only people steering it are those with the most to gain financially.