Sam Altman admits OpenAI accidentally made the new ChatGPT update worse

OpenAI built its reputation on making each new ChatGPT release smarter and more capable than the last. With the latest update, that narrative broke. Users began complaining that the chatbot suddenly felt clumsier at writing, and now chief executive Sam Altman has publicly conceded that the company itself caused the slide.

Instead of quietly tweaking the model and moving on, Altman has said in plain language that OpenAI “screwed up” the new version’s balance of skills. The admission turns a routine product regression into a revealing moment about how frontier AI is built, what gets prioritized, and who pays the price when an upgrade goes sideways.

Altman’s rare admission: ‘we just screwed that up’

The core of the controversy is simple: OpenAI’s latest ChatGPT model, tied to the GPT‑5.2 update, is objectively better at some technical tasks yet worse at the kind of human‑sounding writing that made the chatbot famous. In a recent town hall, OpenAI CEO Sam Altman acknowledged that writing performance had “regressed” and bluntly said “we just screwed that up.” That is not the kind of language tech leaders usually use about flagship products, especially when they are racing competitors to define the future of artificial intelligence.

Altman’s comments line up with a wave of user reports that the new ChatGPT feels more stilted, less creative, and more prone to formulaic phrasing when asked to draft essays, marketing copy, or long‑form content. One analysis framed it as a regression in the model’s human‑language performance, with Sam Altman’s own remarks used as Exhibit A. For a system that millions rely on to write everything from résumés to research summaries, that kind of regression is not a minor annoyance; it is a direct hit to the product’s core value.

How GPT‑5.2 traded prose for problem‑solving

Altman has not tried to hide what went wrong under the hood. According to his explanation, the GPT‑5.2 training run leaned heavily into improving reasoning depth, coding accuracy, and engineering‑style problem solving. In his words, OpenAI has “limited bandwidth” and sometimes has to “prioritise certain features over others,” which in this case meant boosting math and code at the expense of narrative flow.

That focus is reflected in outside assessments that describe GPT‑5.2 as “overtraining” on math and coding while letting stylistic nuance slip. OpenAI itself has said that ChatGPT’s writing “worsened” because of this emphasis on technical domains, a rare instance of a company publicly tying a degraded user experience to a specific training choice. For developers who use ChatGPT as a coding assistant, the trade‑off may feel like a win. For writers, marketers, and students, it feels like a downgrade disguised as progress.

Users notice the drop, and OpenAI’s own blog backs them up

What makes this episode stand out is how closely user complaints match the company’s internal assessment. After the GPT‑5.2 update, a company blog post acknowledged that users had noticed a decrease in writing quality following the rollout, and Sam Altman confirmed that those impressions were correct. He even contrasted the new model with earlier versions, reportedly saying that the previous GPT‑4.5 was stronger at certain kinds of prose, a striking admission for a firm that usually markets each release as a clear step forward.

Independent coverage has described how this regression shows up in everyday use: more generic drafts, weaker long‑form structure, and a tendency to flatten tone across different prompts. One detailed breakdown by Shalabh Singh highlighted how the new model struggles more with nuanced drafts and long‑form content, even as it improves at structured tasks. When the company’s own messaging and outside testing converge like this, it becomes hard to dismiss the backlash as anecdotal.

‘Screwed up’ is becoming a pattern in OpenAI’s launch playbook

Altman’s candor about GPT‑5.2 is not an isolated moment. In August of last year, he publicly reflected on the earlier GPT‑5 rollout and said OpenAI had “totally screwed up” that launch as well. In that case, the criticism focused less on model quality and more on how the release was handled, from marketing to expectations management, even as he talked about spending trillions of dollars on data centers to support future models.

That earlier mea culpa was dissected in August in a widely shared video that described the GPT‑5 rollout as “a bit of a train wreck” regardless of the underlying model. Taken together, the GPT‑5 and GPT‑5.2 episodes suggest a company that is moving so quickly at the frontier of AI that it is willing to accept public missteps as the cost of speed. That may be tolerable for early adopters, but it is a riskier proposition for businesses and institutions that are starting to treat ChatGPT as critical infrastructure.

Technical gains, human costs

From a purely engineering perspective, the GPT‑5.2 shift is defensible. Altman and his team have argued that deeper reasoning and more reliable code generation are essential if AI is going to handle complex workflows, from debugging large software projects to assisting with scientific analysis. Reports on GPT‑5.2 note that the release came with a huge emphasis on technical tasks like coding and formatting, and that the model is better at staying consistent when confronted with the “noise of reality.” For power users who treat ChatGPT as a programmable tool, those are real gains.

The cost is that the system now feels less like a natural conversational partner. Analyses of the new model describe a regression in its human‑language performance, a step back from the fluidity that defined earlier versions. For journalists, novelists, and everyday users who rely on ChatGPT to brainstorm or polish text, that shift can feel like the product is being optimized for someone else. It is a reminder that every training decision encodes a set of priorities about whose needs matter most.