OpenAI is telling every Mac user running its ChatGPT or Codex desktop app to update right now. The urgency traces back to a supply-chain attack on a popular open-source JavaScript toolkit called TanStack, which compromised two OpenAI employee devices and triggered a chain reaction that ultimately forced the company to replace the cryptographic certificates it uses to sign its macOS software.
That certificate rotation is the key detail. It means older versions of both apps are no longer trusted by macOS, and users who delay updating may find that Gatekeeper eventually blocks the apps from launching, because the operating system treats builds signed with a revoked or replaced certificate as unverified.
What happened and what OpenAI has confirmed
TanStack is a collection of widely used open-source libraries for building web applications. Attackers managed to inject malicious code into one of its npm packages, the kind of pre-built software component that developers pull into projects thousands of times a day. That tainted package made its way onto two OpenAI employee machines, according to reporting from The Hacker News and corroborated by The Record. It is worth noting that the two-device figure comes from secondary reporting; OpenAI’s own disclosure does not foreground a specific number of compromised machines.
In a public disclosure posted in May 2026, OpenAI confirmed the breach and outlined its response. The company said it isolated the affected systems, revoked active user sessions, rotated internal credentials, and “thoroughly scrutinized user and credential activity” across its infrastructure.
The most disruptive step was rotating the code-signing certificates. On macOS, these certificates are what Gatekeeper checks to verify that an application is legitimate and untampered. When a company revokes its old certificates and signs with new ones, every build signed under the old identity loses its trusted status. That single action is why Mac users now need to download fresh copies of both apps. OpenAI has not disclosed how many macOS users run ChatGPT or Codex, so the total number of people affected is unclear.
Why two compromised devices triggered a company-wide response
A code-signing certificate, and the private key behind it, is the credential that tells every Mac on the planet “this software is safe to run.” If attackers had any opportunity to extract or misuse that key, the potential damage would extend far beyond two employee machines. A stolen signing identity could be used to distribute malware disguised as a legitimate OpenAI app.
By rotating preemptively, OpenAI chose short-term disruption for its entire macOS user base over the possibility that a compromised key could be weaponized later. The alternative, waiting for evidence of key misuse before acting, leaves a window that attackers can exploit. Aggressive containment is generally considered best practice in supply-chain incidents, though no named security researcher has commented publicly on OpenAI’s specific response as of late May 2026.
This is not the first time OpenAI has dealt with a security incident. In early 2024, the company disclosed that a threat actor gained access to an internal messaging system, and in separate episodes, bugs in the ChatGPT web application briefly exposed user chat histories and payment information. None of those earlier events involved code-signing infrastructure, but the pattern underscores that high-profile AI companies are frequent targets.
What users should do right now
If you run the ChatGPT or Codex desktop app on a Mac, open the app and check for an available update. Install it immediately. OpenAI has not asked users to change passwords or take additional steps, but because the company revoked active sessions during its response, you may need to log back in after updating.
For most users, the Mac App Store and Gatekeeper handle certificate verification automatically. Enterprise security teams that allowlist applications by certificate hash will need to update those allowlists manually, though OpenAI has not yet published the new certificate fingerprints to streamline that process.
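The checks Gatekeeper performs can also be run by hand with Apple's standard `codesign` and `spctl` tools. A minimal sketch, assuming the app is installed at the default `/Applications/ChatGPT.app` path (a hypothetical location; adjust for your machine); on non-Mac systems it degrades to a no-op:

```python
import shutil
import subprocess

# Hypothetical install path; change this to wherever the app actually lives.
APP = "/Applications/ChatGPT.app"

def gatekeeper_status(app_path: str) -> str:
    """Report whether macOS would trust this app's code signature."""
    # codesign/spctl only exist on macOS; skip gracefully elsewhere.
    if shutil.which("codesign") is None:
        return "skipped: codesign not available (not macOS)"

    # Verify the signature's integrity and certificate chain.
    verify = subprocess.run(
        ["codesign", "--verify", "--deep", "--strict", app_path],
        capture_output=True, text=True,
    )
    if verify.returncode != 0:
        return f"signature invalid: {verify.stderr.strip()}"

    # Ask Gatekeeper's assessment engine whether it would allow execution.
    assess = subprocess.run(
        ["spctl", "--assess", "--type", "execute", app_path],
        capture_output=True, text=True,
    )
    return "accepted" if assess.returncode == 0 else "rejected by Gatekeeper"

print(gatekeeper_status(APP))
```

A build signed with a revoked certificate would fail one of these two checks, which is the same condition that eventually prevents the app from launching.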
What OpenAI has not said
Several important questions remain unanswered as of late May 2026. OpenAI has not disclosed which specific TanStack package or version carried the malicious payload, making it harder for other organizations that depend on TanStack to assess their own exposure. The company has also not published version numbers for the new, re-signed macOS builds.
Notably absent from OpenAI’s disclosure is a definitive statement about customer data. The company described scrutinizing credential activity, which suggests it looked for unauthorized access, but it stopped short of saying “no customer data was compromised.” For users who store sensitive conversations or use API keys through the desktop apps, that omission is worth watching.
The timeline of the attack is also incomplete. OpenAI has not said when the compromised npm package first entered its build pipeline, how long the two employee devices were exposed before detection, or what telemetry flagged the intrusion. Dwell time (the gap between initial compromise and discovery) is one of the strongest indicators of how much damage an attacker could have done. Without that number, outside observers cannot fully assess the severity.
No reporting in the current cycle has attributed the TanStack compromise to a specific threat actor or nation-state group. Any claims to that effect circulating on social media remain unverified.
A supply-chain problem that extends well beyond OpenAI
The TanStack attack did not target OpenAI specifically. It poisoned a package in npm, the world’s largest software registry, which serves as the backbone for JavaScript development across virtually every major tech company. OpenAI happened to be a downstream consumer. Other organizations that pulled the same compromised package may not yet know they were affected, and the full scope of the TanStack incident is still being mapped by the security community.
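Organizations auditing their own exposure can at least enumerate which TanStack libraries their projects pull in. A minimal sketch that scans an npm lockfile (v2/v3 `package-lock.json` format) for `@tanstack/*` entries; the embedded sample lockfile and version numbers are illustrative only, since the compromised package and version have not been disclosed:

```python
import json

# Illustrative lockfile; in practice, read your project's package-lock.json.
SAMPLE_LOCK = json.dumps({
    "lockfileVersion": 3,
    "packages": {
        "": {"name": "demo-app"},
        "node_modules/@tanstack/react-query": {"version": "5.17.0"},
        "node_modules/react": {"version": "18.2.0"},
    },
})

def tanstack_deps(lock_text: str) -> dict:
    """Return {package: version} for @tanstack packages in an npm lockfile."""
    lock = json.loads(lock_text)
    found = {}
    for path, meta in lock.get("packages", {}).items():
        # Lockfile keys look like "node_modules/<scope>/<name>".
        name = path.rpartition("node_modules/")[2]
        if name.startswith("@tanstack/"):
            found[name] = meta.get("version", "?")
    return found

print(tanstack_deps(SAMPLE_LOCK))  # {'@tanstack/react-query': '5.17.0'}
```

The resulting list can then be cross-checked against advisories as the security community identifies the affected package and versions.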
For the AI industry in particular, the episode highlights a growing vulnerability. Companies like OpenAI ship desktop software to millions of users while simultaneously relying on sprawling open-source dependency trees that no single team can fully audit. A single poisoned library can cascade from a developer’s laptop into signing infrastructure and, from there, into every customer’s device. OpenAI’s decision to treat two compromised machines as grounds for a full certificate rotation sets a precedent that other large-scale software distributors, whether they build AI tools or not, may soon feel pressure to follow.
*This article was researched with the help of AI, with human editors creating the final content.