Morning Overview

LiteLLM reportedly fell to a full-chain Pwn2Own exploit combining SSRF and code injection, with researchers claiming full system control

A team of security researchers chained two vulnerabilities in LiteLLM, the popular open-source proxy that routes enterprise traffic to large language model providers, and walked away with arbitrary command execution on the target system. The attack, reportedly demonstrated in a Pwn2Own competition category, paired a known server-side request forgery flaw (CVE-2024-6587) with a code injection technique that escalated network-level trickery into remote code execution on the underlying host.

For the thousands of companies that rely on LiteLLM as a single gateway to AI APIs, the result is a stark reminder: the proxy that holds all your keys can also hand them to an attacker.

The SSRF entry point

The first link in the chain is well documented. CVE-2024-6587, recorded in the National Vulnerability Database maintained by the National Institute of Standards and Technology, describes a server-side request forgery issue in LiteLLM. The NVD assigns it a CVSS base score of 7.5 (High) and lists affected versions of the BerriAI LiteLLM package prior to the fix. In practical terms, an attacker can trick the proxy into firing HTTP requests at destinations the developer never intended: internal services, cloud metadata endpoints, credential stores, or other resources that should be invisible from the public internet.

Because LiteLLM is designed to sit between application code and multiple AI providers, it already holds privileged network access and stored API keys. According to LiteLLM’s own documentation, the proxy manages credentials for providers such as OpenAI, Anthropic, and Azure-hosted models in a centralized configuration. An SSRF flaw in that position is not just a theoretical concern. It gives an attacker a way to reach inward from a component that was built to reach outward.
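The core SSRF pattern is simple enough to sketch. The snippet below is a generic illustration, not LiteLLM's actual code: a proxy that builds its upstream URL from caller-supplied input will happily fetch internal resources unless the destination is validated against an allow-list. The `ALLOWED_HOSTS` set and the helper function are illustrative assumptions.

```python
from urllib.parse import urlparse

# Example allow-list of provider hosts a proxy legitimately needs to reach.
ALLOWED_HOSTS = {"api.openai.com", "api.anthropic.com"}

def is_safe_upstream(url: str) -> bool:
    """Reject URLs pointing anywhere other than known provider hosts over HTTPS."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

# A vulnerable proxy skips a check like this entirely:
print(is_safe_upstream("https://api.openai.com/v1/chat/completions"))  # True
print(is_safe_upstream("http://169.254.169.254/latest/meta-data/"))    # False: cloud metadata
print(is_safe_upstream("https://internal-admin.local/keys"))           # False: internal service
```

Hostname allow-listing alone is not a complete defense (DNS rebinding and redirects can bypass naive checks), but it captures why attacker-influenced upstream URLs are dangerous in a component built to make outbound requests.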

The CVE assignment confirms that the flaw went through coordinated disclosure, was accepted by a CVE Numbering Authority, and is now visible to automated vulnerability scanners, compliance audits, and procurement reviews across government and industry. Any organization running an affected version of LiteLLM should already see it flagged during routine assessments aligned with NIST vulnerability management guidance.

From SSRF to remote code execution

On its own, an SSRF bug might be classified as a medium-severity issue: useful for internal scanning or leaking metadata, but not an immediate path to owning a machine. The researchers changed that calculus by chaining CVE-2024-6587 with a second injection primitive that turned crafted HTTP responses into code execution on the host operating system.

The combined effect, as described in available reporting, was remote code execution (RCE) on the LiteLLM host. With a shell running at the privilege level of the LiteLLM process, an attacker could read environment variables, extract stored API keys, alter logging to cover tracks, tamper with traffic flowing between applications and AI providers, or pivot deeper into the internal network. The demonstration showed that a single proxy weakness can collapse the isolation that is supposed to separate external traffic from internal infrastructure.
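The environment-variable exposure is easy to demonstrate from the defender's side. This sketch (a hypothetical audit helper, not a LiteLLM feature) lists which environment variable names look like credentials, since those are exactly the values an attacker with a shell on the proxy host could read in seconds. The name pattern is an assumption; adjust it to your own naming conventions.

```python
import os
import re

# Names ending in these suffixes commonly hold secrets (assumed convention).
KEY_PATTERN = re.compile(r"(API_KEY|SECRET|TOKEN|PASSWORD)$", re.IGNORECASE)

def find_credential_vars(environ: dict) -> list:
    """Return the names (never the values) of variables that look like secrets."""
    return sorted(name for name in environ if KEY_PATTERN.search(name))

# Demonstration against a fake environment rather than the real one:
fake_env = {"OPENAI_API_KEY": "sk-...", "PATH": "/usr/bin", "DB_PASSWORD": "x"}
print(find_credential_vars(fake_env))  # ['DB_PASSWORD', 'OPENAI_API_KEY']

# Run against os.environ on the LiteLLM host to see the real exposure surface.
```

If such an audit on the proxy host returns a long list, every one of those values should be assumed compromised in an RCE scenario and rotated accordingly.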

What is not yet confirmed

Important details remain unverified through primary sources as of June 2026. The specific code injection technique has not been assigned its own CVE, and no public advisory describes the exact payload or vulnerable code path that allowed command execution after the SSRF step succeeded. Without that second identifier or a detailed technical write-up, independent security teams cannot fully reproduce the chain or determine whether similar injection paths exist in adjacent components.

The identity of the researchers, the specific Pwn2Own event or category, and any prize awarded have not been confirmed through organizer statements from Trend Micro’s Zero Day Initiative, which runs the competition and typically publishes results and credits after each round. A review of ZDI’s published results for recent Pwn2Own events (including Pwn2Own Ireland 2024 and Pwn2Own Automotive 2025) did not surface a matching LiteLLM entry. Until official results appear that corroborate the claim, the Pwn2Own attribution should be treated as unverified and based on secondary accounts.

It is also unclear whether LiteLLM’s maintainers have released a patch that addresses both links in the chain. The NVD entry tracks the SSRF component, but remediation status for the injection vector is not reflected in any public advisory or GitHub changelog reviewed for this report. Organizations that already patched CVE-2024-6587 may or may not be protected against the broader chain, depending on whether the injection primitive lives in the same code path or in a separate module untouched by the SSRF fix.

What defenders should do now

The SSRF half of the chain is actionable today. Security teams should review the NVD entry for CVE-2024-6587, identify whether their deployed version falls within the affected range, and apply available patches. Beyond patching, several architectural controls can blunt the impact of any residual SSRF behavior or a future injection variant:

  • Egress filtering: Restrict where LiteLLM can send outbound requests. Allow-list only the specific AI provider endpoints the proxy needs to reach.
  • Metadata service protection: Block access from application hosts to cloud metadata endpoints (the link-local address 169.254.169.254, used by AWS, GCP, and Azure alike) unless explicitly required.
  • Internal service segmentation: Place LiteLLM in a network zone that limits lateral movement. It should not have broad access to internal databases, admin panels, or other infrastructure.
  • Key rotation: Rotate API keys stored in LiteLLM’s configuration as a precaution, especially if the proxy was exposed to untrusted input before the SSRF patch was applied.
  • Monitor for follow-up disclosures: Watch for results from the Zero Day Initiative, updates from LiteLLM’s maintainers on GitHub, and any new CVE assignment covering the injection vector.

Why the AI proxy layer is now a top-tier target

LiteLLM is not the only AI proxy in production, but its popularity and its role as a credential hub make it a high-value target. It terminates TLS, manages authentication, and normalizes requests across multiple vendors. A weakness in that central hub can undermine every segmentation effort downstream.

For organizations moving fast with generative AI, this exploit chain is a concrete example of a pattern security teams have warned about for months: the glue code and orchestration layers are as critical to secure as the models themselves. Until the full technical details of the reported demonstration are published and a comprehensive patch is confirmed, treating any SSRF-capable proxy in the AI stack as a high-priority risk is the defensible call.


*This article was researched with the help of AI, with human editors creating the final content.