A supply chain attack compromised LiteLLM, the AI proxy present in an estimated 36 percent of cloud environments. Mercor, a $10 billion AI startup, lost 4TB of data. Security researchers estimate up to 500,000 machines were affected. And for one of the first documented times, the attackers used an AI agent operationally.
We warned about this. In January, we published an analysis of LiteLLM as a supply chain single point of failure for agentic commerce. The argument was straightforward: LiteLLM sits between AI applications and every foundation model they call. It holds the API keys. Compromise LiteLLM, and you compromise the credentials to every AI service an organisation uses.
On March 24, that is exactly what happened.
The LiteLLM supply chain attack is the most significant compromise of AI infrastructure to date. It validates what security researchers have been saying for months: the proxy layer is the skeleton key.
What Happened
A threat actor called TeamPCP published two malicious versions of the litellm package (1.82.7 and 1.82.8) on PyPI, Python's official package registry. The compromised packages were live for roughly 40 minutes before PyPI quarantined them.
Forty minutes does not sound like much. But LiteLLM is downloaded approximately 3.4 million times per day. Even a brief window creates enormous blast radius when a package is that widely deployed.
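If you are unsure whether a host pulled one of the bad releases during that window, checking the installed version against the two compromised version strings is a reasonable first triage step. A minimal sketch, using the standard library's `importlib.metadata` to read the installed version (the known-bad set comes from the advisory described above):

```python
from importlib.metadata import version, PackageNotFoundError

COMPROMISED = {"1.82.7", "1.82.8"}  # the two backdoored releases

def is_compromised(installed: str) -> bool:
    """Return True if the installed litellm version is a known-bad release."""
    return installed in COMPROMISED

try:
    installed = version("litellm")
    if is_compromised(installed):
        print(f"ALERT: litellm {installed} is a compromised release")
    else:
        print(f"litellm {installed} is not one of the known-bad versions")
except PackageNotFoundError:
    print("litellm is not installed in this environment")
```

Remember that a clean version today proves nothing about the 40-minute window; a host that installed 1.82.7 and later upgraded still needs the assume-breach treatment described below.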
The attack chain was sophisticated. TeamPCP did not go after LiteLLM directly. They first compromised Trivy, an open-source security scanner maintained by Aqua Security, via a misconfigured pull_request_target workflow in late February. That gave them a stolen Personal Access Token belonging to aqua-bot. From there, they pivoted into LiteLLM's CI/CD pipeline and pushed the backdoored packages to PyPI.
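The pull_request_target trigger is dangerous precisely because it runs with repository secrets against code from attacker-controlled pull requests. A rough heuristic for auditing your own repositories is to flag any workflow that combines that trigger with a checkout of the PR's own head. A hedged sketch (the string checks are illustrative, not a complete detector; a real audit would parse the YAML properly):

```python
def flags_risky_workflow(workflow_text: str) -> bool:
    """Heuristic: pull_request_target combined with checking out the PR's own code."""
    uses_prt = "pull_request_target" in workflow_text
    checks_out_pr_head = "github.event.pull_request.head" in workflow_text
    return uses_prt and checks_out_pr_head

risky = """
on: pull_request_target
jobs:
  build:
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
"""
print(flags_risky_workflow(risky))  # a workflow like this deserves manual review
```

The combination is not always exploitable, but every hit deserves a human look: with secrets in scope and untrusted code checked out, one `run` step is all it takes.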
Read that again. They compromised a security scanner to compromise an AI proxy. The irony writes itself.
The Payload
The backdoor was not subtle. Trend Micro documented a three-stage payload that moved fast and grabbed everything.
Stage one: credential harvesting. SSH keys, cloud tokens, Kubernetes secrets, .env files. Anything that looked like it could authenticate somewhere got swept up and exfiltrated.
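The defensive mirror image of stage one is knowing what it could have reached. A hedged sketch of an inventory pass over the kinds of file patterns the payload reportedly targeted (the patterns and search roots are illustrative; adapt them to your own environment):

```python
from pathlib import Path

# Credential-like filenames stage one reportedly harvested; extend as needed.
CREDENTIAL_PATTERNS = {".env", "id_rsa", "id_ed25519", "config", "credentials"}
SEARCH_ROOTS = [Path.home() / ".ssh", Path.home() / ".kube", Path.home() / ".aws"]

def inventory(roots):
    """List files under the given roots whose names match credential patterns."""
    hits = []
    for root in roots:
        if not root.exists():
            continue
        for path in root.rglob("*"):
            if path.is_file() and path.name in CREDENTIAL_PATTERNS:
                hits.append(path)
    return hits

for path in inventory(SEARCH_ROOTS):
    print(path)  # every file listed here should be treated as exposed
```

If a compromised host could read it, assume it was read: the output of a sweep like this doubles as your credential rotation checklist.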
Stage two: Kubernetes lateral movement. The malware sought out privileged pods and used them to spread across clusters. In cloud-native environments, this is how you turn one compromised dependency into full infrastructure access.
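Stage two's lateral movement depended on finding privileged pods, so it is worth sweeping your own manifests for the same thing the malware looked for. A minimal sketch that flags privileged containers in a pod spec (plain dicts stand in for parsed manifests here; a real audit would walk the cluster API or an admission controller would block these outright):

```python
def privileged_containers(pod_spec: dict) -> list:
    """Return names of containers in a pod spec that request privileged mode."""
    flagged = []
    for container in pod_spec.get("containers", []):
        security = container.get("securityContext", {})
        if security.get("privileged", False):
            flagged.append(container.get("name", "<unnamed>"))
    return flagged

pod = {
    "containers": [
        {"name": "app", "securityContext": {"privileged": False}},
        {"name": "debug-sidecar", "securityContext": {"privileged": True}},
    ]
}
print(privileged_containers(pod))  # ['debug-sidecar']
```

Every privileged pod is a rung on the ladder from one bad dependency to the whole cluster; the fewer that exist, the shorter stage two's reach.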
Stage three: a persistent systemd backdoor. Even if you caught the bad package and reverted, the backdoor stayed. It was designed to survive package updates.
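A persistence check worth running after any incident like this: look for unit files whose ExecStart points somewhere a package manager would never put a binary. A hedged heuristic sketch (the suspicious-path list is illustrative, not exhaustive, and a planted unit can of course hide its binary elsewhere):

```python
import re
from pathlib import Path

SUSPICIOUS_PREFIXES = ("/tmp/", "/dev/shm/", "/var/tmp/")  # illustrative only

def suspicious_exec(unit_text: str) -> bool:
    """Flag a systemd unit whose ExecStart launches from a scratch directory."""
    match = re.search(r"^ExecStart=(\S+)", unit_text, re.M)
    return bool(match) and match.group(1).startswith(SUSPICIOUS_PREFIXES)

def scan(unit_dir="/etc/systemd/system"):
    for unit in Path(unit_dir).glob("*.service"):
        if suspicious_exec(unit.read_text(errors="ignore")):
            print(f"review: {unit}")

backdoor = "[Service]\nExecStart=/tmp/.cache/agentd\n"
print(suspicious_exec(backdoor))  # True
```

Reverting the package does not revert the unit file, which is the whole point of stage three: the check has to happen at the host level, not the dependency level.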
The stolen credentials were validated and used for follow-on attacks almost immediately. This was not a smash-and-grab. It was a pipeline.
Mercor: The Headline Casualty
Mercor, the AI hiring startup valued at $10 billion after a $350 million round led by Felicis Ventures in October 2025, confirmed it was hit by the attack. The company described itself as "one of thousands" of organisations affected.
Here is the thing. Mercor is not just any startup. It recruits human experts who produce training data for Anthropic, OpenAI, and Meta. The data it holds is extraordinarily sensitive.
The Lapsus$ extortion group claims to possess 4TB of stolen Mercor data including candidate profiles, personally identifiable information, employer data, user credentials, video interviews, source code, API keys and secrets, and TailScale VPN configuration data. Lapsus$ is auctioning the data on its leak site.
A company that supplies training data to the three largest AI labs just had its entire data estate stolen. The downstream implications of that are still being calculated.
The Scale Nobody Expected
Security researchers estimate up to 500,000 machines and over 1,000 SaaS environments were compromised. LiteLLM is estimated to be present in 36 percent of cloud environments. That number should alarm anyone running AI workloads in production.
The reason is simple. LiteLLM became the default abstraction layer for teams that want to swap between GPT-4, Claude, Gemini, and open-source models without rewriting their application code. It is genuinely useful software. That utility is precisely what made it dangerous. When a single package becomes the universal adapter for AI model access, it becomes the universal target too.
We have covered this pattern before. The agentic AI stack concentrates trust in a small number of infrastructure components. LiteLLM was always the most obvious candidate for exactly this kind of attack.
The AI Agent in the Room
There is one detail in this breach that deserves its own section.
Aikido researchers documented a component within the attack tooling called hackerbot-claw, which drives an AI agent, openclaw, to automate attack targeting: the agent helped identify and prioritise targets for the stolen credentials.
This is one of the first documented cases of an AI agent being used operationally in a supply chain attack. Not theoretically. Not in a research paper. In the wild, against real infrastructure.
We published evidence that AI agents would be weaponised for exactly this purpose. The argument was that AI lowers the cost of offensive operations, making attacks more scalable and harder to attribute. TeamPCP just proved it.
The attack on LiteLLM was not just a supply chain compromise. It was a proof of concept for AI-assisted offensive operations at scale. Every security team in payments and fintech needs to absorb that.
What This Means for Payments and Commerce
The payments industry has been aggressively adopting agentic architectures. AI agents that negotiate prices, execute transactions, and manage disputes are moving from demos into production. Many of those agents use LiteLLM or similar proxy layers to interact with foundation models.
If your payment processing agent's LLM proxy gets compromised, the attacker does not just get your model API keys. They get the transaction data flowing through those agents. They get the authentication tokens for downstream payment services. They get the customer data that agents need to do their jobs.
This is what makes AI supply chain attacks categorically different from traditional software supply chain attacks. The proxy layer does not just run code. It brokers trust between your application and the most sensitive APIs you operate.
Three things need to happen:
First, pin your dependencies. Do not auto-update AI infrastructure packages. Every update to a package like LiteLLM should go through the same review process as a code change to your core application.
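In practice, "pin your dependencies" means every line in your requirements file names an exact version, ideally with a hash enforced via pip's `--require-hashes` mode. A small hedged sketch of a CI gate that fails when a requirement is left floating (the check is deliberately simple; real projects would lean on a lockfile tool):

```python
def unpinned(requirements_text: str) -> list:
    """Return requirement lines that do not pin an exact version with '=='."""
    bad = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if "==" not in line:
            bad.append(line)
    return bad

reqs = """
litellm==1.81.0
requests>=2.31
fastapi
"""
print(unpinned(reqs))  # ['requests>=2.31', 'fastapi']
```

Pinning would not have saved a fresh install made during the 40-minute window, but it stops the far more common case: an unattended deploy silently pulling whatever version was published five minutes ago.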
Second, treat your LLM proxy layer as critical infrastructure. It needs the same security controls as your payment gateway. Network segmentation, runtime monitoring, credential rotation.
Third, assume breach. If you ran LiteLLM in production during that 40-minute window, rotate every credential the proxy had access to. Not just the model API keys. Everything.
If 36 percent of cloud environments run a package that just got weaponised, and AI agents are now part of the attack tooling, what does your dependency review process actually look like?