
84 Malicious TanStack Packages Shipped With Valid SLSA Provenance
TanStack's own CI published malware with valid SLSA Build Level 3 attestations. No credential was stolen. The defense model the ecosystem converged on verified every malicious package.
On May 11, between 19:20 and 19:26 UTC, 84 malicious versions of @tanstack/* packages landed on npm. They were published through TanStack's own release pipeline, signed with a valid OIDC token, carrying valid SLSA Build Level 3 provenance attestations. @tanstack/react-router alone has 12.7 million weekly downloads.
No npm credential was stolen. No maintainer account was compromised. The packages were cryptographically authentic because they were, in fact, built and published by TanStack's legitimate CI. The code just wasn't clean.
The Numbers
Socket's AI Scanner flagged all 84 artifacts within six minutes of publication, before any human analyst reviewed them. StepSecurity researcher Ashish Kurmi posted full indicators to TanStack's repo at 19:46, roughly twenty minutes after the first malicious publish. By end of day, Snyk had catalogued 169 affected packages and 373 malicious versions across the ecosystem. The worm had already jumped to Mistral AI's official SDK, 65 UiPath packages, OpenSearch, and Guardrails AI. Socket eventually tracked 416 compromised artifacts across npm and PyPI.
CVE-2026-45321 landed with a CVSS 9.6 (Critical). The payload was a 2.3 MB obfuscated JavaScript file that harvested credentials from over 100 hardcoded paths: AWS IMDS and Secrets Manager, GCP metadata, Kubernetes service-account tokens, HashiCorp Vault, GitHub tokens, SSH keys, and every .env file it could reach. Exfiltration ran through the Session P2P messaging network. End-to-end encrypted, distributed across service nodes, indistinguishable from normal encrypted chat traffic. No single C2 domain to block.
How the GitHub Actions Cache Poisoning Worked
Tanner Linsley published TanStack's postmortem within hours. The attack chained three vulnerabilities that have each been documented individually. Together they produced something none had achieved alone.
First, a Pwn Request. TanStack's benchmark workflow used pull_request_target, the GitHub Actions trigger that runs fork code with the base repo's permissions and secrets. The attacker opened a PR from a renamed fork. The benchmark workflow checked out the fork's code, ran it, and the fork's vite_setup.mjs wrote malicious binaries into the pnpm store directory.
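TanStack's actual workflow file isn't reproduced in the postmortem excerpt above, but the vulnerable pattern can be sketched as a minimal hypothetical fragment (job and step names are illustrative; only `pull_request_target`, the fork checkout, and `vite_setup.mjs` come from the incident writeup):

```yaml
# Hypothetical sketch of the vulnerable pattern, not TanStack's real file.
name: benchmark
on: pull_request_target        # runs with the base repo's permissions and secrets
jobs:
  bench:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Danger: checks out the fork's code into a privileged context
          ref: ${{ github.event.pull_request.head.sha }}
      # Fork-controlled script now executes with base-repo trust
      - run: node vite_setup.mjs
```

The trigger name is the whole trap: `pull_request` runs fork code with read-only permissions and no secrets; `pull_request_target` keeps the secrets but was never meant to execute the fork's checkout.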
Second, cache poisoning across a trust boundary. GitHub Actions caches are shared across jobs in the same repository. The fork-triggered benchmark job saved the poisoned pnpm store under the key Linux-pnpm-store-${hashFiles('**/pnpm-lock.yaml')}, the exact key TanStack's release workflow computes when it restores dependencies. A cache written by untrusted fork code was now sitting in the path of a trusted publish workflow.
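In `actions/cache` terms, the overlap looks roughly like this (a hypothetical fragment, not TanStack's actual config): if both the benchmark and release workflows declare the same key expression, whichever job saves first wins, and GitHub restores that entry for every later job that computes the same key.

```yaml
# Hypothetical sketch: the same cache step in two different workflows.
# A fork-triggered save under this key is restored by the release job.
- uses: actions/cache@v4
  with:
    path: ~/.pnpm-store
    key: Linux-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
```

Nothing in the cache key encodes *which workflow* or *which trust level* wrote the entry; the key is just a string, and identical strings collide.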
Third, OIDC token extraction from runner memory. When a legitimate push to main triggered release.yml, it restored the poisoned cache. Attacker-controlled binaries were on disk. They located the GitHub Actions Runner.Worker process via /proc/*/cmdline, read /proc/$pid/maps and /proc/$pid/mem, and pulled the OIDC token straight from the worker's heap. Then they POSTed directly to registry.npmjs.org. Tests failed. The workflow's publish step never executed. npm received 84 valid, signed, provenance-attested packages anyway.
What SLSA Actually Proves
This is the first documented case of a malicious npm package shipping with valid SLSA Build Level 3 provenance.
The Sigstore attestations on the compromised versions are real. They correctly attest that the packages were built and published by release.yml running on refs/heads/main in TanStack/router. That is all true. SLSA verifies that a specific build process produced a specific artifact. It does not verify that the code being built was clean.
The poisoned pnpm-store entry got pulled into a legitimate workflow run on the legitimate main branch. Sigstore signed exactly what it was supposed to sign.
Snyk called it "the first documented case of a malicious npm package carrying valid SLSA provenance." The TanStack hardening team, in their follow-up co-authored by Sarah Gerrard, Corbin Crutchley, Jack Herrington, Florian Pellet, and Harry Whorlow, wrote something sharper: "we can say that npm provenance, SLSA, OIDC, and 2FA all worked as advertised and still didn't stop this attack."
That sentence should make every team that adopted OIDC trusted publishing as their supply chain defense reread their workflow files tonight.
A Worm That Signs Its Own Work
The most dangerous detail isn't the initial compromise. It's what the payload did next.
The worm used stolen OIDC tokens to enumerate every npm package the compromised maintainer could publish. It injected a specific fingerprint into each package.json:
"optionalDependencies": {
"@tanstack/setup": "github:tanstack/router#79ac49ee..."
}
That entry points to an orphan commit on a fork. npm resolves the git dependency, installs bun, and runs the prepare script: bun run tanstack_runner.js && exit 1. The && exit 1 makes the optional dependency "fail" so npm silently discards it after the payload executes. The worm repacked each tarball and republished under the victim's identity. Then it used generateKeyPairSync and sign to forge Sigstore-compatible in-toto provenance attestations for every republished package. Secondary victims' packages also carried valid-looking provenance badges. The SLSA failure mode propagated with the worm.
Persistence went beyond npm. The payload wrote copies of itself into Claude Code's hook directory (.claude/settings.json) and VS Code's task runner (.vscode/tasks.json). Uninstalling the npm package didn't remove it. Security researcher Nicholas Carlini identified a dead man's switch: a system service that checks whether a stolen GitHub token has been revoked, and if it has, runs a recursive disk wipe. Microsoft Threat Intelligence found geofencing in the Mistral AI variant: Russian-language hosts skipped execution; Israeli or Iranian locale hosts faced a 1-in-6 probabilistic rm -rf /.
Commits to victim repositories were authored as claude@users.noreply.github.com, spoofing the legitimate Claude Code GitHub App. In repos where Claude Code is an approved integration, those commits blended into normal activity.
I run npm install in CI for three repos across two servers. My Dependabot PRs have been sitting with zero CI checks for over ten days. I don't use pull_request_target, but I've covered Clinejection, the chardet license dispute, GitHub's agentic security gaps, and the Axios account hijack since March. Each was a different entry point. This one broke the defense model itself.
The Supply Chain Defense Gap Nobody's Adding Up
The instinct after an incident like this is to focus on remediation. Rotate credentials, pin cache keys, drop pull_request_target, move on. TanStack's hardening post does all of that and does it well. Linsley's framing is right: "we'd rather focus on hardening our processes than try to shift blame."
But the structural problem isn't about TanStack. Any npm package using OIDC trusted publishing with shared caches and fork-triggered workflows has this vulnerability class. The attack didn't exploit a bug in SLSA or Sigstore. It exploited a gap between what provenance attestation promises and what people assume it guarantees.
SAST sees source code. DAST sees the running app. SCA flags known-vulnerable dependency versions. None of them reason about whether two GitHub Actions workflows share a cache that crosses a trust boundary. That class of bug lives in the architecture, not the code. No scanner finds it because no scanner is designed to look.
The ecosystem spent years converging on "verify provenance" as the supply chain defense. SLSA is the framework. Sigstore is the infrastructure. npm adopted OIDC trusted publishing as the passwordless mechanism the ecosystem recommended. Every one of those is a real improvement over the long-lived token model that prior compromises exploited. And every one of them verified every malicious TanStack package as legitimate.
When Provenance Helps, When It Doesn't
SLSA provenance is still worth having. It narrows the attack surface. It enables post-incident forensics. Socket's six-minute automated detection used provenance metadata to identify anomalies before any human reviewed the artifacts. Provenance made the incident response faster, not slower.
The failure isn't provenance itself. It's treating provenance as the final check instead of one layer. Provenance answers "who built this?" and "what process produced it?" It does not answer "was the build environment compromised during the run?" That second question requires behavioral analysis at install time: flagging anomalous lifecycle hooks, obfuscated payloads, and unexpected network calls regardless of who signed the package.
If your CI/CD pipeline uses OIDC trusted publishing, go read your workflow files tonight. Specifically:
- Does any workflow use pull_request_target? If it checks out fork code, that's the same vulnerability class.
- Does your release workflow share a cache with any fork-triggered job? Pin cache keys per workflow, or use separate cache scopes.
- Does your release workflow set id-token: write? That permission is the publishing credential. Treat the workflow that holds it like you'd treat a production secret, because it is one.
- Do you verify provenance on packages you install, or just on packages you publish?
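Two of those checks can be closed with a few lines of workflow config. A hypothetical hardening sketch (names illustrative): prefix the cache key so fork-triggered jobs and the release job can never share entries, and grant `id-token: write` to the publishing job alone rather than the whole workflow.

```yaml
# Hypothetical hardening sketch, not TanStack's actual config.
# 1. Scope the cache key per workflow so a fork job can't seed it.
- uses: actions/cache@v4
  with:
    path: ~/.pnpm-store
    key: release-${{ runner.os }}-pnpm-${{ hashFiles('**/pnpm-lock.yaml') }}
```

```yaml
# 2. Default to no permissions; grant the OIDC credential per job.
permissions: {}
jobs:
  publish:
    permissions:
      id-token: write      # the publishing credential lives here and only here
      contents: read
```

Neither fragment would have stopped the memory-scraping step on its own, but together they remove the path by which untrusted code reaches the job that holds the token.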
Provenance tells you an artifact was built by the pipeline it claims. It does not tell you the pipeline was trustworthy when it ran.