What Happened

On March 31, 2026, Anthropic accidentally published the complete source code of Claude Code — its most commercially significant product — to a public software registry called npm.

npm is a global package registry where developers publish software libraries and applications for others to download and use. In what should have been a routine package release, Anthropic included a file that was never meant for production distribution.

That file was a source map — a debug artifact automatically generated by the Bun build tool. Source maps are internal development aids. Their purpose is simple: they map minified or compressed production code back to the original readable source so developers can debug problems efficiently.
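For illustration only (the file names and URL here are hypothetical), a bundler-generated source map is a small JSON file referenced from the end of the minified bundle, and its fields can carry internal paths or URLs. Which field carried Anthropic's archive link is not public, so this is a sketch of the mechanism, not the actual file:

```text
// tail of cli.min.js — the minified bundle points at its map
//# sourceMappingURL=cli.min.js.map

// cli.min.js.map — source map (version 3) with an internal reference
{
  "version": 3,
  "file": "cli.min.js",
  "sources": ["https://storage.example.com/builds/claude-code-src.zip"],
  "mappings": ";;AAAA..."
}
```

The `//# sourceMappingURL=` comment and the `version`/`file`/`sources`/`mappings` fields are standard source map conventions; anyone who fetches the bundle can follow them.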

In this case, the source map should never have been present in the published npm package. But it was. And inside that file was a direct link to a ZIP archive hosted in Anthropic's cloud storage.

That archive contained the full Claude Code codebase: roughly 512,000 lines of code across 1,900 files, exposed without protection. Anyone who downloaded the public package could follow the link, retrieve the archive, and inspect Anthropic's proprietary source code in full.

Important Distinction

This was not a production runtime breach. It was a release integrity failure. The code was not hacked out of Anthropic's environment. It was packaged, published, and exposed through the organisation's own release process.

The Immediate Technical Cause

The immediate root cause was a missing rule in .npmignore — the file that tells the packaging process what must be excluded from a public npm release.

Nobody added *.map to the exclusion list. Bun generated the source map automatically. The release process failed to detect it. The package was published. And the debug artifact carried a live path to a cloud-hosted archive containing the entire source tree.
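Two standard guards, sketched below with illustrative file names, would have made the exclusion explicit rather than dependent on human memory: a deny rule in .npmignore, or — stronger — an allowlist via the `files` field in package.json, which publishes only what is named:

```text
# .npmignore — deny debug artifacts explicitly
*.map
*.tsbuildinfo

# package.json — or invert the model: allowlist only what ships
{
  "files": ["dist/cli.js", "README.md"]
}
```

The allowlist approach fails closed: an artifact type nobody anticipated is excluded by default instead of included by default. Either way, `npm pack --dry-run` prints the exact file list before anything leaves the machine.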

That is the technical sequence. But stopping at the technical sequence misses the real problem.

What a Hardened Production Release Should Look Like

A mature production deployment pipeline should not rely on one file or one human memory point to prevent exposure. It should enforce layered controls across commit, build, packaging, publication, and monitoring.

Required Pipeline Controls
  1. Developer pushes code
  2. Pre-commit hooks (Gitleaks, lint .npmignore)
  3. CI: SAST + dependency scan
  4. CI: build with hardened config (source maps disabled)
  5. CI: package content inspection (OPA / custom script)
  6. CI: unpack artifact + secrets scan on output
  7. IaC policy check (bucket ACLs, storage permissions)
  8. Publish to registry
  9. Post-publish: registry diff monitor + SIEM alert

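A minimal sketch of the "package content inspection" step, assuming the candidate artifact has already been unpacked into a directory in CI. The banned patterns here are illustrative assumptions, not a complete policy:

```python
import re
import sys
from pathlib import Path

# Patterns that should never appear in a public release artifact.
# Illustrative only: a real gate would load this list from policy,
# not hard-code it in the script.
BANNED_GLOBS = ["*.map", "*.tsbuildinfo", ".env*"]
BANNED_CONTENT = re.compile(rb"sourceMappingURL|BEGIN (RSA|OPENSSH) PRIVATE KEY")

def inspect_artifact(root: str) -> list[str]:
    """Return a list of violations found in an unpacked package directory."""
    violations = []
    root_path = Path(root)
    for pattern in BANNED_GLOBS:
        for hit in root_path.rglob(pattern):
            violations.append(f"banned file: {hit.relative_to(root_path)}")
    for file in root_path.rglob("*"):
        if file.is_file() and BANNED_CONTENT.search(file.read_bytes()):
            violations.append(f"banned content in: {file.relative_to(root_path)}")
    return violations

if __name__ == "__main__" and len(sys.argv) > 1:
    problems = inspect_artifact(sys.argv[1])
    for p in problems:
        print(p)
    sys.exit(1 if problems else 0)  # non-zero exit fails the CI job
```

Note that the content scan also catches a dangling `sourceMappingURL` comment in a shipped bundle, not just the map file itself — the pointer is a leak vector even when the map is absent.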
If these controls existed in full and were correctly scoped, this artifact should have been blocked long before public publication.
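The post-publish monitor can be equally simple in principle: pull the file list the registry actually serves (e.g. by unpacking the tarball that `npm view <pkg> dist.tarball` points at — the command is real, but the wiring here is an assumption) and diff it against the manifest the release was approved with:

```python
def audit_published_files(published_files: list[str],
                          approved_manifest: set[str]) -> list[str]:
    """Return files the registry is serving that were never approved.

    `published_files` would come from unpacking the published tarball;
    `approved_manifest` is the file list signed off at release time.
    Both are assumed to use paths relative to the package root.
    """
    return sorted(f for f in published_files if f not in approved_manifest)
```

Anything this returns — such as an unexpected `cli.min.js.map` — should raise a SIEM alert immediately, because at this stage the exposure is already public and the response clock is running.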

The Uncomfortable Reality

Anthropic is not a small company improvising its engineering controls. At its scale and public profile, it is reasonable to assume the organisation already had substantial security tooling in place — SAST in CI/CD, dependency scanning, secrets scanning such as TruffleHog or Gitleaks, and infrastructure-as-code policy frameworks.

The issue was likely not the absence of tools but the definition of control boundaries: the scanning scope simply did not cover this artifact type. That distinction matters.

Key Insight

Tool presence is not control effectiveness. A mature stack can still fail if policy scope, pipeline rules, and ownership boundaries do not cover the actual exposure path.

Why This Is a Governance Failure

This class of exposure had reportedly happened before: an issue of the same category occurred at Anthropic in early 2025. It was fixed as a one-time incident, but no enduring policy was written, no automated guardrail was introduced, and no formal risk treatment appears to have been embedded into a lasting control objective.

That means the risk was addressed tactically and then allowed to persist structurally.

A one-time patch is not a control. A risk that keeps returning through the same class of failure was never actually governed.

What Was Really Lost

No customer data was reportedly exposed. But that does not reduce the seriousness of the incident. What was exposed appears to have included:

  • Anthropic's proprietary application logic
  • Internal system prompts
  • Architectural patterns and implementation details
  • 44 unreleased product features
  • Signals about product roadmap direction and strategic intent

That is not just code loss. That is competitive intelligence loss, product strategy exposure, security research acceleration, and brand credibility erosion. Security researchers reportedly identified exploitable vulnerabilities from the exposed code within days.

The GRC Root Cause

This is a governance failure masquerading as a technical miss. The technical trigger was small. The control design weakness behind it was not.

  1. Was there a secure release checklist? If yes, why did it not require artifact content review? If no, the organisation had a policy gap in release governance.
  2. Was the 2025 similar incident captured in the risk register? If it was logged but not tied to permanent remediation, then residual risk was effectively tolerated without durable treatment. That is a risk ownership failure.
  3. Who owned npm release pipeline security? Without explicit RACI clarity, the control falls between teams. That is a governance accountability gap.
  4. Was IaC-as-policy scoped to cover object storage references and build artifact leakage paths? Probably not — suggesting the policy framework did not actually cover the full attack surface.
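Point 4 can be made concrete with policy-as-code over the Terraform plan. A minimal sketch follows; the resource type and `acl` field match Terraform's AWS provider plan JSON, but checking ACLs alone is a simplification — a real policy would also cover public-access-block settings, and a production setup would typically express this in a policy engine such as OPA/Conftest rather than a script:

```python
import json

# ACL values that make an S3-style bucket publicly readable.
# Illustrative list; a policy engine would own this in production.
PUBLIC_ACLS = {"public-read", "public-read-write"}

def find_public_buckets(plan: dict) -> list[str]:
    """Scan a `terraform show -json` plan for publicly readable buckets."""
    offenders = []
    for change in plan.get("resource_changes", []):
        if change.get("type") != "aws_s3_bucket":
            continue
        after = (change.get("change") or {}).get("after") or {}
        if after.get("acl") in PUBLIC_ACLS:
            offenders.append(change.get("address", "<unknown>"))
    return offenders
```

Run against every plan in CI, this turns "are our buckets public?" from a periodic audit question into a merge-blocking check.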

GRC Principle

If a risk has already happened once and the organisation responds only with a local patch instead of a policy, automation, ownership assignment, and assurance evidence — the risk remains open no matter what the incident ticket says.