Anthropic, a San Francisco-based AI company, has inadvertently disclosed the complete source code of its AI coding tool, Claude Code. The exposure resulted from a packaging error, NDTV reported.
Security researcher Chaofan Shau discovered that a 60MB source map file bundled within Claude Code’s npm package allowed the original TypeScript code to be reconstructed. Shipping a source map with a finalized software product is considered unusual; the incident occurred on Tuesday, according to the report.
The leaked source code revealed the inner workings of the AI platform, giving developers insight into features and internal architecture previously known only to Anthropic’s engineers. While the breach did not compromise user data or the core AI systems, it raised concerns about the tool’s security practices.
Although the exposed code does not directly jeopardize user privacy, it reveals details about how the tool is built, how it operates, and what security protocols it uses. The incident underscores the importance of robust security measures in AI development and software packaging.
Source: mint – technology