Anthropic Identifies Three Bugs Behind Claude Code Performance Decline

This article was generated by AI and cites original sources.

Anthropic has acknowledged user complaints that its Claude Code tool suffered a performance decline, releasing a post-mortem that identifies three issues affecting Claude Code and related components. The company states all problems are fixed in v2.1.116+ and has reset usage limits for all subscribers.

The Issues Identified

In a post on X (formerly Twitter), ClaudeDevs, an account affiliated with Anthropic, stated that over the past month some users reported Claude Code’s quality had declined. The company investigated and published details of three issues it found. ClaudeDevs noted the issues were tied to Claude Code and the Claude Agent SDK, which also impacted Cowork because it runs on the SDK. The company clarified that “the models themselves didn’t regress” and “the Claude API was not affected.”

User complaints, shared on platforms including Reddit and X, described slower responses and degraded output. Users reported Claude Code taking minutes to respond to simple requests and described the tool as feeling “superficial” or no longer trustworthy. Some users raised concerns about context handling, including behavior tied to the “1m context limit” when tasks were combined in the same thread.

Three Changes Behind the Decline

Anthropic traced the reports to three changes rolled out between early March and mid-April.

First change: In early March, Anthropic lowered the default “reasoning effort” from high to medium to reduce long wait times that made the UI appear frozen for some users. The company later determined this made the model feel less capable to users and rolled back the change in April. Opus 4.7 now defaults to “xhigh” effort, while all other models default to “high”.
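The per-model defaults described above can be pictured as a simple lookup. This is a hypothetical sketch for illustration only; the model identifiers and function are assumptions, not Anthropic's actual configuration code.

```python
# Hypothetical sketch of the default reasoning-effort policy described above.
# Model name strings are illustrative assumptions.
EFFORT_LEVELS = ["low", "medium", "high", "xhigh"]

def default_effort(model: str) -> str:
    """Return the assumed default reasoning effort for a given model."""
    if model.startswith("opus-4.7"):
        return "xhigh"  # Opus 4.7 defaults to the highest effort level
    return "high"       # all other models default to high

print(default_effort("opus-4.7"))    # xhigh
print(default_effort("sonnet-4.5"))  # high
```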

Second change: In late March, Anthropic updated a caching optimization so that Claude’s older thinking would be cleared from sessions idle for more than an hour. A bug introduced later in March caused this pruning to run repeatedly during longer sessions rather than only once after inactivity. According to the company, this made the AI appear forgetful, repetitive, and inconsistent in its coding decisions. The bug was patched on April 10.
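One way to picture the difference between the buggy and intended pruning behavior is a timestamp check against the wrong reference point. This is a minimal sketch under that assumption; Anthropic did not publish the actual code, and all names here are hypothetical.

```python
IDLE_THRESHOLD = 60 * 60  # one hour, per the behavior described in the post-mortem

class Session:
    """Hypothetical session holding older 'thinking' context blocks."""

    def __init__(self, now: float):
        self.started_at = now
        self.last_active = now
        self.thinking = []  # accumulated older context

    def add_thinking(self, block: str, now: float):
        self.thinking.append(block)
        self.last_active = now

    def prune_buggy(self, now: float):
        # Bug pattern: measures session age instead of idle time, so any
        # session running longer than an hour loses context on every check,
        # even while the user is actively working.
        if now - self.started_at > IDLE_THRESHOLD:
            self.thinking.clear()

    def prune_fixed(self, now: float):
        # Intended behavior: clear only after a full hour of inactivity.
        if now - self.last_active > IDLE_THRESHOLD:
            self.thinking.clear()
```

With this sketch, a session started at t=0 that is still active at t=3900 loses its context under `prune_buggy` at t=4000 (the session is over an hour old), while `prune_fixed` keeps it (the user was idle for only 100 seconds).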

Third change: In preparation for the launch of Opus 4.7, Anthropic added a system prompt instruction on April 16 to reduce verbosity. The prompt required text between tool calls to be under 25 words and final responses to be under 100 words.
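The word limits reported above amount to a simple length constraint on the model's output. The sketch below paraphrases those limits as a rule and a checker; the wording and function are hypothetical, not Anthropic's actual system prompt text.

```python
# Hypothetical paraphrase of the verbosity rule described in the article.
VERBOSITY_RULE = (
    "Keep any text between tool calls under 25 words. "
    "Keep the final response under 100 words."
)

def within_limits(between_tool_calls: str, final_response: str) -> bool:
    """Check a response pair against the reported 25/100-word limits."""
    return (len(between_tool_calls.split()) < 25
            and len(final_response.split()) < 100)

print(within_limits("Running the tests now.", "All tests pass."))  # True
```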

Resolution and Next Steps

Anthropic published its post-mortem on Thursday, April 24, after identifying that the quality and performance decline stemmed from these specific updates rather than regressions in the underlying models or changes to the Claude API.

Source: mint – technology