Anthropic's flagship AI model, Claude, has been at the center of controversy in recent weeks, with developers and power users reporting a decline in its performance. The issues, which some users described as "AI shrinkflation," included reduced sustained reasoning, more frequent hallucinations, and wasteful token usage.
Causes of Degradation
The company has now identified three specific changes to the model's harness as the primary causes of the degradation: a shift in the default reasoning effort from high to medium, a caching logic bug, and limits on system prompt verbosity.
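The report does not detail the mechanics of these changes, but the general failure mode they describe is familiar: a configuration default that quietly shifts, combined with a cache key that omits a setting which affects the output. The sketch below is a minimal, purely hypothetical Python illustration of that pattern; the names (HarnessConfig, reasoning_effort, buggy_cache_key) are invented for illustration and are not Anthropic's actual harness code.

```python
# Purely illustrative sketch -- all names here are hypothetical and do not
# come from Anthropic's codebase.
from dataclasses import dataclass
from hashlib import sha256

@dataclass
class HarnessConfig:
    # A quietly changed default (e.g. "high" -> "medium") lowers output
    # quality without any visible change to the requests users send.
    reasoning_effort: str = "medium"
    # A verbosity cap like this can silently truncate system-prompt context.
    max_system_prompt_tokens: int = 2000

def buggy_cache_key(prompt: str, config: HarnessConfig) -> str:
    # BUG (illustrative): the key ignores reasoning_effort, so a response
    # generated at one effort level can be served for a request at another.
    return sha256(prompt.encode()).hexdigest()

def fixed_cache_key(prompt: str, config: HarnessConfig) -> str:
    # Fix: include every setting that affects the output in the cache key.
    return sha256(f"{config.reasoning_effort}|{prompt}".encode()).hexdigest()
```

The general design point the sketch shows is that any parameter that changes model behavior has to be part of the cache identity; otherwise reverting a setting does not stop previously cached, lower-quality responses from being replayed.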
Impact and Resolution
Anthropic has resolved the issues by reverting the reasoning effort change and fixing the caching bug. The company has also implemented operational changes, including internal dogfooding, enhanced evaluation suites, and tighter controls, to prevent similar regressions. Subscribers have been compensated for the token waste and performance friction these issues caused.
Future Safeguards
Anthropic intends to maintain transparency with its developer base through its new @ClaudeDevs account on X and through GitHub threads. The company says it will share more of the reasoning behind future product decisions, keeping users informed and involved in the development process. As the AI landscape continues to evolve, Anthropic's commitment to transparency and user trust will be crucial to maintaining its position as a leader in the industry.