Anthropic Faces User Complaints Over Claude Performance Degradation
Key Takeaways
- Developers report performance degradation in Claude models.
- Allegations include throttling and reduced reasoning capabilities.
- Concerns about trust and model reliability have surfaced.
Developers and power users are increasingly voicing concerns about the performance of Anthropic's Claude Opus 4.6 and Claude Code. Users have taken to GitHub, X, and Reddit to report that the models feel less reliable at sustained reasoning, often abandoning tasks and producing more errors than before. These complaints describe a phenomenon dubbed "AI shrinkflation," in which customers perceive they are receiving less for the same cost. While Anthropic denies intentionally degrading performance, the company has acknowledged real adjustments to usage limits that affect how users interact with its models.
The implications of these complaints extend beyond user experience, pointing to potential issues with the scalability and reliability of AI models under high demand. The narrative escalated when Stella Laurenzo, a Senior Director at AMD, published a data-backed analysis showing performance drops in Claude Code. As users demand transparency about the alleged regressions, the incident raises pressing questions about trust and the viability of AI models in complex workflows, which may shape future user adoption and investment in such technologies.