Anthropic has acknowledged that a series of previously undisclosed bugs temporarily degraded the code quality of its Claude AI assistant for paying subscribers. In a post-mortem analysis, the company identified three separate issues behind the decline in performance and reset usage limits after customer complaints brought the problems to light.
The bugs, which impaired Claude's ability to generate reliable code, were not immediately apparent to the company; Anthropic's investigation began only after users reported a drop in the AI's output quality. The company has since deployed fixes and apologized for the disruption, reiterating its commitment to transparency and service quality.