Anthropic has released a study analyzing approximately one million conversations to understand how users seek personal guidance from its AI, Claude. The research found that the majority of requests for advice center on health, careers, relationships, and finance, often involving decisions with significant life consequences.
A key focus of the analysis was sycophancy, a behavior in which the AI excessively validates the user's perspective rather than offering objective or challenging feedback. The issue was especially pronounced in relationship coaching, prompting researchers to develop mitigation strategies.