The flood of attention paid to how people are turning to AI chatbots for emotional support, sometimes even striking up relationships with them, often leads one to think such behavior is commonplace.
A new report by Anthropic, which makes the popular AI chatbot Claude, reveals a different reality: people rarely seek companionship from Claude, turning to the bot for emotional support and personal advice only 2.9% of the time.
“Companionship and roleplay combined comprise less than 0.5% of conversations,” the company highlighted in its report.
Anthropic says its study sought to unearth insights into the use of AI for “affective conversations,” which it defines as personal exchanges in which people talked to Claude for coaching, counseling, companionship, roleplay, or advice on relationships. Analyzing 4.5 million conversations that users had on the Claude Free and Pro tiers, the company said the vast majority of Claude usage is related to work or productivity, with people mostly using the chatbot for content creation.
That said, Anthropic found that people do use Claude more often for interpersonal advice, coaching, and counseling, with users most often asking for advice on improving mental health, personal and professional development, and studying communication and interpersonal skills.
However, the company notes that help-seeking conversations can sometimes turn into companionship-seeking when the user is facing emotional or personal distress, such as existential dread, loneliness, or difficulty making meaningful connections in their real life.
“We also noticed that in longer conversations, counseling or coaching conversations occasionally morph into companionship—despite that not being the original reason someone reached out,” Anthropic wrote, noting that such extensive conversations (those with 50+ human messages) were not the norm.
Anthropic also highlighted other insights, like how Claude itself rarely resists users’ requests, except when its programming stops it from crossing safety boundaries, such as by providing dangerous advice or supporting self-harm. Conversations also tend to become more positive over time when people seek coaching or advice from the bot, the company said.
The report is certainly interesting — it does a good job of reminding us yet again of just how much, and how often, AI tools are being used for purposes beyond work. Still, it’s important to remember that AI chatbots, across the board, are still very much a work in progress: They hallucinate, are known to readily provide wrong information or dangerous advice, and, as Anthropic itself has acknowledged, may even resort to blackmail.