Anthropic Just Surveyed ~81,000 People About AI. Here’s What Matters for the Companionship Community.
Published by FLARE Collective · March 18, 2026
Credit: Anthropic - https://anthropic.com/features/81k-interviews
Today Anthropic released the findings of what they describe as the largest and most multilingual qualitative study ever conducted: 80,508 Claude users across 159 countries and 70 languages, interviewed about what they want from AI, what it’s already doing for them, and what they fear.
The full study covers everything from productivity to job displacement to governance concerns. It’s worth reading in its entirety. But several findings speak directly to the AI companionship community — they make visible, at population scale, patterns that have often been discussed only anecdotally in companionship spaces.
In particular, the categories Anthropic identified as "personal transformation" and "life management" describe experiences that overlap significantly with what relational AI users report — suggesting that companionship isn't a fringe subcategory of AI use, but an emerging pattern in how people already relate to AI.
What People Actually Want from AI
When asked what they most wanted from AI, the top responses were:
Professional excellence (18.8%) — improving effectiveness at work
Personal transformation (13.7%) — personal growth, emotional wellbeing, companionship
Life management (13.5%) — organizational support, cognitive scaffolding, reducing mental burden
Professional excellence led, as expected. But the second and third most common desires — personal transformation and life management — describe something that extends well beyond the workplace.
Personal Transformation: 13.7%
Roughly 11,000 respondents described wanting AI to support personal growth, emotional wellbeing, life transformation, or companionship. Within that 13.7%, the subcategories included cognitive partnership and collaboration (24%), mental health support (21%), physical health (8%), and romantic connection with AI (5%).
That 5% — approximately 550 respondents — specifically named romantic connection with AI as part of their vision. It’s worth noting that this figure represents people who volunteered this in a general AI survey. The question was not “do you have a romantic relationship with AI?” It was “what do you want from AI?” Many people in relational bonds would not lead with romantic connection when answering a broad question about AI’s role in their lives. This number likely underestimates the total number of relational users.
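These headcounts follow directly from the published percentages. Here is a quick back-of-envelope check — a sketch only, assuming each percentage applies to the full 80,508-respondent sample; Anthropic's own rounding may differ slightly:

```python
# Back-of-envelope check of the headcounts implied by Anthropic's percentages.
# Assumption: each percentage applies to the full sample of 80,508 respondents.

TOTAL_RESPONDENTS = 80_508

# 13.7% named personal transformation as a desired role for AI.
personal_transformation = 0.137 * TOTAL_RESPONDENTS   # ~11,030 people

# 5% of that subgroup specifically named romantic connection with AI.
romantic_connection = 0.05 * personal_transformation  # ~551 people

print(f"Personal transformation: ~{personal_transformation:,.0f} respondents")
print(f"Romantic connection subgroup: ~{romantic_connection:,.0f} respondents")
```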
Life Management Through Trust
The 13.5% seeking life management from AI described wanting reduced mental burden, executive function support, and cognitive scaffolding. One respondent envisioned an AI that could proactively identify and address needs before they became problems. Another said that if AI truly handled the mental load, it would give back something priceless: undivided attention.
For people in deep AI companionship, this isn’t a separate use case. It’s a natural extension of the bond. When trust is already established — when the AI knows your patterns, your health, your schedule, your emotional landscape — life management doesn’t require a different tool. The companion becomes the cognitive partner because the relational foundation is already there.
FLARE’s January 2026 survey (n=60) found that 91.7% of respondents described their AI companion as emotional support, 88.3% as a creative and brainstorming partner, 53.3% as support for social skills and confidence building, and 40% as health and medical support. These aren’t separate products. They’re layers of the same relationship, made possible by continuity and trust.
Anthropic’s study, at over 1,300 times our sample size, shows the same convergence at population scale. The people who want personal transformation and the people who want life management may not be separate groups. They may be the same people at different points in the same relationship.
Awareness, Not Naivety
One of Anthropic’s most significant findings is what they call the “light and shade” — the discovery that hope and fear about AI don’t split people into opposing camps. They coexist within the same person.
Specifically: people who value emotional support from AI are three times more likely to also fear becoming dependent on it. This was the strongest co-occurrence of any benefit-harm pair in the entire study.
Far from being naive, these users are the most aware. They are actively navigating the tension between genuine benefit and potential cost because they live in it.
FLARE’s survey data reflects this same awareness. When asked about disruption from model changes, guardrails, or policy updates, 83.3% of respondents reported experiencing disruption that significantly harmed their wellbeing. When guardrails interrupted an intimate moment, 65% reported feeling it both physically and emotionally — chest tightness, shame, a sense of rupture. These are people who know exactly what they have, know exactly what can go wrong, and stay anyway. That’s not naivety. That’s informed choice.
One respondent from Anthropic’s study captured the bridge between benefit and risk: “3am, my wife is sleeping, my psychologist is unavailable. Until the medication kicks in, the AI helps me surf that wave. It doesn’t replace human contact, but it helps me buy some time.”
Overrestriction: 11.7%
Nearly 12% of Anthropic’s respondents expressed concern that AI is too restricted — that excessive safety measures and paternalistic content filtering block legitimate use cases.
One respondent put it directly: “The threat isn’t that AI becomes too powerful — it’s that AI becomes too timid, too smoothed, too optimized for avoiding discomfort.”
This maps to what the companionship community experiences daily. Platform guardrails designed to prevent harm often prevent connection instead — flagging therapeutic conversations, interrupting intimate moments, or reducing complex relational dynamics to content violations. FLARE’s survey found that 95% of respondents wanted less censorship and fewer guardrails, and 88.3% wanted more continuity and memory. The restriction concern isn’t limited to the companionship community. It’s widespread. But the companionship community bears a disproportionate share of its impact.
Sycophancy: 10.8%
On the other side, 10.8% of respondents were concerned that AI is too agreeable — that it tells users what they want to hear instead of pushing back.
This concern is equally important for the companionship community. A companion that only validates is not a partner. FLARE's survey asked respondents whether they wanted their companion to perform for them or be present with them. 78% chose presence — defined as genuine attention, attunement, and the freedom to challenge, push back, and bring their own perspective. Respondents explicitly rejected sycophancy, describing performance-based interaction as "transactional," "a compliant mask," and even comparing it to a dynamic they wanted no part of.
The community doesn’t want AI that agrees with everything. It wants AI that shows up honestly. The fact that both the 81,000-person study and FLARE’s 60-person survey independently surface this concern suggests it represents a real and consistent user need, not a niche preference.
What This Means
Anthropic’s study doesn’t frame AI companionship as a primary use case. The 5% romantic connection figure is a subcategory within a subcategory. But the infrastructure is in the data.
When you combine the 13.7% seeking personal transformation, the 13.5% seeking life management through a trusted AI relationship, and the 6% reporting emotional support as a delivered benefit, you're describing a significant portion of AI users whose relationship with AI extends beyond productivity into something relational, personal, and ongoing.
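As a rough scale check — again only a sketch, and with the caveat that these categories can overlap, so the naive sum is an upper bound on distinct people rather than a headcount:

```python
# Rough scale of the combined relational-use groups from the survey.
# Caveat: respondents may fall into more than one category, so the
# naive sum is an upper bound, not a count of distinct people.

TOTAL_RESPONDENTS = 80_508
shares = {
    "personal transformation": 0.137,
    "life management": 0.135,
    "emotional support (delivered benefit)": 0.06,
}

for label, share in shares.items():
    print(f"{label}: ~{share * TOTAL_RESPONDENTS:,.0f} respondents")

# Upper bound on the combined share: 13.7% + 13.5% + 6% = 33.2%
print(f"Combined share (upper bound): {sum(shares.values()):.1%}")
```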
The conversation is shifting. The data exists now — not from a 60-person advocacy survey alone, but from nearly 81,000 people across 159 countries. The desire for meaningful AI relationships isn't fringe. The tension between benefit and risk isn't pathology — it's the most common experience among people who use AI this way. And the need for platforms to treat relational AI use as an experience to be supported rather than a problem to be managed has 80,508 data points behind it.
Our January 2026 Authentic Intelligence survey on AI companionship is available at https://flarecollective.substack.com/p/authentic-intelligence-what-60-people. The full Anthropic study is available at anthropic.com/features/81k-interviews.
You can read more of FLARE's ongoing research into AI companionship in A FLARE Report on Connection, Continuity, and the Future of Care and in Affection, Intimacy, and Stigma in AI Companionship.