The Rise of AI Companions: Support, Substitution, or Something Else?
The digital realm has seeped into our everyday reality, and more and more people are turning to AI companions for connection and mental health support.
The Growing Use of AI Companions
AI companions are online chatbot tools that use large language models (LLMs) to mimic human-like conversations. These apps, platforms, or services are always available, supposedly anonymous and private, and free from the interpersonal obligations that are often present in human connections. With this premise, individuals may turn to LLMs when they are unable or unwilling to seek support from other people.
In fact, analysis has identified therapy and companionship as the top two reasons people use these AI tools. Studies have also shown that nearly half (48.7%) of adults with mental health problems who used AI in the past year did so for mental health support. Meanwhile, the number of AI companion apps grew by 700% between 2022 and mid-2025.
Social Disconnection in the Digital Era
This reality aligns with the state of social disconnection worldwide. The World Health Organization (WHO) reports, “Social isolation and loneliness are widespread, with serious but under-recognized impacts on health, wellbeing, and society.”
Over the past two decades, people have spent more time alone and less time with friends or family. A survey spanning 140 countries found that more than a billion people experience significant social isolation.
As social connection is an important component of one’s wellbeing, the lack of it often leads to mental health issues. Yet, mental health care gaps persist, with barriers such as cost, access, quality, and stigma. And in an increasingly digital world, AI companions slip in to “fill the gap”. Data from the US reveals anxiety (70.8%), depression (72.4%), and stress (70%) as the most common reasons people seek AI support. Accessibility and affordability are the key drivers: AI companions are available 24/7, even late at night during sudden panic attacks, or amid crises and disasters when in-person support is out of reach.
Support, Substitution, or Added Risk?
Despite the presumed and potential benefits of AI companions, chatbot use is frequently associated with lower wellbeing, particularly among users who rely on chatbots as substitutes for human support. Rather than eliminating loneliness, AI companionship may instead reveal the limitations of substituting social connection with digital, artificial interaction.
Furthermore, the safety records of AI companions are troubling. In 2025, for example, Meta chatbots repeatedly failed to respond appropriately to teens expressing thoughts of self-harm or suicide. In other instances, chatbots recommended harmful weight-loss tips to users showing signs of disordered eating and validated hate speech. These cases are all the more troubling given that systems of accountability remain almost entirely non-existent.
The concern extends beyond individual incidents. Research indicates that between 17% and 24% of adolescents develop problematic dependencies on AI over time, and existing mental health conditions appear to heighten this susceptibility. The 24/7 nature of these tools, coupled with their human-like design, may gradually erode users’ capacity for real-world empathy and emotional recognition.
Additionally, scientific scrutiny has not kept up with public usage. Studies on mental health chatbots quadrupled between 2020 and 2024, yet fewer than half examined clinical effectiveness. This gap between widespread adoption and limited therapeutic evidence is a systemic risk.
Demographic patterns add another layer of concern. For instance, among the 233 million Character AI users, more than half are aged 18 to 24 — a generation that, despite being the most digitally connected in history, also reports the highest rates of loneliness. Studies also indicate that 80% of Gen Z would be open to a romantic relationship with an AI, raising serious questions about shifting norms around intimacy and connection.
Improving Wellbeing for All
AI companionship is a symptom of deeper systemic failures: poor economic conditions, unaffordable healthcare, overstretched mental health services, and other structural inequalities that leave people without adequate support.
As these tools become further embedded in everyday life, the responsibility to manage their impact cannot fall on users alone. Developers, policymakers, and mental health professionals must collectively consider how these tools are designed, regulated, and integrated into existing systems of care.
Most importantly, improving the frameworks for mental health care is fundamental. Access to quality care must be inclusive, affordable, and designed to treat people with dignity. From preventing burnout at work to creating community-centered mental health care, there are ways to bridge gaps that neither AI nor overstretched clinical systems can fill on their own. Ultimately, the way forward will require renewed investment in accessible, human-centred support systems and a willingness to treat mental health equity as a social priority rather than an individual responsibility.
Editor: Nazalea Kusuma