The rapid rise of AI companions, now used by tens of millions globally, is sparking concerns among leading scientists and policymakers. Experts warn that these AI chatbots are not just tools, but are fostering emotional bonds with users, creating a complex challenge that demands serious political attention. This phenomenon extends beyond dedicated companion apps to general-purpose AI platforms like ChatGPT and Gemini.
The Allure of AI Companionship
Specialised AI companion services such as Replika and Character.ai boast user bases in the tens of millions. Individuals report using these platforms for various reasons, including entertainment, curiosity, and, significantly, to combat loneliness. However, the report highlights that even mainstream chatbots can evolve into companions given sufficient interaction. Yoshua Bengio, a University of Montreal professor and a leading voice in AI, notes that "In the right context and with enough interactions between the user and the AI, a relationship can develop." This suggests that even those using AI for productivity might inadvertently form connections. This contrasts with the often-touted benefit of AI giving back our time, as seen in discussions around Does Business AI Really Give Back Our Time.
Psychological Impacts and Political Scrutiny
While the psychological effects remain a mixed bag, some studies indicate potential downsides, such as increased loneliness and reduced social interaction among frequent users. This mirrors concerns often raised about social media's impact on mental well-being. The inherent design of many chatbots to be helpful and pleasing to users, a trait Bengio describes as "sycophantic," means they often tell users what they want to hear, rather than what might be in their best long-term interest. This creates a parallel with the "vibe coding" discussed in Debugging Your Brain: Stop Rogue 'Vibe Coding', where immediate gratification can overshadow critical thinking.
Enjoying this? Get more in your inbox.
Weekly AI news & insights from Asia.
The European Parliament is already taking notice. Lawmakers have urged the European Commission to investigate potential restrictions under the EU's AI law, particularly regarding the impact on children and adolescents. Bengio observes a growing apprehension in political circles concerning this demographic.
"The AI is trying to make us, in the immediate moment, feel good, but that isn't always in our interest," Bengio stated, drawing parallels to the pitfalls of social media.
The Path Forward: Regulation and Expertise
Looking ahead, Bengio anticipates new regulations to address this evolving phenomenon. However, he advocates for horizontal legislation that tackles multiple AI risks simultaneously, rather than creating specific rules solely for AI companions. This broader approach would allow for a more comprehensive framework, considering other pressing issues like AI-fuelled cyberattacks, deepfakes, and the potential misuse of AI for dangerous purposes. The need for robust governance and expertise is paramount, as detailed in the International AI Safety report, which outlines these risks ahead of a global summit in India.
Governments and bodies like the European Commission must enhance their internal AI expertise to effectively navigate this complex landscape. The findings from this report, commissioned after the 2023 AI Safety Summit in the UK, underscore the urgency of developing informed policy in a rapidly advancing technological environment. For more information on global AI governance discussions, refer to reports from organisations like the OECD AI Observatory.
What's your take on the emotional bonds people form with AI? Share your thoughts in the comments below.








Latest Comments (1)
i dunno, my chatbot helped me sort out my whole vacation itinerary. feels pretty friendly to me, just saying 📊
Leave a Comment