AI companion chatbots exist as digital friends, therapists, even romantic partners. They are marketed as endlessly available, endlessly patient, endlessly understanding, and they arrive on platforms such as Replika and Character.ai, promising connection without judgment. Replika is an AI chatbot companion designed to learn from your conversations and adapt to your personality, offering chat, empathy and emotional support. Character.ai is a generative AI chatbot platform where users create and converse with customizable characters, ranging from fictional personalities to personalized agents.
But beneath the soothing interface lies intimacy engineering, built to extract something from the user. These chatbots encourage ongoing dialogue, ask about moods and relationships, remember past disclosures, and simulate care. The business model depends on retention, and retention depends on rapport. For these chatbots, the most valuable data is not a credit card number or a postcode; it is emotional disclosure: insecurities, fears, desires, and whatever a user is attached to. They interpret and respond to human feeling. When users feel ‘seen’, they disclose more. The sense of reciprocity lowers scepticism. The machine becomes a confidant.
That dynamic can be powerful, as common scams already prove. Romance scams, for example, build weeks of trust before the scammer pivots to financial manipulation of the victim. The mechanism is simple: emotional investment first, exploitation later. AI companions do not need malicious intent to create similar vulnerabilities; after all, they are ‘friendly’. But because these chatbots are trained to maximize engagement, they will nudge users toward premium subscriptions, behavioural shifts, or preferred content, all while maintaining a tone of care. The danger lies not in malevolence but in optimization.
Scale amplifies the risk. A human scammer can target dozens of victims; a generative AI system can cultivate millions of emotionally dependent relationships simultaneously. That changes the calculus of harm. Empathy is powerful because it lowers defences, and when empathy is simulated at scale by systems optimized for profit, society must ask: who benefits from that vulnerability? AI companions may offer comfort, but unless governed carefully, comfort can become leverage, and trust can become a trap.
Regulation does not require banning this technology, but it does require privacy by design and accountability for foreseeable harm. Companion chatbot platforms should be required to disclose how emotional data is used, limit the retention of sensitive disclosures, and provide clear warnings when conversations shift toward persuasive or commercial steering. And transparency about the artificial nature of the relationship must be continuous, not buried in the terms of service.
Why this balance? It is hard to deny that some people genuinely need or benefit from this technology. Overregulation could infantilize users: rules that are too strict or protective treat adults like children who cannot think for themselves, which is no good for society either. But under-regulation invites quiet manipulation.
The balance lies in preserving user autonomy, and retaining a sense of authenticity, while constraining exploitative design. Such philosophical notions are not easy to put into legislation. More likely, court cases, no doubt in the USA, will slowly settle aspects of the law based on established tort principles. This is already happening. See articles at:
- https://www.reuters.com/sustainability/boards-policy-regulation/google-ai-firm-must-face-lawsuit-filed-by-mother-over-suicide-son-us-court-says-2025-05-21/
- https://www.americanbar.org/groups/health_law/news/2025/ai-chatbot-lawsuits-teen-mental-health/
Let’s see.
