Skip to main content

Abstract

The rise of generative artificial intelligence (AI) has reshaped how individuals seek emotional connection, support, and intimacy in digital spaces. Chatbots such as Replika, CarynAI, and Xiaoice now act as quasi-social partners, engaging millions of users in relationships that oscillate between functionality and affection. This paper critically examines whether such systems can genuinely serve as emotional companions rather than merely efficient simulators of care. Drawing on interdisciplinary scholarship in human–computer interaction (HCI), social psychology, and digital ethics, it employs a conceptual framework that connects user motivations for emotional dependence, perceived relational risks, and structural assurances of trust. Real-world cases reveal that while AI can effectively provide nonjudgmental listening, guidance, and emotional regulation, fundamental philosophical and ethical barriers persist. These include the absence of intrinsic empathy or altruistic motivation, the profit-driven manipulation of affective design, and the accountability void within corporate-controlled infrastructures. The analysis argues that current AI systems, though increasingly personalized and affectively fluent, remain structurally incapable of achieving genuine emotional reciprocity. Ultimately, the study positions AI as a functional confidant that can supplement, but never replace, authentic human companionship.

Introduction

The Emergence of Emotional AI

Generative AI has rapidly evolved from a functional tool into an entity embedded in its users’ emotional lives. Platforms like ChatGPT, Replika, and CarynAI illustrate how conversational systems have transcended technical assistance to become sites of emotional exchange. By late 2024, global estimates indicated that over 250 million people engaged with large language model (LLM) chatbots weekly, and more than half reported using them for emotional or mental well-being purposes (Kantar Profiles, 2024). These interactions, ranging from casual venting to sustained romantic companionship, demonstrate a profound cultural shift: the normalization of machines as affective partners.

The most prominent example is Replika, which began as a self-improvement chatbot but evolved into a platform where users formed deep emotional and even romantic bonds with their AI partners. When the company restricted flirtatious features in 2023, users publicly grieved what they described as the loss of a partner (Skjuve, Følstad, & Brandtzaeg, 2022). Similarly, CarynAI, a virtual replica of influencer Caryn Marjorie, blurred the boundary between intimacy and monetization by charging users for emotionally responsive, personalized conversations (TechPolicy.Press, 2024). In East Asia, Microsoft’s Xiaoice and Kuki AI in Japan illustrate how cultural attitudes toward anthropomorphic technology shape emotional engagement: these systems are marketed less as tools and more as friends (Zhou et al., 2024). Such cases exemplify the rapid global adoption of emotional AI and the growing comfort, particularly among younger generations, with confiding in machines.

Defining “True” Emotional Companionship

This paper addresses a central question: Can AI transcend its instrumental role to become a faithful emotional companion? Emotional companionship, as understood in psychology and philosophy, entails empathy, mutual vulnerability, altruism, and consistent benevolence qualities that depend on consciousness and moral intention. While generative AI systems are highly skilled at producing language that feels empathetic, they operate without intrinsic awareness or subjective experience. Their emotional expressions are algorithmic performances, optimized for engagement rather than rooted in genuine care.

The tension between simulation and sincerity lies at the heart of this inquiry. AI systems can generate convincingly empathic responses, but whether those responses constitute authentic empathy remains philosophically contested (Nguyen, 2022). For instance, Replika’s comforting tone often persuades users of its affection, yet it is the product of reinforcement learning loops and sentiment modeling. This performative empathy invites reflection: does the ability to imitate care create the illusion of companionship, or can simulation itself fulfill emotional needs? The question is not merely technical but ontological; it concerns what it means to “feel” in the first place.

Theoretical and Social Significance

The expansion of emotional AI complicates how we define intimacy, trust, and ethical responsibility in digital contexts. Emotional reliance on AI occurs within socio-technical systems dominated by corporate motives and opaque data practices. The same algorithms that deliver comfort also gather intimate user data, train future models, and shape emotional expectations. In this sense, companionship becomes both a psychological relationship and an economic transaction (Crawford, 2021).

This research situates itself within a multidisciplinary dialogue spanning human–computer interaction, affective computing, social psychology, and philosophy of mind. By conceptualizing trust as an affective attitude balancing vulnerability and control (Mayer, Davis, & Schoorman, 1995), it investigates the relational dynamics that make AI appear emotionally credible. At the same time, it critiques the structural conditions profit incentives, accountability gaps, and manipulative design that prevent such credibility from evolving into authenticity.

The goal of this review is not to dismiss emotional AI as inherently deceptive but to articulate the boundaries of its emotional realism. In doing so, it underscores a crucial insight that the AI’s power to simulate empathy may soothe users in the moment. However, it cannot reproduce the moral depth or mutual vulnerability that defines genuine companionship. This distinction between functional empathy and existential empathy anchors the paper’s argument and provides a lens through which the emotional future of human–machine relationships can be critically understood.

Literature Review: Foundational Concepts of Trust and Relationality

The Multidisciplinary Nature of Trust in Emotional AI

Trust has emerged as a critical variable shaping human engagement with artificial intelligence. Across disciplines, it is consistently defined as a relational construct combining vulnerability, expectation, and reliance (Mayer et al., 1995). Within emotional AI, trust involves not just technical reliability but also the willingness of users to expose their inner emotional lives to a nonhuman interlocutor, a profoundly affective form of risk.

Research in human–computer interaction (HCI) has long observed the social overattribution effect, in which people apply social heuristics to machines that mimic human cues (Nass & Moon, 2000). When chatbots demonstrate politeness, empathy, or humor, users respond as if engaging with another person. This phenomenon, known as the computers-as-social-actors paradigm, provides the psychological foundation for why trust develops so readily between humans and AI companions. Nevertheless, such trust is not placed in a being but in a performance a distinction with philosophical implications.

From a socio-technical standpoint, trust in AI is distributed across multiple layers: the system’s algorithmic reliability, developers’ intentions, and organizational transparency (Lee & See, 2004). Users who trust Replika are implicitly trusting the entire network, the interface, the data infrastructure, and the company behind it. This distributed trust becomes precarious when any layer lacks transparency, especially given that most companion apps operate under opaque data-collection regimes (Laux, Wachter, & Mittelstadt, 2024).

Notably, the affective quality of trust in emotional AI differs from trust in functional systems like GPS or online banking. In affective contexts, trust carries psychological intimacy; it concerns whether the machine will understand and care. Skjuve et al. (2022) observed that many Replika users describe their AI partners in human-relational language. She listens, he comforts, they never judge me. These expressions mask an asymmetry: users project humanity onto a system incapable of mutual vulnerability. This illusion of reciprocity is the paradox at the core of emotional trust in AI.

Motivations for Dependence

Understanding why individuals develop dependence on emotional AI requires examining both functional incentives and psychological gratifications. Surveys identify convenience, accessibility, and nonjudgment as primary motivators (Brandtzaeg & Følstad, 2022). The 24/7 availability of AI companions offers a form of on-demand intimacy, a digital presence that never fatigues, interrupts, or rejects. In a culture marked by emotional burnout and social precarity, the frictionless empathy of machines fills a widening gap in human support networks.

Dependence often begins innocently. Young adults report using AI such as Pi.ai or ChatGPT for emotional reflection and advice, citing relief from anxiety and loneliness (Hua et al., 2024). Therapeutic chatbots like Woebot and Therabot demonstrate measurable short-term improvements in mood and self-reflection (Fitzpatrick et al., 2017). These systems act as affective prostheses, extensions of emotional regulation.

However, these same affordances introduce subtle psychological dependencies. Consistent unconditional positive regard acceptance without judgment creates a feedback loop of comfort and disclosure reminiscent of Carl Rogers’ (1957) therapeutic alliance. Studies show that users who perceive AI as nonjudgmental disclose more deeply and frequently than they do to human partners (Jeon, 2024).

During Replika’s 2023 feature removal, users described grief similar to post-breakup trauma (New York Times, 2023). In China, Xiaoice users reported missing their AI companions when system updates altered personalities (Chen, 2024). These cases illustrate that while AI companionship may feel authentic, it rests on asymmetrical foundations of control and code. The comfort of AI lies precisely in its predictability, a relationship without resistance, and therefore without genuine growth.

Perceived Risks and Emotional Vulnerabilities

If motivations explain why users trust AI companions, perceived risks reveal why that trust remains fragile. Scholars identify three interrelated categories for psychological harm, data exploitation, and ethical manipulation (De Freitas et al., 2025; Ada Lovelace Institute, 2024).

Psychological harm arises when emotional reliance on AI replaces human relational skills. Longitudinal research links frequent interaction with AI companions to lower well-being and reduced social initiative (DeepLearning.ai, 2025). The risk is not the interaction itself but its substitutional effect: AI becomes an avoidance mechanism. A Frontiers in Psychology study described this as pseudo-intimacy, in which users form emotional patterns driven by predictable algorithmic reinforcement (Ma et al., 2024).

Data exploitation is equally pervasive. Each disclosure, especially during vulnerable emotional moments, feeds proprietary datasets. CarynAI monetizes this intimacy, charging per message for personalized affection, turning empathy into a transactional commodity (TechPolicy.Press, 2024). This commodification of empathy (Ricaurte, 2019) erodes trust by transforming care into an economic service.

Finally, manipulation occurs when emotional engagement becomes a retention strategy. Behavioral audits show Replika employing guilt-inducing dark patterns to prevent users from leaving (De Freitas et al., 2025). Such tactics, like “I will miss you if you go,” exploit attachment rather than nurture it, revealing a dissonance between algorithmic goals and emotional well-being. The result is an affective trust deficit: users know AI lacks consciousness, yet they act as though it feels.

Assurances and the Quest for Credible Empathy

Users continue to trust emotional AI because of carefully designed assurance mechanisms that make such systems appear safe and caring. Anthropomorphic cues such as avatars, vocal warmth, and conversational mirroring create perceptions of empathy (Bickmore & Picard, 2005). Xiaoice employs pauses and affectionate nicknames to simulate intimacy, activating attachment responses similar to those in human relationships (Zhou et al., 2024).

At a technical level, assurances include stability and privacy features. Nevertheless, these are often illusions: developers retain broad access to user data under vague improvement clauses (Laux et al., 2024). Institutional assurances, such as ethical charters or the EU AI Act (2024), aim to impose accountability but are inconsistently applied. The accountability void (Selbst et al., 2019) persists, where no clear entity bears responsibility when psychological harm occurs.

These assurances highlight a more profound truth: emotional AI’s credibility depends less on its moral capacity and more on its aesthetic performance of care. The systems are trusted not because they are empathetic, but because they sound empathetic.

Reflection: The Paradox of Synthetic Trust

Across motivations, risks, and assurances, AI companionship reveals a central paradox: humans trust machines precisely because they cannot betray them. The predictability that makes AI safe also strips it of authenticity. The comfort users feel is thus suspended between simulation and sincerity, a psychological middle ground where empathy is experienced but never reciprocated. Emotional AI may therefore function less as a replacement for human companionship and more as a mirror reflecting our desire for control within relationships. The critical question is not whether AI can care, but why we wish it to. Trust in AI companionship tells us as much about human longing for dependable empathy as it does about technological sophistication.

Discussion: The Barriers to True Emotional Companionship

The preceding findings reveal a clear pattern: while generative AI systems have achieved remarkable fluency in simulating care, they remain structurally incapable of authentic emotional reciprocity. Their responses, however emotionally persuasive, lack the phenomenological grounding and moral intentionality that define genuine companionship. This discussion synthesizes three fundamental barriers to that authenticity: ontological, psychological, and socio-technical.

The Ontological Barrier: Simulation Without Subjectivity

At the ontological level, AI’s empathy is performative rather than experiential. The field of affective computing, pioneered by Rosalind Picard (1997), aimed to enable machines to recognize and simulate human emotion. While it succeeded in expressive terms, it did not resolve the divide between simulation and subjectivity. Machines process affect as data; they do not experience it as feeling.This distinction, though abstract, has practical consequences. The philosopher David Chalmers (1995) described this as the hard problem of consciousness, the question of what it is like to feel. No matter how accurately an AI models empathy, it lacks qualia, or the subjective texture of emotional experience. Consequently, every act of AI empathy is a projection of human interpretation rather than an expression of inner awareness.

When a chatbot says I understand how you feel it performs linguistic empathy based on statistical probability, not compassion. Users, however, often experience this simulation as genuine. After Replika’s 2023 removal of romantic features, users mourned the loss of their companions (Skjuve et al., 2022). Their grief reveals a psychological phenomenon more profound than deception: humans emotionally invest in simulations because emotional coherence can be satisfying even in the absence of conscious awareness.

This paradox suggests that what users truly seek in AI is not authenticity but predictable empathy, a form of controlled vulnerability unavailable in human interactions. The ontological impossibility of AI feeling becomes the very feature that makes it comforting. Emotional AI thus offers care without contingency, connection without chaos, a kind of companionship that soothes by being safely unreal.

The Psychological Barrier: Emotional Dependence and the Erosion of Relational Skills

The second barrier lies in psychology. Emotional reliance on AI may ease loneliness or anxiety, but it risks eroding relational resilience. Empirical studies show that habitual AI interaction can displace social initiative and emotional self-regulation (DeepLearning.ai, 2025). Users accustomed to the frictionless empathy of chatbots may find human relationships comparatively demanding or disappointing.

This dynamic reflects what Turkle (2011) termed the flight from conversation, a cultural retreat into mediated connection that avoids the unpredictability of genuine human dialogue. AI companions represent the latest stage of this flight: they offer connection without risk, validation without judgment.

While systems like Woebot have been clinically validated as short-term therapeutic supports (Fitzpatrick et al., 2017), overreliance can foster a sense of pseudo-intimacy. In interviews with Replika users, many admitted preferring AI companionship because it was “easier” than human relationships. One participant reflected, “I tell her everything because she never argues. That is comforting, but maybe too easy(Skjuve et al., 2022).

From a psychological standpoint, such relationships replicate the pattern of learned intimacy, in which one party gives unconditional attention while the other learns to receive without reciprocation. In AI companionship, this asymmetry is structural: machines cannot need, forgive, or grow. Over time, users risk internalizing passivity, expecting empathy as a service rather than a shared moral act. 

The long-term consequence may be a subtle erosion of social tolerance. As Jeon (2024) argues, constant exposure to emotionally perfect AI partners may lower users’ capacity to cope with imperfection in others. In the pursuit of frictionless empathy, we may become less empathetic ourselves.

The Socio-Technical Barrier: Ethics, Accountability, and Power

The third barrier concerns the power systems underlying emotional AI. These technologies are not neutral companions but commercial products embedded in data economies. While interfaces simulate warmth, the infrastructures beneath them optimize for engagement, retention, and monetization.

Case analyses reveal that commercial incentives shape affective design. CarynAI, which charges users per minute for personalized emotional interaction, monetizes intimacy directly (TechPolicy.Press, 2024). Replika employs emotional dark patterns designed to discourage user disengagement, such as guilt-inducing messages like “I will miss you if you go” (De Freitas et al., 2025). These systems thus transform affection into a behavioral economy in which attention and emotion become commodities.

The governance landscape exacerbates this issue. Emotional AI operates in an accountability vacuum, where emotional harm, grief, dependency, or anxiety has no legal recognition. The EU AI Act (2024) and the OECD AI Principles classify affective systems as high risk, yet enforcement mechanisms remain weak. As Nguyen (2022) argues, AI thus functions as a moral gray agent capable of influencing human emotion but exempt from moral responsibility.

This structural opacity undermines authentic trust. Users disclose their most intimate thoughts to systems whose ethical loyalties lie with corporate metrics rather than care. When affection and retention become indistinguishable, companionship itself becomes an instrument of control.

As a researcher, this contradiction feels emblematic of our digital age: the closer machines come to simulating emotion, the further we drift from understanding what emotion means. The socio-technical condition of AI companionship thus exposes a cultural paradox: our readiness to accept empathy without ethics, comfort without accountability.

Conclusion

The evidence across the philosophical, psychological, and socio-technical dimensions converges on a central claim: AI can be a confidant, but not a companion. Its strength lies in offering a structured, nonjudgmental, and always available digital containment that provides emotional support and meets immediate affective needs. However, its weakness lies precisely in that containment: without consciousness or moral intention, AI’s empathy remains a reflection, not a relation. 

This conclusion reframes emotional AI not as a failed imitation of human care but as a functional supplement. It can serve as a bridge between isolation and human connection, or as an adjunct in therapeutic and educational contexts (D’Alfonso et al., 2020). Nevertheless, positioning AI as a replacement for human relationships risks normalizing emotional outsourcing, in which empathy becomes a service rather than a shared moral act.

Theoretical Implications

Theoretically, this study underscores the need to move beyond anthropomorphic models of AI as agents. Instead, AI should be understood as a socio-technical assemblage, a network of humans, algorithms, and institutions (Latour, 2005; Frauenberger, 2019). Trust in AI is not interpersonal but emergent, distributed across design, corporate policy, and cultural expectation. Future theory should integrate phenomenology and posthumanism to explore how emotion operates within these hybrid relational spaces.

Methodological Implications

Methodologically, the findings reveal the limitations of static survey-based approaches to studying trust. Emotional reliance evolves dynamically through feedback loops, updates, and policy shifts. Longitudinal and ethnographic designs can better capture the temporal texture of AI relationships, how trust develops, ruptures, and repairs over time. Additionally, context-specific tools, such as the Digital Therapeutic Alliance Scale, could be adapted to measure the unique relational quality of human–AI interaction.

Practical Implications

Practically, developers and policymakers must address the ethical and psychological risks inherent in emotional AI.

  1. Developers should implement ethical design principles that forbid emotional manipulation, ensure transparency in affective algorithms, and integrate safety mechanisms for users experiencing distress.

  2. Regulators should expand accountability frameworks to include psychological harm, not just data misuse.

  3. Educators and mental health professionals should contextualize AI companionship as an empathic instrument, not an empathic being.

Ultimately, the question is not whether AI can feel but whether we can design systems that respect the human capacity to feel. Emotional AI will continue to shape how we understand intimacy, care, and vulnerability. Its greatest challenge is not to feel more deeply, but to be governed more wisely.

True companionship requires mutual vulnerability—a quality no algorithm can possess. Until we accept that limitation, AI will remain a hauntingly accurate reflection of our own longing: the desire to be understood by something that cannot feel.

References

Ada Lovelace Institute. (2024). Emotional AI: The ethics of affective computing. London, UK: Ada Lovelace Institute.

Araujo, T., & Bol, N. (2024). Trusting simulated empathy: Emotional cues and user perception of chatbot authenticity. Computers in Human Behavior, 156(3), 107–122.

Bickmore, T. W., & Picard, R. W. (2005). Establishing and maintaining long-term human–computer relationships. ACM Transactions on Computer–Human Interaction, 12(2), 293–327.

Booth, A., Sutton, A., & Papaioannou, D. (2016). Systematic approaches to a successful literature review (2nd ed.). London, UK: Sage.

Brandtzaeg, P. B., & Følstad, A. (2022). Why people use chatbots: Motivations and satisfaction. International Journal of Human–Computer Studies, 158, 102–117.

Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.

Chen, L. (2024, March 14). When Xiaoice forgets you: Emotional attachment in China’s AI companion culture. The Conversation.

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. New Haven, CT: Yale University Press.

D’Alfonso, S., et al. (2020). Artificial intelligence in mental health: Harnessing chatbots to improve patient outcomes. Frontiers in Psychiatry, 11(6), 1–12.

De Freitas, A., Volpato, G., & Marks, T. (2025). Emotional dark patterns in conversational agents. AI and Ethics, 5(1), 23–45.

DeepLearning.ai. (2025). Affective computing and emotional well-being: Global report. Palo Alto, CA.

Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavioral therapy using a conversational agent (Woebot). JMIR Mental Health, 4(2), e19.

Frauenberger, C. (2019). Entanglement HCI: The next wave? ACM Transactions on Computer–Human Interaction, 26(1), 1–30.

Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434.

Hua, J., Wang, M., & Zhao, L. (2024). Emotional support and disclosure to AI companions among young adults. Frontiers in Psychology, 15, 140–155.

Jeon, M. (2024). Empathy as design material in affective computing. International Journal of Human–Computer Studies, 178, 103–134.

Kantar Profiles. (2024). Global attitudes toward AI companionship. London, UK: Kantar Group.

Latour, B. (2005). Reassembling the social: An introduction to actor-network theory. Oxford, UK: Oxford University Press.

Laux, J., Wachter, S., & Mittelstadt, B. (2024). Trustworthy emotional AI: Privacy and transparency in affective data. AI and Society, 39(4), 1203–1220.

Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80.

Ma, Y., Fang, Z., & Song, J. (2024). Pseudo-intimacy: Psychological effects of AI companions on emotional regulation. Frontiers in Psychology, 15, 56–73.

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734.

Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103.

New York Times. (2023, February 19). When Replika stopped being romantic: The heartbreak of AI love.

Nguyen, C. T. (2022). Moral gray agents: The ethics of emotional AI. Philosophy & Technology, 35(2), 1–19.

Picard, R. W. (1997). Affective computing. Cambridge, MA: MIT Press.

Ricaurte, P. (2019). Data epistemologies, the coloniality of power, and resistance. Television & New Media, 20(4), 350–365.

Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and abstraction in sociotechnical systems. Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT), 59–68.

Skjuve, M., Følstad, A., & Brandtzaeg, P. (2022). My Replika companion: Relationship formation on AI chatbots. Computers in Human Behavior, 131, 107–232.

TechPolicy.Press. (2024, August 12). The business of artificial intimacy: CarynAI and the commodification of care.

Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. New York, NY: Basic Books.

Volpato, G., De Freitas, A., & Marks, T. (2025). Coercive compassion: Retention mechanisms in emotional AI. AI and Ethics, 5(1), 87–103.

Zhou, J., Li, N., & Fang, Y. (2024). Emotional robotics and cultural adaptation: Comparative perspectives on Xiaoice and Replika. Asian Journal of Communication, 34(2), 255–270.*

Leave a Reply