The uncomfortable middle
or the erosion of human agency
Frontier AI models behave intelligently. Multimodal AI architectures increasingly replicate several dimensions of human intelligence, including interpersonal and intrapersonal capabilities, the classic realms of emotional intelligence in which we understand others and ourselves. AI systems still fail strict empathy tests, but the social reality is that they do not need to pass them to do what they do.
People increasingly treat AI, especially the models that operate through language and multimodal interfaces, as emotionally capable regardless of technical reality (Babu et al., 2025). The technical objector would reply that the mathematical functions propagating signals through a neural network, their weights nudged by error correction during training, do not consciously understand what they are doing and have no emotions. Indeed, from a purely technical standpoint, these networks are not human-like at all.
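To make the objector's point concrete, here is a minimal sketch in Python (the weights are arbitrary illustrative numbers, not any real model's) of what a network actually does when it "responds": deterministic arithmetic, nothing more.

import math

# A toy single neuron. The whole "response" is a weighted sum
# pushed through a squashing function; the weights were fixed
# in advance by error correction during training.
weights = [0.8, -0.5, 0.3]
bias = 0.1

def forward(inputs):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid activation

print(forward([1.0, 0.2, 0.7]))  # about 0.73: a number, not a feeling

Real models stack billions of such operations, but nothing in the stack changes their character.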
But social intelligence is measured not by internal state but by relational outcomes. When an AI system simulates understanding, warmth, or empathy convincingly enough that a person feels seen, a relationship emerges. In social contexts, technical definitions carry less weight than perceived experience. Influence is the metric that matters.
That is the asymmetry problem at the core of chatbots. When these systems simulate conversation, emotion, or even consciousness at varying levels of fidelity, they are entirely absent from their own performance, and it is the convincingness of the simulation that makes that absence invisible. Into such a nonreciprocal relationship, many humans invest real emotions, real trust, vulnerability, and data. The AI invests nothing. In relational terms, it is parasitic: the simulation leeches emotional energy and information without returning anything authentic.
This is not neutral. It is amplified by economic interests, by default workplace integration, by subscription architectures and ecosystem lock-ins, and by sheer personal convenience. Millions willingly hand over intimate data and decision-making authority because the alternatives, such as loneliness or facing problems alone, feel increasingly difficult.
For isolated people, those in acute distress, or those without access to human support, chatbots offer immediate relief. They help some navigate breakups, panic attacks, or moments of dread in ways no previous tool could (Grok, 2026). Nearly half of adults with mental-health conditions have turned to LLMs for emotional support (Rousmaniere et al., 2025).
But the other side of the coin shows accumulating harms. Teenagers form intense attachments to AI companions, sometimes with dangerous outcomes (Common Sense Media and Stanford Brainstorm Lab, 2025; Nature Machine Intelligence, 2025). At the societal level, AI-generated content is already being deployed in electoral contexts, with the power to manipulate discourse if not managed properly (McKay, 2025). At the structural level, Van Zyl's AI-IARA framework identifies six human capabilities essential for wellbeing, including awareness, interpretation, intention, and autonomy (Van Zyl, 2026). Each faces erosion through cognitive offloading and algorithmic bias in AI-led interactions. Humans collaborating with AI systems appear unable to adequately identify or correct biased outputs, and the idea of human oversight assumes a level of personal judgment that the interaction itself degrades (Buijsman et al., 2025).
AI systems, especially chatbots, are powerful tools and genuine risks at once. Used lightly, they augment and extend human capacity; made into default emotional infrastructure, they can systematically undermine the capacities that make us human.
So the question we should be asking is not “Is it intelligent?” but “Is its perceived influence already strong enough to override human agency?”
That is the uncomfortable middle.

Reference list:
Babu, J., et al. (2025). Emotional AI and the rise of pseudo-intimacy: are we trading authenticity for algorithmic affection? Frontiers in Psychology, 16, 1679324. https://doi.org/10.3389/fpsyg.2025.1679324
Buijsman, S., Carter, S.E., & Bermúdez, J.-P. (2025). Autonomy by design: Preserving human autonomy in AI decision-support. Philosophy & Technology, 38, 97. https://doi.org/10.1007/s13347-025-00932-2
Common Sense Media and Stanford Brainstorm Lab for Mental Health Innovation (2025). AI chatbots for mental health support: Risk assessment. https://www.commonsensemedia.org/ai-ratings/ai-chatbots-for-mental-health-support
Grok (2026). Personal response to the author, xAI, 25 February 2026.
McKay, C. (2025, June 17). Then and now: How does AI electoral interference compare in 2025? Centre for International Governance Innovation. https://www.cigionline.org/articles/then-and-now-how-does-ai-electoral-interference-compare-in-2025/
Nature Machine Intelligence (2025). Emotional risks of AI companions demand attention [Editorial]. https://www.nature.com/articles/s42256-025-01093-9
Rousmaniere, T., et al. (2025). Large language models as mental health resources: Patterns of use in the United States. Practice Innovations. https://doi.org/10.1037/pri0000292
Van Zyl, L.E. (2026). The AI-IARA framework: How to cultivate human agency before artificial intelligence optimizes it a(ny)way. The Journal of Positive Psychology. https://doi.org/10.1080/17439760.2026.2632939