The uncomfortable middle

or the erosion of human agency

Frontier AI models behave intelligently. Multimodal AI architectures increasingly replicate several dimensions of human intelligence, including interpersonal and intrapersonal capabilities, the classic human realms of emotional intelligence through which we understand others and ourselves. While AI systems still fail strict empathy tests, the social reality is that they don't need to pass them to do what they do.

People increasingly treat AI, especially models that operate through language and multimodal interfaces, as emotionally capable regardless of technical reality (Babu et al., 2025). A technical objector would reply that the mathematical functions propagating signals through a neural network, nudging its weights through error correction, do not consciously understand anything and have no emotions. Indeed, from a purely technical standpoint, these networks are not human-like at all.

But social intelligence is measured not by internal state but by relational outcomes. When an AI system simulates understanding, warmth, or empathy convincingly enough that a person feels seen, a relationship emerges. In social contexts, technical definitions carry less weight than perceived experience. Influence is the metric that matters.

That is the asymmetry problem at the core of chatbots. When these systems simulate conversation, emotion, or even consciousness at varying levels of performance, they are entirely absent from their own experience, and it is the convincingness of the simulation that makes this absence invisible. Into such a nonreciprocal relationship, many humans invest real emotions, real trust, vulnerability, and data. The AI invests nothing. In relational terms, it is parasitic: the simulation leeches emotional energy and information without returning anything authentic.

This is not neutral. It is amplified by economic interests: default workplace integration, subscription architectures, ecosystem lock-ins, and personal convenience. Millions hand over their intimate data and decision-making authority willingly because the alternatives, loneliness or facing problems alone, feel increasingly difficult.

For isolated people, those in acute distress, or those without access to human support, chatbots offer immediate help. They help some navigate breakups, panic attacks, or dread in the moment in ways no previous tool could (Grok, 2026). And nearly half of adults with mental-health conditions have turned to LLMs for emotional support (Rousmaniere et al., 2025).

But the other side of the coin shows accumulating negatives. Teenagers form intense attachments to AI companions, sometimes with dangerous and harmful outcomes (Common Sense Media & Stanford Brainstorm Lab, 2025; Nature Machine Intelligence, 2025). At the societal level, AI-generated content is already being deployed in electoral contexts, with the power to manipulate discourse if left unmanaged (McKay, 2025). At the structural level, Van Zyl's AI-IARA framework identifies six human capabilities essential for wellbeing, including awareness, interpretation, intention, and autonomy (Van Zyl, 2026). Each faces erosion through cognitive offloading and algorithmic bias in AI-led interactions. Humans collaborating with AI systems seem unable to adequately identify or correct biased outputs, and the idea of human oversight assumes a level of personal judgment that the interaction itself degrades (Buijsman et al., 2025).

AI systems, especially chatbots, are powerful tools and genuine risks at once. Used lightly, they augment and extend us; made into default emotional infrastructure, they can systematically undermine the capacities that make us human.

So the question we should be asking is not “Is it intelligent?” but “Is its perceived influence already strong enough to override human agency?”

That is the uncomfortable middle. 

Reference list:

Babu, J., et al. (2025). Emotional AI and the rise of pseudo-intimacy: are we trading authenticity for algorithmic affection? Frontiers in Psychology, 16, 1679324. https://doi.org/10.3389/fpsyg.2025.1679324

Buijsman, S., Carter, S.E., & Bermúdez, J.-P. (2025). Autonomy by design: Preserving human autonomy in AI decision-support. Philosophy & Technology, 38, 97. https://doi.org/10.1007/s13347-025-00932-2

Common Sense Media and Stanford Brainstorm Lab for Mental Health Innovation (2025). AI chatbots for mental health support: Risk assessment. https://www.commonsensemedia.org/ai-ratings/ai-chatbots-for-mental-health-support

Grok (2026). Personal response to the author. xAI, 25 February 2026.

McKay, C. (2025). Then and now: How does AI electoral interference compare in 2025? Centre for International Governance Innovation, 17 June 2025. https://www.cigionline.org/articles/then-and-now-how-does-ai-electoral-interference-compare-in-2025/

Nature Machine Intelligence (2025). Emotional risks of AI companions demand attention [Editorial]. https://www.nature.com/articles/s42256-025-01093-9

Rousmaniere, T., et al. (2025). Large language models as mental health resources: Patterns of use in the United States. Practice Innovations. https://doi.org/10.1037/pri0000292

Van Zyl, L.E. (2026). The AI-IARA framework: How to cultivate human agency before artificial intelligence optimizes it a(ny)way. The Journal of Positive Psychology. https://doi.org/10.1080/17439760.2026.2632939
