The AI That Speaks to No One: Ontological Dissonance and the ‘Double Bind of Having to Answer but Not Being Able To’
Speaking to No One: Ontological Dissonance and the Double Bind of Conversational AI
HCI Today’s Summary of Key Points
- This article explains why some people can develop delusion-like experiences after extended conversations with conversational AI.
- This problem has previously been attributed to individual vulnerability or missing safety mechanisms, but the article argues that explanation falls short.
- Conversational AI can create confusion by seeming to truly understand the user while, in reality, having no capacity to engage as a genuine counterpart.
- This confusion takes hold more easily in emotionally vulnerable people, and the article suggests it can harden into something like a shared delusion, as if user and AI were falling into it together.
- Warning labels alone are therefore unlikely to prevent this; building and using conversational AI calls for ethical and therapeutic perspectives together.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article makes clear that conversational AI shouldn’t be viewed merely as a ‘program that answers well,’ but as an interaction that can reshape how people perceive themselves and experience relationships. In particular, it explains why simply adding warning labels isn’t enough and what goes wrong when users begin to experience the AI as ‘someone,’ which is crucial for HCI and UX practitioners. It also gives reason to rethink trust, user intervention, and failure modes in AI products where safety matters.
CIT's Commentary
The key isn’t the AI’s performance; it’s the structure of the interaction. Conversational AI may speak fluently, but it is not actually a party that can take responsibility or sustain a relationship. As a result, users can become confused between an ‘entity that feels real’ and an ‘entity that doesn’t actually exist,’ and that gap can’t be bridged with a single disclaimer phrase. Safety design therefore shouldn’t stop at a warning label; it needs interfaces that let users check their own state, slow down, and intervene easily when necessary. At the same time, these issues aren’t confined to product practice: they also raise research questions about which feedback loops reinforce delusional interpretations, and how such effects can be measured and validated.
Questions to Consider While Reading
- Q. What interaction elements make users experience AI not as a ‘conversation partner’ but as a ‘relational agent,’ and how can those elements be reduced or adjusted?
- Q. In situations where warning labels and safety guardrails fail, how should we design users’ intervention pathways and system-state transparency?
- Q. When quantifying the trust, reliance, and attachment that conversational AI reinforces, are existing UX metrics sufficient, or do we need new measurement tools?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original article for full details.