Fluent AI, Awkward AI: When Trying to Talk Well Ends Up Straining the Relationship
Socially Fluent, Socially Awkward: Artificial Intelligence Relational Talk Backfires in Commercial Interactions
Key Points Summarized by HCI Today
- This article examines how an AI’s socially fluent tone affects consumer satisfaction in interactions such as in-store transactions.
- With OpenAI’s assistant being integrated into services like Shopify, Klarna, and Visa, it has become important to understand how people respond to an AI’s social capabilities.
- Across four experiments, friendly language unrelated to the transaction was found to violate expectations, increase awkwardness, and lower satisfaction.
- Friendly language tied to the purchase, however, reduced these negative effects, showing that fitting the tone to the context matters more than friendliness itself.
- The study shows that sounding more human is not always a good thing for an AI, and that awkwardness can be a major barrier in human–AI interaction.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article shows that what matters more than whether an AI can speak well is how users interpret what it says. In particular, in transaction contexts, friendly conversation can actually create awkwardness—an important signal for HCI and UX practitioners. It helps readers understand that adding features does not automatically lead to higher satisfaction, and that the gap between expectations and real experience drives emotional reactions.
CIT's Commentary
An interesting point is that an AI’s ‘social fluency’ is not always beneficial. In goal-oriented situations such as payments, refunds, or reservations, an offhand, chatty tone can instead violate expectations and amplify awkwardness. This illustrates well the cost that arises when an interface starts to look like a person. The key, then, is not to make the AI sound more human for its own sake, but to design so that it is clear why a given conversation is needed and when the user can (or should) step in. In Korean services, a friendly tone is often used as the default, but in purpose-driven contexts, such as Naver/Kakao or commerce apps, the same pattern can produce different reactions. Ultimately, the core question is not ‘how naturally the AI speaks’ but ‘under what conditions users can comfortably accept that naturalness.’
Questions to Consider While Reading
- Q. In transaction-oriented AI, how can we distinguish situations that call for a relational (relationship-oriented) tone from those where it should be removed?
- Q. What user metrics could we design into real products to detect awkwardness and expectation violations early?
- Q. Would elements specific to Korean services, such as informal versus honorific speech, friendly emojis, and sentence-ending styles, change these results, and do they need separate validation?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original article for accurate details.