When robots and AI deceive by appearing human, how far should that deception go? A framework for levels of deception
Towards A Framework for Levels of Anthropomorphic Deception in Robots and AI
HCI Today summarized the key points
- This article presents a research study that organizes the kinds of deception that arise when robots and AI look or speak like humans.
- The researchers explain that people’s tendency to readily feel emotions toward machines interacts with the level of risk the deception carries.
- Based on this, they propose a four-level framework built on three criteria: appearing human, acting on its own, and pretending to have a self.
- The higher the level, the stronger the deception; in some cases it is allowed only when clear consent is present.
- At its core, the framework aims to support discussions about designing AI and robots more transparently and responsibly.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article treats how humanlike robots or AI should look and speak not as a matter of mere taste but as a question of user experience and trust design, which makes it meaningful for both HCI practitioners and researchers. In particular, it lays out, level by level, how ‘designing to look human’ can provide convenience while also creating misunderstanding and overtrust. That helps readers weigh the balance between transparency and persuasiveness in real services, a perspective that is especially useful for safety-critical systems.
CIT's Commentary
A key strength of this piece is that it reframes ‘designing to be humanlike’ not as a choice that merely makes things look appealing, but as an interaction problem that includes what users end up believing. In particular, the framework that separates humanlikeness, agency, and selfhood can be applied directly to chatbots and AI agents. In real products, however, these three elements move together and can strongly sway trust and expectations. In practice, then, the more direct design question is less ‘how humanlike should it look’ and more ‘when should the system’s state be disclosed rather than hidden, and where should users be able to intervene.’ In Korea’s messenger and platform environment, where conversational interaction is already familiar, anthropomorphic design may be accepted more readily, which also suggests a strong need for transparency mechanisms stricter than global norms. Ultimately, the framework reads less like a list of prohibitions and more like a practical checklist for locating the boundary between persuasion and misunderstanding in a way that fits a product’s goals.
Questions to Consider While Reading
- Q. In ambiguous situations like Level 2, what is the minimum transparency mechanism needed so that users don’t feel ‘deceived’?
- Q. In real products, the benefits of making a system look human can conflict with the benefits of reducing misunderstanding. Under what circumstances should one take priority over the other?
- Q. In environments where conversational interfaces are already familiar, such as Korean AI services, how should this framework be applied differently?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.