Designing Medical Chatbots where Accuracy and Acceptability Collide: An Exploratory, Vignette-based Study in Urban India
HCI Today summarized the key points
- This study examines how users make judgments when accuracy and acceptability come into conflict for medical chatbots in urban India.
- The research team conducted two-stage surveys and interviews with 200 participants, using vignettes of common conditions, such as the common cold, diarrhea, and headaches, where clinical guidelines and local practices diverge.
- In the first stage, participants tended to prefer Max, which offers prescriptions that feel familiar, over Verity, which follows clinical guidelines; preferences also varied by education level.
- Participants trusted local practices more than the guidelines, citing rationales such as visible interventions like medication prescriptions, consistency with prior experience, and a doctor-like tone.
- In the second stage, once local practices were acknowledged and a context-aware nudge was added to explain the guidelines, preference for Verity increased substantially, showing that design matters for improving acceptability.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article clearly demonstrates from an HCI perspective that user acceptability is not guaranteed by accuracy alone in medical chatbots. In particular, it empirically examines how users’ experiential expectations, medical practices, and perceptions of authority in an urban Indian context shape interpretation—making it highly meaningful for UX practitioners and researchers. It also serves as an important reference point when thinking about designing fair medical AI in the Global South.
CIT's Commentary
From a CIT perspective, the core of this study is that when ‘the correct answer’ and ‘the answer that is accepted’ conflict, the interface functions not merely as a transmitter but as a mediator of meaning. A context-aware nudge can be read less as a device for persuading users and more as a design strategy that makes unfamiliar, normative recommendations interpretable by linking them to existing care experiences. However, while this approach may increase acceptability, it also carries the risk of reinforcing incorrect local practices. In HCI, this makes it crucial to design the boundary between the friendliness of information provision and clinical legitimacy. The fact that responses differ by education level suggests that refining explanation strategies and checking comprehensibility should be carried out in parallel.
Questions to Consider While Reading
- Q. How can context-aware nudges reduce users' misunderstandings in real clinical situations while maintaining clinical accuracy, and what methods could verify this over the long term?
- Q. How can we set design criteria that balance framing that respects users' existing medical practices against framing that does not reinforce harmful practices?
- Q. To reduce acceptability differences by education level, how should the explanation style or interaction structure of medical chatbots be adapted?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.