In the Middle, Not on Top: AI-Mediated Communication for Patient–Provider Care Relationships
Key Points Summarized by HCI Today
- This article discusses how AI can serve in clinical settings as a mediator positioned between doctors and patients.
- The research team argues that AI should be positioned not to replace clinical judgment, but to support the conversation from a 'middle role.'
- The CLEAR messaging system helps patients ask questions easily and lets clinicians quickly review only the essentials.
- This approach reduces the confusion and embarrassment caused by gaps in understanding, eases the burden of explanation before and after visits, and mitigates problems from broken continuity of care.
- However, careful design is needed: summaries may appear overly definitive, and recording conversations more broadly raises privacy concerns.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article matters for HCI because it examines how AI changes the conversation between patients and clinicians, rather than treating AI as a tool that simply 'gets the answers right.' It addresses real-world constraints such as short appointment times, low health literacy, and broken continuity of care, while showing how an AI that helps from the middle (explaining, organizing, and preparing questions) affects relationships and trust. For practitioners, it suggests designing the interaction structure before specific features. For researchers, it provides a strong case for examining both the benefits and the risks of intermediary AI.
CIT's Commentary
The core message of this piece is that AI can be more than a substitute for medical judgment; it can become a 'middle layer' that connects understanding between people. What stands out is how, during brief consultations, it helps patients revisit and organize questions they hesitated to ask, while letting clinicians quickly grasp only what matters. Mediation of this kind, however, raises a related issue: what gets recorded. If summaries look too polished, uncertainty and context can be erased, and in real products that may increase misunderstanding rather than build trust. The interface therefore matters more than raw AI performance: when users can intervene, and what they can edit or keep private.
Questions to Consider While Reading
- Q. When AI acts as an intermediary between patients and clinicians, what information should be summarized, and what must remain in the original wording?
- Q. What kind of interface is needed so that summarized messages look 'factual' while still revealing uncertainty sufficiently?
- Q. If we designed this kind of intermediary AI for Korea's short-appointment environment and high mobile adoption, what would need to change the most?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.