“When I see Jodie, I feel relaxed”: Examining the Impact of a Virtual Supporter in Remote Psychotherapy
Key Points Summarized by HCI Today
- This article reports on a study examining the effects of introducing a virtual supporter, Jodie, into remote psychotherapy.
- The research team first interviewed nine therapists to understand the supporter’s role and boundary-related issues, and then designed Jodie.
- Fourteen participants kept emotion logs for one week and took part in Zoom-based therapy sessions; Jodie helped reduce anxiety and increase feelings of safety.
- However, Jodie’s conversations sometimes felt unnatural because they were heavily rule-based, and users wanted responses that felt more human and more flexible.
- In the end, a virtual supporter may have the potential to support therapy, but maintaining boundaries and protecting privacy is important so that it does not replace human care.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article shows that AI is not merely a simple automation tool but an interaction design that helps users feel safe and opens the door to speaking up. In a sensitive, safety-critical context like remote psychotherapy, it lets you see how screen layout, the pacing of responses, and intervention pathways can either build or break trust. For HCI/UX practitioners, it offers a sense of designing the boundaries of an experience beyond mere functionality; for researchers, it raises the question of how to validate human-in-the-loop intervention structures.
CIT's Commentary
The most interesting point is that the system aims to be a ‘less disruptive companion’ rather than a ‘smarter AI.’ It stays almost entirely silent during therapy and intervenes only briefly before and after sessions, an approach that clearly demonstrates how, in safety-critical systems, the interface itself can become a safety mechanism (a minimal sketch of this phase-gated pattern follows this commentary). Rule-based dialogue, however, comes with a trade-off: while it protects boundaries, it can reduce the flexibility of empathy that users expect, and striking this balance is the core challenge in a real product. The approach could be applied directly to remote counseling in Korea, to healthcare apps, and to conversational agents, but in platform environments like those of Naver or Kakao, transparency design (how privacy notices are presented, where user-intervention controls are placed) will likely be demanded even more strongly. Interestingly, the next research questions raised here are less about how much LLM capability to include and more about how therapeutic boundaries can be maintained even with LLMs, and how the system recovers when things go wrong.
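To make the “silent during therapy, brief interventions around it” pattern concrete, here is a minimal sketch in Python of how such a phase-gated speaking policy might be structured. The phase names, the scripted messages, and the `SupporterAgent` class are illustrative assumptions, not the paper’s actual implementation.

```python
from enum import Enum, auto


class SessionPhase(Enum):
    """Phases of a remote therapy session, as seen by the supporter."""
    PRE_SESSION = auto()    # brief check-in before the session starts
    IN_SESSION = auto()     # therapist-led; supporter stays silent
    POST_SESSION = auto()   # brief wind-down after the session ends


class SupporterAgent:
    """Hypothetical phase-gated supporter: it may only speak
    outside the therapist-led portion of the session."""

    # Scripted, rule-based utterances keyed by phase (illustrative).
    SCRIPTS = {
        SessionPhase.PRE_SESSION: "Hi, how are you feeling before today's session?",
        SessionPhase.POST_SESSION: "Well done today. Take a moment to breathe.",
    }

    def __init__(self) -> None:
        self.phase = SessionPhase.PRE_SESSION

    def set_phase(self, phase: SessionPhase) -> None:
        self.phase = phase

    def respond(self, user_message: str) -> str | None:
        # The safety mechanism is structural: during the session the
        # agent returns nothing, regardless of what the user says.
        if self.phase is SessionPhase.IN_SESSION:
            return None
        return self.SCRIPTS.get(self.phase)


agent = SupporterAgent()
agent.set_phase(SessionPhase.IN_SESSION)
assert agent.respond("I'm anxious") is None  # silent by design
agent.set_phase(SessionPhase.POST_SESSION)
print(agent.respond("That was hard"))        # scripted wind-down
```

The design choice worth noting is that silence is enforced by the state machine itself rather than by the dialogue model, so no prompt or input can make the agent interrupt the session.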
Questions to Consider While Reading
- Q. A structure that is nearly silent during therapy can make real users feel safe, but it may also preclude deeper empathetic experiences. What interaction principles could be used to design this balance?
- Q. Rule-based dialogue is safe, but it can easily feel rigid. If some LLM capability is introduced, what failure-recovery mechanisms are needed to reduce boundary violations and misunderstandings while still achieving naturalness? (See the sketch after this list for one possible pattern.)
- Q. How do the moments when users accept an AI supporter as a ‘friend’ differ from those when they see it as a ‘tool’? How should we measure the impact of that difference on trust, reliance, and therapeutic outcomes?
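As a starting point for the second question, here is a hedged sketch of one possible failure-recovery pattern: an LLM draft reply is screened by rule-based boundary checks, and any violation falls back to a safe scripted response and is logged for review. The keyword list, the `generate_draft` stub, and all strings are assumptions for illustration; a real system would use trained classifiers, and none of this is drawn from the paper.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("supporter.guardrail")

# Illustrative boundary rules: phrases a supporter must never produce.
# Real systems would use classifiers, not keyword matching.
FORBIDDEN_PATTERNS = ("diagnose", "prescription", "you should stop therapy")

SAFE_FALLBACK = "I'm here with you. That question is best raised with your therapist."


def generate_draft(user_message: str) -> str:
    """Stand-in for an LLM call; returns a draft reply."""
    return f"It sounds like you're going through a lot: {user_message}"


def violates_boundary(draft: str) -> bool:
    """Rule-based screen applied to every draft before it is shown."""
    lowered = draft.lower()
    return any(pattern in lowered for pattern in FORBIDDEN_PATTERNS)


def respond(user_message: str) -> str:
    draft = generate_draft(user_message)
    if violates_boundary(draft):
        # Recovery path: log the failure for later review and fall
        # back to a scripted, boundary-safe response.
        log.info("Boundary check failed; using fallback reply.")
        return SAFE_FALLBACK
    return draft


print(respond("I feel nervous before sessions."))
```

The point of the pattern is that naturalness comes from the generative layer while safety comes from a separate, auditable rule layer, so a failure degrades to a scripted reply instead of a boundary violation.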
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.