How People and AI Can Communicate with Trust in Mental Health Counseling: A Survey and Proposals from Multiple Stakeholders
Aligning Human-AI-Interaction Trust for Mental Health Support: Survey and Position for Multi-Stakeholders
HCI Today summarized the key points
- This article is a research survey that organizes how trust in AI for mental health support can be understood from multiple perspectives.
- The study explains trust in three layers: human trust, interaction trust, and AI trust.
- Fields such as psychotherapy, HCI, AI, security, and regulation view trust criteria and evaluation methods differently.
- Current research does not sufficiently verify real safety and accuracy beyond the friendliness that users can see.
- The article therefore argues that rather than simply trying to increase trust, we need to calibrate trust to the system's actual performance.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article reframes mental-health-support AI not as a mere ‘smart model’ but as an interaction that users can trust, rely on, and even intervene in. Its particular value lies in decomposing trust into user-perception, interaction, and model-level factors rather than treating it as a single monolithic quantity, which is useful for both practice and research. It also broadens the criteria for HCI/UX evaluation by addressing safety, overreliance, and the unintended negative effects of explanations, issues that automated metrics can easily miss.
CIT's Commentary
The article’s biggest strength is that it organizes ‘trust’ into design challenges at different layers rather than treating it as a purely emotional sense of liking. In a high-risk domain like mental health, a gentle tone does not automatically make a system safe; friendly language can even increase overconfidence. That is why it is important to separate the trust conveyed by the interaction from the trustworthiness the system actually provides. The article also offers a practical warning: even if evaluation tools like LLM-as-a-judge look convenient, their scores may diverge from the user experience you actually want to measure. In Korea’s service environment, too, large platforms such as Naver and Kakao, along with domestic counseling and healthcare startups, need to design carefully around ‘when to hand off to a human’ and ‘how to surface risk signals,’ rather than focusing only on the quality of automated responses. Ultimately, the key is not simply to build trust but to calibrate it so that users’ trust aligns with the system’s actual capabilities and accuracy.
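As a concrete illustration of the LLM-as-a-judge caveat above, here is a minimal sketch (not from the surveyed article; the library, rating scale, data, and disagreement threshold are all assumptions) of one way to check where automated judge scores and human judgments diverge: compute a rank correlation over paired ratings and flag individual responses with large gaps for manual review.

```python
# Hypothetical audit of an LLM-as-a-judge against human raters.
# Scores, scale (1-5), and the disagreement threshold are illustrative only.
from scipy.stats import spearmanr

# Paired ratings for the same set of counseling responses.
human_scores = [5, 4, 2, 5, 3, 1, 4, 2]
judge_scores = [5, 5, 4, 5, 3, 3, 4, 2]

# Rank correlation gives an overall sense of agreement.
rho, p_value = spearmanr(human_scores, judge_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

# Flag individual items where the judge and humans differ by more than
# one point; these are candidates for qualitative, human review.
flagged = [
    (i, h, j)
    for i, (h, j) in enumerate(zip(human_scores, judge_scores))
    if abs(h - j) > 1
]
for idx, h, j in flagged:
    print(f"response {idx}: human={h}, judge={j} -> review manually")
```

A low correlation or many flagged items would suggest the automated judge is not a safe stand-in for human evaluation in this domain.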
Questions to Consider While Reading
- Q. In mental health AI, what interaction cues can reduce overreliance while still making users feel they are genuinely receiving help?
- Q. When using LLM-based evaluation tools, how should we verify the points where automated scores and human judgments diverge?
- Q. When designing human-handoff pathways for Korean services, at what moments should the AI stop and route the user to a professional?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original article for accurate details.