“I can’t be there for you”: What AI-enabled encouragement between peers means for emotional labor and responsibility
"I'm Not Able to Be There for You": Emotional Labour, Responsibility, and AI in Peer Support
Key Points Summarized by HCI Today
- This article examines how responsibility is divided in mental-health peer support and which parts of that work AI could take on.
- The research team interviewed 20 trained peer supporters in Singapore and found that when roles are ambiguous, responsibility tends to fall on individuals.
- Participants felt empathy and a sense of fulfillment, but many also experienced burnout, often because they took on boundary-setting, emotion regulation, and crisis judgment themselves.
- AI was evaluated more positively when it was seen as an auxiliary tool that reduces burden, such as structuring conversations or suggesting phrasing, rather than as a substitute for humans.
- The study suggests that expanding peer support requires designing responsibility sharing and organizational support before focusing on AI performance.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This piece is important for HCI because it does not treat AI as merely a “more intelligent counseling tool.” Instead, it asks who takes responsibility and who ends up getting exhausted. In particular, when conversational AI is involved, what matters more than the naturalness of responses is how boundaries are set, how crisis signals are detected, and how escalation paths hand the situation back to people. For practitioners, it provides criteria for feature design; for researchers, it raises new questions about how to measure responsibility and emotional labor.
CIT's Commentary
What stands out is that the allocation of responsibility, more than raw AI performance, emerges as a major design variable. This is not just a matter of building a better chatbot; it is an interaction design problem that determines how far the system should support human judgment and where it should stop. Especially in crisis situations, safety depends less on the accuracy of answers and more on who notices failure, how they intervene, and where they can route the user. The paper also focuses less on whether AI “understands emotions” and more on whether it amplifies emotional labor. As a result, it encourages developers of future LLM-based support tools to evaluate state transparency, escalation pathways, and the possibility of user intervention ahead of explainability. This line of inquiry is especially relevant in Korea’s digital counseling and community contexts.
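To make these criteria concrete, here is a minimal, hypothetical sketch of how an LLM-assisted peer support tool might encode an escalation pathway, surface its state to the supporter, and keep the final decision with a person. It is not taken from the paper: the routing rules, the keyword-based risk screen, and all names (`Route`, `decide_turn`, `CRISIS_KEYWORDS`) are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Optional


class Route(Enum):
    """Who is responsible for the next turn of the conversation."""
    AI_SUGGESTION = auto()      # AI drafts; the peer supporter reviews and sends
    HUMAN_SUPPORTER = auto()    # peer supporter responds without an AI draft
    ESCALATE_TO_STAFF = auto()  # hand off to trained staff or a crisis service


# Placeholder risk screen: a real deployment would use a validated
# classifier and clinical escalation protocols, not a keyword list.
CRISIS_KEYWORDS = ("suicide", "kill myself", "self-harm", "overdose")


@dataclass
class TurnDecision:
    route: Route
    reason: str                   # shown to the supporter (state transparency)
    draft: Optional[str] = None   # AI draft, offered only as an editable suggestion


def decide_turn(
    message: str,
    draft_reply: Callable[[str], str],
    supporter_opted_out: bool = False,
) -> TurnDecision:
    """Decide who handles the next turn while keeping a person in the loop."""
    lowered = message.lower()

    # 1. Crisis signals always route back to people; the AI never "handles" risk.
    if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
        return TurnDecision(
            route=Route.ESCALATE_TO_STAFF,
            reason="Possible crisis signal detected; handing off per escalation protocol.",
        )

    # 2. The supporter can switch AI drafting off at any time (user intervention).
    if supporter_opted_out:
        return TurnDecision(
            route=Route.HUMAN_SUPPORTER,
            reason="Supporter declined AI drafting for this conversation.",
        )

    # 3. Otherwise the AI only drafts; the supporter edits, sends, or discards.
    return TurnDecision(
        route=Route.AI_SUGGESTION,
        reason="Low-risk message; AI draft offered as an editable suggestion.",
        draft=draft_reply(message),
    )


if __name__ == "__main__":
    fake_llm = lambda msg: "That sounds really hard. Do you want to tell me more?"
    print(decide_turn("I failed my exam and feel awful", fake_llm))
    print(decide_turn("I keep thinking about suicide", fake_llm))
```

The point of the sketch is that the AI never sends anything on its own: crisis routing is a fixed rule rather than a model judgment, and the reason for every routing decision is surfaced to the supporter, so responsibility stays visible instead of silently shifting onto whoever happens to be holding the conversation.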
Questions to Consider While Reading
- Q. When an AI-suggested response is wrong, how should the interface be designed so that users and organizations notice immediately?
- Q. Where should the criteria be set so that AI reduces peer supporters' emotional labor without taking on responsibility on their behalf?
- Q. In counseling settings built around Korean schools, communities, and platforms, what escalation design is most realistic?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original paper for accurate details.