GroupEnvoy: A Conversational Agent Speaking for the Outgroup to Foster Intergroup Relations
Key Points, Summarized by HCI Today
- This study examines whether an AI can relay an outgroup's views on its behalf to reduce intergroup conflict.
- The research team developed a conversational AI, GroupEnvoy, that relays the opinions of Chinese exchange students to Japanese students.
- Japanese students who conversed with the AI tended to report lower anxiety and a better understanding of the other side's viewpoints.
- A comparison group that read the same content as static text also showed some change, but the experience felt less like a real conversation than it did for the AI group.
- The study suggests that AI can serve as a preparatory step that lowers barriers between groups, though it cannot fully replace real encounters.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article is meaningful for both HCI practitioners and researchers because it frames AI not as a mere 'smart answer machine,' but as an interaction medium that can reduce misunderstanding and tension between people. It shows how the same information can produce different user experiences depending on whether it is consumed by 'reading a document' or exchanged through 'conversation.' It also highlights key UX factors such as trust, anxiety, and opportunities for user intervention. In particular, it prompts readers to consider how interface choices can change outcomes in socially sensitive interactions where psychological safety matters most.
CIT's Commentary
What’s interesting is that the effect hinged less on what the model does well and more on how users experienced it—as if it were a particular kind of counterpart. Even when the information was the same outgroup content, anxiety decreased more and the sense of interaction felt stronger when users engaged with a conversational agent rather than reading a static document. This suggests that the challenge is not simply building an AI that ‘delivers’ information, but designing interactions that shape users’ psychological distance and behavioral intentions.

That said, the results also reveal a critical nuance: as comfort increases, users may end up handing over their reasoning to the agent. This point is important. A good AI should not be a substitute that thinks for users; it should instead provide a foundation that helps users understand and judge for themselves. From this perspective, future research needs to measure not just ‘how persuasive’ the system is, but ‘how far users directly intervened, and where they stopped.’
Questions to Consider While Reading
- Q. What interaction mechanisms would allow a conversational agent to reduce users’ anxiety while still preserving users’ efforts to form their own perspectives?
- Q. If this approach were applied outside the Japan–China college student context—for example, within Korea’s Naver/Kakao services or domestic community environments—what different failure modes or design challenges might emerge?
- Q. If we build a UX tool that uses LLMs to measure relationship-improving effects, how should we validate both ‘comfort’ and ‘active, self-directed participation’ together?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original paper for accurate details.