Recreating Users’ Voices with AI: Building an LLM-Based Multi-Agent UX Platform
HCI Today summarized the key points
- This article introduces a talk about NSona presented at a NAVER internal event, focusing on how to combine AI with user research.
- Three people (a designer, an AI researcher, and a developer) planned and built the user persona bot NSona together.
- They experimented by converting user research materials so that AI could use them directly, and by structuring the system so that multiple people could collaborate through conversation.
- During development, their roles also shifted: the designer crafted the questions, the researcher built the structure, and the developer provided critique.
- The talk argues that in the AI era, getting the starting point right matters more than chasing completeness, and that we need to rethink collaboration itself.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article is meaningful for HCI practitioners and researchers because it shows AI not as a ‘well-performing model,’ but as an ‘experience for working together.’ It connects persona bots, multi-party dialogue structures, and new evaluation methods to reduce the gap between user research and service development. In particular, it prompts readers to think about when AI helps versus when it gets in the way in real product contexts, and how to design the points where human judgment should be involved.
CIT's Commentary
What's especially interesting is that the article treats AI not as an end product, but as an interaction mechanism that extends research and collaboration. The attempt to make personas 'speak' is less about simple automation and more an experiment that changes how a team shares and aligns its understanding of users. However, as convenience grows, it can also become easier to drift away from real users, so the design needs to be clearer about where the 'reproduction' ends and where human judgment is reintroduced. The approach of creating evaluation processes separately for each service is also compelling, but for such tools to be used well, they must be designed not only around model performance, but also around how the team interprets and revises the outputs. In large-scale product environments like NAVER, this approach can be especially powerful, while also raising research questions about operating costs and the boundaries of responsibility.
Questions to Consider While Reading
- Q. How much can a persona bot realistically replace actual user research, and from what point onward should it remain only a supporting tool?
- Q. In multi-party, dialogue-based collaboration, what interface mechanisms are needed to prevent AI utterances from distorting the team's judgment?
- Q. If the evaluation methods newly created for each service are to be reused across different organizations, what minimum shared criteria are needed?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.