Learning “Ethical Data Sharing” with Purrsuasion: An Educational Game for Negotiated Data Disclosure
Original article: Investigating Ethical Data Communication with Purrsuasion: An Educational Game about Negotiated Data Disclosure
HCI Today's Summary of the Key Points
- This article reports on research into Purrsuasion, a game that examines how students create and interpret data visualizations under non-disclosure conditions.
- The research team turned scenarios that combine information that must not be disclosed with information that must be shown into a "show-hide puzzle" (see the sketch after this list).
- Students took on separate roles as data providers and data requesters, exchanging visualizations and negotiating which images satisfied the conditions.
- The results showed that students struggled to come up with good solutions and often stuck with the first safe approach they found, while requesters frequently misread the sender's intent.
- The research team concluded that the game can serve as a tool for learning and studying ethical data communication, and that additional support is needed for interpretation and trust.
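To make the "show-hide puzzle" concrete, here is a minimal sketch of the underlying constraint structure. The field names, the `VisSpec` type, and the aggregation rule are assumptions for illustration; the paper's actual game rules may differ.

```python
# A minimal sketch of the "show-hide puzzle" described above.
# The VisSpec structure, field names, and aggregation heuristic are
# hypothetical illustrations, not the paper's actual rules.
from dataclasses import dataclass, field

@dataclass
class VisSpec:
    """A proposed visualization: which fields it encodes, and which of
    those appear only as aggregates (e.g., a group mean)."""
    encoded_fields: set[str]
    aggregated_fields: set[str] = field(default_factory=set)

def satisfies_constraints(spec: VisSpec,
                          must_show: set[str],
                          must_hide: set[str]) -> bool:
    # Every required field has to appear somewhere in the encoding.
    if not must_show <= spec.encoded_fields:
        return False
    # Hidden fields may not appear in raw form; here we (hypothetically)
    # allow them inside aggregates, which is exactly the gray area the
    # game forces players to negotiate.
    raw_fields = spec.encoded_fields - spec.aggregated_fields
    return not (must_hide & raw_fields)

# Example: salaries must stay hidden, departments must be shown.
spec = VisSpec(encoded_fields={"department", "salary"},
               aggregated_fields={"salary"})
print(satisfies_constraints(spec, must_show={"department"},
                            must_hide={"salary"}))  # True
```

The gray area, hidden fields surviving inside aggregates, is where the negotiation between provider and requester described above takes place.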
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article shows what changes when visualization becomes a negotiation tool between people, rather than just an exercise in producing a "good visualization." In particular, situations where you must handle both information that should be hidden and information that must be shown are common in real product design, education, public services, and AI-assisted tool development. HCI practitioners and researchers can learn how to embed trust, pathways for user intervention, and the possibility of failure into interfaces.
CIT's Commentary
What's interesting is that it treats ethics not as a matter of individual intent but as a property of the interaction. When showing and hiding come into conflict, users aren't simply choosing a "correct chart"; working from limited information, they must decide whether to trust the other party, whether to ask for more, and how far to infer. The same structure appears in AI agents and generative tools: rather than asking whether the model is right, it becomes more important to determine when the user should intervene and what they can verify. The finding that a task which seemed to allow automatic grading ultimately required contextual judgment also shows that rule-based automation alone is insufficient when building UX measurement or evaluation tools with LLMs (a sketch of this limitation follows below). In environments like Korea's service context, where speed and efficiency are strongly demanded, designing interfaces that are "slightly imperfect but transparent" becomes even more important.
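As a concrete illustration of that limitation, the sketch below encodes a rule-based grader that refuses to force a pass/fail verdict in the ambiguous case. The rules and the three-way verdict are assumptions of this commentary, not the paper's evaluation method.

```python
# A hedged sketch of why rule-based auto-grading falls short. Charts
# that pass the hard rules can still leak or distort, so the checker
# returns an explicit "undecided" verdict instead of forcing pass/fail.
from enum import Enum

class Verdict(Enum):
    PASS = "pass"
    FAIL = "fail"
    NEEDS_HUMAN_REVIEW = "needs human review"

def grade(encoded_fields: set[str], must_hide: set[str],
          uses_aggregation: bool) -> Verdict:
    leaked = encoded_fields & must_hide
    if not leaked:
        return Verdict.PASS   # No hidden field appears at all.
    if not uses_aggregation:
        return Verdict.FAIL   # A hidden field is shown in raw form.
    # A hidden field appears only through an aggregate: whether that
    # counts as disclosure depends on context (group sizes, how easily
    # values can be reverse-engineered), so no rule fires confidently.
    return Verdict.NEEDS_HUMAN_REVIEW

print(grade({"department", "salary"}, {"salary"}, uses_aggregation=True))
# Verdict.NEEDS_HUMAN_REVIEW
```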
Questions to Consider While Reading
- Q. In real products, how should visualizations like those surfaced in this game, "dangerous but not completely wrong," be presented with warnings or explanations?
- Q. When attaching LLMs or AI-assisted visualization tools, what interactions are needed to help users notice hidden data loss or distortion? (One illustrative direction is sketched after this list.)
- Q. If this kind of negotiation-based data visualization were used in Korea's service environment, how would users' trust formation and re-verification behaviors differ from findings in overseas research?
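As one hypothetical answer to the second question, a tool could diff the data it actually renders against the source and surface what was silently dropped. The record format and message wording below are illustrative assumptions, not any specific tool's API.

```python
# An illustrative sketch: before rendering an AI-generated chart, diff
# the displayed data against the source and report what was omitted.
def disclosure_diff(source_rows: list[dict], shown_rows: list[dict]) -> str:
    dropped_rows = len(source_rows) - len(shown_rows)
    source_fields = set().union(*(r.keys() for r in source_rows))
    shown_fields = (set().union(*(r.keys() for r in shown_rows))
                    if shown_rows else set())
    dropped_fields = source_fields - shown_fields
    notes = []
    if dropped_rows > 0:
        notes.append(f"{dropped_rows} of {len(source_rows)} records omitted")
    if dropped_fields:
        notes.append(f"fields hidden: {', '.join(sorted(dropped_fields))}")
    return "; ".join(notes) or "all source data shown"

source = [{"dept": "A", "salary": 100}, {"dept": "B", "salary": 120}]
shown = [{"dept": "A"}]
print(disclosure_diff(source, shown))
# 1 of 2 records omitted; fields hidden: salary
```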
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original article for full and accurate details.