Teaching Usable Privacy That Actually Works in HCI Classes: Designing, Building, and Evaluating a Course with Active Learning
Teaching Usable Privacy in HCI Education: Designing, Implementing, and Evaluating an Active Learning Graduate Course
Key Points Summarized by HCI Today
- This article introduces a graduate course that teaches Usable Privacy within HCI education.
- The research team designed a 15-week course structured around real cases, role-play, discussions, and guest lectures.
- The course helps students view privacy not only as a technical problem but as a socio-technical issue intertwined with users, companies, and regulation.
- Across two semesters of evaluation, students showed high engagement and an improved ability to explain privacy trade-offs.
- The study proposes a reusable course model for teaching privacy protection in a practical, hands-on way within HCI education.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article shows how to teach Usable Privacy not as mere security knowledge but as a problem of interaction: how people actually understand, make choices, and make mistakes. In particular, bundling role-play, case discussions, and staged research assignments to boost learning outcomes is meaningful for both HCI/UX education and industry practice. The structure presented in the paper is a reference point not only for 'what to teach' but for 'how to shape the experience.'
CIT's Commentary
The core contribution of this paper is that it turns privacy into a training activity focused on the points where people and systems diverge, rather than a course about memorizing rules. Role-play and case discussions, in particular, help students build the ability to read conflicts among stakeholders instead of hunting for a single 'right answer.' This mirrors AI product consent screens and settings menus, which can look simple on the surface yet fail frequently in practice. It is also notable that the teaching method itself was treated as an object of evaluation. Going forward, this opens research questions such as whether LLMs can provide supportive analysis of students' written responses or discussion logs, and, crucially, how far such automation can go without undermining the rigor of learning measurement. In Korea's environment, the faster products add features, as with Naver, Kakao, and many startups, the more important it becomes to design these user intervention pathways and their failure modes.
Questions to Consider While Reading
- Q. If role-play and case discussions improved students' ability to make privacy judgments, how can we verify that effect not only in assignments but also in real task performance or behavior change?
- Q. When teaching privacy in AI-enabled systems, in what order should we cover interface elements so that users can understand system state and intervene, rather than focusing primarily on explanations of model performance?
- Q. When using LLMs to analyze students' reflection essays or discussion content, what mechanisms are needed to preserve both educational efficiency and the validity of measurement?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.