Participatory, Not Punitive: Student-Driven AI Policy Recommendations in a Design Classroom
HCI Today summarized the key points:
- This is a participatory study in which students in a university design course created generative AI policies themselves.
- The research team ran student-led workshops without the professor present, encouraging students to share their real AI-use experiences and concerns candidly.
- Together, students developed 10 policy items, covering examples of acceptable AI use, assignment-specific criteria, citation requirements, and accommodations for English-language learners.
- Through the process, students became more deliberate in their AI use, considering their purpose before reaching for the tool rather than using it reflexively.
- The article argues that trust increases when AI rules are built through dialogue and collaboration rather than punishment, and that this approach can transfer to other classes.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article reframes generative AI not as a question of whether it is a 'good tool' or a 'bad tool,' but as something to be understood through how people actually use it, where they get confused, and under what conditions they trust it or hide its use. In particular, it matters that students are treated not as subjects to be regulated, but as co-designers of policy. For HCI and UX practitioners, the takeaway is that the quality of an AI policy depends less on the AI's features than on how guidance is delivered, how exceptions are handled, and how responsibility is allocated. For researchers, it suggests that participatory design can change not only the policy itself, but also user behavior.
CIT's Commentary
The core of this study is not the meticulous listing of allowed boundaries for AI use, but how those boundaries are understood, and by whom. It is telling that the moment students read policy wording, they immediately think of score deductions and suspicion. That is why interfaces that help users make judgments, such as task-specific examples and brief usage explanations, may matter more than surveillance-like mechanisms such as submitting chat logs. The zine format also functions as a safe intermediary: it is not just an output, but a medium that surfaces experiences that are otherwise hard to articulate. Overall, the study shows that AI policy is ultimately an interaction design problem as well, one that must be designed with transparency, pathways for intervention, and the potential for misunderstanding in mind. For Korean universities, or for companies like Naver and Kakao, copying the same wording is not enough; the explanation approach and feedback loops need to be redesigned to fit the local evaluation culture and power dynamics.
Questions to Consider While Reading
- Q. When students co-create policies, compliance may increase, but there is also a risk that the rules become too loose. How can that balance be designed?
- Q. Quantifying AI use, for example 'allowed up to a certain percentage,' does not reflect real work practices well. What interface or logging approach would be a better alternative?
- Q. When professors' and students' standards for AI use differ, trust breaks down easily. In an educational context, how much transparency is appropriate to ensure fairness?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.