How Do Educators Think About AI-Generated Non-Consensual Intimate Imagery?
Understanding Educators' Perceptions of AI-generated Non-consensual Intimate Imagery
HCI Today summarized the key points
- This article discusses research on how teachers perceive the problem of AI-generated fake sexual images in schools.
- In interviews with 20 U.S. teachers, many said the issue causes significant harm to both students and teachers.
- Students may experience shame, bullying, psychological injury, and declines in academic performance, while schools lack sufficient rules and education to address the problem.
- Teachers said they need AI education, lessons on consent and digital ethics, parent involvement, and clear school rules along with reporting procedures.
- They also said schools can prevent these problems effectively only when technology companies’ blocking features and strong government laws work together.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article does not treat AI as merely a ‘more intelligent generator’; instead, it shows what kinds of harm and response gaps it creates in real school settings. Two points matter especially for HCI practice and research: teachers and counselors are often the first points of contact when an incident occurs, and the harm grows when reporting, education, and discipline operate in silos. The key lesson is that beyond product features, we must design the end-to-end user experience, from reporting pathways to opportunities for intervention.
CIT's Commentary
This paper leads readers to interpret AIG-NCII less as a purely technical problem and more as an interaction failure. In school environments in particular, simply declaring ‘we must block it’ is not enough; the flow outside the screen must be designed: who discovers it first, where to report it, when to intervene, and how to protect the victim. A notable trade-off is that education is prevention, yet if it is too explicit it may actually spark curiosity, which suggests approaches such as scenario-based learning that train judgment while keeping the material low-stimulation. Moreover, this is not only a U.S. school issue; in Korea as well, it is a problem that platforms such as Naver and Kakao, along with startups, run into when designing youth-targeted services and AI features. Rather than importing global safety frameworks wholesale, we need to redesign reporting, blocking, and education flows to reflect Korea’s high mobile usage, rapid spread of content, and the relationships among schools, homes, and platforms.
Questions to Consider While Reading
- Q. In schools, who typically is the first to encounter AIG-NCII, and is there actually an interface that enables immediate intervention?
- Q. What content format is most appropriate for building students’ judgment while reducing concerns that prevention education could spark curiosity?
- Q. When responsibilities are divided among teachers, students, parents, platforms, and government, where is the first design point that should be changed in practice?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.