Teaching in the Age of ChatGPT: Knowing the Pain
To teach in the era of ChatGPT is to know pain.
HCI Today summarizes the key points:
- This article explores how generative AI undermines remote university science classes and their learning assessments.
- Instructors now spend more time detecting academic misconduct than teaching, as students submit AI-produced assignments.
- The author argues that learning requires effort and trial and error; when AI supplies only answers, students lose opportunities to think for themselves.
- Assessments such as short quizzes and writing assignments are especially easy to replace with AI, erasing their original learning effects.
- In the end, AI is not making education better; it is significantly disrupting students’ real learning and instructors’ ability to run classes effectively.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article frames AI not as a ‘smart tool’ but as an interaction design problem that reshapes the educational experience. How students learn, and even how they are assessed, shifts depending on when, why, and how they use AI. In low-context environments such as remote or asynchronous learning in particular, small design differences can dramatically affect trust, user intervention, and the quality of learning, making this a valuable case for HCI practitioners and researchers.
CIT's Commentary
The core of this piece is not the performance of LLMs themselves, but how the structure of learning interactions breaks down. The moment students ‘generate’ answers, an assignment originally meant to support thinking turns into a mere output-production engine. The key issue is not whether AI is banned or allowed, but how visible the system state is and where the user can intervene. Just as safety-critical systems need clearly defined failure modes, education needs explicit failure modes too, along with mechanisms for recovery as the boundary between assessment and learning blurs.

What is especially interesting is that this problem can be revisited through HCI methodologies: even when LLMs assist UX measurement tools, for example, the rigor and reproducibility of measurement must be preserved. In Korea’s online education ecosystem, across large platforms and domestic startups alike, ‘rapid adoption’ is likely demanded even more strongly, so practical intervention design may be needed beyond the global discourse.
Questions to Consider While Reading
- Q. Rather than trying to guess whether a student used AI, what interface signals would show that an assignment is actually prompting genuine thinking?
- Q. In asynchronous online classes, how can we design more points where learners can intervene without overburdening instructors?
- Q. Instead of assessments that block AI use, how should an assignment be structured so that learning still happens even when students use AI?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.