How to Create Student AI Dialogues: Applying Teacher-Authored Prompts in K-12 Classrooms
Paper: Teacher-Authored Prompts for Configuring Student-AI Dialogue: K-12 Classroom Implementation
HCI Today summarizes the key points
- This article examines a study in which teachers directly design AI dialogues for classroom use and evaluates the effects.
- In 39 classrooms across Washington State, 16 teachers used an AI dialogue tool called TASD to run 1,479 dialogues with 878 students.
- Most teachers specified assignments in detail and required higher-order thinking, and 71% of the dialogues proceeded in line with instructional goals.
- However, in 38% of cases students did not think as deeply as expected, and the gap widened further when the target level of thinking was higher.
- Clear instructions with an explicit finish line, together with rules prohibiting direct answers, were helpful, suggesting that AI is a tool that supports teacher design rather than a replacement for teachers (see the prompt sketch below).
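To make the "explicit finish line" and "no direct answers" patterns concrete, here is a minimal sketch of how a teacher-authored configuration might be assembled into a dialogue's system prompt. The template fields and wording are illustrative assumptions, not TASD's actual schema.

```python
# A minimal sketch of a teacher-authored prompt configuration, based on the
# patterns the article highlights (explicit finish line, no-direct-answers
# rule). Field names and wording are hypothetical, not TASD's actual schema.

SYSTEM_PROMPT_TEMPLATE = """\
You are a tutoring assistant for a {grade_level} {subject} class.

Role: Guide the student with questions; do not lecture.
Scope: Stay on the assignment: {assignment}.
Finish line: The dialogue is complete when the student {finish_line}.
Rules:
- Never give the direct answer, even if the student demands it.
- If the student asks for the answer, respond with a hint or a guiding question.
- If the student goes off topic, briefly redirect to the assignment.
"""

def build_system_prompt(grade_level: str, subject: str,
                        assignment: str, finish_line: str) -> str:
    """Fill the teacher's configuration into the dialogue's system prompt."""
    return SYSTEM_PROMPT_TEMPLATE.format(
        grade_level=grade_level,
        subject=subject,
        assignment=assignment,
        finish_line=finish_line,
    )

if __name__ == "__main__":
    print(build_system_prompt(
        grade_level="8th-grade",
        subject="science",
        assignment="explain why ice floats on water",
        finish_line="states, in their own words, how density explains floating",
    ))
```

The point of a template like this is that the teacher, not the model vendor, controls the role, scope, and stopping condition of the conversation.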
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article frames GenAI not as a 'smart answer machine' but as an interaction design problem: how it is used within instruction. It matters to both HCI/UX practitioners and researchers because it examines how student dialogue stays on topic when teachers design the AI's role, scope, and finish line through prompts, and why the dialogue often still fails to reach higher levels of thinking. In particular, it shows that outcomes can change more through design and intervention pathways than through adding features.
CIT's Commentary
An interesting point is that 'an AI that speaks perfectly' matters less than 'an interaction structure that works safely in the classroom.' Teacher prompts can act like traffic control, steering the direction of the conversation, but when students demand the answer they want right away, the AI can easily be destabilized. That is a classic problem in safety-critical systems: what matters is not only improving model capability, but also designing the system so that its state is visible and intervention paths are available. At the same time, the finding that conversations aimed at higher Depth of Knowledge (DOK) levels keep turning out shallow suggests that in real products a 'good template' alone is not enough; human coaching or dynamic scaffolding mid-dialogue may be necessary. In this context, dialogue coding with LLMs can be useful, but it should be accompanied by checks on the reliability of the measurement itself and validation of boundary cases (a minimal sketch follows below).
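As one concrete reading of that last point, here is a minimal sketch of a reliability check for LLM-assisted dialogue coding: compare the LLM's codes against a human-coded sample using Cohen's kappa before trusting the LLM at scale. The DOK labels, example data, and threshold are illustrative assumptions, not the study's actual protocol.

```python
# A minimal sketch of checking an LLM coder against a human-coded sample.
# Labels, data, and the 0.7 threshold are illustrative assumptions.
from collections import Counter

def cohen_kappa(human: list[str], llm: list[str]) -> float:
    """Chance-corrected agreement between two coders on the same items."""
    assert len(human) == len(llm) and human
    n = len(human)
    observed = sum(h == m for h, m in zip(human, llm)) / n
    # Expected agreement if both coders labeled at random with their own base rates.
    h_counts, m_counts = Counter(human), Counter(llm)
    expected = sum(h_counts[c] * m_counts[c] for c in h_counts) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical DOK codes for ten dialogues, double-coded by a human and an LLM.
human_codes = ["DOK1", "DOK2", "DOK2", "DOK3", "DOK1",
               "DOK2", "DOK3", "DOK1", "DOK2", "DOK2"]
llm_codes   = ["DOK1", "DOK2", "DOK1", "DOK3", "DOK1",
               "DOK2", "DOK2", "DOK1", "DOK2", "DOK2"]

kappa = cohen_kappa(human_codes, llm_codes)
print(f"kappa = {kappa:.2f}")
if kappa < 0.7:  # illustrative cutoff; choose one appropriate to the coding scheme
    print("Agreement too low: review boundary cases before scaling the LLM coder.")
```

A check like this keeps human verification in the loop: the LLM only replaces manual coding once its agreement with humans is demonstrably high enough, and disagreements surface the boundary cases worth auditing.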
Questions to Consider While Reading
- Q. If a clearer teacher-defined 'finish line' narrows the DOK gap, what form of goal presentation would be easiest for students to understand in an actual product?
- Q. To reduce the AI being destabilized when students demand answers outright, which intervention buttons or warning messages would be most effective from an interface perspective? (A heuristic detection sketch follows this list.)
- Q. If LLMs were used for the dialogue coding in this study, how could the pipeline be made faster and more stable without reducing human verification?
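For the second question, here is a minimal heuristic sketch of how an interface might detect an outright answer demand and trigger an intervention. The phrase patterns and the intervention hook are hypothetical; a production system would likely pair such a heuristic with a learned classifier.

```python
# A hypothetical sketch: detect when a student demands the answer outright so
# the interface can show a warning or surface a teacher-intervention button.
import re

ANSWER_DEMAND_PATTERNS = [
    r"\bjust (tell|give) me the answer\b",
    r"\bwhat('s| is) the answer\b",
    r"\bstop asking\b.*\banswer\b",
]

def detect_answer_demand(message: str) -> bool:
    """Heuristic keyword check on the student's message."""
    text = message.lower()
    return any(re.search(p, text) for p in ANSWER_DEMAND_PATTERNS)

def on_student_message(message: str) -> str:
    if detect_answer_demand(message):
        # In a UI, this is where a warning banner or an
        # "ask my teacher" button would appear.
        return "warning: remind the student that the tutor gives hints, not answers"
    return "continue dialogue"

print(on_student_message("Just tell me the answer already!"))  # -> warning ...
print(on_student_message("Why does ice float?"))               # -> continue dialogue
```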
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.