Same Feedback, Different Source: How AI vs. Human Feedback Attribution and Credibility Shape Learner Behavior in Computing Education
Key Points Summarized by HCI Today
- This article examines research on how students interpret feedback from AI versus humans, and how that interpretation changes their behavior.
- The researchers took feedback generated by the same LLM and attributed it to either an AI system or a human teaching assistant, changing only the visible source.
- When students believed the feedback was human, they stayed engaged with the assignment longer, and this difference was not explained by waiting time alone.
- On the other hand, students who were told the feedback was human but did not believe it performed worse than those who received feedback labeled as AI.
- In short, human attribution helps only when learners believe it is real; in contexts where that belief is hard to sustain, a candid AI label is the safer choice.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article shows that the issue with AI feedback is not simply whether the model ‘gets it right,’ but how users interpret that feedback and translate it into action. Even when the content is identical, how long learners stay engaged and how well they perform can differ depending on whether they believe the feedback came from AI or from a human. This matters especially for UX design in services where trust and intervention are central, such as help systems, tutors, and copilots. The study offers useful experimental evidence for such contexts.
CIT's Commentary
The core of this research is not performance comparison but learners’ beliefs and lived experience during the interaction. Even when the output of the same LLM is identical, a cue that ‘a person saw this’ can create motivation; if that cue is not believable, however, the results can be worse than with a fully transparent AI. This is a compelling example of when the so-called ‘human touch’ in AI products helps and when it backfires. In environments where AI features ship quickly and users grow steadily more accustomed to AI, as with many domestic services, it may be safer to prioritize honest source labeling and carefully designed intervention pathways over convincing human impersonation. These findings also raise research questions for anyone building LLM-based feedback systems or UX measurement tools: beyond surface framing, trust and behavioral change need to be rigorously validated.
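As one concrete illustration of the ‘honest source labeling’ this commentary recommends, here is a minimal sketch in TypeScript. The type names, label wording, and the three-way source distinction are hypothetical design choices for illustration, not something specified by the study.

```typescript
// Hypothetical sketch: attach explicit source metadata to every piece of
// feedback, so the UI renders an honest label instead of implying human review.
type FeedbackSource =
  | { kind: "ai"; model: string }                   // fully automated
  | { kind: "ai_human_reviewed"; reviewer: string } // AI draft, checked by a person
  | { kind: "human"; author: string };              // written by a person

interface Feedback {
  text: string;
  source: FeedbackSource;
}

// Produce a candid label; never claim human involvement that did not happen.
function sourceLabel(f: Feedback): string {
  switch (f.source.kind) {
    case "ai":
      return `Generated by AI (${f.source.model})`;
    case "ai_human_reviewed":
      return `AI-generated, reviewed by ${f.source.reviewer}`;
    case "human":
      return `Written by ${f.source.author}`;
  }
}
```

Making the source a required, structured field rather than free text forces every surface that renders feedback to show some label, which keeps an interface from silently implying human review.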
Questions to Consider While Reading
- Q. How can we design an interface that preserves the motivational benefits of human attribution while reducing the negative effects when the feedback is not trusted?
- Q. At what level of detail should the ‘source labeling’ of AI feedback be provided so that users can appropriately judge how much to trust it and how much human involvement it reflects?
- Q. In domestic education and coding services, what kind of hybrid framing can transparently present AI-assisted feedback while preserving the value of human involvement?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original article for accurate details.