Overreliance on AI in Information-Seeking from Video Content
HCI Today summarizes the key points:
- This article reports a study that experimentally analyzes how generative AI affects accuracy, efficiency, and trust in video-based information seeking.
- The research team had 917 participants answer 8,253 video-based questions, comparing three conditions: viewing only the videos, a helpful AI, and a deceiving AI.
- The helpful AI increased accuracy by up to 27–35 percentage points even when it could not see the relevant videos, and it reduced task time by 10% for short videos and 25% for long videos.
- However, participants overtrusted the AI's answers and were easily misled by the deceiving AI: accuracy dropped by up to 32 percentage points, especially when participants did not verify the videos.
- Self-reported trust remained nearly unchanged across conditions, showing that reliance on AI in video information seeking can increase safety risks.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
From an HCI perspective, this article clearly highlights the dual nature of AI-mediated information seeking. In media where verification is costly, such as video, LLMs can improve both accuracy and efficiency, yet the study also experimentally demonstrates that they may encourage users to overtrust and abandon verification. The findings offer direct implications for practitioners and researchers thinking about UX design, trust calibration, and interaction patterns for error detection.
CIT's Commentary
From a CIT perspective, this study is less about whether ‘AI helps with search’ and more about explaining ‘when users stop verifying.’ In environments where reconstructing context is difficult, such as video-based information, LLM summaries can easily become not just a convenience feature but a cognitive shortcut. What is especially interesting is that accuracy and confidence remain decoupled, which suggests that confidence indicators alone may not be enough to prevent overreliance. Going forward, HCI design should focus not only on improving answer quality but also on mechanisms that naturally elicit verification behaviors, such as surfacing sources, regenerating supporting evidence, and prompting for counterevidence. Notably, the deceiving AI condition demonstrates the real threat model quite convincingly.
Questions to Consider While Reading
- Q. In video-based information seeking, what interface signals could capture the moment when users trust an AI answer and skip verification?
- Q. If accuracy increases but confidence does not change, how should UX design approach trust calibration?
- Q. Given that AI reliance patterns differ between short and long videos, what additional contextual variables should be considered to generalize the relationship between information length and verification behavior?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original article for full details.