AI Can Help—But Skills May Not Improve: The “Confidence Up, Ability Stuck” Problem
Confidence Without Competence in AI-Assisted Knowledge Work
HCI Today summarized the key points
- This study examines conversational strategies designed to reduce overconfidence when using LLMs.
- The research team built a web tool called Deep3, drawing on input from 85 European university students and 16 interview sessions.
- Deep3 offers three approaches: step-by-step solutions with learner re-explanation, comparison of opposing viewpoints, and hints.
- In the experiment, a standard LLM made participants feel they understood, yet their actual performance was the lowest.
- Meanwhile, the hint-based approach improved performance the most, suggesting that a measure of uncomfortable, thought-provoking friction is important for learning.
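The three modes above can be sketched as prompt-wrapping strategies. A minimal illustration follows; all names and prompt wording here are assumptions for illustration, not the paper's actual implementation:

```python
# Hypothetical sketch of Deep3-style conversational strategies.
# Each strategy wraps the learner's question in a system prompt that
# adds a different kind of "friction" before a full answer is revealed.

STRATEGIES = {
    # Step-by-step solution, then ask the learner to re-explain it.
    "reexplain": (
        "Walk through the solution step by step, then ask the learner "
        "to restate the key idea in their own words before continuing."
    ),
    # Present two opposing viewpoints and let the learner compare them.
    "contrast": (
        "Present two opposing viewpoints on the question and ask the "
        "learner which they find more convincing, and why."
    ),
    # Give only a hint; withhold the full answer at first.
    "hint": (
        "Do not give the full answer yet. Offer one small hint that "
        "nudges the learner toward the next step."
    ),
}

def build_messages(strategy: str, question: str) -> list[dict]:
    """Assemble a chat-style message list for the chosen strategy."""
    if strategy not in STRATEGIES:
        raise ValueError(f"unknown strategy: {strategy!r}")
    return [
        {"role": "system", "content": STRATEGIES[strategy]},
        {"role": "user", "content": question},
    ]

# Example: the hint strategy withholds the answer and nudges instead.
msgs = build_messages("hint", "Why does quicksort degrade to O(n^2)?")
print(msgs[0]["content"])
```

The point of the sketch is that the "friction" lives entirely in the interaction structure (the system prompt), not in the model itself, which mirrors the study's finding that interaction design, rather than raw model capability, drove the learning outcomes.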
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article reframes LLMs: not merely tools that deliver answers quickly, but interactions that can actively prompt and test a user's thinking. In particular, by experimentally confirming that feeling like you understand can differ from actually understanding, it offers important implications for UX and learning design. For HCI practitioners and researchers, it is a piece that prompts reconsideration of how a 'comfortable experience' and a 'good learning experience' can come into conflict.
CIT's Commentary
What is most interesting in this study is that the interaction structure, not the AI's raw performance, changed the outcomes. Giving answers right away made users feel comfortable, but it also weakened actual understanding the most. By contrast, introducing small frictions, such as self-explanation, counterarguments, and step-by-step hints, made learning more robust. That said, more friction is not always better; what matters is where the user can get stuck and where they can re-engage. The same applies to Korean services. In environments that prioritize fast task completion, such designs may not be adopted immediately, so it may be more realistic to weave lightweight interventions, like 'check-your-understanding questions' or 'revisiting', into a familiar Naver/Kakao-style flow. Ultimately, the key is to view AI not as a smart answer machine, but as a tuning mechanism that helps users make better judgments.
Questions to Consider While Reading
- Q. Could a method that forces self-explanation risk dropping users' confidence too much?
- Q. Between step-by-step hints and counterargument-style interactions, which approach is more likely to fit a broader user base in real products?
- Q. In Korean education and work services, what interface patterns would be most appropriate for naturally incorporating this kind of 'intentional friction'?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.