The more AI helps, the more people “give up”: what happens when users stop trying to change the tools they use
Learned Helplessness in AI-Assisted Work: When Users Stop Trying to Shape the Tools They Use
HCI Today summarized the key points
- This article describes a phenomenon in which people using AI tools stop trying to change things themselves after repeatedly experiencing that their actions make no difference.
- As in the classic 'learned helplessness' research, people eventually stop trying when their actions don't seem to change the outcome.
- With AI tools, this can happen easily when edits aren't reflected, feedback isn't visible, or users can't change their settings.
- Even if people appear to be using the system well on the surface, their edits, reports, and suggestions actually decline, which makes it easier for the AI to stop learning and for mistakes to go unnoticed.
- To fix this, show edit results immediately, let users change their settings, and continuously communicate how their input is being reflected (see the sketch after this summary).
This summary was generated by an AI editor based on HCI expert perspectives.
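To illustrate that last recommendation, here is a minimal sketch, assuming a hypothetical `FeedbackLoop` service with `EditStatus` states that are not from the article, of an interaction pattern in which every user edit receives an immediate, visible receipt instead of disappearing silently.

```python
# Hypothetical sketch: every user edit produces a visible "receipt" so the
# user can see what their action changed. Names (FeedbackLoop, EditStatus,
# EditReceipt) are illustrative, not from the article.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class EditStatus(Enum):
    APPLIED_NOW = "applied to the current output"
    QUEUED_AS_FEEDBACK = "recorded as feedback for a future model update"
    REJECTED = "could not be applied"


@dataclass
class EditReceipt:
    edit_id: str
    status: EditStatus
    detail: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class FeedbackLoop:
    """Records user edits and always returns a receipt for the UI to display."""

    def __init__(self) -> None:
        self.log: list[EditReceipt] = []

    def submit_edit(self, edit_id: str, can_apply_now: bool, reject_reason: str = "") -> EditReceipt:
        if can_apply_now:
            receipt = EditReceipt(edit_id, EditStatus.APPLIED_NOW,
                                  "Your change is visible immediately.")
        elif reject_reason:
            receipt = EditReceipt(edit_id, EditStatus.REJECTED, reject_reason)
        else:
            receipt = EditReceipt(edit_id, EditStatus.QUEUED_AS_FEEDBACK,
                                  "Your correction will inform the next update.")
        self.log.append(receipt)
        return receipt  # the UI should surface receipt.status and receipt.detail


if __name__ == "__main__":
    loop = FeedbackLoop()
    r = loop.submit_edit("edit-001", can_apply_now=False)
    print(r.status.value, "-", r.detail)
```

The design point is that `submit_edit` never returns silently: even an edit that cannot be applied right now produces a status the interface can show, which is exactly the loop-closing the summary recommends.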
Why Read This from an HCI Perspective
This article frames AI not just as a 'smart feature' but as an interaction problem that includes how users respond and intervene. In particular, it explains why users quietly stop trying when their edits aren't reflected or when feedback never closes the loop, an important signal that UX practitioners often miss. Even if product metrics look fine, collaboration can still break down in practice, which has major implications for HCI research and design.
CIT's Commentary
The most important insight is that what matters is not AI performance itself so much as preserving the user's sense that 'my actions change the system.' When edits aren't applied or feedback seems to go nowhere, users quickly shift into a state that looks like learned helplessness; this is not mere dissatisfaction but a failure of interaction design. Especially in safety-critical work, even when a human is structurally in the loop, users can end up cognitively outside it. So rather than relying on surface metrics like approval rates, examine whether overrides, edits, and configuration changes actually circulate in a meaningful way, and whether that pathway is visible to the user. An interesting point is that the research tools used to uncover these problems can themselves be enhanced with AI: even if you summarize interview logs with an LLM, the interpretation criteria that distinguish 'giving up' from 'satisfaction' must be kept rigorous. In industry, this problem is easier to hide in fast-deployment environments, whether large services like Naver and Kakao or domestic startups, which makes closed feedback loops and transparent status indicators even more important.
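To make that measurable, here is a minimal sketch, assuming hypothetical interaction-log events (`override`, `edit`, `settings_change`, `report`) and illustrative thresholds not taken from the article, of how one might flag users whose interventions collapse even though their usage continues, a pattern suggesting learned helplessness rather than satisfaction.

```python
# Hypothetical sketch: track intervention behavior over time instead of
# surface metrics like approval rate. Event names and thresholds are
# illustrative only.
from collections import defaultdict

INTERVENTION_EVENTS = {"override", "edit", "settings_change", "report"}


def intervention_rate(events, start_week, end_week):
    """Share of sessions in [start_week, end_week) with at least one intervention."""
    sessions = defaultdict(set)
    for e in events:
        if start_week <= e["week"] < end_week:
            sessions[e["session_id"]].add(e["type"])
    if not sessions:
        return 0.0
    intervened = sum(1 for types in sessions.values() if types & INTERVENTION_EVENTS)
    return intervened / len(sessions)


def flag_possible_helplessness(events, midpoint_week, drop_threshold=0.5):
    """Flag users whose intervention rate dropped sharply while they kept using the tool."""
    by_user = defaultdict(list)
    for e in events:
        by_user[e["user_id"]].append(e)
    flagged = []
    for user, evs in by_user.items():
        early = intervention_rate(evs, 0, midpoint_week)
        late = intervention_rate(evs, midpoint_week, float("inf"))
        still_active = any(e["week"] >= midpoint_week for e in evs)
        if still_active and early > 0 and late < early * drop_threshold:
            flagged.append((user, early, late))
    return flagged
```

A user flagged this way is only a candidate: distinguishing quiet satisfaction from quiet resignation still requires qualitative follow-up such as interviews, which is exactly where the interpretation criteria mentioned above matter.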
Questions to Consider While Reading
- Q. When users rarely make edits or provide feedback, how can we tell whether it reflects satisfaction or learned helplessness?
- Q. How can we design an interface that gives users the sense that their input is reflected while still reducing incorrect interventions?
- Q. In environments like Korea's, with rapid experimentation and frequent releases, how much visibility should we give the feedback loop?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.