"Oops! ChatGPT is Temporarily Unavailable!": A Diary Study on Knowledge Workers' Experiences of LLM Withdrawal
HCI Today summarized the key points
- This article examines what becomes visible when knowledge workers stop using LLMs (Large Language Models).
- The research team conducted a four-day diary study and interviews with 10 Korean knowledge workers who frequently use LLMs.
- When the LLM disappeared, immediate support for search, writing, and problem-solving was cut off, making work delays, avoidance, and discomfort clearly apparent.
- On the other hand, when participants worked alone and constructed their own reasoning, clarity of work, ownership of results, and awareness of priorities increased.
- Ultimately, LLMs have taken root not as optional tools but as the foundation of everyday work, meaning value-centered usage design is necessary.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article is meaningful in that it shows, through real-world experiences of stopping use, that LLMs have become more than productivity tools: they have solidified into the basic infrastructure of work. For HCI/UX practitioners and researchers, it helps explain how the absence of a technology can fracture workflows, self-efficacy, and collaboration practices. In particular, it raises design questions not about whether to use LLMs, but about the criteria, the extent, and the values with which they should be used.
CIT's Commentary
An interesting point is that the study does not treat dependency solely as a flaw; it also interprets it as an infrastructural state in which everyday work has been reorganized around LLMs. Rather than a mere detox study, it is closer to uncovering the invisible decomposition of tasks and the rediscovery of values that stopping use makes visible. That said, whether the "recovery" and "reappraisal" observed over a short period of just four days will persist in the long term still requires separate validation. Practically, it seems more important to design interactions that set boundaries by task and leave room for human judgment than to rely on prohibition or restraint. Value-driven appropriation will gain strength when it translates into concrete interaction patterns, such as prompt guides, stepwise requests for help, and explicit checkpoints for review.
Questions to Consider While Reading
- Q. In an environment where LLM use has become work infrastructure, how can we design boundaries so that certain tasks must be led by humans?
- Q. If value-driven appropriation is applied to real work tools, what interaction patterns can maintain efficiency without undermining users' expertise?
- Q. Can the value recovery revealed in a short withdrawal experience be sustained in long-term usage contexts, and what longitudinal research could confirm this?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.