Ask HN: Are you getting addicted to the dev workflow of coding with agents?
HCI Today's Summary of Key Points
- This article covers a comment thread that compares the LLM usage experience to game reward structures and discusses the future form of AI tools.
- One comment describes each execution after a long planning session as opening a loot box, because the outcome is hard to predict.
- The same comment frames this repetition of uncertain rewards as an operant-conditioning structure, like a Skinner box.
- Another comment expects the next iteration of Claude Code to resemble the metaverse, a holodeck, or the next Minecraft.
- Both viewpoints suggest that LLM-based tools may evolve around unpredictable rewards and a new kind of interface experience.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This piece compares the experience of using LLMs (large language models) to ‘uncertain rewards’ and a ‘Skinner box,’ explaining why users keep repeating prompts. For HCI/UX practitioners and researchers, it’s meaningful because it shows that generative AI interaction is not just an efficiency tool—it’s a behavior design problem intertwined with expectations, rewards, and learning. It also explores how agentic tools may change users’ immersion and reliance.
CIT's Commentary
This comment is interesting because it locates the core value of generative AI not in ‘generating correct answers’ but in ‘designing the reward structure.’ A system whose output changes every time you submit a prompt can aid exploration, but it can also create conditioning-like interaction loops that pull users into repeated runs. That’s why UX evaluation shouldn’t focus only on accuracy or speed; it should also consider expectation management, tolerance for failure, and the ability to stop. The second comment’s ‘future metaverse’ analogy may sound exaggerated, but it can be read as a signal that an interface wrapping the entire work process could become a new computing paradigm. In the end, the key isn’t fantasy—it’s how predictable and controllable the collaboration partner becomes in everyday work.
Questions to Consider While Reading
- Q. In LLM interactions, do ‘uncertain rewards’ increase user engagement, or do they lead to fatigue and overuse?
- Q. When evaluating the UX of generative AI, what behavioral and qualitative indicators should be considered alongside accuracy?
- Q. As agentic tools move deeper into the workflow, how should users’ sense of control and responsibility be designed?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.