An Era Where AI Agents Become “Users” and Do the Work Directly
AI Agents as Users
Key Points, Summarized by HCI Today
- This article explains how the definition of “users” and design standards must change as AI agents begin using websites and apps the way people do.
- AI agents are new users that can perform tasks ranging from search and form input to reservations and payments; “user” no longer means only a person.
- Agents can view and read screens, parse the accessibility tree, and sometimes act directly through APIs, so an easy-to-understand structure matters.
- If tasks such as checking dates or ordering products are designed only for human eyes, agents are likely to get them wrong, harming users as well.
- For now, we should strengthen accessibility so that both people and agents can use systems more easily; even so, some services may block agents for security and revenue reasons.
This summary was generated by an AI editor based on HCI expert perspectives.
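The idea of an agent reading the accessibility tree rather than raw pixels can be sketched roughly as follows. This is a minimal illustration: the tree shape and the `find_node` helper are hypothetical simplifications, not the API of any specific browser or agent framework.

```python
# Hypothetical, simplified accessibility tree. Real trees (e.g. the one a
# browser exposes) carry far more attributes, but the role/name lookup an
# agent performs is the same basic idea.
from typing import Optional


def find_node(node: dict, role: str, name: str) -> Optional[dict]:
    """Depth-first search for the first node matching a role and accessible name."""
    if node.get("role") == role and node.get("name") == name:
        return node
    for child in node.get("children", []):
        match = find_node(child, role, name)
        if match is not None:
            return match
    return None


# A toy snapshot of a booking page's accessibility tree.
page = {
    "role": "WebArea", "name": "Hotel booking",
    "children": [
        {"role": "textbox", "name": "Check-in date"},
        {"role": "textbox", "name": "Check-out date"},
        {"role": "button", "name": "Reserve"},
    ],
}

# An agent that targets role + accessible name succeeds only when the
# interface exposes clear labels, which is the accessibility argument above.
target = find_node(page, role="button", name="Reserve")
```

Note that the lookup fails entirely when labels are missing or ambiguous, which is exactly where screens designed "only for human eyes" break down for agents.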
Why Read This from an HCI Perspective
This article helps you see how the scope of “users” expands from people to AI agents. For HCI/UX practitioners, it’s a chance to reaffirm why accessibility, predictable interactions, and clear structure still matter. For researchers, it raises new questions about how agents actually interpret real systems and where they fail. The key point is that—more than model performance—the interface can determine the outcome.
CIT's Commentary
An interesting takeaway is that the moment you call agents a “new kind of user,” the criteria for a good interface get reorganized again. A screen that’s easy for humans isn’t necessarily easy for machines, but a well-structured screen benefits both people and agents. That said, real products don’t always satisfy this neatly. In domains where safety and regulation matter—such as finance, healthcare, and reservation services—agent-friendliness doesn’t automatically translate into convenience. Instead, it becomes crucial how you handle confirmation steps and how you design failure paths. So what’s needed isn’t just “agent support,” but interaction design that makes system state transparent and lets users intervene at any time. At the same time, this shift changes research questions as well. It’s important to augment existing accessibility metrics or UX measurement tools with AI, while also rigorously preserving the reliability and reproducibility of the measurements themselves.
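The point about confirmation steps and failure paths can be illustrated with a minimal sketch: gate every side-effecting agent action behind an explicit, human-visible confirmation, and make the failure path a structured result the user can inspect. All names here are hypothetical, not a real product's API.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ActionResult:
    ok: bool
    detail: str


def run_with_confirmation(
    describe: str,
    action: Callable[[], str],
    confirm: Callable[[str], bool],
) -> ActionResult:
    """Run a side-effecting agent action only after explicit confirmation."""
    if not confirm(describe):
        # Failure path: surface a clear, recoverable state instead of
        # acting silently, so the user can intervene and retry.
        return ActionResult(ok=False, detail=f"declined: {describe}")
    try:
        return ActionResult(ok=True, detail=action())
    except Exception as exc:
        return ActionResult(ok=False, detail=f"failed: {exc}")


# Usage: the confirm callback is the point where a human can intervene.
result = run_with_confirmation(
    "Pay $120 for the reservation",          # illustrative description
    action=lambda: "payment accepted",        # stand-in for a real payment call
    confirm=lambda msg: msg.startswith("Pay"),  # stand-in for a real user prompt
)
```

The design choice is that the system state is transparent at every step: the agent never acts without a description the user saw, and a declined or failed action returns data rather than disappearing.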
Questions to Consider While Reading
- Q. In interfaces that agents actually use, to what extent do human-centered accessibility guidelines remain valid, and where do their limitations start to show?
- Q. When allowing agent use in safety-critical services, how should failure paths be designed so that users can intervene and correct issues at the right times?
- Q. When building UX measurement tools with LLMs, how can you verify the balance between measurement convenience and research rigor?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.