Agentic AI Ethics: Who Should Decide the Rules?
UX Researchers as Ethical Arbitrators: Navigating the Ethics of Agentic AI
HCI Today summarized the key points
- This article explains the ethical role UX researchers should play in an era when AI makes decisions on our behalf.
- In the past, the experience centered on apps where users pressed buttons directly, but services powered by AI agents that act on the user's behalf are now becoming more common.
- These systems are convenient, but if money or other important choices change without users understanding why, users lose both trust and a sense of control.
- That's why UX researchers shouldn't focus only on usability; they also need to review safety mechanisms such as providing explanations, allowing users to stop the system, and enabling undo.
- Ultimately, UX research going forward is about more than whether something is easy to use; it's about ensuring users can hand over responsibility with confidence.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article helps you see AI not as a "smart feature," but as an "interface that acts on a person's behalf." For HCI/UX practitioners, the key issue isn't whether the model produces the right answer; it's when users can and should intervene, and when they can safely stop the system. In particular, the article treats elements like trust, explanations, and undo as part of a safe user experience, clearly pointing out risks that are easy to overlook when designing agentic AI.
CIT's Commentary
What's interesting is that the challenge of agentic AI is being addressed not through performance, but through the design of delegated authority. The more automatically a system acts, the more comfortable users seem to become, but in reality the core question is: "When do I lose decision-making power?" In high-stakes domains like finance or logistics, a single quiet malfunction can severely erode trust. So what matters isn't more automation; it's a structure in which status is visible, the system can be stopped, and actions can be rechecked. Going one step further, this becomes a research question: how to measure such transparency and intervention points quantitatively. Even when LLMs are used to assist UX measurement tools, it's still essential to be rigorous about the basis for measuring trust and perceived control. In contexts like Korea's service environment, where rapid releases and high mobile dependence are common, these "slightly inconvenient but safe" mechanisms may need to be smaller and more naturally integrated than in global case studies.
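The "status visible, stoppable, recheckable" structure described in the commentary can be illustrated with a minimal sketch. All names here (`AgentAction`, `SupervisedAgent`, and the methods) are hypothetical and illustrative, not part of any real agent framework; the point is only that every delegated action is queued visibly, runs only after approval, and carries its own undo.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAction:
    """A delegated action that stays visible, stoppable, and reversible."""
    description: str             # shown to the user before anything runs
    execute: Callable[[], None]  # the action itself
    undo: Callable[[], None]     # every action must ship with its reverse

class SupervisedAgent:
    """Runs actions only after user approval and keeps an undo history."""

    def __init__(self) -> None:
        self.pending: list[AgentAction] = []   # status visible: inspectable queue
        self.history: list[AgentAction] = []   # recheckable: what already ran

    def propose(self, action: AgentAction) -> None:
        # The agent never acts silently; it queues the action for review.
        self.pending.append(action)

    def approve_all(self) -> None:
        # Only an explicit user decision moves actions from pending to done.
        while self.pending:
            action = self.pending.pop(0)
            action.execute()
            self.history.append(action)

    def stop(self) -> None:
        # Stoppable: the user can halt everything not yet executed.
        self.pending.clear()

    def undo_last(self) -> None:
        # Reversible: the most recent executed action can be rolled back.
        if self.history:
            self.history.pop().undo()
```

In a high-stakes domain like finance, the `description` field is what makes a pending transfer legible to the user before approval, and `undo_last` is what keeps a quiet malfunction from becoming irreversible.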
Questions to Consider While Reading
- Q. When an autonomous AI acts on a user's behalf, at what point can we say human re-verification is absolutely necessary?
- Q. Between an interface with lots of explanations and one that executes immediately, what criteria can preserve trust while maintaining efficiency?
- Q. In high-risk areas such as finance, hiring, and logistics, which UX metrics are most useful for quantitatively measuring users' sense of control and safety?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.