Human Choice, Causality, and the Role of the “Human-Computer Interface” in Risky AI
Paper: Human Agency, Causality, and the Human Computer Interface in High-Stakes Artificial Intelligence
HCI Today's Summary of the Key Points
- This article addresses the problem of people losing control of AI in high-stakes domains, rather than the question of whether people trust AI.
- The author argues that, just as poorly designed UIs have caused major accidents by misrepresenting system state on screen, an AI that misrepresents what it is doing can amplify human error in the same way.
- It also points out that even when AI predictions are uncertain, explanation tools often show only correlations rather than causes, leaving users unable to judge why an outcome occurred.
- To address this, the author proposes the Causal-Agency Framework (CAF), which bundles causal reasoning, uncertainty display, and intervention-capable interfaces (a minimal sketch of this bundle appears after this summary).
- Ultimately, the article argues that an AI interface people can understand and change matters more than an AI that merely earns trust.
This summary was generated by an AI editor based on HCI expert perspectives.
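To make the bundle concrete, here is a minimal TypeScript sketch of what the three CAF components could look like as a single decision-support payload. Every type and field name below is an illustrative assumption, not a definition from the original paper.

```typescript
// Hypothetical sketch of the three CAF components named in the article.
// All type and field names are illustrative, not from the paper.

// Causal reasoning: attributions that name causes, not just correlated features.
interface CausalAttribution {
  cause: string;              // e.g. "sensor_3_drift"
  effectOnPrediction: number; // signed contribution to the model output
  counterfactual: string;     // what would change if this cause were removed
}

// Uncertainty display: enough structure for the UI to render a range, not a point.
interface UncertaintyReport {
  pointEstimate: number;
  interval: [number, number]; // e.g. a 90% credible interval
  calibrated: boolean;        // whether the interval has been calibration-checked
}

// Intervention-capable interface: every automated action exposes a handle
// the user can inspect, override, or halt.
interface InterventionPoint {
  actionId: string;
  description: string;
  override: (newValue: unknown) => void; // user substitutes their own decision
  halt: () => void;                      // user stops the automation outright
}

// One decision-support payload bundling all three, as the framework proposes.
interface CafDecision {
  prediction: number;
  causes: CausalAttribution[];
  uncertainty: UncertaintyReport;
  interventions: InterventionPoint[];
}
```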
Why Read This from an HCI Perspective
This article helps you rethink AI not as a ‘smart model,’ but as a question of how people can steer, understand, and stop it. In particular, it argues that in high-risk settings, what matters more than explainability is how transparent the system state is and when users are able to intervene. For HCI and UX practitioners, it is a warning that interface failures can quickly become safety failures. For researchers, it is a hint that evaluation criteria should shift from trust to actual joint task performance.
CIT's Commentary
The most interesting point is the attempt to move explainable AI beyond ‘easy-to-read explanations’ toward interfaces that users can actually intervene in. Applying this in industry, however, introduces a trade-off: the more states, uncertainties, and intervention paths you expose, the safer the system may become, but the interface grows heavier and users can suffer decision fatigue. What matters is therefore not showing more information, but designing so that only the control points that are truly necessary right now remain (a gating sketch follows this paragraph).

The framework can also be adapted beyond medical and public-sector contexts to services like Naver and Kakao. For example, it raises research questions about how far recommendation, search, and agent features should take over for users, and from what point users should be able to take control back. In the end, the core question is not whether the AI gets things right, but whether users can correct wrong automation: when, why, and how.
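One concrete reading of "leave only the control points that are truly necessary right now" is a gate that surfaces an intervention only when a pending action is both uncertain and consequential enough to deserve attention. The sketch below is illustrative; the thresholds, field names, and the simple gating rule are assumptions, not anything the article specifies.

```typescript
// Illustrative gate: low-stakes, confident actions run silently;
// the rest wait for the user. Thresholds and names are assumptions.

interface PendingAction {
  id: string;
  uncertainty: number; // 0..1, e.g. width of a normalized credible interval
  impact: number;      // 0..1, estimated cost of acting wrongly
}

function visibleControls(
  actions: PendingAction[],
  uncertaintyGate = 0.3,
  impactGate = 0.5,
): PendingAction[] {
  return actions.filter(
    (a) => a.uncertainty >= uncertaintyGate || a.impact >= impactGate,
  );
}

// Example: only the ambiguous, high-stakes case is surfaced for human review.
const queue: PendingAction[] = [
  { id: "auto-refill", uncertainty: 0.05, impact: 0.1 },
  { id: "triage-escalation", uncertainty: 0.45, impact: 0.9 },
];
console.log(visibleControls(queue).map((a) => a.id)); // ["triage-escalation"]
```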
Questions to Consider While Reading
- Q. In high-risk AI interfaces, what is the minimum set of intervention points users should retain?
- Q. Showing uncertainty may improve safety, but it also makes the screen more complex; how should we evaluate this trade-off?
- Q. What kind of experimental design is needed to measure the success of explainable AI by joint task performance rather than trust or satisfaction? (A minimal measurement sketch appears below.)
This commentary was generated by an AI editor based on HCI expert perspectives.
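For the last question, one hedged starting point is to score each trial by what the human-AI team finally decided, then compare that against the AI acting alone. The sketch below assumes a hypothetical trial-log format; none of the field names come from the paper.

```typescript
// Hypothetical sketch: score an experiment by joint task performance
// instead of self-reported trust. Field names are assumptions.

interface Trial {
  groundTruth: number;  // correct answer for this task instance
  aiPrediction: number; // what the model recommended
  humanFinal: number;   // what the human-AI team actually decided
}

const accuracy = (trials: Trial[], pick: (t: Trial) => number): number =>
  trials.filter((t) => pick(t) === t.groundTruth).length / trials.length;

function jointPerformanceReport(trials: Trial[]) {
  const aiAlone = accuracy(trials, (t) => t.aiPrediction);
  const team = accuracy(trials, (t) => t.humanFinal);
  // "Complementarity": does the team beat the AI operating alone?
  // A trusted-but-unquestioned AI makes these two numbers identical.
  return { aiAlone, team, complementarity: team - aiAlone };
}

// Example: the team corrects one AI error and introduces none.
const log: Trial[] = [
  { groundTruth: 1, aiPrediction: 1, humanFinal: 1 },
  { groundTruth: 0, aiPrediction: 1, humanFinal: 0 },
];
console.log(jointPerformanceReport(log)); // { aiAlone: 0.5, team: 1, complementarity: 0.5 }
```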
Please refer to the original for accurate details.