Explainable AI for Blind and Low-Vision Users: Navigating Trust, Modality, and Interpretability in the Agentic Era
HCI Today summarized the key points
- This study examines how blind and low-vision users trust and understand AI explanations.
- The research team argues that in an era where AI executes multiple steps on its own, unseen errors become a bigger problem.
- Based on interview results, participants found explanations delivered through a conversational Q&A format more useful than color-coded images.
- In addition, many participants tended to attribute the cause of AI mistakes to themselves, making it harder to notice the problem.
- The study suggests that we need explanations users can verify through text or dialogue, step-by-step checks, and designs co-created with users.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article frames XAI less as a problem of producing good explanations and more as a question of how users can verify and intervene in AI behavior. In particular, it explains why visual explanations become a barrier for blind and low-vision (BLV) users, and why conversational explanations and step-by-step checks are especially important. For HCI/UX practitioners, it offers concrete criteria for accessibility design; for researchers, it provides trust- and validation-related questions for the agentic AI era.
CIT's Commentary
The core of this article is "verifiable interaction," not the form of the explanation. When AI becomes an agent that executes multiple steps on its own, a single wrong answer can cascade into a much larger failure as it propagates through the surrounding steps. In that context, what matters more than visual heatmaps is whether users can tell how far the system's state has progressed, and whether they can stop, ask again, or roll back at specific points. Notably, the way BLV users verify through conversation goes beyond a simple accessibility requirement: it generalizes into safety-interface principles that apply to all users. Also, since LLMs can serve not only as generators of explanations but also as tools for evaluating them, there is room to design LLM-based UX measurement questionnaires or audit procedures that assess the quality of these "verifiable explanations."
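To make the idea of "verifiable interaction" a bit more concrete, here is a minimal, hypothetical sketch of an agent loop in which every step is announced as plain text (so it works equally well over a screen reader or a chat channel), and the user can proceed, skip, stop, or roll back before the next step runs. The class names, step names, and prompts are illustrative assumptions, not an API from the study or any existing library.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical sketch of a "verifiable" agent loop: each step is announced
# as plain text, the user decides whether it runs, and completed steps can
# be rolled back. Names and prompts are illustrative assumptions only.

@dataclass
class Step:
    name: str                      # short description announced to the user
    run: Callable[[dict], dict]    # applies the step to the working state
    undo: Callable[[dict], dict]   # reverses the step for rollback

@dataclass
class VerifiableAgent:
    steps: List[Step]
    state: dict = field(default_factory=dict)
    completed: List[Step] = field(default_factory=list)

    def announce(self, message: str) -> None:
        # Plain-text announcement; in a real UI this would go to the
        # screen reader or conversational channel instead of stdout.
        print(message)

    def run(self) -> dict:
        for step in self.steps:
            self.announce(f"Next step: {step.name}. Proceed, skip, or stop?")
            choice = input("> ").strip().lower()
            if choice == "stop":
                self.announce("Stopped. Nothing after this point was executed.")
                break
            if choice == "skip":
                self.announce(f"Skipped: {step.name}")
                continue
            self.state = step.run(self.state)
            self.completed.append(step)
            self.announce(f"Done: {step.name}. Current state: {self.state}")
        return self.state

    def rollback(self, n: int = 1) -> dict:
        # Undo the last n completed steps, most recent first.
        for _ in range(min(n, len(self.completed))):
            step = self.completed.pop()
            self.state = step.undo(self.state)
            self.announce(f"Rolled back: {step.name}. State is now: {self.state}")
        return self.state


if __name__ == "__main__":
    # Toy example: two reversible steps on a shared dictionary.
    steps = [
        Step("add draft reply",
             lambda s: {**s, "draft": "Hello"},
             lambda s: {k: v for k, v in s.items() if k != "draft"}),
        Step("mark message as read",
             lambda s: {**s, "read": True},
             lambda s: {**s, "read": False}),
    ]
    agent = VerifiableAgent(steps)
    agent.run()
```

In a real product, the announcement channel would be the screen reader or conversational UI rather than stdout, and rollback would map onto the system's actual undo semantics; the point of the sketch is only that progress, confirmation, and reversal are all expressed in text the user can query.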
Questions to Consider While Reading
- Q. To make conversational explanations work in real products without imposing excessive cognitive load, what information should be shown first and what should be revealed later?
- Q. In agentic AI, what are the minimal interface patterns that allow users to undo, stop, and re-check?
- Q. How can the self-blame bias found among BLV users be generalized to AI trust design for other user groups?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original article for accurate details.