We Decided Not to Hide Our Disabilities: Reclaiming and Raising Awareness of Next-Gen Accessibility Symbols
"Because We Are No Longer Ashamed of Our Disabilities, We Are Proud": Advocating for and Reclaiming Next-Gen Accessibility Symbols
Key Points Summarized by HCI Today
- This article explores new accessibility symbols and technologies designed to better communicate invisible disabilities.
- The research team co-designed online with 23 participants with disabilities, examining how widely existing symbols are recognized and why misunderstandings arise.
- Participants wanted symbols embedded in tools such as badges, wristbands, mobile devices, and XR, so they could be shown only when needed or paired with more detailed explanations.
- The symbols work not as images alone but together with the device and the situation in which they are displayed; they are most effective when users directly control the scope of disclosure.
- The study argues that accessibility symbols should be redesigned not as mere markers but as an information-delivery system that reduces misunderstanding and protects the agency of the people they represent.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article helps you see accessibility not as 'signage design' but as 'interaction design.' Disability information matters not only because it is visible, but because of who can see it, when, and how much, and what happens when it is misunderstood. For HCI practitioners, it highlights the need to design trust, control, and explainability together; for researchers, it shows why symbols and their contexts of use must be studied in tandem.
CIT's Commentary
One interesting point is that this study treats accessibility symbols not as 'pictures' but as interfaces that embody an 'exposure strategy.' What looks like the design of a single badge in practice requires more: mechanisms for adjusting the level of exposure, pathways that lead to fuller explanations, and features that reduce misunderstandings. In environments like XR or wearables, where bystanders can read one's information more easily, automatic labeling and other people's over-inference quickly become failure modes. The key question, then, is less how recognizable the symbol is and more when users choose to intervene and what they can hide or add.

This issue also matters for domestic services. Mobile-first products from Naver, Kakao, and startups often treat fast sharing as a strength, but for sensitive information such as disability or health details, that speed can become a burden. Here, a structure that shows 'only what is needed, at the right moment' is more suitable than one that simply shows 'more.' Likewise, when using LLMs as UX measurement tools, one should not look only at convenience; response bias and reproducibility must also be validated.
Questions to Consider While Reading
- Q. When an accessibility symbol appears on a device screen or an XR overlay, what is the minimum interaction that lets users easily adjust its exposure range?
- Q. When do QR codes or supplementary text that explain a symbol in more detail help, and when do they instead lead to excessive disclosure?
- Q. In Korea's mobile service context, what user-participation approach is most realistic for creating accessibility markers recognized by the community?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.