The Ghost Scale: treating AI authorship as a primary visual affordance
HCI Today summarized the key points:
- This article discusses how to visually communicate provenance in user interfaces that include generative AI.
- The author argues that current AI integration approaches treat authorship attribution as hidden metadata only, which increases user confusion.
- The author also explains that when users read generative media, they experience greater cognitive fatigue as they try to decode human intent.
- To address this, the author created Ghost Scale, a CSS framework that differentiates human intent density using opacity.
- The key takeaway is that making AI authorship explicit through clearer visual signals can reduce users' cognitive burden and interpretive confusion.
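The article does not reproduce Ghost Scale's source, but the opacity-based idea might look something like the following sketch. All class names, custom-property names, and opacity thresholds here are hypothetical illustrations, not the actual framework's API:

```css
/* Hypothetical sketch of an opacity-based authorship scale.
   Names and thresholds are illustrative, not taken from Ghost Scale itself. */
:root {
  --intent-human: 1.0;      /* fully human-authored */
  --intent-edited: 0.85;    /* AI-generated, then human-edited */
  --intent-generated: 0.6;  /* fully AI-generated */
}

.authorship-human     { opacity: var(--intent-human); }
.authorship-edited    { opacity: var(--intent-edited); }
.authorship-generated { opacity: var(--intent-generated); }

/* Opacity alone can push text below WCAG 1.4.3 luminance-contrast
   minimums and is invisible to assistive technologies, so the visual
   cue should be paired with an explicit label. */
.authorship-generated[data-provenance]::after {
  content: " (" attr(data-provenance) ")";
  font-size: 0.8em;
}
```

Note that the label rule reflects the accessibility caveat raised in the commentary below: an opacity cue on its own conflates importance, trust, and provenance, so a contextual label carries the semantic meaning that the visual channel cannot guarantee.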
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article is meaningful for HCI/UX practitioners and researchers because it reframes the cognitive burden users experience when generative AI is mixed into interfaces as a problem of ‘authorship cues.’ It goes beyond simply adding an AI indicator, prompting readers to consider how clearly the interface must visually convey information provenance and intent signals, and how those signals affect users’ attention, trust, and understanding. In particular, it’s a useful case for examining how cognitive load, visual hierarchy, and perceptual cue design connect.
CIT's Commentary
From a CIT perspective, the core of this piece is not ‘how to make AI content visible,’ but ‘how quickly and clearly users can figure out what they need to judge.’ If authorship is left only as hidden metadata, real interfaces end up mixing together provenance, perceived trustworthiness, and whether editing intervention occurred—forcing users to spend unnecessary cognitive resources on interpretation. That said, an approach that relies only on opacity, like Ghost Scale, must adequately address cross-platform consistency, variation in color vision, and accessibility issues (especially luminance contrast). In other words, visual signals are only one dimension of the cue; they must be designed together with contextual labels, interaction timing, and information structure to work in practice.
Questions to Consider While Reading
- Q. When AI generation status is expressed using visual cues such as opacity, how can we distinguish whether users interpret it as a decrease in 'importance' versus a decrease in 'trust'?
- Q. What authorship cues remain consistently understandable even when platforms and contexts change, and what user-testing tasks should be designed to validate them?
- Q. What conditions lead to better UX when visual cues are combined with explainability (XAI) or metadata exposure, rather than trying to reduce cognitive load with visual cues alone?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.