Annotating Visualizations: Key Tips from Practitioners and Educators
Designing Annotations in Visualization: Considerations from Visualization Practitioners and Educators
HCI Today summarized the key points
- This article reports research on how to design annotations in visualizations.
- The research team interviewed 10 practitioners and 7 educators to investigate the real criteria and concerns involved in creating annotations.
- Annotations require joint decisions about who they are for, what the viewer should see first, where to place them, and how much to include.
- Designers must also decide whether to connect annotations to their targets using color and shape, and whether annotations should appear as a layer integrated with the data or as a separate layer.
- The study organizes annotation design into six considerations and provides a shared vocabulary for tool development, teaching, and critique.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article shows that the text and marks attached to charts are not merely decorative — they are an interaction design problem that shapes how people read information and what they may misunderstand. For HCI/UX practitioners, it organizes practical working criteria such as 'how much to explain,' 'where to place annotations,' and 'when to hand off to tools.' For researchers, it provides a design vocabulary that connects user understanding, attention guidance, and information overload.
CIT's Commentary
A key strength of this paper is that it consolidates a somewhat scattered sense of annotation design into six decision criteria. The stance that 'there is no single correct answer' is especially valuable. However, when these criteria are applied to real products, it becomes apparent that they can conflict with one another. For example, placing labels directly on the data makes them easier to read, but they break down as screen space gets tighter; using a legend keeps the structure tidy, but users then have to keep moving back and forth. So the most important questions are not 'what looks prettier,' but 'where does the user pause, what do they miss, and how can they intervene when things go wrong?' If chart annotations are supported by AI, you should design not only the generation step but also an editing path that lets users quickly revise and verify the results. In that sense, this qualitative framework connects directly to workflows where an LLM drafts and a human refines.
Questions to Consider While Reading
- Q. When translating these six considerations into real chart editing features for a product, what should be automated first — and what must still be decided by a human?
- Q. In constrained spaces like mobile or small screens, what criteria should be used to evaluate the trade-off between direct labeling and using a legend?
- Q. If you build a tool that uses an LLM to generate annotation drafts, how should you present state and rationale in a way that users can trust and easily edit?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.