How Should We Evaluate Visualizations That Capture Human Emotions?
Assessing Affective Objectives for Communicative Visualizations
HCI Today summarized the key points
- This study examines how to evaluate the impact that visualizations have on people's thoughts, attitudes, and behaviors.
- Cognitive objectives are relatively easy to assess: much like a test, it is straightforward to check whether users remember or understand the facts.
- Affective objectives, which concern emotions and beliefs, are harder to observe directly, so they require clearer evaluation criteria.
- The researchers compile methods from education, economics, psychology, and public relations, and propose evaluation criteria that are quick to apply, immediately usable, and trustworthy.
- In a case study using a humanitarian-crisis video from Somalia, story-centered content about people led to the highest donation rates, demonstrating the need to evaluate affective objectives.
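The case study above compares donation rates across content conditions. As a minimal sketch of how such a behavioral outcome could be tested, here is a two-proportion z-test implemented with the standard library; the counts are made up for illustration and are not the study's actual data.

```python
# Hypothetical sketch: comparing donation rates between two visualization
# conditions with a two-proportion z-test. Counts below are illustrative,
# not figures from the paper.
from math import erf, sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for H0: p_a == p_b."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf; two-sided p-value
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Story-centered (48/200 donated) vs data-centered (30/200 donated)
z, p = two_proportion_z(48, 200, 30, 200)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these illustrative counts the difference is significant at the 0.05 level, which is exactly the kind of behavioral evidence the authors argue affective evaluation needs.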
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article makes a strong case that visualization is not just a tool for presenting information, but an interaction designed to change people's attitudes and behaviors. It is especially useful for HCI/UX practitioners because it goes beyond cataloguing emotional responses and shows how to define concrete affective goals and measure them. It also prompts a rethink: design intent and evaluation metrics should not be separated.
CIT's Commentary
What's especially interesting is that it shifts the criteria for 'good visualization' away from model performance and toward changes in user experience. Data-driven, people-centered, and mixed designs can produce different outcomes even when they deliver the same message, and for behavioral goals in particular, blending the two can backfire in unexpected ways. This isn't just a question of presentation style; it's about where users engage, what they trust, and when they move to action. Moreover, when building evaluation tools, having people create everything isn't the only answer. By using LLMs to rapidly draft survey instruments, rubrics, and interview-assist tools, these ambiguous emotion- and attitude-related goals can be validated more often and more cheaply. And when research frameworks move into products, it's important to examine failure modes and intervention pathways, not just ship a polished design.
Questions to Consider While Reading
- Q. If we apply this evaluation framework to real product A/B tests, how can we reduce the gap between changes in attitudes and changes in actual behavior?
- Q. When data-centered and people-centered content are combined, what follow-up research questions can help us dig deeper into why behavior might decrease instead?
- Q. When designing measurement tools for emotional goals using LLMs, how far should we go with automation, and what parts should not be automated?
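On the last question, one way to draw the automation boundary is to let an LLM draft candidate instrument items while gating deployment behind explicit human approval. The sketch below is hypothetical: the item texts, constructs, and helper names are illustrative assumptions, and no real LLM API is called.

```python
# Hypothetical sketch: LLM-assisted drafting of an affective measurement
# instrument with an explicit human-review gate. Illustrative only; no
# actual LLM call is made here.
from dataclasses import dataclass

@dataclass
class SurveyItem:
    text: str                      # candidate wording (may be LLM-drafted)
    construct: str                 # affective construct, e.g. "empathy"
    llm_drafted: bool              # True if machine-generated
    human_approved: bool = False   # a reviewer must set this before use

def build_prompt(construct: str, n_items: int) -> str:
    """Assemble a drafting prompt for an LLM (not sent anywhere here)."""
    return (
        f"Draft {n_items} 7-point Likert items measuring '{construct}' "
        "after viewing a data visualization. Avoid leading language."
    )

def deployable(items: list[SurveyItem]) -> list[SurveyItem]:
    """Automation boundary: only human-approved items reach participants."""
    return [i for i in items if i.human_approved]

items = [
    SurveyItem("I felt concern for the people shown.", "empathy",
               llm_drafted=True),
    SurveyItem("I intend to donate to this cause.", "behavioral intent",
               llm_drafted=True, human_approved=True),
]
print(build_prompt("empathy", 5))
print(len(deployable(items)))  # only the approved item survives
```

The design choice here is that drafting is cheap and automatable, while the approval flag encodes the part that should stay human: validating that an item actually measures the intended affective construct.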
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.