How AI Literacy Shapes GenAI Use
HCI Today summarized the key points:
- The study examines two dimensions of AI literacy, the skills that shape how people use generative AI, and how user behavior differs across them.
- AI literacy consists of prompt fluency and output literacy, and the two do not necessarily grow together.
- The study categorized users into four groups (novices, naive advanced users, skeptical non-users, and AI experts) to identify differences among them.
- People who are better at embedding context and constraints in their prompts are more likely to use AI conversationally, but they often do not verify the accuracy of the results.
- AI tools should therefore be designed not only to help users ask better questions, but also to encourage them to check and verify the results.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article reframes GenAI use not as a simple matter of "whether people use it," but as a question of what they ask, how they ask it, and how they verify the results. For HCI/UX practitioners, it highlights that prompt proficiency and the ability to judge outputs are distinct design challenges. For researchers, it provides grounds for not treating AI literacy as a single, one-dimensional metric. This is especially relevant when designing for users with low AI literacy and for trust- and verification-focused UX.
CIT's Commentary
From a CIT perspective, the key point of this article is that "using AI well" does not automatically mean "judging AI well." Many organizations focus on prompt training alone, but the real risk lies in how well users can perceive the accuracy, timeliness, and evidential grounding of the output. Therefore, CIT argues that AI experience design should separate input support from verification support. For example, beyond interfaces that help users ask better questions, we need mechanisms that reduce the cost of judgment, such as tracking core claims, comparing sources, and warning users about potentially outdated information. Also, "distrust" and "indifference" may not be mere resistance; they can be sophisticated usage strategies. As a result, user segmentation should be redrawn around trust and verification behaviors rather than around skill level alone.
Questions to Consider While Reading
- Q. In our product, at what moments do users become "naive advanced users," people who write prompts well but do not verify the outputs?
- Q. In contexts where output accuracy matters, what UI patterns can naturally lead users to check sources and re-verify?
- Q. When measuring AI literacy, how should prompt proficiency and output judgment ability be separated and evaluated?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.