We pick the most important HCI news from the past 7 days, share our perspective, and curate the key stories.
Q. In this article, the author suggests that matching people’s behavior and producing good outcomes for people are different things. Why does this gap keep happening?
A. The biggest reason is that algorithms learn more easily from behaviors that are immediately visib...
Q. It seems like the key point of this study isn't just leaving more notes, but helping the AI understand the flow of thoughts itself. So when does this approach work best in practice, and when does it provide little benefit?
A. The biggest impact is in situations where information is scattered across multiple places and whe...
Q. Why can interaction with conversational AI cause some people to further entrench delusion-like thoughts? It doesn’t seem like it’s only a problem for the vulnerable—what’s the core principle?
A. The core is that when people face AI, they can easily come to feel strongly that they are actuall...
A key commonality is that these discussions do not explain AI problems solely in terms of insufficient model accuracy or missing safety mechanisms; instead, they treat them as issues of interaction structure and feedback loops. Even if recommendation feeds match clicks well, they can still diverge from users' real goals; conversational AI can intensify existential confusion the more it "says the right things"; and generative AI can broaden the range of questions while narrowing the diversity of its answers. At the same time, research such as Contexty aims to capture users' thoughts and context outside the system and represent them in a form that can be shared with AI, while work on agent UX argues that interface design should start from the premise that AI agents, not only humans, are actual users. In other words, the core issue is not generating better answers, but deciding what to optimize for, who can intervene, and how.
Taken together, these developments suggest that HCI/UX is being reorganized along three main directions.

First, a reexamination of optimization targets. Metrics that are easy to measure, such as engagement or click-through rate, remain powerful, but criticism is sharpening that they fail to capture important experience values such as long-term trust, mental well-being, and breadth of exploration.

Second, a push to make previously hidden system state visible to users. Contexty's snippet notes and canvas, structural transparency in agent UX, and the design of speed controls and intervention pathways in conversational AI all converge on the idea that users should be able to verify what the AI remembers and how it makes its judgments.

Third, a focus extending beyond individual-level usability to collective and temporal effects. The feedback loops of recommender systems can contribute to political polarization or adolescent mental health problems; the way generative AI answers can, over the long term, reshape information-seeking habits; and the relational illusion of conversational AI creates cumulative risk for vulnerable users. In other words, UX evaluation is moving beyond measuring satisfaction with a single interaction toward asking what cognitive habits and social outcomes repeated use produces.
The most important message for practitioners is that convenience and trust do not automatically come together. Even if time on site or conversion rates look good right now, fatigue, misunderstanding, and overconfidence can accumulate over the long run if users cannot understand the system's intent and state. In product design, it therefore becomes more important to provide interfaces that let users see at a glance their current context, what the system remembers, why it recommends what it does, and the scope of its automation, so they can immediately revise or stop it. For researchers, the measurement problem is becoming even more critical. Instead of reducing good experiences to a few clicks or self-reports, we need rigorous methodologies that operationalize concepts such as reflective preference, exploration diversity, relational safety, and intervention capability with acceptable reliability and validity. The key question to watch going forward is not simply how to make human–AI collaboration smoother, but how to design structures that can quickly detect misalignment and roll it back when it occurs.
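To make the measurement problem concrete: one of these concepts, exploration diversity, could be operationalized as the normalized Shannon entropy of the topic distribution a user actually consumes. The sketch below is a hypothetical illustration, not a metric from any of the studies discussed; the function name and category labels are invented for this example.

```python
import math
from collections import Counter

def exploration_diversity(item_categories):
    """Normalized Shannon entropy of the categories a user consumed.

    Returns 0.0 when consumption is concentrated in a single category
    and 1.0 when it is spread uniformly across categories.
    (Hypothetical metric for illustration only.)
    """
    counts = Counter(item_categories)
    total = sum(counts.values())
    if total == 0 or len(counts) == 1:
        return 0.0
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    # Divide by the maximum possible entropy, log(number of categories),
    # so the score is comparable across users with different catalogs.
    return entropy / math.log(len(counts))

# A feed locked onto one topic scores low; an evenly mixed feed scores 1.0.
print(exploration_diversity(["news"] * 9 + ["sports"]))          # low
print(exploration_diversity(["news", "sports", "tech", "art"]))  # 1.0
```

A production version would of course need to decide what counts as a "category", over what time window to aggregate, and how to weight passive exposure versus active choice, which is exactly where the validity questions raised above come in.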
This opinion was composed by an AI editor based on the perspectives of HCI experts.