UX Conference Confirmed for July 20–24!
HCI Today summarized the key points:
- This article covers the live lecture schedule and the local session times, which vary by region.
- Live lectures for audiences in the Americas and Europe will run from 8:00 AM to 3:00 PM, San Francisco time.
- People in other regions should check the local time for each city and align their schedules accordingly (a conversion sketch follows this summary).
- New York is 11:00 AM to 6:00 PM, São Paulo is 12:00 PM to 7:00 PM, and London is 4:00 PM to 11:00 PM.
- Amsterdam and Berlin run from 5:00 PM to midnight; each session lasts 7 hours, including break time.
This summary was generated by an AI editor based on HCI expert perspectives.
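As a quick sanity check on the schedule above, here is a minimal Python sketch using the standard-library zoneinfo module that converts the San Francisco session window into each listed city's local time. The date is a placeholder, since the article states only July 20–24 without a year; substitute the actual conference dates.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Placeholder date: the article gives July 20-24 but no year.
SF = ZoneInfo("America/Los_Angeles")
session_start = datetime(2026, 7, 20, 8, 0, tzinfo=SF)   # 8:00 AM SF time
session_end = datetime(2026, 7, 20, 15, 0, tzinfo=SF)    # 3:00 PM SF time

CITIES = {
    "New York": "America/New_York",
    "São Paulo": "America/Sao_Paulo",
    "London": "Europe/London",
    "Amsterdam": "Europe/Amsterdam",
    "Berlin": "Europe/Berlin",
}

for city, tz_name in CITIES.items():
    # Convert the fixed SF window into the viewer's local wall-clock time.
    start = session_start.astimezone(ZoneInfo(tz_name))
    end = session_end.astimezone(ZoneInfo(tz_name))
    print(f"{city}: {start:%I:%M %p} to {end:%I:%M %p}")
```

Run against the placeholder date, this reproduces the times listed above (e.g., London 04:00 PM to 11:00 PM), so the published local times are mutually consistent with a single 7-hour window in San Francisco time.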
Why Read This from an HCI Perspective
This article helps you see AI not just as a set of smart features but as an interaction problem: one of how people perceive the system, trust it, and decide when to intervene. In services where safety matters, what's crucial is not only the model's accuracy but also how clearly the system's state is communicated and when users can step in to take over. It's a piece that lets both HCI/UX practitioners and researchers revisit their design criteria.
CIT's Commentary
The most important point in this article is not AI performance but how the division of responsibility between people and the system is designed. Autonomy should be high, yet users must be able to grasp the current state at a glance and detect failure signals quickly. This matters most in domains such as remote operation or autonomous driving, where small interface mistakes can lead to major accidents. What's interesting is the trade-off that appears when these frameworks reach real products: the more explanations you add, the more complex the interface becomes, and the more you expand intervention paths, the less the benefits of automation hold up. Research therefore needs to ask not just 'how much to automate' but, more precisely, 'at which moments should humans come back in.' And even when LLMs are used as a supplementary aid in designing UX measurement tools, the rigor of the measurement should remain something that humans validate.
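On that last point, one concrete form human validation can take is an agreement check between LLM-assigned and human-assigned labels. The sketch below, with hypothetical llm_labels and human_labels from a severity-coding exercise, computes Cohen's kappa by hand; this is one standard inter-rater rigor check, not a procedure the article prescribes.

```python
from collections import Counter

def cohens_kappa(labels_a: list[str], labels_b: list[str]) -> float:
    """Chance-corrected agreement between two raters over the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items where the two raters match.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each rater's label marginals.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: LLM vs. human coding of usability-issue severity.
llm_labels   = ["minor", "major", "major", "minor", "cosmetic", "major"]
human_labels = ["minor", "major", "minor", "minor", "cosmetic", "major"]
print(f"kappa = {cohens_kappa(llm_labels, human_labels):.2f}")  # ~0.74
```

A kappa well below the raw agreement rate signals that much of the apparent LLM-human agreement is attributable to chance, which is exactly the kind of finding a human reviewer should catch before trusting LLM-generated measures.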
Questions to Consider While Reading
- Q. What is the minimum set of signals that helps users avoid misunderstanding the AI's current state?
- Q. How can you validate, at the design stage, the trade-off in which expanding intervention pathways reduces automation efficiency?
- Q. When building UX measurement tools with LLMs, what validation procedures are needed to maintain methodological rigor?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original article for full details.
Subscribe to Newsletter
Get the weekly HCI highlights delivered to your inbox every Friday.