One Is Not Enough: How People Use Multiple AI Models in Everyday Life
HCI Today summarized the key points
- This article is based on research investigating how people use multiple MLLMs together in everyday life, dividing roles among them.
- In a four-day diary study and follow-up interviews, participants used models differently in personal and work contexts, with each model taking on a primary or supporting role.
- Model selection was continuously reconfigured based on factors such as first impressions, expert evaluations, and social cues, as well as subscription and usage costs.
- Users improved efficiency and trust through model-switching strategies such as step-by-step role division, adjusting for task difficulty and latency, and cross-verification.
- The study suggests that design should take an AI-ecosystem perspective rather than a single-AI mindset, and that tools helping users carry task state between models and keep memories separated are important.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article explores how people use multiple MLLMs together in everyday life, assigning roles across models—and how role handoffs, switching, and verification become new user skills in the process. For HCI/UX practitioners, it offers a chance to think about how interfaces should support an entire workflow, not just individual back-and-forth conversations. For researchers, it highlights key issues—trust calibration, context transfer, and memory separation—that can be examined empirically. In particular, it captures the moment when generative AI shifts from being a mere ‘tool’ to becoming an ‘ecosystem.’
CIT's Commentary
What’s especially interesting is that users don’t choose models purely by performance rankings. Instead, they blend factors like familiarity, cost, response speed, and social signals to form a kind of task ecosystem. The key point here is that the real cost isn’t the switching itself—it’s the switching overhead: you have to restate the context, reconstruct traces from prior conversations, and recalibrate trust. That means the design focus shouldn’t be limited to competing for ‘smarter models’ alone. It should also aim to make context movement between models seamless and to separate and transfer memory at the level of work units. However, since the sample skews toward experienced users, more validation is needed to see how widely this strategy spreads among general users.
Questions to Consider While Reading
- Q. Among users who move between multiple models, which switching cost do they feel most strongly in practice: re-explaining context, comparing results, or recalibrating trust?
- Q. If features like task cards or topic-based memory separation are introduced, could there be side effects such as over-automating users’ judgments or locking them more strongly into a particular model?
- Q. How do multi-MLLM usage patterns differ between expert and non-expert users, and how might those differences affect interface design priorities?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.