The Next Level of AI Use: How to Know When to Use It—With Confidence
HCI Today summarized the key points
- In an era when AI can conduct research interviews, this article examines how far automation should go and where humans must intervene directly.
- Anthropic completed 81,000 AI-facilitated research interviews across 159 countries in just one week, showing how quickly AI-driven research is scaling.
- Research workloads keep growing while time keeps shrinking, so repetitive, rule-based tasks should go to AI while people focus on critical judgments.
- AI performs well in research where the questions are clear and consistency matters; humans tend to handle delicate aspects such as emotion or atmosphere better.
- In the end, strong research comes together when AI supplies speed and scale and people interpret meaning to produce conclusions you can trust.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article doesn’t treat AI as merely a ‘faster automation tool.’ Instead, it pushes readers to re-ask where humans should step in during the research process. For HCI practitioners, it’s helpful because it distinguishes the human roles required at different stages—such as conducting interviews, probing with follow-up questions, and interpreting data. In particular, it encourages you to think about the trade-offs among consistency, speed, and depth, which can connect directly to how you run UX research in practice.
CIT's Commentary
The core of this piece isn't how well AI moderation performs; it's identifying the moments when human judgment changes the outcome. In safety-critical systems, an ambiguous interface can cause users to miss the system state, and small mistakes can escalate into major accidents. Research has a similar dynamic: even if AI asks good questions, missing a participant's hesitation or emotional nuance can shift the meaning of the data. That's why, the more you expand automation, the more important it becomes to design what you delegate to AI and what you require humans to verify. At the same time, you need ways to evaluate these AI research tools themselves. Measuring follow-up question quality or research bias with LLM-based methods, for example, naturally extends into research questions that use HCI methodology to improve AI.
Questions to Consider While Reading
- Q. What criteria distinguish research that suits AI moderation from research that requires human involvement?
- Q. How should an interface for automated interviews be designed so that participants' subtle emotions or hesitation aren't missed?
- Q. If you build a tool that uses LLMs to check research quality or bias, which metrics should you validate first?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.