Usability Testing: When Is the Best Time to Do It?
When is the best time to conduct usability testing?
HCI Today summarized the key points:
- This article explains when to conduct usability testing throughout the process of building and iterating on a product.
- Running usability tests continuously, from early ideas through post-launch, lets you find problems sooner and fix them more cheaply.
- The process breaks down into stages: defining the problem, checking wireframes and prototypes, reviewing during development, pre-launch checks, and post-launch verification.
- In high-risk situations in particular, such as major redesigns, new features, or pricing changes, usability testing should be used to reduce uncertainty.
- Even with limited budget and time, you can start with small tests; what matters is testing early and often.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article frames usability testing not as a one-off check, but as a ‘learning loop’ that should be built into the entire product development lifecycle. For HCI practitioners and researchers, having clear criteria on when to test, what questions to ask, and which metrics to look at is especially useful. In particular, connecting prototype testing, live product data, and post-release findings helps teams discover problems earlier—making it much easier to apply insights directly to real product operations.
CIT's Commentary
The key is to treat usability testing not as a ‘design review’ but as a decision-making mechanism that reduces uncertainty. The article’s staged flow is sound, yet in real products speed and rigor often clash. Rather than repeating the same method at each stage, vary the questions: early on, check whether the structure is right; later, examine error recovery and how users handle exceptional situations. For experiences that include AI, it is even more important to test the explainability of interactions and the pathways for user intervention, not just task outcomes: you need tests that confirm whether users can see what the system is doing ‘right now.’ LLMs can speed up interview summaries and pattern categorization, but automating the judgment criteria can compromise the rigor of the research. That is why LLM usage should be designed as an assistive tool, not a replacement for human judgment.
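One way to keep LLM assistance separate from research judgment is structural: let the model propose themes and severities, but store those proposals in fields a human must explicitly confirm before a finding enters the report. The sketch below illustrates this pattern; all names are illustrative, the LLM call itself is out of scope, and this is a minimal example of the human-in-the-loop idea, not a prescribed implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Finding:
    """One usability-test observation, e.g. drawn from an interview transcript."""
    quote: str                        # verbatim participant quote
    proposed_theme: str               # theme suggested by the LLM (assistive only)
    proposed_severity: str            # LLM suggestion; never treated as final
    reviewed: bool = False            # has a human researcher confirmed it?
    final_severity: Optional[str] = None  # set only by a human

def confirm(finding: Finding, severity: str) -> Finding:
    """A human researcher sets the judgment fields; the LLM never does."""
    finding.reviewed = True
    finding.final_severity = severity
    return finding

def report(findings: List[Finding]) -> List[Finding]:
    """Only human-confirmed findings make it into the research report."""
    return [f for f in findings if f.reviewed]
```

The design choice here is that `final_severity` cannot be populated by the summarization step at all, so automation can accelerate triage without silently becoming the judgment criterion.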
Questions to Consider While Reading
- Q. When conducting usability testing for screens that include AI features, how can we measure users’ understanding of the system state and their ability to intervene, rather than relying on simple success rates?
- Q. When post-release analytics data and usability test results point in different directions, what criteria should be used to set priorities?
- Q. When summarizing usability test results using an LLM, where are the points that must be reviewed by humans to preserve research rigor?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.