The product owner uses AI to design flows, and his statement was: “What took you a month to make took me one evening.”
HCI Today summarized the key points:
- This article describes a case where screens created with AI tools like Claude didn’t match the existing design system, leading to conflict between the team and the product owner.
- The author points out that even though there is a design system with fonts, colors, variables, and a UI kit, Claude ignores it and generates arbitrary designs.
- Developers say it’s faster to rebuild from scratch than to fix those outputs, while the product owner argues that AI can do it in one evening and demands the same productivity.
- The community advises clarifying AI usage principles and role division, defining the design strategy, and demonstrating evidence in terms of numbers across engineering, design, and management.
- Overall, the conflict stems less from AI adoption itself and more from the lack of organizational standards and unclear responsibility boundaries, which also highlights the need to think about career management.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article clearly shows that, in HCI and UX practice, AI tools cannot guarantee product quality with ‘generative capability’ alone. In particular, it reveals what kinds of friction arise when the design system, implementation libraries, and role division within an organization don’t align. It’s not just a matter of how to use the tools; it can also be read as a people–tool–organization alignment problem, which makes it meaningful for both practitioners and researchers.
CIT's Commentary
From a CIT perspective, this case shouldn’t be concluded as ‘Claude can’t do it.’ Instead, it should be viewed as a situation where AI cannot understand the organization’s design principles on its own. Generative AI can produce quick first drafts, but it doesn’t take responsibility for the surrounding context—such as consistency with the design system, rules for component reuse, and implementation feasibility. So the core issue is less about model performance and more about defining the scope of use and the boundaries of responsibility. In HCI terms, evaluation criteria change completely depending on whether AI is treated as a personal productivity tool or as part of a collaborative workflow. Ultimately, what’s needed isn’t a stronger model, but design leadership, operating rules, and the decision-making structure across engineering, design, and product.
Questions to Consider While Reading
- Q. If results are inconsistent even when AI references the design system, what contextual information and constraints must be provided together for it to be practically valid?
- Q. When adopting generative AI as part of a product team, how should design quality and collaboration responsibilities be measured and agreed upon?
- Q. When a product owner judges only by speed, what evidence and metrics should an HCI/UX team use to design the decision-making structure?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.