I quit a project midstream because the client kept using AI (Claude) designs as the benchmark
HCI Today summarized the key points
- This article discusses the author's experience of being pressured by a client based on AI-generated results, along with the ensuing debate.
- As a UX designer, the author says they were compared against results produced by Claude, and that their work speed and judgment were undermined.
- In the meeting, the client demanded faster and more complete outcomes based on what AI could do in a few hours, and the author says the PRD was produced by Claude while the client avoided responsibility.
- The author ultimately left the project midstream, stating that they would use AI results only as reference material but refused to treat them as the performance benchmark.
- The comments largely criticize the client's attitude and argue that the core value of human designers lies in understanding context and validating outcomes, more than beating AI on speed.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This post illustrates how HCI/UX practice can be thrown off course when AI outputs become the yardstick for comparing design deliverables. It goes beyond the simple issue of "AI was faster": it shows how conflicts arise around interpreting requirements, assigning responsibility, making the process visible, and getting professional expertise recognized. It prompts both practitioners and researchers to re-examine the standards for human–machine collaboration.
CIT's Commentary
From a CIT perspective, this case is less about AI replacing design and more about the client misusing AI as an unverified baseline. LLMs like Claude are strong at generating drafts and expanding ideas, but tasks such as understanding user context, handling constraints, coordinating stakeholder alignment, and designing for exception cases still require human judgment. The real problem, then, isn't the tool's performance but the collaboration structure: distributing design accountability across AI outputs while still demanding that the designer provide the rationale for decisions is highly fragile from an HCI standpoint. The response, therefore, shouldn't be "we should be better than AI," but rather to clearly demonstrate the values AI can't produce: problem framing, validation, prioritization, and risk management. At the same time, freelancers need to agree more strictly on deliverable criteria and the scope of responsibility at the start of the contract.
Questions to Consider While Reading
- Q. When a client treats AI outputs as the benchmark, how can UX practitioners most convincingly demonstrate the value of their expertise?
- Q. How should teams separate and agree on responsibility and quality standards when using AI-generated PRDs or screen drafts as collaborative assets?
- Q. What conditions should a freelance UX designer specify at the contract stage to reduce the recurring pressure of AI comparisons?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.