"I Just Need GPT to Refine My Prompts": Rethinking Onboarding and Help-Seeking with Generative 3D Modeling Tools
HCI Today summarized the key points
- This article reports on research into how users learn and seek help when working with generative AI-based 3D modeling tools.
- The research team observed and interviewed 26 participants, 14 novices and 12 experts, and compared the two groups’ behaviors.
- Most participants started with prompt input rather than tutorials, and some novices refined their prompts with external AI such as ChatGPT.
- Novices accepted results as ‘good enough,’ whereas experts evaluated them more strictly against production readiness and precision.
- Ultimately, generative 3D modeling turns help-seeking into a chain of support across AIs, and the study suggests that novices and experts need different design approaches.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article is an HCI case study showing that generative AI is not merely a tool that ‘generates better,’ but one that reveals how users learn (help-seeking), where they get stuck, and what criteria they use when accepting results. In particular, it offers direct implications for UX design and evaluation: onboarding where the prompt becomes the entry point, AI-for-AI support routed through external LLMs, and the differing satisfaction criteria of novices and experts. For practitioners, it informs how to design help features and workflows; for researchers, it provides grounds to revisit learning, trust, and intervention points.
CIT's Commentary
The most interesting aspect of this study is that onboarding shifted from ‘reading tutorials’ to ‘trying prompts.’ Users turned to external LLMs such as ChatGPT before relying on in-tool help, which is not merely a matter of convenience: the structure of help itself is being reorganized across a multi-AI ecosystem. While this trend may lower entry barriers for novices, it could cost experts precision and reproducibility. In designing generative 3D tools, then, we need to consider not only how results are generated but also how visibly the system state is surfaced, how failure modes are communicated, and when users can intervene. In product environments like Korea’s, where services often emphasize quick experimentation and a light entry path, the ‘good enough’ satisfaction criterion may also spread more widely.
Questions to Consider While Reading
- Q. When prompts are placed at the center of onboarding, what is the minimal interaction novices need to adequately learn the tool’s limitations and failure modes?
- Q. AI-for-AI help via external LLMs can improve productivity in practice, but at what point might it also harden incorrect expectations or lock in low-quality outputs?
- Q. When experts’ and novices’ criteria for ‘good enough’ differ, what is the most effective way to separate and connect the two groups’ workflows within a single interface?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.