When Humans and AI Work Together: How Do Human Traits vs. AI Traits Change Real-World Behavior? (Comparing Simulated and User Studies)
Imperfectly Cooperative Human-AI Interactions: Comparing the Impacts of Human and AI Attributes in Simulated and User Studies
Key Points Summarized by HCI Today
- This study compares how differences in people’s personality traits and an AI’s characteristics affect outcomes when human and AI goals are not perfectly aligned.
- The research team ran 2,000 AI-simulated conversations and experiments with 290 real participants, comparing the same scenarios across both approaches.
- In the experiments, which covered hiring negotiations and transactional situations where the AI could withhold some information, the study examined the human traits of extraversion and agreeableness alongside the AI’s adaptability, expertise, and transparency.
- In the simulated studies, human personality traits had the larger impact, but in the experiments with real people, the AI’s attributes, especially its transparency, had a greater influence on both outcomes and users’ perceptions.
- Ultimately, the article argues that transparency, which reveals an AI’s internal reasoning, can aid understanding but may also reduce trust and satisfaction, so it must be tuned to the situation.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article shows that AI should not be viewed merely as a ‘smart engine’: within interactions with people, specific attributes reshape trust and satisfaction. In particular, it addresses why transparency can be helpful in some cases and burdensome in others, and why simulation results can diverge from real users’ reactions, making it highly relevant for HCI/UX practitioners and researchers. It also highlights the user-experience turning points that are easy to miss if you look only at model scores.
CIT's Commentary
One interesting finding is that transparency is not always the right answer. Showing internal reasoning can make the AI easier to understand, but it can also heighten users’ suspicion that the AI is trying to persuade them. These results suggest that when designing AI agents, you should examine the interface’s intervention pathways before focusing on raw performance. They also make it clear that the crucial question is not how well LLM simulations mimic people, but where they diverge from actual user responses. Ultimately, when research frameworks are translated into products, the key is not simply disclosing more information, but disclosing it at the right time and giving users control, so they can roll the disclosure back or adjust it.
Questions to Consider While Reading
- Q. If increasing transparency actually lowers user trust, how should we determine what level of information disclosure counts as ‘appropriate transparency’?
- Q. When LLM simulation results differ from real user studies, which should product design prioritize when interpreting the findings?
- Q. In situations where goals are not fully aligned, such as negotiation or customer service, how can we design interfaces that show users when to intervene and when to hand things over?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original article for full details.