How to Live Together with People and Robots: A “Dual-Space” Design Approach That Simultaneously Considers Robot Design and Human Perception in Healthcare
Towards Considerate Human-Robot Coexistence: A Dual-Space Framework of Robot Design and Human Perception in Healthcare
HCI Today summarized the key points
- This article reports research on how humans and robots coexist in healthcare settings, and on how people come to accept robots.
- The research team re-interviewed nine participants who took part in a 14-week co-design process to examine how their expectations for a medical robot changed.
- The analysis found that people's interpretations differed across four criteria, including whether they viewed the robot by components or as a whole, and whether they focused only on the present or extended their view into the future.
- The study also found that robots were understood through a process in which design and user experience influence each other, and that people acted not only as participants but also as interpreters and mediators.
- The researchers conclude that for robots to be used effectively, designers need to communicate the rationale for the design in advance, provide time for adaptation, and show consideration by respecting people's roles and boundaries.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article does not simply judge healthcare robots as 'good' or 'bad.' Instead, it examines how people interpret robots and how they become familiar with them over time. For HCI and UX practitioners, it shows that designing functionality alone is not enough: how explanations are delivered, when the system is introduced, and the pathways for user involvement can significantly reshape the experience. For researchers, it offers hints on how to study the more concrete 'process of coexistence,' beyond mere acceptance as an attitude, using long-term co-design and qualitative analysis.
CIT's Commentary
A particularly interesting point is that the robot is treated not as a fixed object, but as an 'interaction target whose meaning changes over time.' Rather than locking people's perceptions into a single survey result, the article breaks them down into interpretive dimensions such as the degree of decomposition, the time axis, and the source of evidence, which makes it practically useful as well. In safety-critical environments like healthcare, trust can easily break when the system's state is unclear, even if performance is strong; this article brings that problem to the forefront of design. However, in real-world deployment, transparency alone is not sufficient: you also need to design when and how users can intervene. Ultimately, what matters is less 'what the robot can do' and more 'how people can understand and adjust that robot.' This perspective carries over not only to healthcare robots, but also to the adoption of AI agents.
Questions to Consider While Reading
- Q. How detailed, and in what form, should explanations of a medical robot's design be so that they aid understanding while preventing unrealistic expectations?
- Q. When users' perceptions change over the long term, what qualitative and quantitative indicators can measure the maturity of coexistence?
- Q. To reduce the misunderstandings and anxiety that arise after deployment, which intervention pathway should be designed first in the field?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.