[Interview] “The revamped Bixby will become the first touchpoint you meet when using Samsung products”…Vice President Park Jisun
HCI Today summarized the key points
- This article explains how Samsung’s newly revamped Bixby has evolved into a smarter AI assistant.
- The new Bixby is not just an assistant that listens to voice commands; it is a device agent that understands both the user’s intent and the device’s status.
- Users don’t have to find the right menu themselves; they just say what they want, and Bixby immediately helps with setting changes and troubleshooting.
- Bixby has shifted to an LLM-centered architecture, enabling it to combine multiple functions and APIs, and its understanding of Korean has improved significantly.
- Samsung is building Bixby into a common AI gateway connecting Galaxy devices, appliances, and TVs, so anyone can use AI easily.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article frames AI not as a ‘smart model,’ but as an interaction problem—how users can actually delegate, verify, and undo actions. In particular, it’s meaningful for HCI/UX practitioners because it lets you examine both the promise and the risks that emerge as Bixby shifts from a response-style assistant to a device agent. Rather than just describing a feature, it highlights new design considerations for agentic interaction.
CIT's Commentary
An interesting point is that agentic AI is experienced not as the addition of new features, but as the ‘delegation of action.’ Users want less manual effort, yet their anxiety grows if the system doesn’t clearly show what it knows and how far it has gone. For example, behind the convenience of having display settings changed on your behalf, there should be an explanation of which settings were chosen and why, along with a clear path to revert them immediately. These principles of transparency and user intervention have long been important in autonomous driving and remote-control systems, and they are just as necessary for device agents. Also, for highly context-dependent languages like Korean, we likely need evaluation methods that reflect Korean users’ tone and expectations, rather than simply importing the standards used in global research.
Questions to Consider While Reading
- Q. What are the key signals that help users decide how much to trust and delegate to an agent, and how can these be communicated through the interface?
- Q. For features that execute complex tasks on the user’s behalf, how much of a path to intervene or undo should be provided?
- Q. What metrics and tasks would be best to design for evaluating AI interfaces that fit the Korean language and Korean users’ expression styles?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.
Subscribe to Newsletter
Get the weekly HCI highlights delivered to your inbox every Friday.