Analysis of Naver Integrated Search’s AIB Adoption and Changes in Web Performance
HCI Today summarized the key points
- This article explains how the AI Briefing (AIB) feature in Naver Integrated Search affects LCP (Largest Contentful Paint), a core page-load performance metric.
- After AIB launched in March 2025, a chat UI and animations were added starting in July, increasing the share of the screen the feature occupies.
- As AIB became more prominent, LCP worsened to around 3.1 seconds, and the number of users in slower segments grew as well.
- The cause lay in client-side rendering rather than the server: a structure that reveals text incrementally and then reorganizes it introduced delays.
- Naver is therefore adopting other criteria alongside LCP for AIB, such as TTFT (Time to First Token), and preparing performance-management approaches suited to the UI's characteristics.
This summary was generated by an AI editor based on HCI expert perspectives.
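The LCP distortion described in the summary can be sketched with a small simulation. LCP keeps updating to the timestamp of the largest content element painted so far, so when small text chunks stream in early and a large reorganized block lands late, the reported LCP reflects the final reorganization rather than the moment the user first saw meaningful text. A minimal sketch; all timings and element sizes below are illustrative assumptions, not Naver's measurements:

```python
import time

def simulate_render(events):
    """Each event is (delay_s, element_size). Mimic LCP-style logic:
    keep the timestamp of the largest element painted so far."""
    start = time.monotonic()
    first_paint = None
    lcp_time, lcp_size = 0.0, 0
    for delay, size in events:
        time.sleep(delay)                     # element becomes visible
        now = time.monotonic() - start
        if first_paint is None:
            first_paint = now                 # user first sees content
        if size > lcp_size:                   # larger candidate resets LCP
            lcp_size, lcp_time = size, now
    return first_paint, lcp_time

# Streaming UI: small text chunks appear early, then a final
# reorganized block (the largest element) appears late.
streaming = [(0.05, 100), (0.05, 120), (0.05, 140), (0.15, 800)]
first, lcp = simulate_render(streaming)
print(f"first paint ≈ {first:.2f}s, reported LCP ≈ {lcp:.2f}s")
```

The user is reading text from `first` onward, but the metric reports the much later `lcp`, which is the mechanism behind the degradation the article describes.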
Why Read This from an HCI Perspective
This article reframes AI features not as a problem with a ‘smart model,’ but as a question of how users experience the interface on screen. In particular, it shows how everyday interaction elements—such as chat UI, progressive rendering, and animations—can distort performance metrics. It also emphasizes that UX practitioners and researchers should not trust raw measurements blindly, but interpret them in context. This is especially relevant for teams working on products like search, summaries, and AI briefings.
CIT's Commentary
The most interesting point in this piece is that the performance issues with AIB stem not from the server, but from the interaction structure itself. When text is streamed word-by-word and the DOM is rebuilt to highlight content, LCP can end up capturing the moment the system finishes organizing—not the moment the user actually sees meaningful content. This kind of distortion often occurs in chat UIs. That’s why it’s important not to judge quality by LCP alone; you should also look at metrics like TTFT, which reflect when users receive the first meaningful output. However, changing metrics can loosen measurement rigor, so it’s crucial to clearly define which user context a metric represents before adopting it. This approach is particularly practical in environments like Korea’s search and portal ecosystem, where different UI patterns coexist—especially for AI products.
Questions to Consider While Reading
- Q. How can we validate the correlation between user-perceived performance and TTFT when TTFT is used as the core metric instead of LCP in a chat UI?
- Q. What kinds of rendering designs could reduce performance-metric distortion while preserving interactions such as word-level streaming and highlighting?
- Q. In services where different UI patterns are mixed, what is the most practical way to separate common metrics from dedicated, UI-specific metrics?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.