Tracking Adoption of Research Recommendations: The Recommendation-Adoption Score
HCI Today summarized the key points:
- This article introduces the Recommendation-Adoption Score (RAS), which measures how much research recommendations are actually adopted.
- The authors explain that recommendations should be treated like inventory, with clear descriptions, explicit links to supporting evidence, defined completion criteria, and named accountable owners.
- RAS splits recommendation statuses into Adopted, Committed, Communicated, and Canceled, and weights adopted recommendations with 1–3 points based on user impact.
- The score is calculated as (actual user value ÷ total possible user value) × 100, and is tracked using a 12-month rolling window to observe trends.
- If RAS is low, it means value is leaking between research and delivery; the key is the improvement trend over time rather than any single score.
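The calculation above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name, field names, and the choice to exclude Canceled items from the possible-value denominator are assumptions made here for clarity; only the statuses, the 1–3 impact weights, the (actual ÷ possible) × 100 formula, and the 12-month window come from the article.

```python
from datetime import date, timedelta

def ras(recommendations, today, window_days=365):
    """Recommendation-Adoption Score over a rolling window.

    recommendations: list of dicts with 'status' (Adopted / Committed /
    Communicated / Canceled), 'impact' (1-3 user-impact weight), 'date'.
    Field names are illustrative, not from the original article.
    """
    cutoff = today - timedelta(days=window_days)
    # Assumption: Canceled items drop out of the denominator entirely.
    in_window = [r for r in recommendations
                 if r["date"] >= cutoff and r["status"] != "Canceled"]
    possible = sum(r["impact"] for r in in_window)
    actual = sum(r["impact"] for r in in_window if r["status"] == "Adopted")
    return 100.0 * actual / possible if possible else 0.0

recs = [
    {"status": "Adopted",      "impact": 3, "date": date(2024, 9, 1)},
    {"status": "Committed",    "impact": 2, "date": date(2024, 11, 15)},
    {"status": "Communicated", "impact": 1, "date": date(2025, 1, 10)},
    {"status": "Canceled",     "impact": 2, "date": date(2024, 10, 5)},
]
print(ras(recs, date(2025, 3, 1)))  # 3 / (3 + 2 + 1) * 100 = 50.0
```

Tracking this value month over month, as the authors suggest, is what surfaces the trend; a single snapshot says little on its own.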
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This piece is meaningful for HCI/UX practitioners because it attempts to quantitatively track the path from research findings to actual product changes. Typically, the impact of research can fade after the presentation, but RAS reveals where recommendations get lost and structures definitions of status, ownership, and value—helping diagnose collaboration bottlenecks. The key is that it helps you see ‘delivered value,’ not just ‘good insights.’
CIT's Commentary
From a CIT perspective, RAS is not merely an operational metric; it’s a governance tool that exposes translation loss between research, design, and development. In particular, treating recommendations like an inventory and clearly separating statuses into Adopted/Committed/Communicated is practically effective—it reduces the illusion created by vague ‘in progress’ states. However, if this metric becomes a performance indicator used to evaluate teams, it could backfire by increasing easy-to-score tweaks rather than high-value work. That’s why CIT argues that RAS should be run as a way to diagnose system health—not as a tool for assigning blame. As the authors emphasize, trends matter, and you need an interpretive stance that separates researchers’ quality from an organization’s execution capability.
Questions to Consider While Reading
- Q. To keep RAS as a system-diagnosis metric rather than a team-evaluation metric, what operating principles and review cadence would be needed?
- Q. How well does the approach of assigning 0.66 points to Committed actually distinguish between 'promised value' and 'realized value'?
- Q. In environments like Korean product organizations, where decision-making authority is distributed, what is the most realistic way to design clear ownership for recommendations?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.