Relational AI in Education: Reciprocity, Participatory Design, and Indigenous Worldviews
HCI Today's Summary of the Key Points
- This article explores how AI can be used in education without harming relationships and communities.
- The authors view learning not as an individual achievement but as a process co-created through relationships among people and with the environment where learning happens.
- GenAI is convenient, but it can weaken independent thinking and collaborative learning, and it consumes large amounts of resources such as electricity, water, and data.
- The article also points out that collecting data and knowledge without permission can harm Indigenous rights, culture, and local responsibilities.
- AI should therefore not be an always-on tool but a relationship-centered one, used only when needed, in ways that protect communities and nature.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article is important for HCI/UX researchers because it frames AI not as a "smart tool" but as an interaction that changes learning relationships. In particular, it addresses both the moments when AI helps in educational settings and the moments when it weakens collaboration, reflection, and accountability, pushing readers beyond simple performance evaluation to rethink standards for user experience and intervention design. Practically, it also raises questions about when to turn AI on or off and what kinds of authority to leave to whom.
CIT's Commentary
The core message of this piece is that educational AI should be viewed not as a "machine that quickly provides the right answers" but as an interface that helps manage and tune relationships. A particularly thought-provoking point is the warning that as AI convenience increases, learners may lose the ability to think and speak for themselves. This concern carries over directly to safety- and responsibility-critical services such as autonomous driving or remote-control systems: if the system's state is not visible, if there are weak pathways for mid-course intervention, or if users have no way to recover when it fails, then even if performance looks good, the actual experience will be unstable. That is why the discussion should move from the idea of a "good model" to the question of how to design a "good control mechanism." Furthermore, AI literacy should expand beyond basic usage skills into relational literacy: knowing when to rely on AI and when to reconnect with people.
Questions to Consider While Reading
- Q. To ensure that educational AI helps learners reflect rather than weakening their ability to think, what kinds of signals and control mechanisms does the interface need?
- Q. When AI becomes a "tool that intervenes only when needed" rather than an "always-on assistant," what trade-offs between lost convenience and educational benefit emerge in real products?
- Q. When building tools that measure learning experiences or UX with LLMs, how can we preserve both the efficiency of automation and the rigor of research methodologies?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.
Subscribe to Newsletter
Get the weekly HCI highlights delivered to your inbox every Friday.