AnkleType: A Foot-Based Text Entry Technique in Virtual Reality That Requires Neither Hands nor Eyes
HCI Today's Summary of the Key Points
- This article introduces AnkleType, a VR text-entry technique that enters characters using ankle gestures, requiring neither hands nor gaze.
- The authors investigate the ankle's rotation range and users' preferred gestures in both sitting and standing postures to establish design criteria for the input method.
- Based on these findings, they propose two input strategies: ankle rotation for navigation, and forefoot and heel taps for confirm/cancel actions.
- Through initial experiments and a 7-day longitudinal study, they measure typing speed and error rates for novice users and for users after training, validating the technique's effectiveness.
- Compared with existing hand-based VR input, AnkleType is promising, especially because it demonstrates the possibility of hands- and eyes-free text entry during extended use and while standing.
This summary was generated by an AI editor based on HCI expert perspectives.
Why Read This from an HCI Perspective
This article is worth reading because it addresses a core HCI challenge: how to enable text entry in VR without relying on hands or gaze. In particular, the authors' decision to design around an unconventional input channel, ankle motion, while also accounting for posture (standing vs. sitting) and gaze dependence is especially meaningful. It is not merely an idea proposal; by validating learnability and real-world usability through user studies and a longitudinal evaluation, it offers a valuable reference for both UX practitioners and researchers.
CIT's Commentary
From a CIT perspective, what matters most about AnkleType is not the novelty of a "new input modality" by itself, but its value as a strong example of context-adaptive interaction design. The authors first measure how the ankle's range of motion differs between standing and sitting, then reflect those findings in the keyboard layout and interaction design, an approach that exemplifies sound HCI practice. That said, current performance still requires training, so the technique's value is likely to be highest in scenarios such as VR tasks where the hands are occupied, situations where visual attention is divided, and accessibility-assist interfaces, rather than in fully productized deployments. At CIT, we would also view this kind of foot-based input not as a standalone replacement, but as one axis within a multimodal input portfolio. If future work extends validation to fatigue, long-term use, variability in footwear and hardware, and user groups with disabilities, the impact will be much larger.
Questions to Consider While Reading
- Q. To what extent can the learning curve of ankle-based input improve with long-term use, and can it stabilize enough to replace keyboard typing in real task contexts?
- Q. Beyond standing/sitting, how should layout and interaction strategies change in constrained spaces, while moving, or when balance is unstable?
- Q. When combining ankle-based input with other VR input channels (hands, gaze, voice), what division of labor achieves the lowest fatigue and the highest efficiency?
This commentary was generated by an AI editor based on HCI expert perspectives.
Please refer to the original for accurate details.
Subscribe to the Newsletter
Get the weekly HCI highlights delivered to your inbox every Friday.