Characterizing and Predicting Engagement of Blind and Low-Vision People with an Audio-Based Navigation App [Pre-recorded CHI 2022 presentation]
CHI’22: ACM Conference on Human Factors in Computing Systems
Session: Late-Breaking Work (LBW)
Audio-based navigation technologies can support more independent navigation, orientation, and mobility for people who are blind or have low vision (BLV). We explore how such technologies are incorporated into BLV people's daily lives using machine learning models trained on the engagement patterns of over 4,700 BLV users. For mobile navigation apps, we identify user engagement features, such as the duration of first-time use and engagement with spatial audio callouts, that are highly predictive of user retention. This work contributes a more holistic understanding of the features associated with user retention and sustained app usage, as well as insight into the exploration of ambient surroundings as a compelling use case for assistive navigation apps. Finally, we offer design implications for improving the accessibility and usability of audio-based navigation technology.
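To make the approach concrete, here is a minimal sketch of predicting retention from per-user engagement features. It is not the authors' pipeline: the feature names (first-session minutes, spatial audio callouts, active days), the synthetic data, and the gradient-boosted classifier are all illustrative assumptions, chosen only to show how engagement features could be related to a retention label.

```python
# Illustrative sketch only: feature names, labels, and model choice are
# assumptions, not the method described in the paper.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users = 4700  # order of magnitude taken from the abstract

# Hypothetical per-user engagement features.
X = np.column_stack([
    rng.exponential(10.0, n_users),  # minutes in first session
    rng.poisson(5.0, n_users),       # spatial audio callouts heard
    rng.integers(0, 7, n_users),     # active days in first week
])

# Synthetic retention label, loosely tied to the features for the demo.
logits = 0.05 * X[:, 0] + 0.2 * X[:, 1] + 0.4 * X[:, 2] - 3.0
y = (rng.random(n_users) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
print("feature importances:", model.feature_importances_)
```

In a setup like this, the learned feature importances are one way to surface which engagement signals (for example, first-session duration or callout engagement) are most associated with retention.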
- Speakers:
  - Tiffany Liu, Research Intern, Stanford University
  - Javier Hernandez, Principal Researcher
  - Mar Gonzalez Franco, Principal Researcher
  - Melanie Kneisel, Senior Software Engineer
  - Adam Glass, Principal SDE
  - Jarnail Chudge, Innovation Architect
  - Amos Miller, Product Strategist