Let’s Talk About X: Combining Image Recognition and Eye Gaze to Support Conversation for People with ALS
- Shaun Kane
- Meredith Ringel Morris
Proceedings of DIS 2017 | Published by ACM
Communicating at a natural speed is a significant challenge for users of augmentative and alternative communication (AAC) devices, especially when input is provided by eye gaze, as is common for people with ALS and similar conditions. One way to improve AAC throughput is to draw on contextual information from the outside world. Toward this goal, we present SceneTalk, a prototype gaze-based AAC system that uses computer vision to identify objects in the user’s field of view and suggests words and phrases related to the current scene. We conducted a formative evaluation of SceneTalk with six people with ALS, in which we gathered their preferences for user interface modes and output options. Participants agreed that integrating contextual awareness into their AAC devices could be helpful across a diverse range of situations.
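The abstract describes a pipeline in which detected objects in the user's field of view drive word and phrase suggestions. The sketch below is purely illustrative, not the authors' SceneTalk implementation: the label set, phrase templates, and frequency-based ranking are hypothetical, and the detector output is simulated rather than produced by a real vision service.

```python
# Illustrative sketch: turn object labels (as an off-the-shelf vision API
# might return for the user's field of view) into candidate phrases that a
# gaze-based AAC interface could offer for selection.
from collections import Counter
from typing import List

# Hypothetical phrase templates keyed by detected-object label.
PHRASE_TEMPLATES = {
    "dog": ["What a cute dog!", "Can I pet your dog?"],
    "coffee": ["I'd like some coffee, please.", "Is that coffee fresh?"],
    "television": ["Can you turn up the TV?", "What are we watching?"],
}

# Fallback templates for objects without a dedicated phrase set.
GENERIC_TEMPLATES = ["Tell me more about the {label}.", "I like that {label}."]


def suggest_phrases(detected_labels: List[str], max_suggestions: int = 5) -> List[str]:
    """Rank scene-related phrase suggestions, most frequently detected objects first."""
    suggestions: List[str] = []
    # Detection frequency serves as a rough proxy for salience in the scene.
    for label, _count in Counter(detected_labels).most_common():
        templates = PHRASE_TEMPLATES.get(label, GENERIC_TEMPLATES)
        suggestions.extend(t.format(label=label) for t in templates)
    return suggestions[:max_suggestions]


if __name__ == "__main__":
    # Simulated labels standing in for computer-vision output.
    scene = ["dog", "dog", "coffee", "chair"]
    for phrase in suggest_phrases(scene):
        print(phrase)
```

In a deployed system, the suggestion list would be rendered as gaze-selectable targets alongside conventional text entry, so the user can either pick a context-aware phrase or fall back to letter-by-letter input.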