Probabilistic Features for Connecting Eye Gaze to Spoken Language Understanding
- Anna Prokofieva
- Malcolm Slaney
- Dilek Hakkani-Tür
Proceedings of ICASSP | Published by IEEE - Institute of Electrical and Electronics Engineers
Many users obtain content from a screen and want to make requests of a system based on items that they have seen. Eye-gaze information is a valuable signal in speech recognition and spoken-language understanding (SLU) because it provides context for a user’s next utterance—what the user says next is probably conditioned on what they have seen. This paper investigates three types of features for connecting eye-gaze information to an SLU system: lexical features and two types of eye-gaze features. These features help us to determine which object (e.g., a link) a user is referring to on a screen. We show a 17% absolute improvement in the referenced-object F-score over conventional methods based on a lexical comparison of the spoken utterance and the text on the screen.
© IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.