Learning Bidirectional Intent Embeddings by Convolutional Deep Structured Semantic Models for Spoken Language Understanding
- Yun-Nung Chen
- Dilek Hakkani-Tür
- Xiaodong He
NIPS Workshop
The recent surge of intelligent personal assistants motivates spoken language understanding (SLU) in dialogue systems. Because they capture high-level semantics, intent embeddings can serve as universal representations that support a more flexible intent schema, overcoming domain constraints and genre mismatch. A convolutional deep structured semantic model (CDSSM) is applied to jointly learn representations for human intents and their associated utterances. Two sets of experiments, intent expansion and actionable item detection, evaluate the power of the learned intent embeddings. The representations bridge the semantic relation between seen and unseen intents for intent expansion, and connect intents across genres for actionable item detection. The discussion and analysis of the experiments point to future directions for reducing the human effort of data annotation and eliminating domain and genre constraints in spoken language understanding.
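To make the joint embedding idea concrete, below is a minimal sketch of a CDSSM-style dual encoder in PyTorch. The layer sizes, the bag-of-letter-trigrams input, and the in-batch negative sampling are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal CDSSM-style dual encoder sketch (assumed PyTorch implementation;
# dimensions and negative-sampling scheme are illustrative, not the authors').
import torch
import torch.nn as nn
import torch.nn.functional as F

class CDSSMEncoder(nn.Module):
    """Convolution + global max pooling + semantic projection over a sequence
    of word-level letter-trigram vectors, following the general CDSSM design."""
    def __init__(self, trigram_dim=10000, conv_dim=300, sem_dim=128):
        super().__init__()
        # Convolution over a sliding window of adjacent word representations.
        self.conv = nn.Conv1d(trigram_dim, conv_dim, kernel_size=3, padding=1)
        self.proj = nn.Linear(conv_dim, sem_dim)

    def forward(self, x):
        # x: (batch, seq_len, trigram_dim) -> (batch, sem_dim)
        h = torch.tanh(self.conv(x.transpose(1, 2)))  # convolve over word windows
        h = h.max(dim=2).values                       # max pooling over positions
        return torch.tanh(self.proj(h))               # semantic embedding

def bidirectional_loss(utt_vecs, intent_vecs, gamma=10.0):
    """Softmax over scaled cosine similarities, treating the other examples in
    the batch as negative candidates, applied in both directions
    (utterance -> intent and intent -> utterance); one reading of the
    "bidirectional" training, assumed here for illustration."""
    sims = F.cosine_similarity(utt_vecs.unsqueeze(1),
                               intent_vecs.unsqueeze(0), dim=2)  # (B, B)
    labels = torch.arange(utt_vecs.size(0))
    return (F.cross_entropy(gamma * sims, labels)
            + F.cross_entropy(gamma * sims.t(), labels))
```

Under this setup, an unseen intent can be embedded by encoding its surface words with the intent-side encoder, and utterances can be matched to it by cosine similarity, which is what would allow intent expansion without labeled data for the new intent.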