Learning Latent Semantic Annotations for Grounding Natural Language to Structured Data
- Guanghui Qin,
- Jin-Ge Yao,
- Xuening Wang,
- Jinpeng Wang,
- Chin-Yew Lin
Empirical Methods in Natural Language Processing
Published by Association for Computational Linguistics
Previous work on grounded language learning did not fully capture the semantics underlying the correspondences between structured world state representations and texts, especially those between numerical values and lexical terms. In this paper, we attempt to learn explicit latent semantic annotations from paired structured tables and texts, establishing correspondences between various types of values and texts. We model the joint probability of data fields, texts, phrasal spans, and latent annotations with an adapted semi-hidden Markov model, and impose a soft statistical constraint to further improve performance. As a by-product, we leverage the induced annotations to extract templates for language generation. Experimental results demonstrate the feasibility of the setting in this study, as well as the effectiveness of our proposed framework.
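The semi-hidden Markov model mentioned in the abstract scores segmentations of a text into phrasal spans, each span emitted by a latent annotation (e.g., a table field). Below is a minimal, illustrative sketch of the corresponding forward computation, not the paper's actual implementation; the function names, distributions, and span-length limit are all hypothetical choices for exposition.

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    if m == -math.inf:
        return -math.inf
    return m + math.log(sum(math.exp(x - m) for x in xs))

def forward_log_prob(words, fields, emit_logp, trans_logp, max_span=4):
    """Toy semi-HMM forward pass (hypothetical sketch).

    emit_logp(field, span)  -> log P(span words | field)
    trans_logp(prev, field) -> log P(field | previous field); prev is None
                               for the first span.
    Returns the log-probability of `words`, summed over all
    segmentations into spans of length <= max_span and all labelings.
    """
    n = len(words)
    # alpha[j][f]: log prob of generating words[:j], last span labeled f
    alpha = [{f: -math.inf for f in fields} for _ in range(n + 1)]
    for j in range(1, n + 1):
        for f in fields:
            scores = []
            for i in range(max(0, j - max_span), j):
                e = emit_logp(f, tuple(words[i:j]))
                if i == 0:
                    scores.append(trans_logp(None, f) + e)
                else:
                    for p in fields:
                        scores.append(alpha[i][p] + trans_logp(p, f) + e)
            alpha[j][f] = logsumexp(scores)
    return logsumexp(list(alpha[n].values()))
```

As a sanity check: with a single field and all emission/transition log-probabilities set to 0, the forward value reduces to the log of the number of segmentations (four for a three-word text with `max_span >= 3`).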