Artemis: A Novel Annotation Methodology for Indicative Single Document Summarization
- Rahul Jha,
- Keping Bi,
- Yang Li,
- Mahdi Pakdaman,
- Asli Celikyilmaz,
- Ivan Zhiboedov,
- Kieran McDonald
EMNLP Workshop on Evaluation and Comparison for NLP Systems
We describe Artemis (Annotation methodology for Rich, Tractable, Extractive, Multi-domain, Indicative Summarization), a novel hierarchical annotation process that produces indicative summaries for documents from multiple domains. Current summarization evaluation datasets are single-domain, drawn from the few domains for which naturally occurring summaries are easy to find, such as news and scientific articles. They are therefore insufficient for training and evaluating summarization models used in document management and information retrieval systems, which must handle documents from multiple domains. Compared to other annotation methods such as Relative Utility and Pyramid, Artemis is more tractable: judges do not need to read all the sentences in a document when making an importance judgment for one of its sentences, yet the method yields similarly rich sentence-importance annotations. We describe the annotation process in detail and compare it with other similar evaluation systems. We also present analysis and experimental results over a sample set of 532 annotated documents.