Summ^N: A Multi-Stage Summarization Framework for Long Input Dialogues and Documents
- Yusen Zhang
- Ansong Ni
- Ziming Mao
- Chen Henry Wu
- Chenguang Zhu
- Budhaditya Deb
- Ahmed Awadallah
- Dragomir Radev
- Rui Zhang
ACL 2022
Text summarization is an essential task that helps readers capture salient information from documents, news, interviews, and meetings. However, most state-of-the-art pretrained language models cannot efficiently process the long text common in the summarization problem domain. In this paper, we propose Summ^N, a simple, flexible, and effective multi-stage framework for input texts that are longer than the maximum context length of typical pretrained LMs. Summ^N first generates coarse summaries in multiple stages and then produces the final fine-grained summary based on them. The framework can process input text of arbitrary length by adjusting the number of stages while keeping the LM context size fixed. Moreover, it can handle both documents and dialogues and can be used on top of any backbone abstractive summarization model. Our experiments demonstrate that Summ^N significantly outperforms previous state-of-the-art methods, improving ROUGE scores on three long meeting summarization datasets (AMI, ICSI, and QMSum), two long TV series datasets from SummScreen, and a newly proposed long document summarization dataset, GovReport. Our data and code are available at this https URL.
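To make the staged coarse-to-fine flow concrete, below is a minimal Python sketch of the inference-time idea, assuming a backbone summarizer callable and a token budget. The names (`summ_n`, `summarize`, `max_context`), the whitespace tokenization, and the fixed-size segmentation are illustrative assumptions, not the paper's implementation, which also covers training-time source-target segmentation, query-based inputs, and dialogue-specific handling.

```python
from typing import Callable, List

def summ_n(
    source: str,
    summarize: Callable[[str], str],  # hypothetical backbone abstractive summarizer
    max_context: int = 1024,          # assumed token budget of the backbone LM
) -> str:
    """Sketch of multi-stage coarse-to-fine summarization.

    While the text exceeds the LM context, split it into segments that
    each fit the context window, summarize every segment, and concatenate
    the segment summaries into the next stage's input. Once the text fits
    in one window, a final pass produces the fine-grained summary. The
    number of stages thus grows with input length while the context size
    stays fixed.
    """
    def tokens(text: str) -> List[str]:
        # Crude whitespace tokenization, purely for illustration.
        return text.split()

    text = source
    # Coarse stages; assumes each summary is shorter than its input,
    # so the loop terminates.
    while len(tokens(text)) > max_context:
        toks = tokens(text)
        segments = [
            " ".join(toks[i : i + max_context])
            for i in range(0, len(toks), max_context)
        ]
        text = " ".join(summarize(seg) for seg in segments)
    # Final fine-grained stage on text that now fits in one context window.
    return summarize(text)
```

In this sketch, any off-the-shelf abstractive model can be plugged in as `summarize`, which mirrors the claim that the framework is agnostic to the underlying backbone.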