Toward Human Parity in Conversational Speech Recognition
- Wayne Xiong
- Jasha Droppo
- Xuedong Huang
- Frank Seide
- Mike Seltzer
- Andreas Stolcke
- Dong Yu
- Geoffrey Zweig
IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2017, Vol. 25, pp. 2410–2423
Conversational speech recognition has served as a flagship speech recognition task since the release of the Switchboard corpus in the 1990s. In this paper, we measure a human error rate on the widely used NIST 2000 test set for commercial bulk transcription. The error rate of professional transcribers is 5.9% for the Switchboard portion of the data, in which newly acquainted pairs of people discuss an assigned topic, and 11.3% for the CallHome portion, where friends and family members have open-ended conversations. In both cases, our automated system edges past the human benchmark, achieving error rates of 5.8% and 11.0%, respectively. The key to our system’s performance is the use of various convolutional and long short-term memory acoustic model architectures, combined with a novel spatial smoothing method and lattice-free discriminative acoustic training, multiple recurrent neural network language modeling approaches, and a systematic use of system combination. Comparing frequent errors in our human and machine transcripts, we find them to be remarkably similar, and highly correlated as a function of the speaker. Human subjects find it very difficult to tell which errorful transcriptions come from humans and which from machines. Overall, this suggests that, given sufficient matched training data, conversational speech transcription engines are approximating human parity in both quantitative and qualitative terms.
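The error rates quoted above are word error rates (WER): the number of substitutions, deletions, and insertions needed to turn the system (or transcriber) output into the reference transcript, divided by the number of reference words. The following is a minimal sketch of that standard metric, not code from the paper; the function name and example strings are illustrative only.

```python
# Minimal sketch (not from the paper): word error rate (WER) as edit distance
# over words, normalized by reference length. Names here are illustrative.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                          # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                          # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Example: one substitution and one deletion against a 5-word reference -> 0.4 (40% WER).
print(word_error_rate("we discuss an assigned topic", "we discussed an topic"))
```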