Real-time Single-channel Speech Enhancement with Recurrent Neural Networks

Single-channel speech enhancement using deep neural networks (DNNs) has shown promising progress in recent years. In this work, we explore several aspects of neural network training that affect the objective quality of enhanced speech in a real-time setting. In particular, we base all studies on a novel recurrent neural network that enhances full-band short-time speech spectra on a single-frame-in, single-frame-out basis, the framework adopted by most classical signal-processing methods. We propose two novel learning objectives that allow separate control over expected speech distortion versus noise suppression. Moreover, we study the effect of feature normalization and sequence length on the objective quality of enhanced speech. Finally, we compare our method with state-of-the-art approaches based on statistical signal processing and on deep learning.
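To make the single-frame-in, single-frame-out setup concrete, below is a minimal sketch in PyTorch of a causal recurrent mask estimator together with a two-term loss that weighs speech distortion against residual noise. The GRU architecture, layer sizes, the mask-based formulation, and the weighting parameter `alpha` are illustrative assumptions for this sketch, not details of the speaker's actual system or objectives.

```python
# Hedged sketch: a causal, frame-by-frame enhancement RNN plus a
# two-term loss trading off speech distortion against residual noise.
# All architectural and loss choices here are assumptions, not the
# talk's exact method.

import torch
import torch.nn as nn


class FramewiseEnhancer(nn.Module):
    """Predict a magnitude mask for each STFT frame.

    The GRU carries its hidden state across frames, so at inference
    time the model can consume one frame at a time, mirroring
    classical frame-by-frame signal-processing pipelines.
    """

    def __init__(self, n_freq: int = 257, hidden: int = 256):
        super().__init__()
        self.rnn = nn.GRU(n_freq, hidden, num_layers=2, batch_first=True)
        self.out = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, log_mag, state=None):
        # log_mag: (batch, frames, n_freq) log-magnitude features
        h, state = self.rnn(log_mag, state)
        mask = self.out(h)  # (batch, frames, n_freq), values in [0, 1]
        return mask, state


def weighted_enhancement_loss(mask, noisy_mag, clean_mag, alpha=0.5):
    """Illustrative loss with separate speech-distortion and
    residual-noise terms; the talk's actual objectives may differ.

    Here the noise magnitude is crudely approximated from the noisy and
    clean spectra; with separate noise recordings one would use those
    directly.
    """
    enhanced = mask * noisy_mag
    noise_mag = (noisy_mag - clean_mag).clamp(min=0.0)
    speech_distortion = ((enhanced - clean_mag) ** 2).mean()
    residual_noise = ((mask * noise_mag) ** 2).mean()
    return alpha * speech_distortion + (1.0 - alpha) * residual_noise


if __name__ == "__main__":
    model = FramewiseEnhancer()
    noisy_mag = torch.rand(4, 100, 257) + 1e-3  # placeholder spectra
    clean_mag = torch.rand(4, 100, 257) + 1e-3
    mask, _ = model(torch.log(noisy_mag))
    loss = weighted_enhancement_loss(mask, noisy_mag, clean_mag)
    loss.backward()
    print(float(loss))
```

In this kind of setup, tuning `alpha` toward the speech-distortion term preserves speech at the cost of weaker suppression, while tuning it toward the residual-noise term suppresses more aggressively but risks distorting speech, which is the trade-off the proposed objectives are designed to control.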


Speaker details

Yangyang Raymond Xia received his BSc and MSc in Electrical and Computer Engineering in 2015 and 2016, respectively, from Carnegie Mellon University. He is now a PhD candidate at CMU's Robust Speech Recognition Group, where he works on robust speech enhancement and speaker identification methods by combining speech signal processing and deep learning, under the supervision of Professor Richard M. Stern.

Date:
Speaker:
Yangyang (Raymond) Xia
Affiliation:
Carnegie Mellon University