DNN-Based Online Speech Enhancement Using Multitask Learning and Suppression Rule Estimation

Most of the currently available speech enhancement algorithms use a statistical signal processing approach to remove the noise component from observed signals. The performance of these algorithms is thus dependent on the statistical assumptions they make about speech and noise signals, which are often inaccurate. In this work, we consider machine learning as an alternative, using deep neural networks (DNNs) to discover the transformation from noisy to clean speech. While DNNs are now the standard approach for acoustic modeling in speech recognition, there have been fewer studies looking at DNNs for improving the signal quality for the human listener. We consider a realistic scenario where both environmental noise and room reverberation are present and where a strict real-time processing requirement is enforced by the application. We examine several structures in which a DNN can replace conventional speech enhancement systems, including end-to-end DNN regression as well as suppression rule estimation by a DNN. We also propose to use multitask learning with the estimation of bin-wise speech presence probability as the secondary task, and show that it improves enhancement performance.
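To make the suppression-rule estimation and multitask-learning ideas concrete, the sketch below shows one plausible setup: a shared feedforward network with two output heads, one predicting a per-bin suppression gain (primary task) and one predicting a per-bin speech presence probability (secondary task), trained with a weighted sum of the two losses. The feature layout (stacked context frames of 257-bin noisy log spectra), layer sizes, loss weighting, and the use of PyTorch are illustrative assumptions; the talk abstract does not specify these details.

import torch
import torch.nn as nn

class MultitaskEnhancementDNN(nn.Module):
    """Illustrative DNN mapping noisy log-spectral features to a per-bin
    suppression rule (primary task) and a per-bin speech presence
    probability (secondary task). All sizes are assumed, not from the talk."""
    def __init__(self, n_bins=257, context=7, hidden=1024):
        super().__init__()
        in_dim = n_bins * context  # stacked context frames of noisy features
        self.shared = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Primary head: suppression gain in [0, 1] for each frequency bin
        self.gain_head = nn.Sequential(nn.Linear(hidden, n_bins), nn.Sigmoid())
        # Secondary head: speech presence probability for each frequency bin
        self.spp_head = nn.Sequential(nn.Linear(hidden, n_bins), nn.Sigmoid())

    def forward(self, x):
        h = self.shared(x)
        return self.gain_head(h), self.spp_head(h)

def multitask_loss(gain_pred, spp_pred, gain_ref, spp_ref, alpha=0.3):
    """Weighted sum of the primary loss (MSE on the suppression rule) and
    the secondary loss (cross-entropy on speech presence); alpha is an
    assumed weighting factor."""
    primary = nn.functional.mse_loss(gain_pred, gain_ref)
    secondary = nn.functional.binary_cross_entropy(spp_pred, spp_ref)
    return primary + alpha * secondary

# Example: one training step on a random mini-batch (shapes only).
model = MultitaskEnhancementDNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.randn(32, 257 * 7)          # stacked noisy log-spectra
gain_ref = torch.rand(32, 257)        # ideal suppression rule targets
spp_ref = (gain_ref > 0.5).float()    # binary speech-presence labels
gain_pred, spp_pred = model(x)
loss = multitask_loss(gain_pred, spp_pred, gain_ref, spp_ref)
loss.backward()
opt.step()

At run time, only the gain head is needed: the predicted suppression rule is applied to the noisy spectrum frame by frame, which keeps the approach compatible with a strict real-time (online) processing constraint.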

Speaker Details

Seyedmahdad (Matt) Mirsamadi is currently a PhD student in the Center for Robust Speech Systems (CRSS) at the University of Texas at Dallas. His current PhD research is on robust far-field automatic speech recognition. During his internship at Microsoft, he worked on deep neural networks for speech enhancement.

Date:
Speaker:
Seyedmahdad Mirsamadi
Affiliation:
The University of Texas at Dallas

Series: Microsoft Research Talks