Geometry-Constrained Beamforming Network for End-to-End Far-Field Sound Source Separation

Environmental noise, reverberation, and interfering speakers degrade the quality of the speech signal and therefore the performance of many speech communication systems, including automatic speech recognition systems, hearing assistive devices, and mobile devices. Many deep learning solutions are available to perform source separation and reduce background noise. However, when a physical interpretation of a signal is possible or multi-channel inputs are available, conventional acoustic signal processing, e.g., beamforming and direction-of-arrival (DOA) estimation, tends to be more interpretable and yields reasonably good solutions in many cases. This motivates integrating deep learning with conventional acoustic signal processing so that each can profit from the other, as several works have proposed. However, the integration is typically performed in a modular way where each component is optimized individually, which may lead to suboptimal solutions.

In this talk, we propose a DOA-driven beamforming network (DBnet) for end-to-end source separation, i.e., during optimization the gradient flows end-to-end from the time-domain separated speech signals of the individual speakers back to the time-domain microphone signals. For the DBnet structure, we consider either a recurrent neural network (RNN) or a combination of convolutional and recurrent layers. We analyze the performance of DBnet in challenging noisy and reverberant conditions and benchmark it against state-of-the-art source separation methods.
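To give a concrete feel for DOA-driven beamforming, the sketch below implements a classical far-field delay-and-sum beamformer steered by a known DOA. This is an illustrative baseline only, not the DBnet architecture from the talk; all function names, the two-microphone geometry, and the signal parameters are assumptions chosen for the example.

```python
# Illustrative far-field delay-and-sum beamformer steered by a DOA.
# NOT the DBnet from the talk -- a minimal classical baseline sketch.
import numpy as np

def steering_delays(doa_deg, mic_positions, c=343.0):
    """Per-microphone propagation delays (s) for a far-field plane wave."""
    doa = np.deg2rad(doa_deg)
    direction = np.array([np.cos(doa), np.sin(doa)])
    return mic_positions @ direction / c

def delay_and_sum(mic_signals, doa_deg, mic_positions, fs):
    """Align the channels toward the DOA in the frequency domain, then average."""
    n = mic_signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(mic_signals, axis=1)
    delays = steering_delays(doa_deg, mic_positions)
    # A delay of tau multiplies the spectrum by exp(-2j*pi*f*tau);
    # multiplying by the conjugate phase compensates it.
    phase = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft((spectra * phase).mean(axis=0), n=n)

# Two-microphone array, 8 cm spacing, 16 kHz sampling (hypothetical setup).
fs = 16000
mics = np.array([[0.0, 0.0], [0.08, 0.0]])
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 500 * t)           # 500 Hz tone from the source
delays = steering_delays(60.0, mics)
# Simulate far-field propagation of the target arriving from 60 degrees.
x = np.stack([np.interp(t - d, t, target) for d in delays])
y = delay_and_sum(x, 60.0, mics, fs)           # aligned-and-averaged output
```

Because the steering delays depend only on the array geometry and the DOA, and every operation above is differentiable, this is the kind of front end that an end-to-end approach can place behind a DOA estimator and train through, rather than optimizing the estimator and the beamformer separately.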

Speaker Bios

Ali Aroudi is a PhD candidate at the University of Oldenburg and a research associate with the Cluster of Excellence Hearing4All, Oldenburg, Germany. His research focuses on signal processing and machine learning for speech enhancement and brain-computer interfaces. Since August 2014 he has been a researcher at the Signal Processing Group of the University of Oldenburg, working on cognitive-driven speech enhancement using hearing aid microphone signals and EEG signals. In 2018 he was a visiting researcher and research intern at the hearing aid company WS Audiology in Erlangen, Germany, where he worked on closed-loop real-time cognitive-driven speech enhancement. In 2020 he pursued a research internship at NTT Communication Science Laboratories, Kyoto, Japan, focusing on DNN-based source separation using convolutional beamforming for cognitive-driven speech enhancement.

Date:
Speakers:
Ali Aroudi
Affiliation:
University of Oldenburg & Cluster of Excellence Hearing4All

Series: Microsoft Research Talks