Geometry-Constrained Beamforming Network for End-to-End Far-Field Sound Source Separation
Environmental noise, reverberation, and interfering speakers degrade the quality of the speech signal and therefore the performance of many speech communication systems, including automatic speech recognition systems, hearing assistive devices, and mobile devices. Many deep learning solutions are available to perform source separation and reduce background noise. However, when a physical interpretation of the signal is possible or multi-channel inputs are available, conventional acoustic signal processing, e.g., beamforming and direction-of-arrival (DOA) estimation, tends to be more interpretable and yields reasonably good solutions in many cases. This motivates integrating deep learning with conventional acoustic signal processing so that the two can profit from each other, as has been proposed in several works. However, the integration is typically performed in a modular way where each component is optimized individually, which may lead to suboptimal solutions.
In this talk, we propose a DOA-driven beamforming network (DBnet) for end-to-end source separation, i.e., during end-to-end optimization the gradients are propagated from the time-domain separated speech signals of the speakers back to the time-domain microphone signals. For the DBnet structure, we consider either recurrent neural networks (RNNs) or a combination of convolutional and recurrent layers. We analyze the performance of DBnet in challenging noisy and reverberant conditions and benchmark it against state-of-the-art source separation methods.
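The abstract only sketches the architecture, so the following is a minimal, illustrative PyTorch sketch of the general idea rather than the authors' implementation: a recurrent estimator predicts beamforming weights from the multi-channel STFT, a filter-and-sum beamformer applies them per speaker, and the separated time-domain outputs are produced by an inverse STFT so that a time-domain loss can backpropagate all the way to the microphone signals. All class/parameter names, layer sizes, and STFT settings are assumptions, and the explicit DOA estimation step is folded into the weight predictor for brevity.

```python
# Hypothetical sketch of a DOA-driven beamforming network (DBnet-style idea).
# Not the authors' code: sizes, names, and the weight-prediction shortcut
# (no explicit DOA output) are illustrative assumptions.
import torch
import torch.nn as nn


class DBnetSketch(nn.Module):
    def __init__(self, n_mics=4, n_speakers=2, n_fft=512, hop=128, hidden=256):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        self.n_mics, self.n_speakers = n_mics, n_speakers
        n_freq = n_fft // 2 + 1
        # Recurrent estimator operating on stacked per-channel STFT magnitudes.
        self.rnn = nn.GRU(n_mics * n_freq, hidden, num_layers=2, batch_first=True)
        # Predict complex beamforming weights (real + imaginary parts) per
        # speaker, frequency bin, and microphone for every time frame.
        self.fc = nn.Linear(hidden, n_speakers * n_mics * n_freq * 2)

    def forward(self, x):
        # x: (batch, n_mics, samples) time-domain microphone signals
        b, m, t = x.shape
        window = torch.hann_window(self.n_fft, device=x.device)
        X = torch.stft(x.reshape(b * m, t), self.n_fft, hop_length=self.hop,
                       window=window, return_complex=True)        # (b*m, F, T)
        nfreq, nframes = X.shape[-2], X.shape[-1]
        X = X.reshape(b, m, nfreq, nframes)

        feats = X.abs().permute(0, 3, 1, 2).reshape(b, nframes, m * nfreq)
        h, _ = self.rnn(feats)                                     # (b, T, hidden)
        w = self.fc(h).reshape(b, nframes, self.n_speakers, m, nfreq, 2)
        w = torch.view_as_complex(w.contiguous())                  # (b, T, S, m, F)

        # Filter-and-sum beamforming per speaker: sum_m conj(w) * X.
        Xp = X.permute(0, 3, 1, 2)                                 # (b, T, m, F)
        Y = torch.einsum('btsmf,btmf->btsf', w.conj(), Xp)         # (b, T, S, F)
        Y = Y.permute(0, 2, 3, 1).reshape(b * self.n_speakers, nfreq, nframes)
        y = torch.istft(Y, self.n_fft, hop_length=self.hop,
                        window=window, length=t)
        return y.reshape(b, self.n_speakers, t)  # separated time-domain signals


if __name__ == "__main__":
    net = DBnetSketch()
    mics = torch.randn(1, 4, 16000)   # 1 s of 4-channel audio at 16 kHz
    est = net(mics)                   # (1, 2, 16000): one waveform per speaker
    print(est.shape)
```

Because both the beamforming and the STFT/inverse-STFT steps are differentiable, a time-domain separation loss (e.g., SI-SDR against reference signals) on the returned waveforms trains the weight estimator end to end, which is the property the abstract emphasizes.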
Speaker details
Ali Aroudi is a PhD candidate at the University of Oldenburg and a research associate at the Cluster of Excellence Hearing4All, Oldenburg, Germany. His research focuses on signal processing and machine learning for speech enhancement and brain-computer interfaces. Since August 2014 he has been a researcher in the Signal Processing Group of the University of Oldenburg, where he has worked on cognitive-driven speech enhancement using hearing aid microphone signals and EEG signals. In 2018 he was a visiting researcher and research intern at the hearing aid company WS Audiology in Erlangen, Germany, where he worked on closed-loop real-time cognitive-driven speech enhancement. In 2020 he pursued a research internship with NTT Communication Science Laboratories, Kyoto, Japan, focusing on DNN-based source separation using convolutional beamforming for cognitive-driven speech enhancement.
- Date:
- Speaker: Ali Aroudi
- Affiliation: University of Oldenburg & Cluster of Excellence Hearing4All
Series: Microsoft Research Talks
Other talks in this series:
- Decoding the Human Brain – A Neurosurgeon’s Experience (Speakers: Pascal Zinn, Ivan Tashev)
- Galea: The Bridge Between Mixed Reality and Neurotechnology (Speakers: Eva Esteban, Conor Russomanno)
- Current and Future Application of BCIs (Speaker: Christoph Guger)
- Challenges in Evolving a Successful Database Product (SQL Server) to a Cloud Service (SQL Azure) (Speakers: Hanuma Kodavalla, Phil Bernstein)
- Improving text prediction accuracy using neurophysiology (Speaker: Sophia Mehdizadeh)
- DIABLo: a Deep Individual-Agnostic Binaural Localizer (Speaker: Shoken Kaneko)
- Recent Efforts Towards Efficient And Scalable Neural Waveform Coding (Speaker: Kai Zhen)
- Audio-based Toxic Language Detection (Speaker: Midia Yousefi)
- From SqueezeNet to SqueezeBERT: Developing Efficient Deep Neural Networks (Speaker: Sujeeth Bharadwaj)
- Hope Speech and Help Speech: Surfacing Positivity Amidst Hate (Speaker: Monojit Choudhury)
- 'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project (Speaker: Peter Clark)
- Checkpointing the Un-checkpointable: the Split-Process Approach for MPI and Formal Verification (Speaker: Gene Cooperman)
- Learning Structured Models for Safe Robot Control (Speaker: Ashish Kapoor)