Research intern talk: Real-time single-channel speech separation in noisy & reverberant environments
Real-time single-channel speech separation aims to unmix an audio stream captured from a single microphone that contains multiple people talking at once, environmental noise, and reverberation into multiple de-reverberated and noise-free speech tracks, each track containing only one talker. While large state-of-the-art DNNs can achieve excellent separation from anechoic mixtures of speech, the main challenge is to create compact and causal models that can separate reverberant mixtures at inference time. In this research project, we explore low-complexity, resource-efficient, causal DNN architectures for real-time separation of two or more simultaneous speakers. A cascade of three CRUSE models was trained to sequentially perform noise suppression, separation, and de-reverberation. For comparison, a larger end-to-end CRUSE model was trained to output two anechoic speech signals directly from noisy reverberant speech mixtures. We propose an efficient single-decoder architecture with "best-and-rest" training for real-time recursive speech separation of two or more speakers. Evaluations on WHAMR! and real monophonic recordings of speech mixtures from the REAL-M and DNS challenge datasets, using speech separation and perceptual measures such as SI-SDR and DNS-MOS, show that these compact causal models can separate speech mixtures with low latency.
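The recursive "best-and-rest" scheme described above can be sketched as a loop that repeatedly applies a single-decoder separator: each pass extracts one speaker (the "best" output) and passes the residual mixture (the "rest") back into the model until all speakers are recovered. The following is a minimal illustrative sketch, not the actual CRUSE implementation; the `separator` callable and the toy stand-in used below are hypothetical placeholders for a trained model.

```python
import numpy as np

def recursive_separate(mixture, separator, num_speakers):
    """Recursively peel one speaker at a time from a single-channel mixture.

    `separator` is assumed to be a single-decoder model that maps a mixture
    to a pair (best, rest): the most prominent speaker track and the
    residual mixture containing the remaining speakers.
    """
    tracks = []
    rest = mixture
    for _ in range(num_speakers - 1):
        best, rest = separator(rest)
        tracks.append(best)
    tracks.append(rest)  # the final residual is the last speaker track
    return tracks

# Toy stand-in separator for illustration only: it "extracts" a fixed
# fraction of the signal energy rather than doing real separation.
def toy_separator(x):
    return 0.6 * x, 0.4 * x

mixture = np.random.randn(16000)  # one second of audio at 16 kHz
tracks = recursive_separate(mixture, toy_separator, num_speakers=3)
```

A convenient property of this formulation is that the same model is reused on every pass, so the number of separable speakers is not fixed at training time; only the recursion depth changes at inference.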
- Date: