Separating Long-Form Speech with Group-Wise Permutation Invariant Training
- Wangyou Zhang,
- Zhuo Chen,
- Naoyuki Kanda,
- Shujie Liu,
- Jinyu Li,
- Sefik Emre Eskimez,
- Takuya Yoshioka,
- Xiong Xiao,
- Zhong Meng,
- Yanmin Qian,
- Furu Wei
Interspeech 2022
Multi-talker conversational speech processing has drawn much interest for various applications such as meeting transcription. Speech separation is often required to handle the overlapped speech that commonly occurs in conversation. Although the original utterance-level permutation invariant training (uPIT)-based continuous speech separation approach has proven effective in various conditions, it cannot leverage the long-span relationships between utterances and is computationally inefficient due to its highly overlapped sliding windows. To overcome these drawbacks, we propose a novel training scheme named Group-PIT, which allows speech separation models to be trained directly on long-form speech with a low computational cost for label assignment. We explore two speech separation approaches with Group-PIT: direct long-span speech separation, and short-span speech separation with long-span tracking. Experiments on simulated meeting-style data demonstrate the effectiveness of the proposed approaches, especially for very long speech inputs.
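For background, the sketch below illustrates the plain utterance-level PIT loss that the abstract contrasts with Group-PIT: the training loss is evaluated under every permutation of the model's output channels against the references, and the minimum is kept. This is a minimal illustration only; the function name, the MSE criterion, and the (speakers × samples) tensor layout are assumptions for the example, not details from the paper.

```python
import itertools
import torch

def pit_loss(estimates: torch.Tensor, references: torch.Tensor) -> torch.Tensor:
    """Utterance-level PIT loss (illustrative sketch).

    estimates, references: tensors of shape (num_speakers, num_samples).
    The loss is computed under every permutation of the estimated
    channels, and the minimum over permutations is returned.
    """
    num_speakers = estimates.shape[0]
    losses = []
    for perm in itertools.permutations(range(num_speakers)):
        # MSE between permuted estimates and the fixed references
        perm_loss = torch.mean((estimates[list(perm)] - references) ** 2)
        losses.append(perm_loss)
    return torch.stack(losses).min()

# Example: 2 estimated sources vs. 2 references, 1 second at 16 kHz
est = torch.randn(2, 16000)
ref = torch.randn(2, 16000)
loss = pit_loss(est, ref)
```

Note that the permutation search grows factorially with the number of speakers and, in window-based continuous speech separation, is repeated for every highly overlapped sliding window, which is the inefficiency on long recordings that the abstract points to; Group-PIT's stated contribution is to make this label assignment cheap enough to train directly on long-form speech.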