TransVIP: Speech to Speech Translation System with Voice and Isochrony Preservation
- Chenyang Le,
- Yao Qian,
- Dongmei Wang,
- Long Zhou,
- Shujie Liu,
- Xiaofei Wang,
- Midia Yousefi,
- Yanmin Qian,
- Jinyu Li,
- Sheng Zhao,
- Michael Zeng
NeurIPS 2024
There is rising research interest in directly translating speech from one language to another, known as end-to-end speech-to-speech translation. However, most end-to-end models struggle to outperform cascade models, i.e., pipeline frameworks that concatenate speech recognition, machine translation, and text-to-speech models. The primary challenges stem from the inherent complexity of the direct translation task and the scarcity of data. In this study, we introduce a novel model framework, TransVIP, that leverages diverse datasets in a cascade fashion yet enables end-to-end inference through joint probability. Furthermore, we propose two separate encoders to preserve the speaker's voice characteristics and isochrony from the source speech during the translation process, making the system well suited to scenarios such as video dubbing. Our experiments on the French-English language pair demonstrate that our model outperforms the current state-of-the-art speech-to-speech translation model.
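To give intuition for how joint-probability inference differs from a plain cascade, here is a minimal toy sketch. It is not the paper's actual model: all hypotheses, stage names, and probability values below are made-up illustrative assumptions. The point it shows is that a greedy cascade commits to the best hypothesis at each stage, while joint scoring picks the output maximizing the sum of per-stage log-probabilities.

```python
import math

# Stage 1 (hypothetical): source speech -> candidate translations,
# scored by log p(text | source_speech). Values are invented.
stage1 = {"hello there": math.log(0.6), "hello their": math.log(0.4)}

# Stage 2 (hypothetical): translation -> candidate target speech,
# scored by log p(target_speech | text). Values are invented.
stage2 = {
    "hello there": {"wave_A": math.log(0.3)},
    "hello their": {"wave_B": math.log(0.9)},
}

def best_joint(stage1, stage2):
    """Select the (text, wave) pair maximizing the joint score
    log p(text | src) + log p(wave | text), rather than committing
    greedily to the best stage-1 hypothesis first."""
    best, best_score = None, float("-inf")
    for text, lp1 in stage1.items():
        for wave, lp2 in stage2[text].items():
            score = lp1 + lp2
            if score > best_score:
                best, best_score = (text, wave), score
    return best, best_score

(text, wave), score = best_joint(stage1, stage2)
print(text, wave)  # joint scoring prefers "hello their" -> "wave_B"
```

In this toy example, a greedy cascade would lock in "hello there" (probability 0.6) at stage 1 and end with a joint probability of 0.6 × 0.3 = 0.18, whereas joint scoring recovers the better overall path 0.4 × 0.9 = 0.36.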