Multi-Sensory Speech Processing: Incorporating Automatically Extracted Hidden Dynamic Information
- Amarnag Subramanya,
- Li Deng,
- Zicheng Liu,
- Zheng Zhang
Proceedings of the IEEE International Conference on Multimedia & Expo (ICME), Amsterdam
We describe a novel multi-sensory speech processing technique for enhancing noisy speech and for improving noise-robust speech recognition. Both air- and bone-conductive microphones are used to capture speech data; the bone-sensor signal carries virtually noise-free hidden dynamic information about the clean speech in the form of formant trajectories. Distortions in the bone-sensor signal, such as teeth clacking and noise leakage, can be effectively removed by exploiting the formant information extracted automatically from that signal. This paper reports an improved technique for synthesizing speech waveforms based on LPC cepstra computed analytically from the formant trajectories. When this new signal stream is fused with the other available speech data streams, we achieve improved performance on noisy speech recognition.
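To make the analytic mapping from formant trajectories to LPC cepstra concrete, the sketch below shows one plausible per-frame computation, assuming each formant is modeled as a complex-conjugate pole pair with a given center frequency and bandwidth, so the minimum-phase cepstrum follows in closed form from the pole locations. The function name, the number of cepstral coefficients, and the example formant values are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def formants_to_lpc_cepstra(formant_freqs_hz, bandwidths_hz, fs_hz, num_ceps=13):
    """Analytically compute LPC cepstral coefficients for an all-pole model
    whose complex-conjugate pole pairs sit at the given formant frequencies
    and bandwidths (one analysis frame).

    Pole radius:  r_k     = exp(-pi * B_k / fs)
    Pole angle:   theta_k = 2 * pi * F_k / fs
    Cepstrum:     c[n]    = (2 / n) * sum_k r_k**n * cos(n * theta_k),  n >= 1
    """
    radii = np.exp(-np.pi * np.asarray(bandwidths_hz, dtype=float) / fs_hz)
    angles = 2.0 * np.pi * np.asarray(formant_freqs_hz, dtype=float) / fs_hz
    ceps = np.zeros(num_ceps)
    for n in range(1, num_ceps + 1):
        # Each conjugate pole pair contributes (2/n) * r^n * cos(n*theta).
        ceps[n - 1] = (2.0 / n) * np.sum(radii ** n * np.cos(n * angles))
    return ceps

# Hypothetical example: three formants of a vowel-like frame at 16 kHz.
frame_ceps = formants_to_lpc_cepstra(
    formant_freqs_hz=[500.0, 1500.0, 2500.0],
    bandwidths_hz=[60.0, 90.0, 120.0],
    fs_hz=16000.0,
)
print(frame_ceps)
```

Applying this mapping frame by frame along the extracted formant trajectories yields a cepstral (and hence waveform-synthesizable) representation of the stream that can then be fused with the other sensor streams.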