Robot ego-noise suppression with Labanotation-template subtraction
- Jekaterina Jaroslavceva
- Naoki Wake
- Kazuhiro Sasabuchi
- Katsushi Ikeuchi
IEEJ Transactions on Electrical and Electronic Engineering
In this study, we aim to improve automatic speech recognition (ASR) accuracy in the presence of robot ego-noise toward better human-robot interaction. Although several noise-reduction methods have been proposed to increase ASR accuracy or the signal-to-noise ratio (SNR) by predicting ego-noise through short-time motion-template subtraction or a neural network, these methods showed poor performance in some practical use cases, such as attenuating the ego-noise associated with long-term motions. Building on the motion-template subtraction method, we address the problem of creating ego-noise templates for a wide variety of robot motions. To represent robot motions, we employ a dance notation referred to as Labanotation. The rationales behind our approach are: (i) Labanotation quantizes the infinite space of motion patterns into a finite number of Labanotation combinations; (ii) Labanotation-based motion descriptions are hardware-independent; and (iii) long-time noise templates are easier to localize within a noisy speech signal than short-time templates. The effectiveness of the Labanotation-template subtraction (LTS) method was evaluated with five commercial ASR systems in terms of ASR accuracy, SNR, and source-to-distortion ratio. We show that LTS achieves reasonable performance, comparable to that of the other methods. The contributions of this study are (i) proposing the use of Labanotation to collect noise templates for a wide variety of motions in a tractable way, and (ii) demonstrating the practical effectiveness of LTS, together with examples of Labanotation for household actions.
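The core idea of template subtraction lends itself to a short illustration. The sketch below is a minimal, hypothetical Python example and not the authors' implementation: it assumes an ego-noise template for a motion (e.g., indexed by its Labanotation sequence) has already been recorded, locates it in the noisy recording by cross-correlation, and performs magnitude spectral subtraction with a spectral floor. The function name `subtract_noise_template` and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, istft, correlate

def subtract_noise_template(noisy, template, fs=16000, nperseg=512, floor=0.05):
    """Hypothetical sketch of long-time noise-template subtraction.

    Aligns a pre-recorded ego-noise template to the noisy recording via
    cross-correlation, then subtracts its magnitude spectrogram.
    """
    # Locate the template inside the noisy signal; long templates make this
    # alignment step more reliable than short ones.
    corr = correlate(noisy, template, mode="valid")
    offset = int(np.argmax(corr))

    # Shift the template so it lines up with the noisy signal in time.
    aligned = np.zeros_like(noisy)
    end = min(offset + len(template), len(noisy))
    aligned[offset:end] = template[: end - offset]

    # STFT-domain magnitude subtraction with a spectral floor,
    # reusing the noisy-signal phase for resynthesis.
    _, _, Y = stft(noisy, fs=fs, nperseg=nperseg)
    _, _, N = stft(aligned, fs=fs, nperseg=nperseg)
    mag = np.maximum(np.abs(Y) - np.abs(N), floor * np.abs(Y))
    S_hat = mag * np.exp(1j * np.angle(Y))

    _, enhanced = istft(S_hat, fs=fs, nperseg=nperseg)
    return enhanced[: len(noisy)]
```

In such a scheme, the noise templates could be stored in a lookup table keyed by Labanotation symbol sequences, so that the same finite template set serves arbitrarily many composed motions; this detail is an assumption of the sketch, not a statement of the paper's exact procedure.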