Parametric Analysis and Synthesis of Sound Scenes for Perceptual Spatial Audio Reproduction

Spatial sound reproduction methods for recorded sound scenes are an active field of research, in parallel with the evolving vision-related and multi-modal technologies that aim to deliver a new generation of immersive multimedia content to the user. This presentation focuses specifically on techniques that aim to render the recorded sound scene as perceptually close to the original as possible. In contrast to non-parametric methods, which are based on a passive decoding of the recorded signals to loudspeakers or headphones, parametric methods impose a model on the sound field and adaptively estimate its parameters in the time-frequency domain from the recordings. This parameterization permits great flexibility in terms of scene manipulation, coding, and rendering to arbitrary setups. The presentation gives an overview of parametric rendering strategies and presents in more detail a generalization of the Directional Audio Coding method developed throughout this research.
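As a rough illustration of the kind of time-frequency parameterization described above (not code from the talk itself), the sketch below estimates two classic Directional Audio Coding parameters, direction of arrival and diffuseness, from a first-order Ambisonic (B-format) recording. The function name, the B-format channel convention, and the pseudo-intensity formulation are assumptions made for this example.

```python
import numpy as np

def dirac_parameters(W, X, Y, Z):
    """Estimate DirAC-style parameters for one time-frequency tile.

    W, X, Y, Z: complex STFT coefficients of the B-format channels,
    one value per time frame within the tile (illustrative convention:
    dipole gain is positive toward the source direction).
    Returns (azimuth, elevation) in radians and diffuseness in [0, 1].
    """
    XYZ = np.stack([X, Y, Z])                       # shape (3, frames)
    # Pseudo-intensity vector: with this convention it points
    # toward the dominant source direction.
    intensity = np.real(np.conj(W) * XYZ)           # shape (3, frames)
    # Energy density of the tile (up to a constant factor).
    energy = 0.5 * (np.abs(W) ** 2 + np.sum(np.abs(XYZ) ** 2, axis=0))
    # Temporal expectation over the frames of the tile.
    i_mean = intensity.mean(axis=1)
    e_mean = energy.mean()
    # Direction of arrival from the mean pseudo-intensity.
    azimuth = np.arctan2(i_mean[1], i_mean[0])
    elevation = np.arctan2(i_mean[2], np.linalg.norm(i_mean[:2]))
    # Diffuseness: 0 for a single plane wave, approaching 1 for a
    # fully diffuse field (small constant guards against division by 0).
    diffuseness = 1.0 - np.linalg.norm(i_mean) / (e_mean + 1e-12)
    return azimuth, elevation, diffuseness
```

For a single plane wave the mean intensity magnitude equals the energy density, so the diffuseness estimate approaches zero; a reproduction stage would then render the direct part as a point source at the estimated direction and decorrelate the diffuse remainder.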

Date:
Speaker:
Archontis Politis
Affiliation:
University of Maryland