Directional Sources and Listeners in Interactive Sound Propagation using Reciprocal Wave Field Coding
- Chakravarty R. Alla Chaitanya
- Nikunj Raghuvanshi
- Keith Godin
- Zechen Zhang
- Derek Nowrouzezahrai
- John Snyder
ACM Transactions on Graphics (SIGGRAPH 2020), Vol. 39(4)
(Equal contribution from first and second listed authors)
Everyday sources like voices and musical instruments radiate in a strongly directional manner. The radiated sounds propagate through complex environments, undergoing scattering, diffraction, and occlusion before reaching a directional listener. For instance, someone speaking from an adjoining room will be heard loudest when they face a shared door, and their voice will be heard as coming through that door. Interactive sound propagation aims to render such audio cues in virtual worlds to deeply enhance the sense of presence. It is a tough problem: modeling, extracting, encoding, and rendering must all fit within tight computational budgets. We present the first technique that can render such immersive wave effects in any complex game/VR scene. In particular, we can render directional effects from freely moving and rotating sources with arbitrary source directivity functions. Spatial audio can be rendered binaurally via any head-related transfer function (HRTF) or through speaker panning. We show an integration of our interactive sound propagation system in Unreal Engine™. Our system's CPU performance is insensitive to scene complexity and angular source/listener resolution.
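As a rough illustration of the rendering step described above, here is a minimal Python sketch. It is not the paper's encoder or renderer: it assumes a hypothetical cardioid source directivity evaluated at the direction sound leaves the source, a precomputed propagated loudness, and constant-power stereo panning of the arrival direction at the listener in place of HRTF filtering. All function and parameter names are illustrative assumptions.

```python
import numpy as np

def cardioid_directivity(source_forward, emit_dir):
    """Gain of an assumed cardioid source: 1 on-axis, 0 directly behind.
    Both arguments are unit 3-vectors in world coordinates."""
    return 0.5 * (1.0 + float(np.dot(source_forward, emit_dir)))

def constant_power_pan(arrival_dir, listener_right):
    """Map the direction sound arrives from to stereo gains with constant
    total power (a simple stand-in for HRTF-based spatialization)."""
    x = float(np.clip(np.dot(arrival_dir, listener_right), -1.0, 1.0))
    theta = 0.25 * np.pi * (x + 1.0)      # x=-1 -> hard left, x=+1 -> hard right
    return np.cos(theta), np.sin(theta)   # (left gain, right gain)

def render_gains(loudness_db, source_forward, emit_dir, arrival_dir, listener_right):
    """Combine a precomputed propagated loudness with source directivity
    (at the emission direction) and listener-side spatialization
    (at the arrival direction)."""
    amp = 10.0 ** (loudness_db / 20.0)    # dB -> linear amplitude
    amp *= cardioid_directivity(source_forward, emit_dir)
    left, right = constant_power_pan(arrival_dir, listener_right)
    return amp * left, amp * right

# Example: a voice in an adjoining room, heard through a shared door.
# A propagation system would supply loudness_db, emit_dir, and arrival_dir;
# the values here are made up for illustration.
facing_door = np.array([1.0, 0.0, 0.0])
print(render_gains(-12.0,                       # propagated loudness (dB)
                   facing_door,                 # source orientation
                   facing_door,                 # sound leaves toward the door
                   np.array([0.0, 0.0, 1.0]),   # arrives from the listener's front
                   np.array([1.0, 0.0, 0.0])))  # listener's right axis
```

Rotating the source away from the door drives the cardioid term toward zero, reproducing the "loudest when facing the door" cue from the abstract, while the arrival direction keeps the voice localized at the door regardless of where the source faces.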
We present the first wave-based interactive system for practical rendering of global sound propagation effects, including diffraction, in complex game and VR scenes with moving directional sources and listeners. Learn about Project Triton. Note: there is no narration in this video, only acoustic loudness and direction variations, as indicated by notes in each scene.
Interactive sound simulation: Rendering immersive soundscapes in games and virtual reality
The audio-visual immersion of game engines and virtual reality/mixed reality has a vast range of applications, from entertainment to productivity. Physical simulation is required in these applications to produce nuanced, believable renderings that respond fluidly to unpredictable user interaction. Sound simulation must stay synchronized with visuals within tight real-time constraints. The wave behavior of audible sound is quite different from that of visible light, requiring fundamentally distinct techniques. The resulting challenges have impeded practical adoption in the past, but these barriers are finally being overcome, with accelerating usage of advanced sound technologies in interactive applications today. In this webinar led by Microsoft Principal Researcher Dr. Nikunj Raghuvanshi, learn the ins and outs of creating practical, high-quality sound simulations. You will get an overview of the…