Directional Sources and Listeners in Interactive Sound Propagation using Reciprocal Wave Field Coding

ACM Transactions on Graphics (SIGGRAPH 2020), Vol. 39(4)

(Equal contribution from first and second listed authors)


Everyday sources like voices and musical instruments radiate in a strongly directional manner. The radiated sound propagates through complex environments, undergoing scattering, diffraction, and occlusion before reaching a directional listener. For instance, someone speaking from an adjoining room will be heard loudest when they speak facing a common door, and their voice will be heard as coming through the door. Interactive sound propagation aims to render such audio cues in virtual worlds to enhance the sense of presence. It is a difficult problem: modeling, extraction, encoding, and rendering must all fit within tight computational budgets. We present the first technique that can render such immersive wave effects in any complex game/VR scene. In particular, we can render directional effects from freely moving and rotating sources with an arbitrary source directivity function. Spatial audio can be rendered binaurally via any head-related transfer function (HRTF) or through speaker panning. We show an integration of our interactive sound propagation system in Unreal Engine™. Our system's CPU performance is insensitive to scene complexity and angular source/listener resolutions.
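To make the runtime idea concrete, here is a minimal C++ sketch of how a precomputed initial source direction and arrival direction could be consumed each frame. This is not the paper's implementation: the cardioid directivity, the `Vec3` type, and all values are illustrative assumptions.

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Hypothetical cardioid directivity: loudest along the source's facing direction.
static float DirectivityGain(const Vec3& sourceForward, const Vec3& radiatedDir) {
    return 0.5f * (1.0f + Dot(sourceForward, radiatedDir));
}

int main() {
    // Values a precomputed parameter field would supply at runtime (made up here).
    Vec3 initialSourceDir = {1.0f, 0.0f, 0.0f};       // radiation direction at the source
    Vec3 arrivalDir       = {0.0f, 0.0f, 1.0f};       // arrival direction at the listener
    Vec3 sourceForward    = {0.7071f, 0.7071f, 0.0f}; // current source orientation

    // The directivity gain scales the direct path; the arrival direction would
    // drive HRTF or speaker-panning spatialization (not modeled in this sketch).
    float gain = DirectivityGain(sourceForward, initialSourceDir);
    std::printf("direct-path gain: %.3f, arrival: (%.1f, %.1f, %.1f)\n",
                gain, arrivalDir.x, arrivalDir.y, arrivalDir.z);
    return 0;
}
```

Because the fields are looked up rather than simulated at runtime, rotating the source only changes `sourceForward`, leaving the precomputed propagation data untouched.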

Summarized parameter fields for «HouseScene». For each listener position (green circle), we precompute and store directional parameter fields that vary over 3D source position. We visualize slices at listener height through these fields. Initial source direction (left) encodes the direction of radiation at each source location for the shortest (earliest arriving, or «direct») path reaching the listener. Similarly, an arrival direction at the listener is encoded for each source location (not shown). Indirect energy transfer between source and listener is encoded in a 6×6 «reflections transfer matrix» (RTM) that aggregates energy around six signed axes in world space at both the source and the listener. RTM images (right) are summarized here by summing over listener directions, representing the source transfer anisotropy seen by an omnidirectional microphone. Our encoding and runtime use the full matrix, as detailed in the paper. Overall, our fully reciprocal encoding captures directionality at both source and listener.
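As a rough illustration of the summarization described in this caption, the following C++ sketch collapses a 6×6 RTM over its listener axes. The storage layout (source axes as rows, listener axes as columns) and the example values are assumptions for illustration, not the paper's definitions.

```cpp
#include <array>
#include <cstdio>

// Assumed layout: rtm[sourceAxis][listenerAxis] over six signed world axes
// (+X, -X, +Y, -Y, +Z, -Z) at each end.
using Rtm = std::array<std::array<float, 6>, 6>;

// Collapse listener directionality: per-source-axis energy as an
// omnidirectional microphone would receive it.
std::array<float, 6> SumOverListenerAxes(const Rtm& rtm) {
    std::array<float, 6> sourceEnergy{};
    for (int s = 0; s < 6; ++s)
        for (int l = 0; l < 6; ++l)
            sourceEnergy[s] += rtm[s][l];
    return sourceEnergy;
}

int main() {
    Rtm rtm{};  // would be decoded from the precomputed parameter fields
    rtm[0][0] = 0.8f;  // illustrative: +X source energy arriving from listener +X
    rtm[0][3] = 0.2f;  // illustrative: +X source energy arriving from listener -Y
    auto e = SumOverListenerAxes(rtm);
    std::printf("+X source-axis energy (omni listener): %.2f\n", e[0]);
    return 0;
}
```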


We present the first wave-based interactive system for practical rendering of global sound propagation effects, including diffraction, in complex game and VR scenes with moving directional sources and listeners. Learn more about Project Triton. (Note: there is no narration in this video; only acoustic loudness and direction vary, as indicated by notes in each scene.)

Interactive sound simulation: Rendering immersive soundscapes in games and virtual reality

The audio-visual immersion of game engines and virtual reality/mixed reality has a vast range of applications, from entertainment to productivity. Physical simulation is required in these applications to produce nuanced, believable renderings that respond fluidly to unpredictable user interaction. Sound phenomena must be simulated in sync with visuals within tight real-time constraints. The wave behavior of audible sound is quite different from that of visible light, requiring fundamentally different techniques. The resulting challenges have impeded practical adoption in the past, but these barriers are finally being overcome, with accelerating usage of advanced sound technologies in interactive applications today. In this webinar led by Microsoft Principal Researcher Dr. Nikunj Raghuvanshi, learn the ins and outs of creating practical, high-quality sound simulations. You will get an overview of the…