Diffraction Kernels for Interactive Sound Propagation in Dynamic Environments

Atul Rungta     Carl Schissler     Nicholas Rewkowski     Ravish Mehra     Dinesh Manocha
University of North Carolina at Chapel Hill & Oculus Research

Abstract

We present a novel method to generate plausible diffraction effects for interactive sound propagation in dynamic scenes. Our approach precomputes a diffraction kernel for each dynamic object in the scene and combines these kernels with interactive ray tracing algorithms at runtime. A diffraction kernel encapsulates the sound interaction behavior of an individual object in the free field, and we present a new source placement algorithm to significantly accelerate the precomputation. Our overall propagation algorithm can handle highly tessellated or smooth objects undergoing rigid motion. We evaluate our algorithm's performance on different scenarios with multiple moving objects and demonstrate its benefits over prior interactive geometric sound propagation methods. We also performed a user study to evaluate the perceived smoothness of the diffracted field and found that the auditory perception with our approach is comparable to that of a wave-based sound propagation method.
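As a loose illustration of the runtime idea described above (not the authors' implementation), the C++ sketch below tabulates a per-object kernel as directional, per-frequency-band gains and applies it when a sound path between a source and the listener is occluded by a dynamic object. Every name, the table layout, the bin resolution, and the band count are hypothetical assumptions for this sketch; the paper's actual kernel representation and its coupling with interactive ray tracing are not shown here.

// A minimal sketch under assumed names and data layout; a real kernel
// would be precomputed offline from the object's free-field response.
#include <algorithm>
#include <array>
#include <cstdio>
#include <vector>

constexpr int kBands = 4;          // frequency bands per entry (assumed)
constexpr int kAzimuthBins = 36;   // 10-degree azimuth bins (assumed)
constexpr int kElevationBins = 18; // 10-degree elevation bins (assumed)
constexpr int kDirs = kAzimuthBins * kElevationBins;

// Hypothetical per-object diffraction kernel: a table of free-field band
// gains from an incoming (source) direction to an outgoing (listener)
// direction, filled during precomputation.
struct DiffractionKernel {
    std::vector<std::array<float, kBands>> table; // [srcDir * kDirs + lisDir]

    static int binIndex(float azimuthDeg, float elevationDeg) {
        int a = static_cast<int>(azimuthDeg / 360.0f * kAzimuthBins) % kAzimuthBins;
        int e = std::min(static_cast<int>(elevationDeg / 180.0f * kElevationBins),
                         kElevationBins - 1);
        return e * kAzimuthBins + a;
    }

    const std::array<float, kBands>& lookup(float srcAz, float srcEl,
                                            float lisAz, float lisEl) const {
        return table[binIndex(srcAz, srcEl) * kDirs + binIndex(lisAz, lisEl)];
    }
};

// Runtime step: when ray tracing finds a source-listener path occluded by
// a dynamic object, attenuate each band by the object's kernel entry.
void applyDiffraction(const DiffractionKernel& kernel,
                      float srcAz, float srcEl, float lisAz, float lisEl,
                      std::array<float, kBands>& bandGains) {
    const std::array<float, kBands>& g = kernel.lookup(srcAz, srcEl, lisAz, lisEl);
    for (int b = 0; b < kBands; ++b) bandGains[b] *= g[b];
}

int main() {
    DiffractionKernel kernel;
    // Placeholder data; real entries would come from an offline simulation.
    kernel.table.assign(static_cast<std::size_t>(kDirs) * kDirs,
                        {0.9f, 0.7f, 0.5f, 0.3f});
    std::array<float, kBands> gains{1.0f, 1.0f, 1.0f, 1.0f};
    applyDiffraction(kernel, 45.0f, 90.0f, 200.0f, 80.0f, gains);
    std::printf("band gains: %.2f %.2f %.2f %.2f\n",
                gains[0], gains[1], gains[2], gains[3]);
    return 0;
}

Because the kernel depends only on the source and listener directions relative to the object, a rigidly moving object needs only a rotation of the query directions at runtime, which is what makes the precomputation reusable in dynamic scenes.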



Atul Rungta, Carl Schissler, Nicholas Rewkowski, Ravish Mehra, and Dinesh Manocha. Diffraction Kernels for Interactive Sound Propagation in Dynamic Environments. IEEE VR 2018 (proceedings published in IEEE TVCG).

Preprint (PDF, 18.8 MB)

Video (WMV, 107 MB)