Acoustic Classification and Optimization for
Multi-Modal Rendering of Real-World Scenes


Carl Schissler, Christian Loftin, Dinesh Manocha

University of North Carolina at Chapel Hill


Abstract

We present a novel algorithm to generate virtual acoustic effects in captured 3D models of real-world scenes for multi-modal augmented reality. We leverage recent advances in 3D scene reconstruction to automatically compute acoustic material properties. Our technique is a two-step procedure that first applies a convolutional neural network (CNN) to estimate acoustic material properties, including the frequency-dependent absorption coefficients used for interactive sound propagation. In the second step, an iterative optimization algorithm adjusts the materials estimated by the CNN until a virtual acoustic simulation converges to measured acoustic impulse responses. We have applied our algorithm to many reconstructed real-world indoor scenes and evaluated its fidelity for augmented reality applications.
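The second step can be viewed as an inverse problem: adjust the material coefficients until the simulated room response matches the measurement. The Python sketch below illustrates that loop under stated assumptions; it is not the paper's implementation. The toy decay model, the function names (simulate_energy_decay, optimize_materials), and the finite-difference gradient descent are hypothetical stand-ins, whereas the paper drives the optimization with its full sound-propagation simulation against measured impulse responses.

```python
import numpy as np

# Hypothetical stand-in for the paper's interactive sound-propagation
# simulator: given per-material, per-band absorption coefficients, return
# a per-band energy decay curve (EDC). The real system traces sound paths
# through the reconstructed 3D scene; this toy model simply maps higher
# absorption to faster exponential decay.
def simulate_energy_decay(absorption, num_samples=100):
    t = np.linspace(0.0, 1.0, num_samples)        # time axis, in seconds
    rates = 60.0 * absorption.mean(axis=0)        # one decay rate per band
    return np.exp(-np.outer(t, rates))            # shape: (samples, bands)

def optimize_materials(absorption, measured_edc, lr=0.005, iters=200, eps=1e-4):
    """Refine CNN-estimated absorption coefficients by finite-difference
    gradient descent on the squared error between the simulated and
    measured energy decay curves."""
    absorption = absorption.copy()
    for _ in range(iters):
        base_err = np.sum((simulate_energy_decay(absorption) - measured_edc) ** 2)
        grad = np.zeros_like(absorption)
        # Finite-difference gradient over each material/band coefficient.
        for idx in np.ndindex(absorption.shape):
            perturbed = absorption.copy()
            perturbed[idx] += eps
            err = np.sum((simulate_energy_decay(perturbed) - measured_edc) ** 2)
            grad[idx] = (err - base_err) / eps
        # Keep coefficients physically valid (fraction of energy absorbed).
        absorption = np.clip(absorption - lr * grad, 0.0, 1.0)
    return absorption

# Toy example: 3 materials x 4 frequency bands, initialized by the CNN stage;
# the "measured" curve here is synthesized from slightly different materials.
cnn_estimate = np.array([[0.10, 0.15, 0.20, 0.25],
                         [0.30, 0.30, 0.35, 0.40],
                         [0.05, 0.05, 0.10, 0.10]])
measured = simulate_energy_decay(cnn_estimate + 0.08)
refined = optimize_materials(cnn_estimate, measured)
```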


Publication


Acoustic Classification and Optimization for Multi-Modal Rendering of Real-World Scenes
IEEE Transactions on Visualization and Computer Graphics, 2017


Video


Download .mp4 (97 MB)