Dynamic Sound Field Synthesis for Speech and Music Optimization


Zhenyu Tang, Nicolas Morales, Dinesh Manocha

University of North Carolina at Chapel Hill, University of Maryland



Abstract

We present a novel acoustic optimization algorithm to synthesize dynamic sound fields in a static scene. Our approach places new active loudspeakers or virtual sources in the scene so that the dynamic sound field in a region satisfies optimization criteria that improve speech and music perception. Using a frequency-domain formulation of sound propagation, we reduce the computation of dynamic sound field synthesis to solving a linear least squares problem, without imposing any constraints on the environment, loudspeaker type, or loudspeaker placement. We demonstrate the performance of our approach on complex indoor scenes in terms of speech and music improvement, and evaluate it with a user study that highlights the perceptual benefits for virtual reality and multimedia applications.
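The abstract reduces sound field synthesis to a linear least squares problem in the frequency domain. The sketch below illustrates that kind of solve, assuming a precomputed complex transfer matrix `G` (from candidate loudspeakers to sample points in the listening region at one frequency) and a target pressure field `p_target`; both arrays are hypothetical placeholders, and this is not the paper's implementation.

```python
import numpy as np

# Hypothetical data: G[i, j] is the complex acoustic transfer function from
# candidate loudspeaker j to sample point i in the listening region at one
# frequency; p_target[i] is the desired complex pressure at that point.
# In practice these would come from a frequency-domain propagation solver.
rng = np.random.default_rng(0)
num_points, num_speakers = 200, 16
G = (rng.standard_normal((num_points, num_speakers))
     + 1j * rng.standard_normal((num_points, num_speakers)))
p_target = rng.standard_normal(num_points) + 1j * rng.standard_normal(num_points)

# Least-squares fit of complex loudspeaker strengths w minimizing ||G w - p_target||_2.
w, residuals, rank, sv = np.linalg.lstsq(G, p_target, rcond=None)

# Synthesized field and relative error over the listening region.
p_synth = G @ w
rel_error = np.linalg.norm(p_synth - p_target) / np.linalg.norm(p_target)
print(f"relative synthesis error: {rel_error:.3f}")
```

In a full system this solve would be repeated per frequency band, and the resulting source strengths `w` drive the added active loudspeakers or virtual sources.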


Publication


Dynamic Sound Field Synthesis for Speech and Music Optimization
To appear in ACMMM 2018

Noise Field Control using Active Sound Propagation and Optimization
To appear in IWAENC 2018

Video


Download .mp4 (103.4 MB)