Rendering environmental voice reverberation for large-scale distributed virtual worlds
Micah Taylor, Nicolas Tsingos, Dinesh Manocha
Sound propagation in massive scenes: our algorithm reduces precomputation and runtime costs, allowing hundreds of sources to be rendered on a single CPU core.
We present a method for coordinating enhanced VoIP communication between many clients in real time. For each client, a unique environmental response is generated based on the client's location in the scene. By efficiently compressing precomputed environmental responses, our system can deliver an appropriate early response for each sound source as well as a late decay for each final output stream.
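The per-client mix described above can be sketched as follows. This is a minimal illustration with hypothetical function names, not the paper's implementation: each source is convolved with its own short early response, the wet signals are summed, and a single late-decay filter is applied once to the combined output stream, so the late-reverberation cost does not grow with the number of sources.

```python
def convolve(signal, impulse):
    """Direct-form convolution of two sample lists."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

def mix_client_output(sources, early_responses, late_decay):
    """Convolve each source with its per-source early response, sum the
    results, then apply one shared late decay to the output stream."""
    wets = [convolve(s, h) for s, h in zip(sources, early_responses)]
    mixed = [0.0] * max(len(w) for w in wets)
    for wet in wets:
        for i, v in enumerate(wet):
            mixed[i] += v
    return convolve(mixed, late_decay)
```

A real-time system would use partitioned FFT convolution rather than this direct form; the structure (per-source early filters, one late filter per output stream) is the point of the sketch.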
- Describe an acoustic signature used to quickly sample large scenes
- Reduce the set of precomputed responses by combining similar samples
- Efficiently compute propagation responses in scenes with thousands of receivers
- Compress thousands of responses using efficient data structures
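One way to combine similar samples is to cluster them under a similarity metric over each sample's first-order signature. The sketch below is illustrative, assuming a signature of (direction, distance, material absorption, diffusion); the weights, threshold, and greedy clustering scheme are assumptions, not the paper's exact formulation.

```python
import math

def signature_distance(a, b, weights=(1.0, 1.0, 1.0, 1.0)):
    """Weighted distance between two acoustic signatures given as
    (direction angle in radians, path distance, material absorption,
    diffusion coefficient)."""
    w_dir, w_dist, w_mat, w_diff = weights
    # Wrap the angular difference into [0, pi].
    d_dir = abs(math.atan2(math.sin(a[0] - b[0]), math.cos(a[0] - b[0])))
    return (w_dir * d_dir
            + w_dist * abs(a[1] - b[1])
            + w_mat * abs(a[2] - b[2])
            + w_diff * abs(a[3] - b[3]))

def merge_samples(samples, threshold):
    """Greedy clustering: each sample joins the first region whose
    representative signature is within the threshold, else it starts
    a new region. Returns (region representatives, per-sample labels)."""
    regions = []
    labels = []
    for s in samples:
        for i, rep in enumerate(regions):
            if signature_distance(s, rep) <= threshold:
                labels.append(i)
                break
        else:
            regions.append(s)
            labels.append(len(regions) - 1)
    return regions, labels
```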
Basic process:
- Form similarity metric
  - Geometric properties, related to the acoustic rendering equation
  - Based on first-order direction, distance, material, and diffusion
- Use metric to segment scene into acoustic regions
  - Run simulation on regions
  - Collect responses
- Store non-zero unique responses
  - Hashed list based on scene location
  - Insert time: avg O(log n)
  - Lookup time: avg O(1)
  - Storage size: avg k O(n), k related to audio energy
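The storage step above can be sketched with a small hash-based store. This is an assumed simplification: all-zero responses are dropped, identical responses are deduplicated so locations share one canonical copy, and lookup by location is average O(1) via a hash table. (The avg O(log n) insert quoted above suggests a tree-based index in the actual system; a plain hash table, as here, gives average O(1) inserts.)

```python
class ResponseStore:
    """Deduplicated store of precomputed responses, keyed by scene location.

    Responses are represented as tuples of samples so they are hashable.
    """

    def __init__(self):
        self._unique = {}       # canonical response -> itself (dedup table)
        self._by_location = {}  # quantized scene location -> canonical response

    def insert(self, location, response):
        # Skip all-zero responses entirely; they need no storage.
        if not any(response):
            return
        # Reuse the canonical copy if an identical response was seen before.
        canonical = self._unique.setdefault(response, response)
        self._by_location[location] = canonical

    def lookup(self, location):
        """Average O(1) lookup of the response for a location (None if absent)."""
        return self._by_location.get(location)

    def unique_count(self):
        return len(self._unique)
```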
Tutorial, comparison, walkthrough
(2014)
(movie 41MB)
Demonstration of several effects.
Diffuse reflection comparison
(2014)
(movie 123MB)
Comparison of results with and without diffuse simulation.
Tutorial and office demo (2012)
(movie 200MB)
Demonstration of several effects.
Comparison with Ground Truth
(movie 1.2MB)