Sound Synthesis and Propagation  Papers YouTube Playlist

In recent years, there has been a renewed interest in sound rendering for interactive applications. Our group has been working on novel algorithms for sound synthesis, as well as geometric and numeric approaches for sound propagation.

Symphony: Real-time Physically-based Sound Synthesis for Large Scale Environments

Nikunj Raghuvanshi and Ming C. Lin

We present an interactive approach for generating realistic physically-based sounds from rigid-body dynamic simulations. We use spring-mass systems to model each object's local deformation and vibration, which we demonstrate to be an adequate approximation for capturing physical effects such as the magnitude of impact forces, the location of impact, and rolling sounds. No assumption is made about the mesh connectivity or topology. Surface meshes used for rigid-body dynamic simulation are utilized for sound simulation without any modifications. We use results from auditory perception and a novel priority-based quality scaling scheme to enable the system to meet variable, stringent time constraints in a real-time application, while ensuring minimal reduction in the perceived sound quality. With this approach, we have observed up to an order of magnitude speed-up compared to an implementation without the acceleration. As a result, we are able to handle moderately complex scenes with up to hundreds of sounding objects at over 100 frames per second (FPS), making this technique well suited for interactive applications like games and virtual environments. Furthermore, we utilize OpenAL and EAX on Creative Sound Blaster Audigy 2 cards for fast hardware-accelerated propagation modeling of the synthesized sound.
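The modal-synthesis core of such a system can be sketched in a few lines: assemble the stiffness matrix of a spring-mass system, diagonalize it, and sum damped sinusoids at the resulting modal frequencies. The sketch below is a minimal illustration under assumed parameters (a 1-D chain of unit masses, ad-hoc stiffness and damping values), not the paper's implementation.

```python
import numpy as np

def modal_sound(n_masses=8, k=4.0e7, damping=8.0, fs=44100, dur=0.5):
    """Synthesize an impulse response of a free 1-D spring-mass chain
    via modal decomposition (illustrative parameters only)."""
    # Stiffness matrix of a chain of unit masses joined by springs.
    K = np.zeros((n_masses, n_masses))
    for i in range(n_masses - 1):
        K[i, i] += k
        K[i + 1, i + 1] += k
        K[i, i + 1] -= k
        K[i + 1, i] -= k
    # Eigen-decomposition gives modal (angular) frequencies and mode shapes.
    eigvals, modes = np.linalg.eigh(K)
    omega = np.sqrt(np.maximum(eigvals, 0.0))
    # An impulse on the last mass projects onto the modal basis.
    gains = modes[-1, :]
    t = np.arange(int(fs * dur)) / fs
    out = np.zeros_like(t)
    for w, g in zip(omega, gains):
        if w > 1e-6:  # skip the rigid-body (zero-frequency) mode
            out += g * np.exp(-damping * t) * np.sin(w * t)
    return out / np.max(np.abs(out))
```

Striking a different mass changes `gains` and hence the timbre, which is how impact location affects the sound.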

Project website...  YouTube Video

Synthesizing Contact Sounds Between Textured Models

Zhimin Ren, Hengchin (Yero) Yeh, and Ming C. Lin

We present a new interaction handling model for physics-based sound synthesis in virtual environments. A new three-level surface representation for describing object shapes, visible surface bumpiness, and microscopic roughness (e.g. friction) is proposed to model surface contacts at varying resolutions for automatically simulating rich, complex contact sounds. This new model can capture various types of surface interaction, including sliding, rolling, and impact with a combination of three levels of spatial resolutions. We demonstrate our method by synthesizing complex, varying sounds in several interactive scenarios in a game-like virtual environment. The three-level interaction model for sound synthesis enhances the perceived coherence between audio and visual cues in virtual reality applications.

Project website...  YouTube Video

Physics-based Liquid Sound Synthesis

William Moss, Hengchin (Yero) Yeh, Ming C. Lin, and Dinesh Manocha

We present a novel approach for synthesizing liquid sounds directly from visual simulations of fluid dynamics. The sound generated by liquid is mainly due to the vibration of resonating bubbles in the medium. Our approach couples physically-based equations for bubble resonance with a real-time shallow-water fluid simulator as well as a hybrid SPH-grid-based simulator to perform automatic sound synthesis. Our system has been effectively demonstrated on several benchmarks.
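The resonance frequency of a single bubble is commonly modeled by the Minnaert formula; the sketch below evaluates it under assumed standard conditions (an air bubble in water at atmospheric pressure) and is only an illustration of the physics such methods build on.

```python
import math

def minnaert_frequency(radius, p0=101325.0, rho=998.0, gamma=1.4):
    """Minnaert resonance frequency (Hz) of a spherical air bubble of
    the given radius (m) in water: f0 = sqrt(3*gamma*p0/rho) / (2*pi*r)."""
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius)

# A 1 mm bubble rings near 3.3 kHz; smaller bubbles ring higher.
```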

Project website...  YouTube Video

Tabletop Ensemble: Touch-Enabled Virtual Percussion Instruments

Zhimin Ren, Ravish Mehra, Jason Coposky, and Ming C. Lin

We present an interactive virtual percussion instrument system, Tabletop Ensemble, that can be used by a group of collaborative users simultaneously to emulate playing music in the real world while providing them with the flexibility of virtual simulations. An optical multi-touch tabletop serves as the input device. A novel touch handling algorithm for such devices is presented to translate users' interactions into percussive control signals appropriate for music playing. These signals activate the proposed sound simulation system for generating realistic user-controlled musical sounds. A fast physically-based sound synthesis technique, modal synthesis, is adopted to enable users to directly produce rich, varying musical tones, as they would with real percussion instruments. In addition, we propose a simple coupling scheme for modulating the synthesized sounds by an accurate numerical acoustic simulator to create believable acoustic effects due to cavities in musical instruments. This paradigm allows creating new virtual percussion instruments of various materials, shapes, and sizes with little overhead. We believe such an interactive, multi-modal system would offer capabilities for expressive music playing, rapid prototyping of virtual instruments, and active exploration of sound effects determined by various physical parameters in a classroom, museum, or other educational setting. Virtual xylophones and drums with various physical properties are shown in the presented system.

Project website... 

AD-Frustum: Adaptive Frustum Tracing for Interactive Sound Propagation

Anish Chandak, Christian Lauterbach, Micah Taylor, Zhimin Ren, and Dinesh Manocha

We present an interactive algorithm to compute sound propagation paths for transmission, specular reflection and edge diffraction in complex scenes. Our formulation uses an adaptive frustum representation that is automatically sub-divided to accurately compute intersections with the scene primitives. We describe a simple and fast algorithm to approximate the visible surface for each frustum and generate new frusta based on specular reflection and edge diffraction. Our approach is applicable to all triangulated models and we demonstrate its performance on architectural and outdoor models with tens or hundreds of thousands of triangles and moving objects. In practice, our algorithm can perform geometric sound propagation in complex scenes at 4-20 frames per second on a multi-core PC.
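The adaptive subdivision at the heart of such a frustum tracer can be sketched as a quadtree split over the frustum's angular extent. The code below is a structural illustration only: the caller-supplied `needs_refinement` predicate is a hypothetical stand-in for the paper's intersection and visibility tests.

```python
def subdivide(frustum, needs_refinement, depth=0, max_depth=4):
    """Recursively split a frustum's 2-D angular extent ((t0,t1),(p0,p1))
    into four children until the visibility test is unambiguous or a
    maximum subdivision depth is reached. Returns the leaf frusta."""
    (t0, t1), (p0, p1) = frustum
    if depth == max_depth or not needs_refinement(frustum):
        return [frustum]
    tm, pm = 0.5 * (t0 + t1), 0.5 * (p0 + p1)
    children = [((t0, tm), (p0, pm)), ((tm, t1), (p0, pm)),
                ((t0, tm), (pm, p1)), ((tm, t1), (pm, p1))]
    out = []
    for child in children:
        out.extend(subdivide(child, needs_refinement, depth + 1, max_depth))
    return out
```

A predicate that always refines yields the full 4^max_depth leaves; one that never refines returns the frustum unchanged, which is what bounds the cost adaptively.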

Paper... (IEEE Visualization, 2008)  YouTube Video

Interactive Sound Propagation in Dynamic Scenes Using Frustum Tracing

Christian Lauterbach, Anish Chandak, Micah Taylor, Zhimin Ren, and Dinesh Manocha

We present a new approach for simulating real-time sound propagation in complex, virtual scenes with dynamic sources and objects. Our approach combines the efficiency of interactive ray tracing with the accuracy of tracing a volumetric representation. We use a four-sided convex frustum and perform clipping and intersection tests using ray packet tracing. A simple and efficient formulation is used to compute secondary frusta and perform hierarchical traversal. We demonstrate the performance of our algorithm in an interactive system for game-like environments and architectural models with tens or hundreds of thousands of triangles. Our algorithm can simulate and render sounds at interactive rates on a high-end PC.

Project website...

Interactive Edge-diffraction for Sound Propagation in Complex Virtual Environments

Micah Taylor, Anish Chandak, Zhimin Ren, Christian Lauterbach, and Dinesh Manocha

We present an algorithm for interactive computation of diffraction paths for geometric-acoustics in complex environments. Our method extends ray-frustum tracing to efficiently compute volumetric regions of the sound field caused by long diffracting edges. We compute accurate diffraction paths from each source to the listener and based on the Uniform Theory of Diffraction, attenuate the input audio. The overall approach is general, can handle dynamic scenes, and can be used to compute diffraction and specular reflection paths with relatively little aliasing. We evaluate the accuracy through comparisons with physically validated geometric simulations. In practice, our edge diffraction algorithm can perform sound propagation at interactive rates in dynamic scenarios on a multi-core PC.

Project website...

An Efficient Time-domain Solver for the Acoustic Wave Equation on Graphics Processors

Ravish Mehra, Nikunj Raghuvanshi, Lauri Savioja, Ming C. Lin, and Dinesh Manocha

We present an efficient algorithm for the time-domain solution of the wave equation for the purpose of room acoustics. The approach assumes that the speed of sound is spatially invariant. Numerical dispersion errors are controlled by computing an adaptive rectangular decomposition of the environment and using analytical solutions within the rectangular partitions based on the Discrete Cosine Transform. Sixth-order finite-difference transmission operators are used to model propagation across partition interfaces. A novel mapping of the complete solver to graphics processors is presented. It is demonstrated that by carefully mapping all components of the algorithm to match the parallel processing capabilities of graphics processors, significant improvement in performance is gained compared to a CPU-based implementation. Due to the near absence of numerical dispersion, this technique is suitable both for computing impulse responses for auralization and for producing animated sound field visualizations. As a result of using graphics processors, a 1-second-long simulation on a scene with an air volume of 7500 cubic meters can be performed up to 1650 Hz in 18 minutes on a desktop computer. The performance of the algorithm is tested on many complex-shaped 3D spaces. The use of graphics processors results in a 25-fold improvement in computation time over a single-threaded CPU-based implementation, while producing numerically identical results. A substantial performance gain over a high-order finite-difference time-domain method is also observed. To the best of the authors' knowledge, this is the fastest time-domain wave-equation solver for modeling the room acoustics of large, complex-shaped 3D scenes that generates broadband results for both auralization and visualization.

Project website...

Fast and Accurate Specular Reflection Computation using Visibility Culling

Anish Chandak, Lakulish Antani, Micah Taylor, and Dinesh Manocha

We present an efficient technique to compute the potentially visible set (PVS) of triangles in a complex 3D scene from a viewpoint. The algorithm computes a conservative PVS at object-space accuracy. Our approach traces a high number of small, volumetric frusta and computes blockers for each frustum using simple intersection tests. In practice, the algorithm can compute the PVS of CAD and scanned models composed of millions of triangles at interactive rates on a multi-core PC. We also use the visibility algorithm to accurately compute the reflection paths from a point sound source. The resulting sound propagation algorithm is 10-20X faster than prior accurate geometric acoustic methods.

Project website...

Fast Geometric Sound Propagation with Finite Edge Diffraction

Lakulish Antani, Anish Chandak, Micah Taylor, and Dinesh Manocha

We present a fast algorithm to perform sound propagation in complex 3D scenes. Our approach computes propagation paths from each source to the listener by taking into account specular reflections and higher-order edge diffractions around finite edges in the scene. We use the well known Biot-Tolstoy-Medwin diffraction model along with efficient algorithms for region-based visibility to cull away primitives and significantly reduce the number of edge pairs that need to be processed. The performance of region-based visibility computation is improved by using a fast occluder selection algorithm that can combine small, connected triangles to form large occluders and perform conservative computations at object-space precision. We show that our approach is able to reduce the number of visible primitives considered for sound propagation by a factor of 2 to 4 for second order edge diffraction as compared to prior propagation algorithms. We demonstrate and analyze its performance on multiple benchmarks.

Project website...

RESound: Interactive Sound Rendering for Dynamic Virtual Environments

Micah Taylor, Anish Chandak, Lakulish Antani, and Dinesh Manocha

We present an interactive algorithm and system (RESound) for sound propagation and rendering in virtual environments and media applications. RESound uses geometric propagation techniques for fast computation of propagation paths from a source to a listener and takes into account specular reflections, diffuse reflections, and edge diffraction. In order to perform fast path computation, we use a unified ray-based representation to efficiently trace discrete rays as well as volumetric ray-frusta. RESound further improves sound quality by using statistical reverberation estimation techniques. We also present an interactive audio rendering algorithm to generate spatialized audio signals. The overall approach can handle dynamic scenes with no restrictions on source, listener, or obstacle motion. Moreover, our algorithm is relatively easy to parallelize on multi-core systems. We demonstrate its performance on complex game-like and architectural environments.

Project website...  YouTube Video

iSound: Interactive GPU-based Sound Auralization in Dynamic Scenes

Micah Taylor, Anish Chandak, Qi Mo, Christian Lauterbach, Carl Schissler, and Dinesh Manocha

We present an auralization algorithm for interactive virtual environments with dynamic sources and objects. Our approach uses a modified image source method that computes propagation paths combining direct transmission, specular reflections and edge diffractions up to a given order. We use a novel GPU-based multi-view raycasting algorithm for parallel computation of image sources along with a simple resampling scheme to reduce visibility errors. In order to reduce the artifacts in audio rendering of dynamic scenes, we use a higher order interpolation scheme that takes into account attenuation, cross-fading and delay. The resulting system can perform auralization at interactive rates on a high-end PC with an NVIDIA GTX 280 GPU with 2-3 orders of reflection and diffraction. Overall, our approach can generate plausible sound rendering for game-like scenes with tens of thousands of triangles. We observe more than an order of magnitude improvement in computing propagation paths over prior techniques.
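A first-order image source is just the source mirrored across a reflecting plane. The sketch below shows that single step in isolation; the GPU multi-view raycasting and path validation of the full system are not modeled here.

```python
import numpy as np

def image_source(source, plane_normal, plane_offset):
    """Mirror a point source across the plane n.x = d to obtain the
    first-order specular image source. Higher orders are built by
    mirroring image sources across further reflectors."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    s = np.asarray(source, dtype=float)
    return s - 2.0 * (np.dot(s, n) - plane_offset) * n

# Reflecting a source at (0, 0, 2) across the floor z = 0 gives (0, 0, -2);
# the specular path to a listener is the straight line from the image source.
```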

Project website...  YouTube Video

GSound: Interactive Sound Propagation and Rendering for Games

Carl Schissler and Dinesh Manocha

We present a sound propagation and rendering system for generating realistic environmental acoustic effects in real time for game-like scenes. The system uses ray tracing to sample triangles that are visible to a listener at an arbitrary depth of reflection. Sound reflection and diffraction paths from each sound source to the listener are then validated using ray-based occlusion queries. Frame-to-frame caching of propagation paths is performed to improve the consistency and accuracy of the output. Furthermore, we present a flexible framework, which takes a small fraction of CPU cycles for time-critical scenarios. To the best of our knowledge, this is the first practical approach that can generate realistic sound and auralization for games on current platforms.

Project website 

Direct-to-Indirect Acoustic Radiance Transfer

Lakulish Antani, Anish Chandak, Micah Taylor, and Dinesh Manocha

We present an efficient algorithm for simulating diffuse reflections of sound in a static scene. Our approach is built on the latest advances in precomputed light transport techniques for visual rendering and uses them to develop an improved acoustic radiance transfer technique. We precompute a direct-to-indirect acoustic transfer operator for the scene, and use it to map direct sound incident on the surfaces of the scene to multi-bounce diffuse indirect sound, which is then gathered at the listener to compute the final impulse response. The algorithm projects the transfer operator into a Haar wavelet basis which allows us to significantly accelerate the computation of higher-order diffuse reflections over prior methods. Our algorithm decouples the transfer operator from the source position so we can efficiently update the acoustic response at the listener when the source moves. We highlight its performance on various benchmarks and observe significant speedups over prior methods based on acoustic radiance transfer.
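Projecting an operator into a Haar wavelet basis and discarding small coefficients can be illustrated as follows. This is a generic sketch of wavelet compression of a matrix, assuming power-of-two dimensions and an ad-hoc thresholding rule, not the paper's transfer-operator pipeline.

```python
import numpy as np

def haar_1d(x):
    """Full orthonormal 1-D Haar transform of a power-of-two-length signal."""
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    while n > 1:
        half = n // 2
        avg = (x[:n:2] + x[1:n:2]) / np.sqrt(2.0)
        dif = (x[:n:2] - x[1:n:2]) / np.sqrt(2.0)
        x[:half], x[half:n] = avg, dif
        n = half
    return x

def compress_operator(T, keep=0.25):
    """Haar-transform each row of an operator and zero all but roughly the
    largest `keep` fraction of coefficients by magnitude."""
    W = np.apply_along_axis(haar_1d, 1, np.asarray(T, dtype=float))
    thresh = np.quantile(np.abs(W), 1.0 - keep)
    W[np.abs(W) < thresh] = 0.0
    return W
```

Because the transform is orthonormal it preserves energy, so applying the sparsified operator in the wavelet domain approximates the dense product at a fraction of the cost.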

Project website  Video

Interactive Sound Propagation using Compact Acoustic Transfer Operators

Lakulish Antani, Anish Chandak, Lauri Savioja, and Dinesh Manocha

We present an interactive sound propagation algorithm that can compute high orders of specular and diffuse reflections as well as edge diffractions in response to moving sound sources and a moving listener. Our formulation is based on a precomputed acoustic transfer operator, which we compactly represent using the Karhunen-Loeve transform. At runtime, we use a two-pass approach that combines acoustic radiance transfer with interactive ray tracing to compute early reflections as well as higher-order reflections and late reverberation. The overall approach allows accuracy to be traded off for improved performance at run-time, and has a low memory overhead. We demonstrate the performance of our algorithm on different scenarios, including an integration of our algorithm with Valve’s Source game engine.

Project website  Video

Aural Proxies and Directionally-Varying Reverberation for Interactive Sound Propagation in Virtual Environments

Lakulish Antani and Dinesh Manocha

We present an efficient algorithm to compute spatially-varying, direction-dependent artificial reverberation and reflection filters in large dynamic scenes for interactive sound propagation in virtual environments and video games. Our approach performs Monte Carlo integration of local visibility and depth functions to compute directionally-varying reverberation effects. The algorithm also uses a dynamically-generated rectangular aural proxy to efficiently model 2-4 orders of early reflections. These two techniques are combined to generate reflection and reverberation filters which vary with the direction of incidence at the listener. This combination leads to better sound source localization and immersion. The overall algorithm is efficient, easy to implement, and can handle moving sound sources, listeners, and dynamic scenes, with minimal storage overhead. We have integrated our approach with the audio rendering pipeline in Valve's Source game engine, and use it to generate realistic directional sound propagation effects in indoor and outdoor scenes in real-time. We demonstrate, through quantitative comparisons as well as evaluations, that our approach leads to enhanced, immersive multi-modal interaction.
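The idea of estimating reverberation from sampled depth can be sketched with a classical formula: average the scene depth over random directions and feed the resulting mean free path into the Eyring reverberation-time estimate. In the sketch below, `depth_fn` is a hypothetical stand-in for a scene ray cast, and the Eyring formula is used as a simple illustration rather than the paper's directionally-varying filters.

```python
import math
import random

def estimate_rt60(depth_fn, absorption=0.2, c=343.0, n_rays=1024, seed=7):
    """Monte-Carlo reverberation estimate: sample scene depth in random
    directions, take the mean free path, and apply the Eyring formula
    for the time the reverberant field needs to decay by 60 dB."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rays):
        az = rng.uniform(0.0, 2.0 * math.pi)        # azimuth
        el = math.asin(rng.uniform(-1.0, 1.0))      # uniform on the sphere
        total += depth_fn(az, el)
    mean_free_path = total / n_rays
    # Eyring: reflections occur every mfp/c seconds, each losing a
    # fraction `absorption` of the energy; 13.8 = ln(10^6) / ln(e).
    return 13.8 * mean_free_path / (-c * math.log(1.0 - absorption))
```

For a hypothetical room with a constant 10 m depth in every direction and 20% absorption, this gives roughly 1.8 seconds of reverberation.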

Project website  Video

High-Order Diffraction and Diffuse Reflections for Interactive Sound Propagation in Large Environments

Carl Schissler, Ravish Mehra, Dinesh Manocha

We present novel algorithms for modeling interactive diffuse reflections and higher-order diffraction in large-scale virtual environments. Our formulation is based on ray-based sound propagation and is directly applicable to complex geometric datasets. We use an incremental approach that combines radiosity and path tracing techniques to iteratively compute diffuse reflections. We also present algorithms for wavelength-dependent simplification and visibility graph computation to accelerate higher-order diffraction at runtime. The overall system can generate plausible sound effects at interactive rates in large, dynamic scenes that have multiple sound sources. We highlight the performance in complex indoor and outdoor environments and observe an order of magnitude performance improvement over previous methods.

Project website  Video

Rendering environmental voice reverberation for large-scale distributed virtual worlds

Micah Taylor, Nicolas Tsingos, Dinesh Manocha

We present an algorithm that can render environmental audio effects for a large number of concurrent voice users immersed in a large distributed virtual world. Our approach uses an offline step, efficiently computing acoustic similarity measures based on average path length, reflection direction and diffusion throughout the environment. The similarity measures are used to adaptively decompose the scene into acoustic regions. Sound propagation simulation is performed on the acoustic regions; the resulting acoustic response data can be used efficiently at runtime to enable reverberation effects. We show that adaptive sampling based on our similarity metric correlates well with errors in perceptual acoustic metrics, contrary to naive subsampling. We demonstrate real-time, plausible sound rendering of a large number of voice streams in virtual environments encompassing areas of tens of square kilometers at a fraction of the authoring and memory cost of previous acoustic precomputation approaches.

Project website  Video

Efficient Light and Sound Propagation in Refractive Media with an Analytic Ray Curve Tracer

Qi Mo, Hengchin (Yero) Yeh, Ming C. Lin, and Dinesh Manocha

Refractive media, in which light and sound propagate along curved paths, are ubiquitous in the natural world. We present algorithms that achieve efficient and scalable propagation computation for fully general media profiles and complex scene configurations. Our method is based on geometric ray models, but we trace analytic ray curves, derived from locally coherent media profiles, as primitives. To achieve high performance in ray-curve tracing, we design an explicit cell data structure and algorithm for static scenes, and an implicit cell technique for dynamic scenes and moving media. We achieve two orders of magnitude speedup in path computation over existing methods. Furthermore, we present an efficient acoustic pressure field computation that matches the path computation in performance. Our algorithms enable propagation simulation of large outdoor scenes that are not computationally practical with previous methods. Various components of our algorithms are also complementary to other propagation methods and can be extended in multiple ways.
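For a linear sound-speed profile c(z) = c0 + g*z, Snell's invariant cos(theta)/c = const makes each ray segment an exact circular arc, which is the kind of analytic ray-curve primitive such tracers exploit. The sketch below steps along one such arc; the parametrization, sign conventions, and profile constants are assumptions for illustration, not the paper's formulation.

```python
import math

def ray_arc(z0, theta0, c0=340.0, g=0.1, n_pts=16, arc_len=100.0):
    """Sample points along the analytic circular-arc ray path in a medium
    with a linear speed profile c(z) = c0 + g*z (g != 0). theta0 is the
    launch elevation from horizontal; the ray bends toward slower medium
    (downward here, since speed increases with height)."""
    c_start = c0 + g * z0
    R = c_start / (g * math.cos(theta0))  # arc radius from Snell invariant
    pts = []
    for i in range(n_pts + 1):
        phi = (arc_len / R) * i / n_pts   # angle swept along the arc
        th = theta0 - phi                 # direction angle turns downward
        x = R * (math.sin(theta0) - math.sin(th))
        z = z0 + R * (math.cos(th) - math.cos(theta0))
        pts.append((x, z))
    return pts
```

A single arc replaces the many short straight segments a conventional ray marcher would need through the same gradient, which is where the speedup of curve primitives comes from.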

Project website 

Efficient Numerical Simulation of Sound Propagation

Nikunj Raghuvanshi, Rahul Narain, Nico Galoppo, and Ming C. Lin

Accurate sound rendering can add significant realism to complement visual display, as well as facilitate acoustic predictions for many real-life applications. In this paper, we present a technique which relies on an adaptive rectangular decomposition of a three-dimensional scene to enable efficient and accurate simulation of sound propagation in complex virtual environments. It exploits the known analytical solution of the Wave Equation in rectangular domains and is thus able to achieve at least an order of magnitude performance gain compared to a standard FDTD implementation, while also being memory-efficient. Consequently, we are able to perform accurate numerical acoustic simulation on large, complex scenes in the kilohertz range which, to the best of our knowledge, has not been previously attempted on a desktop computer. This work offers accelerated computation of accurate sound propagation for large scenes on commodity hardware, enabling realistic auditory display for virtual environments and accurate acoustics analysis for architectural design.
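Inside a rectangular partition with rigid walls, the wave equation can be advanced exactly per cosine mode, which is the analytical solution the rectangular decomposition exploits. The sketch below shows that per-mode update rule with assumed partition parameters; interface handling, forcing, and the spatial DCT itself are omitted.

```python
import numpy as np

def ard_step(p_prev, p_curr, omega, dt):
    """One exact time step of the wave equation inside a rectangular
    partition, advanced per mode in the cosine (DCT) basis:
        p_i(t + dt) = 2 * cos(omega_i * dt) * p_i(t) - p_i(t - dt)
    This recurrence is exact for any dt, hence the near-zero dispersion."""
    return 2.0 * np.cos(omega * dt) * p_curr - p_prev

# Modal angular frequencies of a 1-D partition of length L with rigid ends
# (illustrative parameters): omega_i = c * pi * i / L.
L, c, n_modes, dt = 10.0, 343.0, 64, 1.0e-4
omega = c * np.pi * np.arange(n_modes) / L
```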

Project website...  YouTube Video

Precomputed Wave Simulation for Real-Time Sound Propagation of Dynamic Sources in Complex Scenes

Nikunj Raghuvanshi, John Snyder, Ravish Mehra, Ming C. Lin, and N. Govindaraju

We present a method for real-time sound propagation that captures all wave effects, including diffraction and reverberation, for multiple moving sources and a moving listener in a complex, static 3D scene. It performs an offline numerical simulation over the scene and then applies a novel technique to extract and compactly encode the perceptually salient information in the resulting acoustic responses. Each response is automatically broken into two phases: early reflections (ER) and late reverberation (LR), via a threshold on the temporal density of arriving wavefronts. The LR is simulated and stored in the frequency domain, once per room in the scene. The ER accounts for more detailed spatial variation, by recording a set of peak delays/amplitudes in the time domain and a residual frequency response sampled in octave frequency bands, at each source/receiver point pair in a 5D grid. An efficient run-time uses this precomputed representation to perform binaural sound rendering based on frequency-domain convolution. Our system demonstrates realistic, wave-based acoustic effects in real time, including diffraction low-passing behind obstructions, sound focusing, hollow reverberation in empty rooms, sound diffusion in fully-furnished rooms, and realistic late reverberation.

Project website  Video

Wave-Based Sound Propagation in Large Open Scenes using an Equivalent Source Formulation

Ravish Mehra, Nikunj Raghuvanshi, Lakulish Antani, Anish Chandak, Sean Curtis, and Dinesh Manocha

We present a novel approach for wave-based sound propagation suitable for large, open spaces spanning hundreds of meters, with a small memory footprint. The scene is decomposed into disjoint rigid objects. The free-field acoustic behavior of each object is captured by a compact per-object transfer function relating the amplitudes of a set of incoming equivalent sources to outgoing equivalent sources. Pairwise acoustic interactions between objects are computed analytically, yielding compact inter-object transfer functions. The global sound field accounting for all orders of interaction is computed using these transfer functions. The runtime system uses fast summation over the outgoing equivalent source amplitudes for all objects to auralize the sound field at a moving listener in real-time. We demonstrate realistic acoustic effects such as diffraction, low-passed sound behind obstructions, focusing, scattering, high-order reflections, and echoes, on a variety of scenes.

Project website  Video

Wave-Ray Coupling for Interactive Sound Propagation in Large Complex Scenes

Hengchin Yeh, Ravish Mehra, Zhimin Ren, Lakulish Antani, Ming C. Lin, and Dinesh Manocha

We present a novel hybrid approach that couples geometric and numerical acoustic techniques for interactive sound propagation in complex environments. Our formulation is based on a combination of spatial and frequency decomposition of the sound field. We use numerical wave-based techniques to precompute the pressure field in the near-object regions and geometric propagation techniques in the far-field regions to model sound propagation. We present a novel two-way pressure coupling technique at the interface of near-object and far-field regions. At runtime, the impulse response at the listener position is computed at interactive rates based on the stored pressure field and interpolation techniques. Our system is able to simulate high-fidelity acoustic effects such as diffraction, scattering, low-pass filtering behind obstructions, reverberation, and high-order reflections in large, complex indoor and outdoor environments, including an integration with the Half-Life 2 game engine. The pressure computation requires orders of magnitude less memory than standard wave-based numerical techniques.

Project website  Video

Source and Listener Directivity for Interactive Wave-based Sound Propagation

Ravish Mehra, Lakulish Antani, Sujeong Kim, and Dinesh Manocha

We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
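The source-directivity decomposition can be illustrated with a low-order real spherical-harmonic projection: the sketch below hand-codes the basis up to order 1 and projects a directivity function by Monte-Carlo integration over the sphere. The basis normalization and the example cardioid are assumptions for illustration, not the paper's full pipeline.

```python
import numpy as np

def real_sh_l1(dirs):
    """Real spherical-harmonic basis up to order 1 (Y00, Y1-1, Y10, Y11),
    evaluated at unit direction vectors, shape (N, 3) -> (N, 4)."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    c0 = 0.5 / np.sqrt(np.pi)
    c1 = np.sqrt(3.0 / (4.0 * np.pi))
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

def sh_project(directivity, n=20000, seed=1):
    """Project directivity(dirs) -> gains onto the SH basis by Monte-Carlo
    integration: coeff_k = integral over sphere of f * Y_k d(omega)."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # uniform on the sphere
    Y = real_sh_l1(v)
    f = directivity(v)
    return (4.0 * np.pi / n) * (Y.T @ f)  # mean of f*Y times sphere area
```

A cardioid pointing along +z, f(d) = 0.5*(1 + d.z), projects almost entirely onto Y00 and Y10, which is why a few SH terms suffice for smooth directivities.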

Project website  Video

Acoustic Pulse Propagation in an Urban Environment Using a Three-Dimensional Numerical Simulation

Ravish Mehra, Nikunj Raghuvanshi, Anish Chandak, Donald G. Albert, D. Keith Wilson, and Dinesh Manocha

Acoustic pulse propagation in outdoor urban environments is a physically complex phenomenon due to the predominance of reflection, diffraction, and scattering. This is especially true in non-line-of-sight cases, where edge diffraction and high-order scattering are major components of acoustic energy transport. Past work by Albert and Liu [J. Acoust. Soc. Am. 127, 1335–1346 (2010)] has shown that many of these effects can be captured using a two-dimensional finite-difference time-domain method, which was compared to the measured data recorded in an army training village. In this paper, a full three-dimensional analysis of acoustic pulse propagation is presented. This analysis is enabled by the adaptive rectangular decomposition method by Raghuvanshi, Narain and Lin [IEEE Trans. Visual. Comput. Graphics 15, 789–801 (2009)], which models sound propagation in the same scene in three dimensions. The simulation is run at a much higher usable bandwidth (nearly 450 Hz) and took only a few minutes on a desktop computer. It is shown that a three-dimensional solution provides better agreement with measured data than two-dimensional modeling, especially in cases where propagation over rooftops is important. In general, the predicted acoustic responses match well with measured results for the source/sensor locations.

Project website 

WAVE: Interactive Wave-based Sound Propagation for Virtual Environments

Ravish Mehra, Atul Rungta, Abhinav Golas, Ming Lin, and Dinesh Manocha

We present an interactive wave-based sound propagation system that generates accurate, realistic sound in virtual environments for dynamic (moving) sources and listeners. We propose a novel algorithm to accurately solve the wave equation for dynamic sources and listeners using a combination of precomputation techniques and GPU-based runtime evaluation. Our system can handle large environments typically used in VR applications, compute spatial sound corresponding to the listener's motion (including head tracking) and handle both omnidirectional and directional sources, all at interactive rates. As compared to prior wave-based techniques applied to large scenes with moving sources, we observe significant improvement in runtime memory. The overall sound propagation and rendering system has been integrated with the Half-Life 2 game engine, Oculus Rift head-mounted display, and the Xbox game controller to enable users to experience high-quality acoustic effects (e.g., amplification, diffraction low-passing, high-order scattering) and spatial audio, based on their interactions in the VR application. We provide the results of preliminary user evaluations, conducted to study the impact of wave-based acoustic effects and spatial audio on users' navigation performance in virtual environments.

Project website 

Parallel Wave-Based Sound Propagation for Distributed Memory Architectures

Nicolas Morales, Vivek Chavda, Ravish Mehra, Dinesh Manocha

We present a parallel time-domain simulator to solve the acoustic wave equation for large acoustic spaces on a distributed memory architecture.

Our formulation is based on the adaptive rectangular decomposition (ARD) algorithm, a low dispersion method which performs acoustic wave propagation in three dimensions for homogeneous media. We propose an efficient parallelization of the different stages of the ARD pipeline; using a novel load balancing scheme and a hypergraph partitioning scheme to reduce communication cost, we achieve scalable performance on distributed memory architectures.

Our resulting parallel algorithm makes it possible to compute the sound pressure field for high frequencies in large environments that are thousands of cubic meters in volume. We highlight the performance of our system on large clusters with 16000 cores on homogeneous indoor and outdoor benchmarks up to 10 kHz. To the best of our knowledge, this is the first time-domain parallel acoustic wave solver that can handle such large domains and frequencies.

Project website 

Principal Investigators

Research Sponsors

Current Members

Past Members

  • Anish Chandak
  • Nico Galoppo
  • Christian Lauterbach
  • William Moss
  • Rahul Narain
  • Nikunj Raghuvanshi

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.