University of North Carolina, Chapel Hill



We present novel approaches for creating user-centric social experiences in virtual environments populated with both user-controlled avatars and intelligent virtual agents. We employ a data-driven method to generate personalized avatars that resemble the human subject in both appearance and motion, using inexpensive commodity hardware. We also present a multi-agent simulation framework that can generate plausible behaviors and full-body motion for tens of agents at interactive rates. Agents are capable of finding collision-free paths in complex environments and of interacting with the avatars through natural language processing and generation, as well as non-verbal behaviors such as gazing, gesturing, and facial expressions. We have integrated our formulation with commercial HMDs to allow the user to interact with the virtual agents, creating an immersive experience.
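The collision-free path finding mentioned above can be illustrated with a minimal sketch. This is a hypothetical stand-in, not the framework's planner: a breadth-first search over a small 2D occupancy grid takes the place of navigation in continuous, complex environments:

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search on a 2D occupancy grid (1 = obstacle).
    Returns a list of (row, col) cells from start to goal, or None
    if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}          # visited set + parent pointers
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            # Reconstruct the path by walking parent pointers backwards.
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cur
                frontier.append((nr, nc))
    return None
```

Because BFS expands cells in order of hop count, the returned path is shortest in grid steps; a real crowd framework would instead plan in continuous space and smooth the result.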


Motion Recognition of Self & Others on Realistic 3D Avatars

Current 3D capture and modeling technology can rapidly generate highly photo-realistic 3D avatars of human subjects. However, while the avatars look like their human counterparts, their movements often fail to mimic the subjects' own, owing to persistent challenges in accurate motion capture and retargeting. A better understanding of the factors that influence the perception of biological motion would be valuable for creating virtual avatars that capture the essence of their human subjects. To investigate these issues, we captured 22 subjects walking in an open space. We then evaluated factors that affect recognition of the motion of self and others, rendered on photo-realistic 3D virtual avatars. Our studies provide several interesting insights into motion recognition on photo-realistic avatars. In particular, we found that virtual avatars lead to an increase in self-recognition compared to point-light displays.

Narang, S., Best, A., Feng, A., Kang, S., Manocha, D., & Shapiro, A. (2017, May). Motion recognition of self and others on realistic 3D avatars. Computer Animation and Virtual Worlds, 28(3-4). Wiley Online Library.

PDF   Bibtex   Video: (MP4, 3.2 MB) 

PedVR: Simulating Gaze-Based Interactions between a Real User and Virtual Crowds

We present PedVR, a novel interactive approach to generate plausible behaviors for a large number of virtual humans and to enable natural interaction between the real user and virtual agents. Our formulation is based on a coupled approach that combines a 2D multi-agent navigation algorithm with 3D human motion synthesis. The coupling produces plausible movement of virtual agents and generates gazing behaviors, which considerably increase believability. We have integrated our formulation with the DK-2 HMD and demonstrate the benefits of our crowd simulation algorithm over prior decoupled approaches. Our user evaluation suggests that the combination of the coupled method and gazing behavior substantially increases behavioral plausibility.

Narang, S., Best, A., Randhavane, T., Shapiro, A., & Manocha, D. (2016, November). PedVR: simulating gaze-based interactions between a real user and virtual crowds. In Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology (pp. 91-100). ACM.

PDF   Bibtex   Video: (MP4, 13.6 MB) 
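The coupled idea behind PedVR can be sketched in simplified form. The functions below are illustrative stand-ins, not the paper's implementation: a single goal-directed 2D steering step takes the place of the full multi-agent navigation algorithm, and a field-of-view test decides when an agent should direct its gaze at the user:

```python
import math

def navigate(pos, goal, speed=1.3, dt=0.1):
    """One step of goal-directed 2D steering (a placeholder for a
    full multi-agent navigation algorithm with collision avoidance).
    Moves pos toward goal at the given walking speed for dt seconds."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-6:
        return pos
    step = min(speed * dt, dist)   # do not overshoot the goal
    return (pos[0] + step * dx / dist, pos[1] + step * dy / dist)

def gaze_target(agent_pos, user_pos, fov_deg=120.0, heading=(1.0, 0.0)):
    """Return the user's position as a gaze target if the user falls
    inside the agent's field of view, else None (agent looks ahead)."""
    dx, dy = user_pos[0] - agent_pos[0], user_pos[1] - agent_pos[1]
    norm = math.hypot(dx, dy)
    if norm < 1e-6:
        return None
    cos_angle = (dx * heading[0] + dy * heading[1]) / norm
    if cos_angle >= math.cos(math.radians(fov_deg / 2)):
        return user_pos
    return None
```

In a coupled system, the 2D velocity from the navigation layer and the gaze target would both be handed to the 3D motion-synthesis layer, which produces the final full-body and head motion each frame.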

F2FCrowds: Planning Agent Movements to Enable Face-to-Face Interactions

We present an approach for multi-agent navigation that facilitates face-to-face interaction in virtual crowds. We describe a model of approach behavior for virtual agents that includes a novel interaction velocity prediction (IVP) algorithm. This algorithm is combined with human body motion synthesis constraints and facial actions to improve the behavioral realism of virtual agents. We combine these techniques with full-body crowd simulation and evaluate their benefits by conducting a user study using immersive hardware. Our results indicate that techniques enabling face-to-face interactions can improve the user's sense of presence. The virtual agents using these algorithms also appear more responsive and are able to elicit more reactions from the users.

Randhavane, T., Bera, A., & Manocha, D. (2017). F2FCrowds: planning agent movements to enable face-to-face interactions. Technical Report, UNC Chapel Hill.
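The interaction velocity prediction (IVP) idea can be illustrated with a simplified intercept computation. This is a hypothetical sketch, not the paper's algorithm: it assumes the user moves with constant velocity and the agent moves at a fixed speed, and solves for where the two can meet for a face-to-face approach:

```python
import math

def predict_interaction_point(agent_pos, agent_speed, user_pos, user_vel):
    """Predict where an agent moving at agent_speed can intercept a user
    moving with constant velocity user_vel.
    Solves |user_pos + user_vel*t - agent_pos| = agent_speed*t for the
    earliest t > 0; returns the meeting point, or None if no intercept
    exists (e.g. the user is faster and moving away)."""
    rx, ry = user_pos[0] - agent_pos[0], user_pos[1] - agent_pos[1]
    vx, vy = user_vel
    # Squaring both sides yields a quadratic a*t^2 + b*t + c = 0.
    a = vx * vx + vy * vy - agent_speed * agent_speed
    b = 2.0 * (rx * vx + ry * vy)
    c = rx * rx + ry * ry
    if abs(a) < 1e-9:                  # equal speeds: equation is linear
        if abs(b) < 1e-9:
            return None
        t = -c / b
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0:
            return None
        roots = [(-b - math.sqrt(disc)) / (2.0 * a),
                 (-b + math.sqrt(disc)) / (2.0 * a)]
        valid = [t for t in roots if t > 0]
        if not valid:
            return None
        t = min(valid)                 # earliest feasible meeting time
    if t <= 0:
        return None
    return (user_pos[0] + vx * t, user_pos[1] + vy * t)
```

The agent can then steer toward the predicted point rather than the user's current position, which is what makes an approach read as intentional rather than as a chase.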