University of North Carolina, Chapel Hill



We present novel approaches for creating user-centric social experiences in virtual environments populated with both user-controlled avatars and intelligent virtual agents. We propose algorithms that increase the motion and behavioral realism of the virtual agents, creating immersive virtual experiences. Agents can find collision-free paths in complex environments and interact with avatars through natural language processing and generation, as well as non-verbal behaviors such as gazing, gesturing, and facial expressions. We also present a multi-agent simulation framework that can generate plausible behaviors and full-body motion for hundreds of agents at interactive rates.
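Collision-free navigation for many agents is typically built on local, velocity-space collision avoidance. As a rough illustration of the idea (not the group's actual algorithm, whose details are in the papers below; all names and parameters here are hypothetical), here is a minimal time-to-collision-based velocity sampler for disc agents in the plane:

```python
import math

def time_to_collision(p, v, q, u, r):
    """Earliest time two discs with combined radius r touch, else inf."""
    wx, wy = q[0] - p[0], q[1] - p[1]      # relative position
    dx, dy = v[0] - u[0], v[1] - u[1]      # relative velocity
    c = wx * wx + wy * wy - r * r
    if c < 0.0:
        return 0.0                          # already overlapping
    a = dx * dx + dy * dy
    b = wx * dx + wy * dy                   # positive when closing in
    if a < 1e-9 or b <= 0.0:
        return math.inf                     # not approaching
    disc = b * b - a * c
    if disc < 0.0:
        return math.inf                     # trajectories miss each other
    return (b - math.sqrt(disc)) / a

def choose_velocity(pos, pref_vel, neighbors, radius=0.5,
                    horizon=3.0, samples=16, w_avoid=2.0):
    """Sample velocities on a circle at the preferred speed and pick the
    one that best trades deviation from pref_vel against collision risk."""
    speed = math.hypot(*pref_vel)
    candidates = [pref_vel] + [
        (speed * math.cos(2.0 * math.pi * k / samples),
         speed * math.sin(2.0 * math.pi * k / samples))
        for k in range(samples)]
    best, best_cost = pref_vel, math.inf
    for cand in candidates:
        # risk is driven by the earliest predicted collision with any neighbor
        ttc = min((time_to_collision(pos, cand, q, u, 2.0 * radius)
                   for q, u in neighbors), default=math.inf)
        deviation = math.hypot(cand[0] - pref_vel[0], cand[1] - pref_vel[1])
        cost = deviation + w_avoid * max(0.0, horizon - min(ttc, horizon))
        if cost < best_cost:
            best, best_cost = cand, cost
    return best
```

A head-on encounter illustrates the behavior: an agent walking straight toward an approaching neighbor sidesteps rather than keeping its preferred velocity, because candidate velocities with an imminent time to collision are penalized against those that merely deviate from the goal direction.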


Recent projects include:

Generating Virtual Avatars with Personalized Walking Gaits

We present a novel algorithm for automatically synthesizing personalized walking gaits for a human user from noisy motion capture data. The overall approach is robust and can generate personalized gaits with little or no artistic intervention using commodity sensors.

Narang, S., Best, A., Shapiro, A., & Manocha, D. (2017, October). Generating Virtual Avatars with Personalized Walking Gaits using Commodity Hardware. ACM Multimedia, Proceedings of Thematic Workshops (to appear).

PDF   Video: (MP4, 28.6 MB) 

Motion Recognition of Self & Others on Realistic 3D Avatars

We evaluated factors that affect recognition of the motion of self and others, rendered on photo-realistic 3D virtual avatars. Our studies provide several interesting insights into motion recognition on photo-realistic avatars of the subjects. In particular, we found that virtual avatars increase self-recognition compared to point-light displays.

Narang, S., Best, A., Feng, A., Kang, S., Manocha, D., & Shapiro, A. (2017, May). Motion recognition of self and others on realistic 3D avatars. Computer Animation and Virtual Worlds, 28(3-4). Wiley Online Library

PDF   Bibtex   Video: (MP4, 3.2 MB) 

PedVR: Simulating Gaze-Based Interactions between a Real User and Virtual Crowds

We present a novel interactive approach, PedVR, to generate plausible behaviors for a large number of virtual humans and to enable natural gaze-based interactions between the real user and virtual agents. Our user evaluation suggests that the combination of our collision avoidance method and gazing behavior can considerably increase the behavioral plausibility of the simulation.

Narang, S., Best, A., Randhavane, T., Shapiro, A., & Manocha, D. (2016, November). PedVR: simulating gaze-based interactions between a real user and virtual crowds. In Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology (pp. 91-100). ACM.

PDF   Bibtex   Video: (MP4, 13.6 MB) 
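The gaze behavior can be pictured as a simple attention test: an agent turns its head toward the user only when the user is near enough and inside its field of view. A minimal 2-D sketch, with hypothetical names and thresholds rather than the PedVR implementation:

```python
import math

def gaze_yaw(agent_pos, agent_heading, user_pos,
             max_dist=5.0, fov_deg=120.0, max_head_yaw_deg=70.0):
    """Head yaw (radians, relative to the body heading) needed to gaze at
    the user, or None when the user is outside the attention cone."""
    dx, dy = user_pos[0] - agent_pos[0], user_pos[1] - agent_pos[1]
    if math.hypot(dx, dy) > max_dist:
        return None                         # too far away to engage
    # signed angle from the agent's facing direction to the user, in [-pi, pi)
    yaw = (math.atan2(dy, dx) - agent_heading + math.pi) % (2.0 * math.pi) - math.pi
    if abs(yaw) > math.radians(fov_deg) / 2.0:
        return None                         # user is behind the agent
    limit = math.radians(max_head_yaw_deg)  # clamp to head articulation limits
    return max(-limit, min(limit, yaw))
```

Clamping the yaw to an articulation limit keeps head motion plausible; in a full system, offsets beyond that limit would trigger a torso or body turn instead.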

F2FCrowds: Planning Agent Movements to Enable Face-to-Face Interactions

We present an approach for multi-agent navigation that facilitates face-to-face interaction in virtual crowds, based on a novel interaction velocity prediction (IVP) algorithm. Our user evaluation indicates that such techniques for enabling face-to-face interactions can improve the sense of presence felt by the user. The virtual agents using these algorithms also appear more responsive and are able to elicit more reactions from users.

Randhavane, T., Bera, A., & Manocha, D. (2017). F2FCrowds: Planning Agent Movements to Enable Face-to-Face Interactions. Technical Report, UNC Chapel Hill (to appear in Presence).


GAMMA Research Group
UNC Dept. of Computer Science