University of North Carolina, Chapel Hill
We present novel approaches for creating user-centric social experiences in virtual environments populated with both user-controlled avatars and intelligent virtual agents. We propose algorithms to increase the motion and behavioral realism of the virtual agents, thus creating immersive virtual experiences. Agents are capable of finding collision-free paths in complex environments and of interacting with avatars using natural language processing and generation, as well as non-verbal behaviors such as gazing, gesturing, and facial expressions. We also present a multi-agent simulation framework that can generate plausible behaviors and full-body motion for hundreds of agents at interactive rates.
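As a rough illustration of the kind of per-frame update such a framework performs, the sketch below steers an agent toward its goal while deflecting it away from nearby agents. The function name and the simple repulsion rule are hypothetical simplifications for illustration, not the algorithms used in the projects below.

```python
import math

def step_agent(pos, goal, neighbors, speed=1.4, radius=0.5, dt=0.05):
    """Advance one agent a single frame: steer toward its goal while
    pushing away from nearby agents (a crude stand-in for the full
    collision-avoidance algorithms described below)."""
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(gx, gy)
    if dist < 1e-6:
        return pos  # already at the goal
    # Preferred velocity: straight toward the goal at walking speed.
    vx, vy = gx / dist * speed, gy / dist * speed
    # Add a repulsive term for each neighbor closer than two radii.
    for nx, ny in neighbors:
        dx, dy = pos[0] - nx, pos[1] - ny
        d = math.hypot(dx, dy)
        if 1e-6 < d < 2 * radius:
            push = (2 * radius - d) / d * speed
            vx += dx * push
            vy += dy * push
    return (pos[0] + vx * dt, pos[1] + vy * dt)
```

Running this once per frame for every agent gives the basic interactive-rate loop; the research below replaces the naive repulsion with principled collision avoidance and adds full-body motion synthesis on top.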
Recent projects include:
We present an interactive algorithm to generate plausible movements for human-like agents interacting with other agents or avatars in a virtual environment. Our approach also allows the user to interact with the agents from a first-person perspective in immersive settings. Interactions generated using our approach lead to an increase in the user’s sense of co-presence.
Narang, S., Best, A., & Manocha, D. (2018). Simulating Movement Interactions between Avatars & Agents in Virtual Worlds using Human Motion Constraints. In Proceedings of IEEE VR 2018 (to appear).
Main Paper Supplemental Doc Video (MP4, 28.6 MB)
Narang, S., Randhavane, T., Best, A., Shapiro, A., & Manocha, D. (2016). FbCrowd: Interactive Multi-agent Simulation with Coupled Collision Avoidance and Human Motion Synthesis. Technical Report, UNC Chapel Hill.
PDF
We present a novel algorithm for automatically synthesizing personalized walking gaits for a human user from noisy motion capture data. The overall approach is robust and can generate personalized gaits with little or no artistic intervention using commodity sensors.
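A minimal sketch of the kind of denoising step such a pipeline needs before gait parameters can be extracted from commodity-sensor data, assuming per-frame joint angles as input (the function and the moving-average filter are illustrative choices, not the paper's method):

```python
def smooth_joint_curve(angles, window=5):
    """Denoise a per-frame joint-angle sequence from noisy commodity
    motion capture with a centered moving average (a minimal stand-in
    for the denoising a gait-synthesis pipeline would perform)."""
    half = window // 2
    out = []
    for i in range(len(angles)):
        # Clamp the window at the sequence boundaries.
        lo, hi = max(0, i - half), min(len(angles), i + half + 1)
        out.append(sum(angles[lo:hi]) / (hi - lo))
    return out
```

After smoothing, per-cycle features such as stride length and cadence can be measured far more reliably than on the raw sensor stream.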
Narang, S., Best, A., Shapiro, A., & Manocha, D. (2017, October). Generating Virtual Avatars with Personalized Walking Gaits using Commodity Hardware. In Proceedings of the Thematic Workshops of ACM Multimedia (to appear).
PDF Video (MP4, 28.6 MB)
We evaluated factors that affect recognition of the motion of self and others, rendered on photo-realistic 3D virtual avatars. Our studies provide several interesting insights into motion recognition on photo-realistic avatars of the subject. In particular, we found that virtual avatars lead to an increase in self-recognition compared to point-light displays.
Narang, S., Best, A., Feng, A., Kang, S., Manocha, D., & Shapiro, A. (2017, May). Motion Recognition of Self and Others on Realistic 3D Avatars. Computer Animation and Virtual Worlds, 28(3-4). Wiley Online Library.
PDF Bibtex Video (MP4, 3.2 MB)
We present a novel interactive approach, PedVR, to generate plausible behaviors for a large number of virtual humans and to enable natural gaze-based interactions between the real user and virtual agents. Our user evaluation suggests that the combination of our collision avoidance method and gazing behavior can considerably increase the behavioral plausibility of the simulation.
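The gaze component can be illustrated with a toy target-selection rule: look at the closest character inside the agent's field of view. This is a hypothetical simplification for illustration; PedVR's actual gaze model is more elaborate.

```python
import math

def pick_gaze_target(agent_pos, heading, others, fov_deg=120.0, max_dist=5.0):
    """Return the index of the closest character within the agent's
    field of view, or None if nobody is visible.  `heading` is a unit
    vector giving the agent's facing direction."""
    half = math.radians(fov_deg / 2)
    best, best_d = None, max_dist
    for idx, (x, y) in enumerate(others):
        dx, dy = x - agent_pos[0], y - agent_pos[1]
        d = math.hypot(dx, dy)
        if d < 1e-6 or d > best_d:
            continue
        # Angle between the facing direction and the direction to the other.
        cos_a = (dx * heading[0] + dy * heading[1]) / d
        if math.acos(max(-1.0, min(1.0, cos_a))) <= half:
            best, best_d = idx, d
    return best
```

In a full system the selected target would drive head and eye animation each frame, with smoothing to avoid abrupt gaze shifts.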
Narang, S., Best, A., Randhavane, T., Shapiro, A., & Manocha, D. (2016, November). PedVR: simulating gaze-based interactions between a real user and virtual crowds. In Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology (pp. 91-100). ACM.
PDF Bibtex Video (MP4, 13.6 MB)
We present an approach for multi-agent navigation that facilitates face-to-face interaction in virtual crowds, based on a novel interaction velocity prediction (IVP) algorithm. Our user evaluation indicates that techniques enabling face-to-face interactions can improve the user's sense of presence. The virtual agents using these algorithms also appear more responsive and are able to elicit more reactions from users.
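The prediction idea can be caricatured in a few lines: steer an agent toward a shared meeting point with the character it wants to face, here simply their midpoint. This is a simplified reading for illustration; the actual IVP algorithm also accounts for the other character's motion and reachability.

```python
import math

def interaction_velocity(pos_a, pos_b, speed=1.4):
    """Predict a velocity for agent A that carries it toward a
    face-to-face meeting point with character B (taken here to be
    the midpoint between them), slowing down on arrival."""
    mx, my = (pos_a[0] + pos_b[0]) / 2, (pos_a[1] + pos_b[1]) / 2
    dx, dy = mx - pos_a[0], my - pos_a[1]
    d = math.hypot(dx, dy)
    if d < 1e-6:
        return (0.0, 0.0)  # already at the meeting point
    s = min(speed, d)  # decelerate when close
    return (dx / d * s, dy / d * s)
```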
Randhavane, T., Bera, A., & Manocha, D. (2017). F2FCrowds: Planning Agent Movements to Enable Face-to-Face Interactions. Technical Report, UNC Chapel Hill (to appear in Presence).
PDF
We present a novel approach for generating plausible verbal interactions between virtual human-like agents and user avatars in shared virtual environments. Sense-Plan-Ask, or SPA, extends prior work in propositional planning and natural language processing to enable agents to plan with uncertain information, leveraging question-and-answer dialogue with other agents and avatars to obtain the information needed to complete their goals. The agents can also respond to questions from avatars and other agents in natural language, enabling real-time multi-agent, multi-avatar communication environments.
Our algorithm can simulate tens of virtual agents interacting, moving, communicating, planning, and replanning at interactive rates. We find that our algorithm incurs only a small runtime cost and enables agents to complete their goals more effectively than agents without the ability to leverage natural-language communication. We demonstrate quantitative results on a set of simulated benchmarks and detail the results of a preliminary user study conducted to evaluate the plausibility of the virtual interactions generated by SPA. Overall, participants preferred SPA to prior techniques in 84% of responses, with significant benefits in the plausibility of natural-language interactions and the positive impact of those interactions.
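The plan-with-uncertainty loop can be sketched as a tiny decision rule: act on known facts, ask when another character might know, and explore otherwise. All names and the structure here are hypothetical; SPA itself uses a full propositional planner and natural-language generation.

```python
def plan_next_action(goal_item, known_locations, can_ask):
    """Pick an agent's next action under incomplete knowledge, in the
    spirit of Sense-Plan-Ask: move if the needed fact is known, pose a
    question if another character is available, otherwise explore."""
    if goal_item in known_locations:
        return ("goto", known_locations[goal_item])
    if can_ask:
        return ("ask", f"Where is the {goal_item}?")
    return ("explore", None)
```

When the question is answered, the returned fact is added to `known_locations` and the agent replans, which is why asking lets agents complete goals faster than silent exploration.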
Best, A., Narang, S., & Manocha, D. (2020, to appear). SPA: Verbal Interactions between Agents and Avatars in Shared Virtual Environments using Propositional Planning. In Proceedings of the 2020 IEEE Conference on Virtual Reality.
PDF (to come) Supplemental Material: PDF Tech Report: PDF Video (MP4, 18 MB) Supplemental Video (MP4, 4.5 MB)