Interactive Crowd Content Generation and Analysis using Trajectory-level Behavior Learning
Sujeong Kim, Aniket Bera, Dinesh Manocha
University of North Carolina at Chapel Hill
We present an interactive approach for analyzing crowd videos and generating content for multimedia applications. Our formulation combines online tracking algorithms from computer vision, non-linear pedestrian motion models from computer graphics, and machine learning techniques to automatically compute the trajectory-level pedestrian behaviors for each agent in the video. These learned behaviors are used to detect anomalous behaviors, perform crowd replication, augment crowd videos with virtual agents, and segment the motion of pedestrians. We demonstrate the performance of these tasks on indoor and outdoor crowd video benchmarks consisting of tens of human agents; moreover, our algorithm takes less than a tenth of a second per frame on a multi-core PC. The overall approach can handle dense and heterogeneous crowd behaviors and is useful for real-time crowd scene analysis applications.
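To make the per-agent pipeline concrete, the sketch below shows one simplistic way trajectory-level descriptors could be accumulated from tracker output. It is an illustration under simplifying assumptions, not the authors' implementation: it replaces the learned non-linear motion model with basic per-trajectory statistics, and the names (`AgentBehaviorModel`, `process_video`) and feature choices (mean speed, speed variance, mean turning) are invented for this example.

```python
"""Illustrative sketch only: per-agent trajectory-level feature accumulation
from tracked positions. Not the authors' method; all names are assumptions."""

import math
from collections import defaultdict

DT = 1.0 / 30.0  # assumed frame interval for a 30 fps crowd video


class AgentBehaviorModel:
    """Accumulates trajectory-level statistics for one tracked pedestrian."""

    def __init__(self):
        self.prev_pos = None
        self.speeds = []    # per-frame speed estimates
        self.headings = []  # per-frame heading angles (radians)

    def observe(self, pos):
        """Update statistics from one tracked (x, y) position."""
        if self.prev_pos is not None:
            dx = pos[0] - self.prev_pos[0]
            dy = pos[1] - self.prev_pos[1]
            self.speeds.append(math.hypot(dx, dy) / DT)
            self.headings.append(math.atan2(dy, dx))
        self.prev_pos = pos

    def behavior_features(self):
        """Trajectory-level descriptor (mean speed, speed variance, mean turning);
        a simple stand-in for the richer learned behaviors in the paper."""
        if len(self.speeds) < 2:
            return None
        mean_speed = sum(self.speeds) / len(self.speeds)
        speed_var = sum((s - mean_speed) ** 2 for s in self.speeds) / len(self.speeds)
        turns = []
        for a, b in zip(self.headings, self.headings[1:]):
            d = (b - a + math.pi) % (2.0 * math.pi) - math.pi  # wrap to [-pi, pi)
            turns.append(abs(d))
        return {"mean_speed": mean_speed,
                "speed_var": speed_var,
                "mean_turning": sum(turns) / len(turns)}


def process_video(tracked_frames):
    """tracked_frames: iterable of {agent_id: (x, y)} dicts, one per frame,
    e.g. the output of an online tracker. Returns per-agent behavior features."""
    agents = defaultdict(AgentBehaviorModel)
    for frame in tracked_frames:
        for agent_id, pos in frame.items():
            agents[agent_id].observe(pos)
    return {aid: model.behavior_features() for aid, model in agents.items()}
```

Descriptors of this kind could then feed the applications listed above, e.g. flagging agents whose features deviate strongly from the scene average for anomaly detection, or grouping agents with similar features for motion segmentation.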
[Video]
[Downloads]
[PDF]

Related Work
- GLMP - Realtime Pedestrian Path Prediction using Global and Local Movement Patterns [Project Page]
- Interactive Crowd Content Generation and Analysis using Trajectory-level Behavior Learning, ISM 2015 [Project Page]
- BRVO: Predicting Pedestrian Trajectories using Velocity-Space Reasoning, IJRR 2015 [Project Page]
- Efficient Trajectory Extraction and Parameter Learning for Data-Driven Crowd Simulation, Graphics Interface 2015 [Project Page]