V-Sentinel: A Novel Framework for Situational Awareness and Surveillance

Suya You
Integrated Media Systems Center
Computer Science Department
University of Southern California
March 2005
Objective

Developing advanced approaches for situational awareness, assessment, and response support for security and military applications
- Fusion: integrating information from varied sensors and resources to represent the spatial relationships and dynamic activities of the real world
- Interpretation: enhancing the ability to bring out obscure critical features and to disambiguate conflicting data interpretations
- Presentation: presenting and visualizing the data in innovative ways to maximize information extraction and understanding
Problem Statement

We already have the capability to access a multitude of systems that provide content-rich information from many different sensors and resources
Problem: analyzing and visualizing these as separate streams/windows provides no integration of information and no high-level scene comprehension, and overwhelms users
Problem Statement (cont.)

Situational awareness, assessment, and response support
A Simple Example (1)

Separate streams/windows presentation
Visualization as separate streams provides no integration of information, no high-level scene comprehension, and obstructs collaboration
A Simple Example (2)

Imagine if we scale the scenario to a sensor network delivering dozens of data streams from ground-based sensors, UAVs, satellites, and mobile sensors distributed throughout the scene
A Simple Example (3)

Separate streams aided with geospatial information capture only a snapshot of the real world, and therefore lack any representation of the dynamic events and activities occurring in the scene
A Simple Example (4)

Simply combining the separate aerial photograph, geospatial model, and ground videos still provides limited situational awareness
The human visual system is not capable of fusing and comprehending multiple independent viewpoints of a scene
Proposed Solution

Visualizing all data in a single 3D context
- rapid scene comprehension and understanding
- rapid assessment and reaction
V-Sentinel: Dynamic Fusion of Multi-Sensor Data

A 3D environment model is used as a substrate and augmented with the images to create an Augmented Virtual Environment (AVE)
- presenting all the data in a common 3D context to maximize collaboration and comprehension of the big picture - a world in miniature
- a coherent human-cognitive framework - allowing users to easily understand relationships and switch focus between levels of detail and specific spatial or temporal aspects of the data
- addressing dynamic visualization and change detection - allowing dynamic images, events, and movements captured in imagery to be visualized and interpreted from arbitrary viewpoints
Architecture of the V-Sentinel System
Main Components and Functionalities

- 3D scene modeling from LiDAR
- Dynamic object modeling from imagery
- Sensor tracking and calibration
- Fusion of imagery and the 3D model
- Real-time rendering and visualization
- Immersive display and user interaction
Main Components - 3D Scene Modeling (Substrate)

- Goals: accuracy, speed, low cost
- Data sources: LiDAR, imagery, stereo
- 3D model of the entire USC campus and surrounding areas
Main Components - Sensor Modeling (Tracking)

Accurate sensor information for image projection and fusion (where am I, and where am I looking?)
Hybrid GPS/INS/vision tracking approach
- GPS/INS data serve as an aid to the vision tracking, reducing the search space and providing tolerance to interruptions
- Vision corrects for drift and error accumulation
- Complementary fusion filter in an Extended Kalman Filter framework
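The complementary GPS/INS/vision fusion above can be sketched, in greatly simplified form, as a Kalman filter over a single pose axis - a hypothetical illustration only (the slide describes an Extended Kalman Filter over the full sensor pose; the constant-velocity motion model and all noise values below are made up for the sketch):

```python
import numpy as np

# Minimal linear Kalman filter for one pose axis. The predict step plays
# the role of the GPS/INS motion prior; the update step plays the role of
# the drift-correcting vision measurement. All parameters are illustrative.
F = np.array([[1.0, 1.0],   # constant-velocity motion model (dt = 1)
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])  # vision measures position only
Q = np.eye(2) * 0.01        # process noise (INS drift)
R = np.array([[0.25]])      # measurement noise (vision jitter)

x = np.zeros((2, 1))        # state estimate [position, velocity]
P = np.eye(2)               # estimate covariance

def step(z):
    """One predict/update cycle given a vision position measurement z."""
    global x, P
    # Predict: propagate the state with the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct drift with the vision measurement
    y = np.array([[z]]) - H @ x          # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return float(x[0, 0])

# Feeding a constant true position pulls the estimate toward it,
# illustrating how vision measurements bound accumulated drift.
for _ in range(50):
    est = step(5.0)
```

The same predict/update split is what lets GPS/INS outages be tolerated: prediction continues on the motion model alone until measurements resume.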
Main Components - Imagery Fusion and Projection

Video projection vs. texture mapping: dynamic vs. static texture
- both the image and the sensor position change with each video frame
(Figure: 3D model with projected image texture)
Dynamic Imagery Projectors

- Update the sensor pose and image to paint the scene each frame
- Compute the projection transformation during rendering of each frame
- Dynamic control during a visualization session to reflect the most recent information
- Real-time operation is possible
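The per-frame projection transformation above can be sketched for a simple pinhole camera - a hedged illustration in which the `look_at` pose construction and all numeric values are assumptions, not details of the actual system:

```python
import numpy as np

# Sketch of projective texturing: a world point is mapped through the
# camera's view transform and a perspective divide into normalized
# texture coordinates that index the current video frame.

def look_at(eye, target, up):
    """Build a world-to-camera (view) matrix from a camera pose."""
    f = target - eye
    f = f / np.linalg.norm(f)            # forward axis
    s = np.cross(f, up); s /= np.linalg.norm(s)  # right axis
    u = np.cross(s, f)                   # true up axis
    V = np.eye(4)
    V[0, :3], V[1, :3], V[2, :3] = s, u, -f
    V[:3, 3] = -V[:3, :3] @ eye
    return V

def project(world_pt, view, focal=1.0):
    """Project a 3D point to [0,1]^2 texture coordinates."""
    p = view @ np.append(world_pt, 1.0)
    x, y, z = p[0], p[1], p[2]
    # perspective divide (camera looks down -z after look_at)
    u = focal * x / -z
    v = focal * y / -z
    return 0.5 + 0.5 * u, 0.5 + 0.5 * v  # map [-1,1] to [0,1]

# Illustrative pose: camera 10 units back, looking at the origin.
eye = np.array([0.0, 0.0, 10.0])
view = look_at(eye, np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
u, v = project(np.array([0.0, 0.0, 0.0]), view)  # point on the optical axis
```

Because `view` is rebuilt from the tracked pose every frame, the same machinery handles a moving sensor: only the matrix and the current image change, not the 3D model.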
Main Components - Dynamic Object Analysis

- Image analysis: automatic detection of moving events and objects (people, vehicles)
- Object modeling: rapid creation of 3D object models
- Visualization: rendering a dynamic scene representation in real time
(Figure: projection geometry relating the ground plane, image plane, and model)
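The image-analysis step can be illustrated with a toy background-subtraction detector - an assumed technique for detecting moving objects, with illustrative threshold and learning-rate values (the slide does not specify the actual detection method):

```python
import numpy as np

# Toy moving-object detection by background subtraction: pixels that
# differ from a slowly adapting background model are flagged as motion.

def detect_motion(background, frame, threshold=25):
    """Return a boolean mask of pixels that differ from the background."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

def update_background(background, frame, alpha=0.05):
    """Slowly adapt the background with a running average."""
    return ((1 - alpha) * background + alpha * frame).astype(np.uint8)

# Illustrative 8x8 grayscale scene with one bright "moving object".
bg = np.full((8, 8), 50, dtype=np.uint8)
frame = bg.copy()
frame[2:4, 3:5] = 200            # a 2x2 blob entering the scene
mask = detect_motion(bg, frame)  # True only over the blob
bg = update_background(bg, frame)
```

In the full pipeline the detected regions would then feed the object-modeling step, which instantiates 3D models for the moving people and vehicles.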
Dynamic Modeling Examples
Putting Things Together

Scene Modeling System
- Models from LiDAR - semi-automated
- Building finding and extraction
Dynamic Event Modeling System
- Detection and tracking of moving objects
- Rapid creation of 3D object models
Data Acquisition System
- Accessing internet video streams in real time
- XML interface communications with other sensor modules
Fusion and Rendering System
- Real-time GPU code on a dual-CPU PC
- Immersive visualization supporting arbitrary display sizes and resolutions (up to 1920x1200)
GUI and Interaction System
- Interactive GUI and remote control via an XML interface for integration with existing sensor networks and monitoring systems
- Local and/or remote users can control the view via joystick, keyboard, or mouse
Sample Application Scenario (1): Video Surveillance

- Six surveillance cameras are deployed for situational awareness of a building complex on the USC campus
- A networking and XML interface communicates with the sensors and the system
- The system monitors and automatically changes the viewpoint in response to alarms, to geo-referenced positions, or to arbitrary viewpoints
- A patrol mode automatically flies user-defined paths over the entire site
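The patrol mode described above - automatically flying user-defined paths - can be sketched as simple waypoint interpolation; the closed-loop path, 2D coordinates, and timing scheme here are illustrative assumptions:

```python
# Sketch of a patrol mode: the viewpoint is flown along a closed,
# user-defined loop of waypoints by linear interpolation between them.

def patrol_position(waypoints, t):
    """Viewpoint position along a closed waypoint loop at time t.

    One time unit corresponds to one segment; the path wraps around.
    """
    n = len(waypoints)
    i = int(t) % n            # current segment start
    j = (i + 1) % n           # current segment end (wraps to close loop)
    f = t - int(t)            # fraction of the way along the segment
    ax, ay = waypoints[i]
    bx, by = waypoints[j]
    return (ax + f * (bx - ax), ay + f * (by - ay))

# Illustrative rectangular patrol loop over a site.
loop = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
pos = patrol_position(loop, 1.5)  # halfway along the second leg
```

A real implementation would interpolate full camera poses (position plus orientation) and likely smooth the path, but the wrap-around segment logic is the core of an automatic fly-through.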
USC Camera Views

V-Sentinel View - USC Campus

V-Sentinel View (navigate to an arbitrary viewpoint)

V-Sentinel View (respond to an alarm)
Sample Application Scenario (2): Simulation and Training

- Collaborating with the Institute for Creative Technologies (ICT), an Army-funded training research center at USC
- Post-analysis of a live training exercise captured for AVE analysis/playback
- Rapid training-exercise re-mapping
Army MOUT Village Training Site
V-Sentinel Views - MOUT Village Site

Live scenario videos are projected onto the 3D model of the MOUT Village Army training site to rapidly re-create a live training exercise
Acknowledgements

Members of CGIT/IMSC/USC
US Army, ONR, NGA, and industrial partners