V-Sentinel: A Novel Framework for Situational Awareness and Surveillance


V-Sentinel: A Novel Framework for Situational Awareness and Surveillance
Suya You
Integrated Media Systems Center, Computer Science Department, University of Southern California
March 2005

Objective

Developing advanced approaches for situational awareness, assessment, and response support for security and military applications:
- Fusion: integrating information from varied sensors and resources to represent the spatial relationships and dynamic activities of the real world
- Interpretation: enhancing the ability to bring out obscure critical features and to disambiguate conflicting data interpretations
- Presentation: presenting and visualizing the data in innovative ways to maximize information extraction and understanding

Problem Statement

We already have the capability to access a multitude of systems that provide content-rich information from many different sensors and resources. The problem: analyzing and visualizing these as separate streams/windows provides no integration of information, no high-level scene comprehension, and overwhelms users.

Problem Statement (cont.)

Situational awareness, assessment, and response support

A Simple Example (1)

Presentation as separate streams/windows: visualization as separate streams provides no integration of information, no high-level scene comprehension, and obstructs collaboration

A Simple Example (2)

Imagine if we scale the scenario to a sensor network delivering dozens of data streams from ground-based sensors, UAVs, satellites, and mobile sensors distributed throughout a scene

A Simple Example (3)

Separate streams, even aided with geospatial information, capture only a snapshot of the real world and therefore lack any representation of the dynamic events and activities occurring in the scene

A Simple Example (4)

Simply combining the separate aerial photograph, geospatial model, and ground videos still provides limited situational awareness: the human visual system is not capable of fusing and comprehending multiple independent viewpoints of a scene

Proposed Solution

Visualizing all data in a single 3D context enables:
- rapid scene comprehension and understanding
- rapid assessment and reaction

V-Sentinel: Dynamic Fusion of Multi-Sensor Data

A 3D environment model is used as a substrate and augmented with the images to create an Augmented Virtual Environment (AVE):
- presenting all the data in a common 3D context to maximize collaboration and comprehension of the big picture - a world in miniature
- a coherent human-cognitive framework - allowing users to easily understand relationships and switch focus between levels of detail and specific spatial or temporal aspects of the data
- addressing dynamic visualization and change detection - allowing dynamic images, events, and movements captured in imagery to be visualized and interpreted from arbitrary viewpoints

Architecture of the V-Sentinel System

Main Components and Functionalities

- 3D scene modeling from LiDAR
- Dynamic object modeling from imagery
- Sensor tracking & calibration
- Fusion of imagery and 3D model
- Real-time rendering and visualization
- Immersive display and user interaction

Main Components - 3D Scene Modeling (Substrate)

- Accurate, rapid, low cost
- Sources: LiDAR, imagery, stereo
- 3D model of the entire USC campus and surrounding areas

Main Components - Sensor Modeling (Tracking)

Accurate sensor information for image projection and fusion (where am I, and where am I looking?)
- Hybrid GPS/INS/vision tracking approach
- GPS/INS data serve as an aid to the vision tracking, reducing the search space and providing tolerance to interruptions
- Vision corrects for drift and error accumulation
- Complementary fusion filter in an Extended Kalman Filter framework
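The complementary GPS/INS/vision fusion above can be sketched as a minimal Kalman-style predict/correct loop: a simplified linear stand-in for the Extended Kalman Filter named on the slide, with the state layout, noise values, and function names all chosen for illustration.

```python
import numpy as np

# Minimal sketch of the complementary fusion idea (hypothetical state layout):
# the state is a 2D position, INS-derived velocity drives the prediction,
# and a vision-derived position fix corrects accumulated drift.

def predict(x, P, v_ins, dt, Q):
    """Propagate the state with INS velocity; uncertainty grows by Q."""
    x = x + v_ins * dt          # constant-velocity motion model
    P = P + Q                   # process noise inflates the covariance
    return x, P

def update(x, P, z_vision, R):
    """Correct the prediction with a vision position measurement."""
    S = P + R                   # innovation covariance (measurement H = I here)
    K = P @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z_vision - x)  # weighted correction toward the measurement
    P = (np.eye(len(x)) - K) @ P
    return x, P

# Example: INS prediction drifts, the vision fix pulls the estimate back.
x, P = np.zeros(2), np.eye(2) * 0.1
Q, R = np.eye(2) * 0.01, np.eye(2) * 0.05
x, P = predict(x, P, v_ins=np.array([1.0, 0.0]), dt=1.0, Q=Q)
x, P = update(x, P, z_vision=np.array([0.9, 0.05]), R=R)
```

A full EKF would linearize a nonlinear motion and camera model at each step; the predict/correct structure and the gain-weighted blending of the two sensor sources are the same.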

Main Components - Imagery Fusion and Projection

Video projection vs. texture mapping: dynamic vs. static texture - the image and the sensor position both change with each video frame

Dynamic Imagery Projectors

- Update sensor pose and image to paint the scene each frame
- Compute the projection transformation during rendering of each frame
- Dynamic control during a visualization session to reflect the most recent information
- Real-time operation is possible
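The per-frame projection transformation can be sketched as standard projective texture mapping: the tracked camera's intrinsics and pose give a matrix that maps points on the 3D model into the current video frame. All names and numbers below are illustrative, not from the talk.

```python
import numpy as np

# Sketch of the per-frame projector transform: a tracked camera with
# intrinsics K and pose (R, t) maps a world point on the 3D model to
# texture (pixel) coordinates in the current video frame.

def projector_matrix(K, R, t):
    """3x4 matrix taking homogeneous world points to homogeneous pixels."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, X_world):
    """Project a 3D point; divide by depth to get pixel coordinates."""
    x = P @ np.append(X_world, 1.0)
    return x[:2] / x[2]

# Example: camera at the origin looking down +Z, 500 px focal length,
# principal point at the image center of a 640x480 frame.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
P = projector_matrix(K, R, t)
uv = project(P, np.array([0.0, 0.0, 10.0]))   # point on the optical axis
```

Because the pose (R, t) is re-estimated by the tracker every frame, the matrix is recomputed per frame, which is what makes the projected texture dynamic rather than static.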

Main Components - Dynamic Object Analysis

- Image analysis: automatically detecting moving events & objects (people, vehicles)
- Object modeling: rapid creation of 3D object models
- Visualization: rendering a dynamic scene representation in real time
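The slides do not specify the image-analysis method, so as a minimal stand-in, moving-object detection can be sketched as running-average background subtraction with a difference threshold; the threshold and learning rate here are invented for illustration.

```python
import numpy as np

# Sketch of the image-analysis step: a running-average background model
# plus thresholded differencing flags moving pixels (people, vehicles).
# thresh and alpha are illustrative values, not from the talk.

def detect_motion(frame, background, thresh=25.0, alpha=0.05):
    """Return a binary motion mask and the updated background model."""
    diff = np.abs(frame.astype(float) - background)
    mask = diff > thresh                                   # pixels that moved
    background = (1 - alpha) * background + alpha * frame  # slow adaptation
    return mask, background

# Example: a static scene with one bright blob entering the view.
bg = np.full((8, 8), 100.0)
frame = bg.copy()
frame[2:4, 2:4] = 200.0          # "vehicle" occupies a 2x2 patch
mask, bg = detect_motion(frame, bg)
```

A deployed system would add connected-component grouping and tracking on top of the mask to turn moving pixels into the discrete objects that get 3D models.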

Dynamic Modeling Examples

Putting Things Together

Scene Modeling System
- Models from LiDAR, semi-automated
- Building finding and extraction

Dynamic Event Modeling System
- Detection and tracking of moving objects
- Rapid 3D creation of the object models

Data Acquisition System
- Accessing internet video streams in real time
- XML interface communications with other sensor modules

Fusion and Rendering System
- Real-time GPU code on a dual-CPU PC
- Immersive visualization supports arbitrary display size and resolution (up to 1920x1200)

GUI and Interaction System
- Interactive GUI and remote control via an XML interface for integration with existing sensor networks and monitoring systems
- Local and/or remote user(s) can control the view via joystick, keyboard, or mouse
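The slides name an XML interface between the sensor modules and the fusion system but give no schema, so the message format below is entirely hypothetical; it only illustrates how such an interface might be built and parsed with the Python standard library.

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch of the XML sensor interface: every tag and attribute
# name here is invented for illustration; the talk does not give a schema.

def make_sensor_update(sensor_id, lat, lon, stream_url):
    """Build one sensor-update message as an XML string."""
    msg = ET.Element("sensor_update", id=sensor_id)
    pos = ET.SubElement(msg, "position")
    pos.set("lat", str(lat))
    pos.set("lon", str(lon))
    ET.SubElement(msg, "stream").text = stream_url
    return ET.tostring(msg, encoding="unicode")

# Example round trip: serialize a camera's state, then parse it back
# as the fusion/rendering side would.
xml = make_sensor_update("cam-03", 34.0205, -118.2856, "rtsp://example/cam3")
parsed = ET.fromstring(xml)
```

The same pattern works in the other direction for remote-control messages (viewpoint changes, alarms) sent from monitoring systems into the renderer.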

Sample Application Scenario (1) - Video Surveillance

- Six surveillance cameras are deployed for situational awareness of a building complex on the USC campus
- Networking and an XML interface communicate with the sensors and the system
- The system monitors and automatically changes viewpoint to alarms, geo-referenced positions, or arbitrary viewpoints
- Patrol mode automatically flies user-defined paths over the entire site
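Patrol mode's user-defined flythrough paths could be realized as simple waypoint interpolation of the virtual camera; this sketch, with invented waypoints and step counts, yields positions along a piecewise-linear route.

```python
import numpy as np

# Sketch of a "patrol mode" flythrough: linearly interpolate the virtual
# camera between user-defined waypoints. Waypoint values are illustrative.

def patrol_path(waypoints, steps_per_leg):
    """Yield camera positions along the piecewise-linear patrol route."""
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        for s in range(steps_per_leg):
            t = s / steps_per_leg
            yield (1 - t) * np.asarray(a, float) + t * np.asarray(b, float)
    yield np.asarray(waypoints[-1], float)    # land exactly on the last stop

# Example: a three-waypoint circuit over a site, (x, y, altitude) in meters.
route = [(0, 0, 50), (100, 0, 50), (100, 100, 80)]
positions = list(patrol_path(route, steps_per_leg=4))
```

A production flythrough would use a smooth spline and interpolate orientation as well, but the waypoint-driven structure is the same.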

USC Camera Views

V-Sentinel View - USC Campus

V-Sentinel View (navigate to arbitrary viewpoint)

V-Sentinel View (respond to alarm)

Sample Application Scenario (2) - Simulation and Training

- Collaborating with the Institute for Creative Technologies (ICT), an Army-funded training research center at USC
- Post-analysis of a live training exercise captured for AVE analysis/playback
- Rapid training exercise re-mapping
- Army MOUT Village training site

V-Sentinel Views - MOUT Village Site

Live scenario videos are projected onto the 3D model of the MOUT Village Army training site to rapidly recreate a live training exercise

Acknowledgements

Members of CGIT/IMSC/USC
US Army, ONR, NGA, and industrial partners