Distributed Real-Time Embedded Video Processing


Distributed Real-Time Embedded Processing
Tiehan Lv, Wayne Wolf — Dept. of EE, Princeton University. Phone: (609) 258-1424, Fax: (609) 258-3745, Email: wolf@princeton.edu
Burak Ozer — Verificon Corp.

Abstract: The embedded systems group at Princeton University is building a distributed system for real-time analysis of video from multiple cameras. Most work in multiple-camera video systems relies on centralized processing. However, performing video computations at a central server has several disadvantages: it introduces latency that reduces the response time of the video system; it increases the amount of buffer memory required; and it consumes network bandwidth. These problems cause centralized video processing systems not only to provide lower performance but also to use excess power. A deployable multi-camera video system must perform distributed computation, including computation near the camera as well as remote computation, in order to meet performance and power requirements.

Smart cameras combine sensing and computation to perform real-time image and video analysis. A smart camera can be used for many applications, including face recognition and tracking. We have developed a smart camera system [Wol02] that performs real-time gesture recognition. This system, which currently runs at 25 frames/sec on a Trimedia TM-100 VLIW processor, classifies gestures such as walking, standing, and waving arms. The application uses a number of standard vision algorithms as well as some improvements of our own; the details of the algorithms are not critical to the distributed-system research we propose here. Real-time vision is, however, very well suited to distributed implementation.

Using multiple cameras simplifies some important problems in video analysis. Occlusion causes many problems in vision: for example, when the subject turns so that only one arm can be seen from a single camera, the algorithms must infer that the arm exists in order to confirm that the subject in front of the camera is a person and not something else. When views are available from multiple cameras, the data can be fused into a global view of the subject that provides more complete information for higher-level analysis. Multiple cameras also allow us to replace mechanical panning and zooming with electronic panning and zooming. Electronically panned/zoomed cameras have no inertia to affect tracking; they are also more reliable under harsh environmental conditions.
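The electronic pan/zoom idea is simple enough to sketch: the camera captures a fixed wide field of view, and "moving" the view is just selecting a different sub-window of the frame. The sketch below is illustrative only — the frame layout and function name are invented, not part of the Princeton system:

```python
# Electronic pan/zoom: select a sub-window of a fixed wide-angle frame
# instead of moving the camera. All names here are illustrative.

def electronic_pan_zoom(frame, top, left, height, width):
    """Return the (top, left)-anchored height x width crop of a 2-D frame.

    Panning changes (top, left); zooming changes (height, width).
    No mechanical parts move, so there is no inertia to settle.
    """
    if top < 0 or left < 0 or top + height > len(frame) or left + width > len(frame[0]):
        raise ValueError("requested window falls outside the sensor frame")
    return [row[left:left + width] for row in frame[top:top + height]]

# A toy 6x8 "frame" whose pixel values encode their own coordinates.
frame = [[(r, c) for c in range(8)] for r in range(6)]

# Pan right and down by moving the window origin; the sensor is untouched.
window = electronic_pan_zoom(frame, top=2, left=3, height=2, width=4)
```

Because the operation is pure array indexing, retargeting the view between two frames costs nothing mechanically — which is why the text notes there is no inertia to disturb tracking.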

Report Documentation Page (Standard Form 298, Rev. 8-98, prescribed by ANSI Std Z39-18; Form Approved OMB No. 0704-0188)
1. Report Date: 20 AUG 2004. 2. Report Type: N/A. 3. Dates Covered: -
4. Title and Subtitle: Distributed Real-Time Embedded Processing
7. Performing Organizations: Dept. of EE, Princeton University; Verificon Corp.
12. Distribution/Availability Statement: Approved for public release, distribution unlimited.
13. Supplementary Notes: See also ADM001694, HPEC-6-Vol 1, ESC-TR-2003-081; High Performance Embedded Computing (HPEC) Workshop (7th). The original document contains color images.
16. Security Classification (report, abstract, this page): unclassified. 17. Limitation of Abstract: UU. 18. Number of Pages: 15.

Processing in the distributed smart camera system is inherently distributed. Sending raw frames to a central node for processing would consume enormous amounts of bandwidth; that bandwidth not only increases the cost of the network, it also consumes considerable energy. Performing low-level processing at each camera, then sending abstractions of the frames to other nodes that combine the results from several frames, saves considerable power. Distributed processing is also probably the most effective approach to meeting real-time deadlines and low latency. The smart camera system will often feed a decision-making system or person, so low latency is an important criterion. Using multiple nodes to process the data should considerably reduce the latency in obtaining a result.

We believe that a combination of design-time and run-time decisions is required for the successful deployment of video processing networks. Some run-time decisions are necessary because some characteristics of the system will not be known until run time, and those characteristics may change during operation of the network. Furthermore, the very configuration of the network may not be known at design time, as it may depend on the size of the area to be monitored, the physical constraints of installing nodes, and so on. The overall hardware and software architecture of the network must be designed to operate well not just at a single design point but across a range of possible configurations and operating decisions.

We are developing a middleware-based system for video analysis. The DVM (distributed video middleware) runs on top of the operating system (Linux) on each node. The DVM layer deals with video objects, which may be described in any of several ways: regions of an image, curves that represent sections of the image, and so on. The video objects represent the current state of analysis.
The DVM layer manages the location of the video object data as it goes through the various processing stages. The DVM may be guided by design-time information provided by tools, such as the relative amounts of data and processing time required for various operations; it also makes use of run-time information that helps it make the best use of available resources. A DVM-based architecture leverages COTS operating systems. It also helps to minimize the work required to port a batch-oriented video application to a distributed platform. Relatively few video processing algorithm experts are also adept at distributed computing; the video object model allows them to map their algorithms onto data structures and let the DVM layer manage the allocation and scheduling tasks required to run in distributed mode.

References

[Wol02] Wayne Wolf, Burak Ozer, and Tiehan Lv, "Smart cameras as embedded systems," IEEE Computer, 35(9), September 2002, pp. 48-53.

Distributed Real-Time Embedded Processing
Tiehan Lv (1), Burak Ozer (2), Wayne Wolf (1)
(1) Dept. of EE, Princeton University  (2) Verificon Corp.

Smart Camera Systems
A smart camera is a video surveillance system that can identify body parts and objects and then recognize the activity of people or objects in the scene.

Sample Scenarios

Algorithms (pipeline block diagram, flattened)
- Low level: input image → region extraction → contour following → ellipse fitting → graph matching
- High level: HMMs for head, torso, hand 1, and hand 2 feed a gesture-recognition HMM classifier, which produces the output
- Output: image duplication / modification
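The low-level stages named on this slide can be sketched as a chain of functions, each consuming the previous stage's result. The stage bodies below are trivial stand-ins for the real vision algorithms (and the high-level HMM stage is omitted); the chained structure, not the computation, is the point:

```python
# Two low-level pipeline stages with stand-in implementations.
# The real system uses proper vision algorithms at each stage.

def region_extraction(image):
    # Stand-in: treat any non-zero pixel as foreground.
    return [(r, c) for r, row in enumerate(image)
                   for c, v in enumerate(row) if v]

def ellipse_fitting(region):
    # Stand-in: summarise a pixel region by its centroid
    # (a real ellipse fit would also yield axes and orientation).
    rows = [r for r, _ in region]
    cols = [c for _, c in region]
    return (sum(rows) / len(rows), sum(cols) / len(cols))

def low_level(image):
    # Chain the stages; contour following and graph matching
    # would slot in between in the full pipeline.
    return ellipse_fitting(region_extraction(image))

image = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
]
centre = low_level(image)
```

Structuring the stages as independent functions over intermediate data is also what lets them be placed on different nodes later, as the middleware sections describe.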

Architecture of a Smart Camera System (block diagram, flattened)
- Two NTSC cameras, each feeding a TriMedia board with a TM32 VLIW processor and shared memory
- Both boards attached over a PCI bus to a host PC with a superscalar RISC CPU

Multiple Camera Systems
- Electronic panning and zooming
- Handling occlusion

Centralized Processing (all data sent to a server)
- Storage cost
- Latency
- Communication load
- Power

Centralized Processing vs. Distributed Processing
- Raw data vs. abstract representation
- Network load, energy, latency, processing power
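The raw-data vs. abstract-representation comparison can be made concrete with rough arithmetic. The frame geometry and descriptor size below are assumptions chosen only for illustration, not measurements from this system; the 25 frames/sec figure is the one cited in the text:

```python
# Back-of-the-envelope: per-camera network traffic for raw frames
# vs. compact abstractions. All sizes are illustrative assumptions.

FRAME_W, FRAME_H = 640, 480      # assumed sensor resolution
BYTES_PER_PIXEL = 3              # assumed 24-bit colour
FPS = 25                         # frame rate cited for the prototype

# Centralized: every raw frame crosses the network.
raw_bps = FRAME_W * FRAME_H * BYTES_PER_PIXEL * FPS

# Distributed: only an abstraction (e.g. a few ellipse/contour
# parameters per body part) crosses the network per frame.
descriptor_bytes = 256           # assumed size of one frame's abstraction
abstract_bps = descriptor_bytes * FPS

ratio = raw_bps / abstract_bps   # how much traffic the abstraction saves
```

Under these assumptions the abstraction cuts per-camera traffic by more than three orders of magnitude, which is the intuition behind the network-load, energy, and latency entries on this slide.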

Design-Time Decisions and Run-Time Decisions
- Configuration: processor, special functional units, hardware architecture, operating system
- Trade-off: efficiency vs. flexibility
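One way design-time and run-time information can combine, as the paper suggests, is to scale design-time cost profiles by observed run-time node load when placing work. Everything below — the operation names, profiled costs, and load figures — is invented for illustration, not taken from the actual system:

```python
# Sketch: pick a node for an operation by combining a design-time
# cost profile with run-time load. All numbers are illustrative.

DESIGN_TIME_COST = {            # assumed profiled ms/frame per operation
    "region_extraction": 8.0,
    "graph_matching": 20.0,
}

def place_task(op, node_loads):
    """Return the node whose load-scaled cost for `op` is lowest.

    `node_loads` maps node name -> current utilisation in [0, 1);
    a loaded node inflates the design-time estimate proportionally.
    """
    base = DESIGN_TIME_COST[op]
    return min(node_loads, key=lambda node: base * (1 + node_loads[node]))

loads = {"camera_node": 0.8, "aggregator": 0.2}
chosen = place_task("graph_matching", loads)
```

The design-time table is fixed when the application is built; only the load map changes as the network operates — a small instance of the efficiency/flexibility balance this slide names.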

Distributed Middleware
- The concept of layers
- Cross-platform development
- Cross-platform communication

Distributed Middleware (layer stack)
- Processing application
- DVM
- Operating systems

Distributed Middleware
- Separates the video processing algorithms from the operating system
- Lets algorithm researchers focus on video processing
- Facilitates porting applications to different systems
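The separation this slide describes can be sketched as a registry of "video objects" whose placement the middleware tracks, while algorithm code only manipulates the objects themselves. All class and method names below are hypothetical, not the actual DVM API:

```python
# Toy model of the video-object idea: algorithm code works with
# objects (regions, curves, ...); a middleware layer records where
# each object's data currently lives. Names are hypothetical.

from dataclasses import dataclass

@dataclass
class VideoObject:
    kind: str                   # "region", "curve", ...
    data: object                # the current state of analysis
    location: str = "camera0"   # node currently holding the data

class DVM:
    """Toy middleware registry mapping video objects to nodes."""

    def __init__(self):
        self.objects = []

    def publish(self, obj):
        self.objects.append(obj)

    def migrate(self, obj, node):
        # A real middleware layer would move the data and reschedule
        # work; this sketch only updates the placement record.
        obj.location = node

dvm = DVM()
arm = VideoObject("region", data=[(1, 1), (1, 2)])
dvm.publish(arm)
dvm.migrate(arm, "aggregator")
```

The algorithm author only ever sees `VideoObject`; where its data sits, and when it moves, is the middleware's concern — which is the porting benefit the slide claims.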

Conclusion and Future Work
- Distributed smart camera systems have advantages over traditional centralized processing systems
- Design-time and run-time decisions need to be combined to form an optimal solution
- Distributed video middleware can facilitate research and application development