Motion and Target Tracking (Overview). Suya You, Integrated Media Systems Center, Computer Science Department, University of Southern California
2 Applications: Video Surveillance
- Commercial: personal/public use, environment/wildlife monitoring, traffic measurement
- Law enforcement: national security
- Military & defense
3 Sensor and Technology
- Ever cheaper: now very prevalent in commercial and military establishments
- High performance: multi-megapixel resolution, full range, networked (wired/wireless), on-board processors
4 Machine Vision covers many challenging issues
- Sensor & data acquisition: multiple, distributed sensor networks
- Scene analysis & understanding: detection, tracking, recognition
- Data representation & comprehension: object and environment modeling, simulation and visualization
5 Research & Systems
- Ground: small/modest-scale environments (infrastructure, military bases, intelligent traffic monitoring)
- Airborne: large-scale environments (national infrastructure, battlefield)
- Space: global/outer space (battlefield, environment monitoring, Mars)
6 Example: Intelligent Traffic Monitoring
Concept of operations: sensor network, information processing, information access
- Distributed sensor network: rectilinear CCD, omnidirectional, and IR cameras; location sensor (GPS); fixed, active, and mobile; networked (wired and wireless)
- Dynamic event detection & analysis: target detection/tracking/recognition; incident detection/classification/reporting
- 3D environment: 3D scene model (city 3D digital map), target 3D geo-localization, immersive 3D visualization
- Real-time information access: control center and drivers
7 Vision Processing Issues
- Camera modeling and calibration: perspective and panoramic cameras; allows automatic and on-site calibration
- Dynamic image analysis: dynamic target detection/tracking (vehicles and people), target recognition, classification approaches, active vision on fixed and mobile platforms
- 3D processing: 3D scene modeling (city model with buildings and roads), target 3D geo-localization, tracking and positioning in the 3D world, visualization (immersive 3D at the base station; abstract and full data for the Web and drivers)
8 Challenges
- Camera modeling and calibration: basic techniques are fairly mature; the main challenges are automatic and on-site calibration
  - Model-based approach, given a 3D model
  - Self-calibration vision approach, included in the tracking module
- Dynamic image analysis: outdoor imaging environment (lighting, weather)
  - Dynamic background modeling approach
  - Visual modeling: finding imaging invariants (lighting, geometry)
  - Target detection/tracking: long sequences, drift, self-motion
  - Model-based approach (3D scene)
  - Distributed vision approach: multi-view/camera geometry
  - Hybrid approach: active sensor (GPS/INS) aided vision
- Active sensors aid the video system: reduce frame-to-frame vision processing; video processing aids sensor performance; allows estimation of camera attitude; improves speed and accuracy
- 3D scene modeling: urban site models (buildings and roads) at city scale, accurate to the level of a street block, with less manual interaction
  - Stereo approaches still play a main role
  - LiDAR is a relatively new and promising approach
  - Ground-based laser range finders
9 Challenges (cont.)
Heavy computation load is a main barrier
- High-resolution sensors are better for image analysis (e.g. detection)
- Fast processing: slow processing can lose many vision processing jobs (e.g. tracking)
- Multiple camera arrays: huge volumes of data must be fused and computed
- Users want results for what they are currently seeing
Real-time vision computation
- Developing fast algorithms: the pyramid technique is a good example
- Aided by other sensors: e.g. inertial sensors, GPS
- Hardware:
  - General computers: special CPU features (low-level programming), processor clusters (parallel programming)
  - Special processors/boards: DSP techniques, FPGA techniques (cheaper, flexible), GPU power (Cg language)
  - Smart cameras (on-board processors)
10 Research Components (image related)
- Dynamic global image construction and registration: construct video mosaics and register mission-collected video frames to previously prepared reference imagery in order to geolocate both moving and stationary targets in real time
- Multiple target surveillance: simultaneously track multiple moving targets in a sensor's field of regard; fixed sensors and active moving platforms (satellite, UAV, robot)
- Activity monitoring: monitoring several areas of the battle space for distinctive motion activities such as soldier incursions and vehicle movements
11 Motion Estimation: A Pyramid-Based Approach
Achieved through successive refinement within a multi-resolution pyramid structure:
- 2D motion flow estimation
- Fit a motion model (linear/nonlinear)
- Warp to align
Highly efficient: can handle camera motions as large as the field of view, and provides very precise alignment
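The coarse-to-fine idea can be sketched in a few lines. This is a minimal illustration, not the slides' actual implementation: it uses 1D signals and a pure-translation model, whereas the real system fits 2D affine/flow models; the structure (estimate at the coarsest level, propagate, refine) is the same.

```python
def downsample(signal):
    """Halve resolution by averaging adjacent samples."""
    return [(signal[i] + signal[i + 1]) / 2.0 for i in range(0, len(signal) - 1, 2)]

def build_pyramid(signal, levels):
    """pyramid[0] is the finest level, pyramid[-1] the coarsest."""
    pyramid = [signal]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    return pyramid

def best_shift(ref, tgt, center, radius):
    """Exhaustive SSD search for the shift of tgt relative to ref, near `center`."""
    best, best_err = center, float("inf")
    for s in range(center - radius, center + radius + 1):
        err = sum((ref[i] - tgt[i + s]) ** 2
                  for i in range(len(ref)) if 0 <= i + s < len(tgt))
        if err < best_err:
            best, best_err = s, err
    return best

def pyramid_shift(ref, tgt, levels=3, radius=2):
    """Coarse-to-fine estimate of the translation that maps ref onto tgt."""
    ref_pyr, tgt_pyr = build_pyramid(ref, levels), build_pyramid(tgt, levels)
    shift = 0
    for level in reversed(range(levels)):  # coarsest level first
        shift = best_shift(ref_pyr[level], tgt_pyr[level], shift, radius)
        if level > 0:
            shift *= 2  # propagate the estimate to the next finer level
    return shift
```

The point of the pyramid is visible in the search radius: each level only searches a few samples around the propagated estimate, so very large motions at full resolution are found with a small amount of work per level.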
12 Multi-resolution Approach
It's simple, but still very useful: target detection, motion tracking, navigation, compression
It can handle large motions and helps accelerate vision processing, but constructing the pyramid itself needs extra computation
Pyramid Vision Processor/Board
- Single chip
- Simultaneous input/processing of up to 2 channels
- Real-time (30 fps), low-latency processing (1-2 frame delay)
13 Robust Image Motion Estimation
- Hybrid point and region: selecting good points and regions as tracking features
- Multi-stage tracking strategy: multi-resolution
- A closed-loop cooperative scheme integrating feature detection, tracking, and verification
Pipeline: region/point detection & selection → multiscale region optical flow → affine region warp and SSD evaluation → linear point motion refinement by search → affine region warp and SSD evaluation → iteration control
14 Robust Image Motion Estimation (cont.)
An affine model defines the warp of a source region R_t0 in image i to a confidence frame R_c; the warped region is compared against the target region R_t in image i+1.
A normalized SSD measures the difference between the warped source and target regions, and thereby the quality of tracking:
    δ = 1 / (1 + ε),  δ ∈ (0, 1]
    ε = Σ_x (R_t(x, t) - R_c(x, t))^2 / ( 2 · max{ Σ_x R_t(x, t)^2, Σ_x R_c(x, t)^2 } )
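This quality measure is a direct computation once the regions are extracted and warped. A sketch over flattened pixel lists (region extraction and the affine warp are assumed to have happened already):

```python
def normalized_ssd(target, confidence):
    """delta = 1/(1+eps) in (0,1], where
    eps = ||Rt - Rc||^2 / (2 * max(||Rt||^2, ||Rc||^2))."""
    diff = sum((t - c) ** 2 for t, c in zip(target, confidence))
    denom = 2.0 * max(sum(t * t for t in target),
                      sum(c * c for c in confidence))
    eps = diff / denom if denom else 0.0
    return 1.0 / (1.0 + eps)  # 1.0 means a perfect match
```

Because the difference is normalized by the larger of the two region energies, the score is insensitive to overall region brightness and bounded, which makes it usable as a verification gate in the closed loop.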
15 Performance Evaluation
(a) detected tracking features; (b) estimated motion field
Synthetic image sequence (Yosemite Fly-Through)
Techniques compared by average angle error and standard deviation: Horn and Schunck; Lucas and Kanade; Anandan; Fleet and Jepson; the closed-loop approach
16 Some Applications
- Tracking for ground and aerial imagery
- Movie special effects, including X-Men 2, Daredevil, and Dr. Seuss' The Cat in the Hat
- Hardware implementation is under way (Olympus): a PCMCIA-size card
17 Video Stabilization/Mosaic
- Inter-frame image motion estimation (parameters)
- Motion compensation and registration (model)
- Image alignment and mosaicking (composition)
18 Global Motion Compensation
Image stabilization: registering two images and computing the geometric transformation that warps the source image so that it aligns with the reference image, canceling the motion of the observer
Registration model: translation, affine, and perspective; e.g. the quadratic (pseudo-perspective) approximation
    u(x, y) = v0 + v1·x + v2·y + u1·x^2 + u2·xy
    v(x, y) = v3 + v4·x + v5·y + u1·xy + u2·y^2
Model fitting: an over-constrained SVD solution
- Motion vector field (every pixel)
- Feature-based approach
- Coarse-to-fine approach
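For the affine case, model fitting reduces to linear least squares over point correspondences. A pure-Python sketch (the slide mentions an SVD solution; here the normal equations are solved directly with Gaussian elimination, which gives the same answer for well-conditioned problems):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_affine(src, dst):
    """Least-squares affine v = (v0..v5): x' = v0*x + v1*y + v4,
    y' = v2*x + v3*y + v5, from matched point pairs."""
    AtA = [[0.0] * 6 for _ in range(6)]
    Atb = [0.0] * 6
    for (x, y), (xp, yp) in zip(src, dst):
        for row, target in (([x, y, 0.0, 0.0, 1.0, 0.0], xp),
                            ([0.0, 0.0, x, y, 0.0, 1.0], yp)):
            for i in range(6):
                Atb[i] += row[i] * target
                for j in range(6):
                    AtA[i][j] += row[i] * row[j]
    return solve(AtA, Atb)
```

Each correspondence contributes two equations, so three non-collinear matches already determine the six parameters; in practice many more matches are used and the system is over-constrained, which is exactly the situation the slide describes.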
19 Video Stabilization/Mosaic
- Frame-to-mosaic alignment: mosaic reference (first, middle, or user-defined frame); warping each frame to the reference; hierarchical alignment
- Temporal filtering (for the mosaic)
- Intensity blending: weighted-average blending function
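Weighted-average blending can be as simple as the following sketch. The hat-shaped weight that peaks at the frame centre is an assumption for illustration; the slide only specifies "weighted average".

```python
def hat_weight(x, width):
    """Weight that is 1 at the frame centre and falls linearly to 0 at the borders."""
    c = (width - 1) / 2.0
    return 1.0 - abs(x - c) / c if c else 1.0

def blend_pixel(samples):
    """Blend the overlapping (value, weight) samples landing on one mosaic pixel."""
    total = sum(w for _, w in samples)
    return sum(v * w for v, w in samples) / total if total else 0.0
```

Weighting by distance to the frame border makes seams between overlapping warped frames fade gradually instead of appearing as hard edges.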
20 Target Detection & Tracking
Goals
- Moving target detection/tracking: vehicles and people
- Landmark recognition: buildings of interest and reference features
Platforms
- Stationary sensors: ground cameras (perspective, panoramic)
- Moving sensors: satellite, UAV, or robot carried; image, GPS, and INS data are available
21 Stationary & Moving Platforms
Stationary cameras: the background is static (assumption), the foreground is moving
- BG/FG classification → background matching → matching image → identification → tracking
Moving cameras: the background is moving (camera motion) and the foreground is moving
- BG/FG classification → motion compensation → background matching → matching image → identification → tracking
Challenges: background modeling and maintenance; motion compensation (image stabilization)
22 Target Detection/Tracking (stationary sensor)
Stationary pipeline: video image → preprocessing → background matching (against the background model) → detection → tracking
Moving-sensor pipeline: video image → motion compensation → preprocessing → background matching (against the background model) → detection → tracking
23 Background Modeling
It's a challenging problem:
- Appearance changes: time, lighting, weather
- Waking/sleeping objects: background objects start moving, foreground objects come to rest
- Color/contrast aperture: subsumed BG/FG, homogeneous regions
- Waving trees: vacillating background
- Apparent motion: camera motion
24 Constant Intensity Model
Pre-defined constant background
- Blue screen (movie special effects): everything is predefined, nothing needs to be estimated on-line; some preprocessing may be necessary (log filtering)
Constant but unknown background: the background is modeled as intensity-constant, with parameters estimated/updated on-line
- Adjacent Frame Difference (AFD) approach
- Mean estimate approach: a linear model, i.e. m_new(x, y) = ((N - 1)/N)·m_old(x, y) + (1/N)·I(x, y)
- Mean-covariance approach: both m and σ need to be estimated; optimal estimators (Kalman filter)
- Block correlation matching approach: block-wise median templates, correlation matching
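The running-mean model and a simple difference test, sketched per pixel over flattened frames (the threshold value is illustrative):

```python
def update_mean(mean, frame, n):
    """m_new = ((n-1)/n)*m_old + (1/n)*I  -- running mean after n frames."""
    return [((n - 1) / n) * m + (1.0 / n) * i for m, i in zip(mean, frame)]

def foreground_mask(frame, background, thresh=10.0):
    """Pixels far from the background estimate are flagged as foreground."""
    return [abs(i - b) > thresh for i, b in zip(frame, background)]
```

With n = 1 the update simply copies the first frame in; as n grows, each new frame nudges the model less, which is why this model handles slow appearance change but not the waving-trees case from the previous slide.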
25 Statistical Feature Model
Complex backgrounds: a feature-based approach. The matching feature is a 4D spatio-temporal vector, m = [I, I_x, I_y, I_t]^T, and the background is modeled as a statistical distribution in this 4D vector space:
    m_mean = (1/N) Σ_i m_i,  Σ = (1/(N - 1)) Σ_i (m_i - m_mean)(m_i - m_mean)^T
Background update: temporal blending, B_new = (1 - α)·B_old + α·m
Single Gaussian estimate approach:
    m_new(x, y) = (N/(N + 1))·m_old(x, y) + (1/(N + 1))·m(x, y)
    Σ_new(x, y) = (N/(N + 1))·Σ_old + (N/(N + 1)^2)·(m - m_mean)(m - m_mean)^T
Mixture of Gaussians estimate approach: the background is modeled as multiple Gaussian distributions (multiple-frequency Gaussian channels); Markov model and EM (Expectation-Maximization) approaches
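A per-pixel mixture-of-Gaussians update in the spirit of this approach, as a hedged sketch: the slide does not give the exact update rules, so the learning rate, the 2.5-sigma match test, the initial variance, and the mode count below are illustrative choices, and a scalar intensity stands in for the 4D feature vector.

```python
def mog_update(models, value, alpha=0.05, k=2.5, max_modes=3):
    """Update one pixel's mixture; models is a list of [weight, mean, var]."""
    matched = None
    for m in models:
        if abs(value - m[1]) <= k * m[2] ** 0.5:
            matched = m
            break
    for m in models:
        m[0] *= (1 - alpha)  # decay all weights
    if matched:
        matched[0] += alpha
        matched[1] = (1 - alpha) * matched[1] + alpha * value
        matched[2] = (1 - alpha) * matched[2] + alpha * (value - matched[1]) ** 2
    else:
        models.append([alpha, value, 10.0])  # spawn a new mode for the outlier
        models.sort(key=lambda m: -m[0])
        del models[max_modes:]
    total = sum(m[0] for m in models)
    for m in models:
        m[0] /= total
    return models
```

Pixels matching a high-weight mode are classified as background; a transient object spawns a low-weight mode that only becomes "background" if it persists, which is how the mixture copes with vacillating backgrounds such as waving trees.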
26 Motion Estimation Techniques
Instead of using the intensity-constancy constraint, the background is modeled as a constant motion/optical flow field. The matching feature is a 3D vector, m = [I_x, I_y, I_t]^T, and the flow (u, v) must satisfy
    I_x·u + I_y·v + I_t = 0.
Over a small region this gives the least-squares solution
    [u, v]^T = - [ Σ I_x^2, Σ I_x·I_y ; Σ I_x·I_y, Σ I_y^2 ]^(-1) · [ Σ I_x·I_t, Σ I_y·I_t ]^T.
Background update becomes an optical-flow estimation problem; this extends to multi-resolution detection and update.
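The 2x2 system above can be solved in closed form. A sketch, given per-pixel gradient triples (Ix, Iy, It) collected over a small window:

```python
def window_flow(grads):
    """Solve the 2x2 least-squares flow system from (Ix, Iy, It) triples."""
    sxx = sum(ix * ix for ix, _, _ in grads)
    sxy = sum(ix * iy for ix, iy, _ in grads)
    syy = sum(iy * iy for _, iy, _ in grads)
    sxt = sum(ix * it for ix, _, it in grads)
    syt = sum(iy * it for _, iy, it in grads)
    det = sxx * syy - sxy * sxy  # near-zero det = aperture problem
    u = (-sxt * syy + syt * sxy) / det
    v = (-syt * sxx + sxt * sxy) / det
    return u, v
```

The determinant check in the comment is where the color/contrast aperture problem from the background-modeling slide reappears: a window with gradients in only one direction makes the system singular.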
27 Prediction Model
Statistical prediction techniques: the background pixels are predicted from what is expected in the next input frame,
    B_t(x, y) = Σ_{i=1..p} a_i · I_{t-i}(x, y),
a linear estimation problem (least squares, Wiener filtering), with expected prediction error
    E[e_t^2] = E[I_t^2] - Σ_{i=1..p} a_i · E[I_t · I_{t-i}].
More complex prediction models are possible: motion/optical-flow prediction models, non-linear prediction models.
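Fitting the prediction coefficients a_i is ordinary least squares. A tiny sketch for a two-tap predictor on one pixel's intensity history (illustrative; a full system would fit a Wiener filter per pixel over many frames):

```python
def fit_two_tap(series):
    """Least-squares (a1, a2) with x_t ~ a1*x_{t-1} + a2*x_{t-2},
    solved via the 2x2 normal equations and Cramer's rule."""
    rows = [(series[t - 1], series[t - 2], series[t]) for t in range(2, len(series))]
    s11 = sum(r[0] * r[0] for r in rows)
    s12 = sum(r[0] * r[1] for r in rows)
    s22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * r[2] for r in rows)
    b2 = sum(r[1] * r[2] for r in rows)
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det)

def predict(series, a):
    """Predicted background value for the next frame."""
    return a[0] * series[-1] + a[1] * series[-2]
```

A pixel whose observed value deviates strongly from this prediction is a foreground candidate, so the prediction error E[e_t^2] doubles as a per-pixel detection threshold.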
28 Statistical Recognition Model
Background estimation as a recognition problem:
- Training: motionless background frames
- Feature extraction: statistical image features; an "eigenbackground" via PCA (Principal Component Analysis)
- Matching: PCA projection of the live video; pixels well explained by the projection are background, the remainder is foreground
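A minimal eigenbackground sketch, keeping only the single dominant eigenvector (real systems keep several components; the power-iteration solver and the residual-based foreground test below are illustrative choices, not from the slides):

```python
def eigenbackground(frames, iters=100):
    """Train a mean + dominant eigenvector from motionless background frames."""
    n, k = len(frames[0]), len(frames)
    mean = [sum(f[i] for f in frames) / k for i in range(n)]
    centered = [[f[i] - mean[i] for i in range(n)] for f in frames]
    cov = [[sum(x[i] * x[j] for x in centered) / k for j in range(n)] for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):  # power iteration for the dominant eigenvector
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return mean, v

def residual(frame, mean, v):
    """Project a live frame onto the eigenbackground; large leftovers = foreground."""
    d = [f - m for f, m in zip(frame, mean)]
    coef = sum(di * vi for di, vi in zip(d, v))
    return [abs(di - coef * vi) for di, vi in zip(d, v)]
```

Global appearance changes seen during training (e.g. overall brightness) live inside the eigenspace and reconstruct with low residual, while a foreground object does not, which is the advantage over a per-pixel mean.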
29 More Techniques
The problem with the above approaches is that they separate detection/tracking into independent phases:
- Low level: pixel-wise detection/segmentation
- Middle level: labeling pixels as grouped targets
- High level: temporal tracking, spatial recognition
An integrated approach: integrate pixel classification, region detection, and inter-frame tracking in a closed-loop manner
- Pixel-wise processing (segmentation): linear prediction
- Region-wise processing (clustering): K-means clustering
- Frame-wise processing (matching)
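The region-wise step groups detected foreground pixels into targets; K-means is enough for a sketch. A tiny Lloyd's-iteration version over scalar coordinates (a real system clusters 2D pixel positions, often seeded from the previous frame's targets):

```python
def kmeans(points, centers, iters=10):
    """Lloyd's iterations: cluster scalar points (e.g. foreground pixel columns)."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            groups[nearest].append(p)
        # move each centre to the mean of its group; keep empty clusters in place
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers
```

In the closed loop, the resulting cluster centres become the per-target states that the frame-wise matcher tracks, and they in turn seed the next frame's clustering.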
30 More Techniques (cont.)
Illumination invariance. With the surface lighting model I_{x,y} = α · (ρ_{x,y} · φ_{x,y} · L_{x,y})^γ, taking logarithms gives
    log(I_{x,y}) = log(α) + γ·log(L_{x,y}) + γ·log(ρ_{x,y}) + γ·log(φ_{x,y}),
which splits into an illumination-dependent part L(x, y) and an illumination-invariant part M(x, y):
    Ĩ(x, y) = L(x, y) + M(x, y)
    L̂(x, y) = Filter(Ĩ(x, y))
    M̂(x, y) = Ĩ(x, y) - L̂(x, y),  or  m̂(x, y) = exp(Ĩ(x, y) - L̂(x, y))
Effectiveness of the invariant:
- Strong surface shading: effective
- Strong illumination gradients: less effective
- Low intensity: ineffective or worst
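The Filter(·) step is typically a low-pass estimate of the slowly varying illumination. A 1D sketch using a simple moving average over log-intensities (the actual filter choice is not specified on the slide):

```python
def illumination_invariant(log_img, radius=2):
    """Homomorphic split: smooth the log-intensities to estimate illumination
    L_hat, then subtract it to keep the illumination-invariant part M_hat."""
    n = len(log_img)
    invariant = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        l_hat = sum(log_img[lo:hi]) / (hi - lo)  # local illumination estimate
        invariant.append(log_img[i] - l_hat)
    return invariant
```

A uniform illumination change adds a constant in the log domain, and subtracting a local average cancels any constant, so the output is unchanged; this is exactly the invariance the background matcher relies on.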
31 Target Detection/Tracking (moving sensor)
Moving cameras: the background is moving (camera motion) and the foreground is moving
Motion compensation: registering images and computing the geometric transformation that compensates the source image so that it aligns with the reference images
Pipeline: video image → motion compensation → preprocessing → background matching (against the background model) → detection → tracking
32 Motion Compensation
Parametric model: translation, affine, and perspective; e.g. the quadratic (pseudo-perspective) approximation
    x' = v0 + v1·x + v2·y + u1·x^2 + u2·xy
    y' = v3 + v4·x + v5·y + u1·xy + u2·y^2
Model fitting: an over-constrained optimal estimation problem
- It's hard: the background contains moving objects
- Motion vector field vs. feature-based approaches
- Iterative vs. non-iterative approaches
33 Motion Vector Field Estimation
Parametric model: an affine transformation
    [x', y']^T = [ v0, v1 ; v2, v3 ]·[x, y]^T + [v4, v5]^T
Optical flow tracking and warping: the affine model defines the warp of source points R_t0 in frame i-1 to a reference frame R_c, compared against the target points R_t in frame i
- Multi-resolution, iterative refinement
- A normalized SSD measures the difference between the warped source and the target
34 Dynamic Object Tracking: Results
- Hand-held camera: multiple objects; tracked objects visualized in 3D
- Hand-held camera: integration of mosaicking, image stabilization, and object tracking
35 Dynamic Object Tracking: Results
- UAV sensor: integration of mosaicking, image stabilization, and object tracking
36 Feature Matching
Parametric model: an affine transformation
    [x', y']^T = [ v0, v1 ; v2, v3 ]·[x, y]^T + [v4, v5]^T
Feature tracking and warping: select N features in frame i-1, generate candidate affine warps T1, T2, ..., TM to frame i, evaluate each by SSD, and select the optimal T
- Multi-resolution, iterative refinement
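The "select the optimal T" step is a straightforward SSD comparison over the warped features. A sketch (candidate generation and feature selection are assumed to have happened already):

```python
def apply_affine(v, p):
    """v = (v0, v1, v2, v3, v4, v5): x' = v0*x + v1*y + v4, y' = v2*x + v3*y + v5."""
    x, y = p
    return (v[0] * x + v[1] * y + v[4], v[2] * x + v[3] * y + v[5])

def best_transform(candidates, src, dst):
    """Select the candidate affine whose warped features best match the targets."""
    def ssd(v):
        return sum((wx - dx) ** 2 + (wy - dy) ** 2
                   for (wx, wy), (dx, dy) in
                   ((apply_affine(v, s), d) for s, d in zip(src, dst)))
    return min(candidates, key=ssd)
```

Scoring each hypothesis against all selected features, rather than fitting once to all of them, is what lets this step reject candidates corrupted by features on moving foreground objects.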
37 Others
Perceptual grouping methodology: Tensor Voting
- A simulation of perceptual organization: infer what we perceive from noisy/missing data
- A computational framework for segmentation and grouping (formalized by USC Prof. Gérard Medioni)
- Tensor: data is represented as tensors to generate descriptions in terms of surfaces, regions, curves, and labeled junctions from sparse, noisy, binary data in 2D/3D
- Voting: how the tensors communicate and propagate information between neighbors
- Has been applied to many vision problems, including segmentation/detection, motion tracking and trajectory extraction, stereo vision, and epipolar geometry estimation
38 Others (cont.)
Multi-view cameras: continuous cross-view tracking
- Stationary platform to stationary platform
- Stationary platform to moving platform
- Moving platform to moving platform
- Requires continuous and complete tracking trajectories
- Requires trajectory and viewpoint registration
39 Others (cont.)
Omnidirectional imaging
- Wide (360-degree) horizontal FOV
- Fewer partial occlusions
- Fewer motion ambiguities (pure translation vs. rotation)
- Limited resolution: best used for close-range objects
40 Benefits of Using Panoramic Imaging
The wide FOV ensures:
- A sufficient number of features for tracking
- Less partial occlusion
- Accurate estimates for large motions
- Sufficient information for distinguishing motion ambiguities (pure translation vs. rotation)
41 Others (cont.)
Integration of imagery and range data
- Wide coverage
- Speed and robustness
- Direct recovery of 3D models and geolocations
Pipeline: live images + camera parameters → image warping → residual estimation against reference images and a DEM → space filtering → detected targets
LiDAR accuracy is typically ~ m ground spacing and centimeters in height
More informationObject Recognition with Invariant Features
Object Recognition with Invariant Features Definition: Identify objects or scenes and determine their pose and model parameters Applications Industrial automation and inspection Mobile robots, toys, user
More informationVideo Mosaics for Virtual Environments, R. Szeliski. Review by: Christopher Rasmussen
Video Mosaics for Virtual Environments, R. Szeliski Review by: Christopher Rasmussen September 19, 2002 Announcements Homework due by midnight Next homework will be assigned Tuesday, due following Tuesday.
More informationContinuous Multi-View Tracking using Tensor Voting
Continuous Multi-View Tracking using Tensor Voting Jinman Kang, Isaac Cohen and Gerard Medioni Institute for Robotics and Intelligent Systems University of Southern California {jinmanka, icohen, medioni}@iris.usc.edu
More informationGeomatica OrthoEngine Orthorectifying VEXCEL UltraCam Data
Geomatica OrthoEngine Orthorectifying VEXCEL UltraCam Data Vexcel s UltraCam digital camera system has a focal distance of approximately 100mm and offers a base panchromatic (black and white) resolution
More informationASTRIUM Space Transportation
SIMU-LANDER Hazard avoidance & advanced GNC for interplanetary descent and soft-landing S. Reynaud, E. Ferreira, S. Trinh, T. Jean-marius 3rd International Workshop on Astrodynamics Tools and Techniques
More informationLecture 16: Computer Vision
CS4442/9542b: Artificial Intelligence II Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field
More informationLecture 16: Computer Vision
CS442/542b: Artificial ntelligence Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field Methods
More informationLearning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009
Learning and Inferring Depth from Monocular Images Jiyan Pan April 1, 2009 Traditional ways of inferring depth Binocular disparity Structure from motion Defocus Given a single monocular image, how to infer
More informationUse of Image aided Navigation for UAV Navigation and Target Geolocation in Urban and GPS denied Environments
Use of Image aided Navigation for UAV Navigation and Target Geolocation in Urban and GPS denied Environments Precision Strike Technology Symposium Alison K. Brown, Ph.D. NAVSYS Corporation, Colorado Phone:
More informationState Estimation for Continuous-Time Systems with Perspective Outputs from Discrete Noisy Time-Delayed Measurements
State Estimation for Continuous-Time Systems with Perspective Outputs from Discrete Noisy Time-Delayed Measurements António Pedro Aguiar aguiar@ece.ucsb.edu João Pedro Hespanha hespanha@ece.ucsb.edu Dept.
More informationCS231A Section 6: Problem Set 3
CS231A Section 6: Problem Set 3 Kevin Wong Review 6 -! 1 11/09/2012 Announcements PS3 Due 2:15pm Tuesday, Nov 13 Extra Office Hours: Friday 6 8pm Huang Common Area, Basement Level. Review 6 -! 2 Topics
More informationOmni Stereo Vision of Cooperative Mobile Robots
Omni Stereo Vision of Cooperative Mobile Robots Zhigang Zhu*, Jizhong Xiao** *Department of Computer Science **Department of Electrical Engineering The City College of the City University of New York (CUNY)
More informationPrecision Roadway Feature Mapping Jay A. Farrell, University of California-Riverside James A. Arnold, Department of Transportation
Precision Roadway Feature Mapping Jay A. Farrell, University of California-Riverside James A. Arnold, Department of Transportation February 26, 2013 ESRA Fed. GIS Outline: Big picture: Positioning and
More informationCS 565 Computer Vision. Nazar Khan PUCIT Lectures 15 and 16: Optic Flow
CS 565 Computer Vision Nazar Khan PUCIT Lectures 15 and 16: Optic Flow Introduction Basic Problem given: image sequence f(x, y, z), where (x, y) specifies the location and z denotes time wanted: displacement
More informationLaser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR
Mobile & Service Robotics Sensors for Robotics 3 Laser sensors Rays are transmitted and received coaxially The target is illuminated by collimated rays The receiver measures the time of flight (back and
More informationTraffic Surveillance using Aerial Video
Traffic Surveillance using Aerial Video UGP Project III (EE491) Report By Anurag Prabhakar(11140) Ritesh Kumar(11602) Under the guidance of Dr. K. S. Venkatesh Professor Department of Electrical Engineering
More informationMultiple Model Estimation : The EM Algorithm & Applications
Multiple Model Estimation : The EM Algorithm & Applications Princeton University COS 429 Lecture Dec. 4, 2008 Harpreet S. Sawhney hsawhney@sarnoff.com Plan IBR / Rendering applications of motion / pose
More informationEE 264: Image Processing and Reconstruction. Image Motion Estimation I. EE 264: Image Processing and Reconstruction. Outline
1 Image Motion Estimation I 2 Outline 1. Introduction to Motion 2. Why Estimate Motion? 3. Global vs. Local Motion 4. Block Motion Estimation 5. Optical Flow Estimation Basics 6. Optical Flow Estimation
More informationVisual Tracking. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania
Visual Tracking Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 11 giugno 2015 What is visual tracking? estimation
More informationDense Tracking and Mapping for Autonomous Quadrocopters. Jürgen Sturm
Computer Vision Group Prof. Daniel Cremers Dense Tracking and Mapping for Autonomous Quadrocopters Jürgen Sturm Joint work with Frank Steinbrücker, Jakob Engel, Christian Kerl, Erik Bylow, and Daniel Cremers
More informationComputer Vision with MATLAB MATLAB Expo 2012 Steve Kuznicki
Computer Vision with MATLAB MATLAB Expo 2012 Steve Kuznicki 2011 The MathWorks, Inc. 1 Today s Topics Introduction Computer Vision Feature-based registration Automatic image registration Object recognition/rotation
More informationAn Overview of Applanix.
An Overview of Applanix The Company The Industry Leader in Developing Aided Inertial Technology Founded on Canadian Aerospace and Defense Industry Expertise Providing Precise Position and Orientation Systems
More informationCS4670: Computer Vision
CS4670: Computer Vision Noah Snavely Lecture 6: Feature matching and alignment Szeliski: Chapter 6.1 Reading Last time: Corners and blobs Scale-space blob detector: Example Feature descriptors We know
More informationRepresenting Moving Images with Layers. J. Y. Wang and E. H. Adelson MIT Media Lab
Representing Moving Images with Layers J. Y. Wang and E. H. Adelson MIT Media Lab Goal Represent moving images with sets of overlapping layers Layers are ordered in depth and occlude each other Velocity
More informationCOMPUTER VISION. Dr. Sukhendu Das Deptt. of Computer Science and Engg., IIT Madras, Chennai
COMPUTER VISION Dr. Sukhendu Das Deptt. of Computer Science and Engg., IIT Madras, Chennai 600036. Email: sdas@iitm.ac.in URL: //www.cs.iitm.ernet.in/~sdas 1 INTRODUCTION 2 Human Vision System (HVS) Vs.
More informationSpatio-Temporal Stereo Disparity Integration
Spatio-Temporal Stereo Disparity Integration Sandino Morales and Reinhard Klette The.enpeda.. Project, The University of Auckland Tamaki Innovation Campus, Auckland, New Zealand pmor085@aucklanduni.ac.nz
More informationModel-based Visual Tracking:
Technische Universität München Model-based Visual Tracking: the OpenTL framework Giorgio Panin Technische Universität München Institut für Informatik Lehrstuhl für Echtzeitsysteme und Robotik (Prof. Alois
More informationFundamental matrix. Let p be a point in left image, p in right image. Epipolar relation. Epipolar mapping described by a 3x3 matrix F
Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix F Fundamental
More informationTopics to be Covered in the Rest of the Semester. CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester
Topics to be Covered in the Rest of the Semester CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester Charles Stewart Department of Computer Science Rensselaer Polytechnic
More informationVisual Odometry. Features, Tracking, Essential Matrix, and RANSAC. Stephan Weiss Computer Vision Group NASA-JPL / CalTech
Visual Odometry Features, Tracking, Essential Matrix, and RANSAC Stephan Weiss Computer Vision Group NASA-JPL / CalTech Stephan.Weiss@ieee.org (c) 2013. Government sponsorship acknowledged. Outline The
More informationAll good things must...
Lecture 17 Final Review All good things must... UW CSE vision faculty Course Grading Programming Projects (80%) Image scissors (20%) -DONE! Panoramas (20%) - DONE! Content-based image retrieval (20%) -
More information3D Modeling from Range Images
1 3D Modeling from Range Images A Comprehensive System for 3D Modeling from Range Images Acquired from a 3D ToF Sensor Dipl.-Inf. March 22th, 2007 Sensor and Motivation 2 3D sensor PMD 1k-S time-of-flight
More informationMobile Robotics. Mathematics, Models, and Methods. HI Cambridge. Alonzo Kelly. Carnegie Mellon University UNIVERSITY PRESS
Mobile Robotics Mathematics, Models, and Methods Alonzo Kelly Carnegie Mellon University HI Cambridge UNIVERSITY PRESS Contents Preface page xiii 1 Introduction 1 1.1 Applications of Mobile Robots 2 1.2
More informationSensor Modalities. Sensor modality: Different modalities:
Sensor Modalities Sensor modality: Sensors which measure same form of energy and process it in similar ways Modality refers to the raw input used by the sensors Different modalities: Sound Pressure Temperature
More informationSegmentation and Tracking of Partial Planar Templates
Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract
More informationThe Applanix Approach to GPS/INS Integration
Lithopoulos 53 The Applanix Approach to GPS/INS Integration ERIK LITHOPOULOS, Markham ABSTRACT The Position and Orientation System for Direct Georeferencing (POS/DG) is an off-the-shelf integrated GPS/inertial
More information