Motion Analysis

Now we will talk about motion analysis.

Motion Analysis
Motion analysis deals with three main groups of motion-related problems:
- Motion detection.
- Moving object detection and location.
- Derivation of 3D object properties.

Motion analysis and object tracking combine two separate but inter-related components:
- Localization and representation of the object of interest (the target).
- Trajectory filtering and data association.

One or the other may be more important, depending on the nature of the motion application.

A simple method for motion detection is the subtraction of two or more images in a given image sequence. Usually, this method results in a difference image d(i, j), in which non-zero values indicate areas with motion. For given images f1 and f2 and a threshold ε, d(i, j) can be computed as follows:

    d(i, j) = 0   if |f1(i, j) − f2(i, j)| ≤ ε
              1   otherwise

Difference Pictures
(Figures: examples of difference pictures that indicate the motion of objects, ε = 25.)
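To make this concrete, here is a minimal sketch of the thresholded image subtraction in Python with NumPy. It assumes 8-bit grayscale frames; the function name and the toy frames are ours, not from the lecture.

```python
import numpy as np

def difference_picture(f1, f2, eps):
    """Binary difference image: 1 where the two frames differ by more
    than eps in absolute intensity, 0 elsewhere."""
    a = f1.astype(np.int32)   # widen to avoid uint8 wrap-around
    b = f2.astype(np.int32)
    return (np.abs(a - b) > eps).astype(np.uint8)

# Toy example: a bright 2x2 object moves one pixel to the right.
f1 = np.zeros((4, 4), dtype=np.uint8); f1[1:3, 0:2] = 200
f2 = np.zeros((4, 4), dtype=np.uint8); f2[1:3, 1:3] = 200
print(difference_picture(f1, f2, eps=25))  # 1s mark the leading and trailing edges
```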

Difference Pictures
(Figure: applying a size filter of size 10 to remove noise from a difference picture.)

The differential method can rather easily be tricked. (Figure: the indicated changes were induced by changes in the illumination instead of object or camera motion; again, ε = 25.)

Cumulative Difference Image
In order to determine the direction of motion, we can compute the cumulative difference image for a sequence f1, ..., fn of more than two images:

    d_cum(i, j) = Σ_{k=2}^{n} a_k · |f1(i, j) − fk(i, j)|

Here, f1 is used as the reference image, and the weight coefficients a_k can be used to give greater weight to more recent frames and thereby highlight the current object positions.

Example: a sequence of four images in which a 2-pixel-wide, 3-pixel-high object moves downward by one row per frame, leaving the frame at the bottom:

    f1:        f2:        f3:        f4:
    0 1 1 0    0 0 0 0    0 0 0 0    0 0 0 0
    0 1 1 0    0 1 1 0    0 0 0 0    0 0 0 0
    0 1 1 0    0 1 1 0    0 1 1 0    0 0 0 0
    0 0 0 0    0 1 1 0    0 1 1 0    0 1 1 0

With weights a_2 = 1, a_3 = 2, a_4 = 4, the result is:

    0 7 7 0
    0 6 6 0
    0 4 4 0
    0 7 7 0
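The following sketch reproduces this four-image example numerically; the helper names and the way the frames are generated are our own.

```python
import numpy as np

def cumulative_difference(frames, weights):
    """Weighted sum of absolute differences against the first (reference)
    frame; weights[k] belongs to frames[k+1], i.e. a_2, a_3, ..."""
    ref = frames[0].astype(np.int32)
    acc = np.zeros_like(ref)
    for a_k, f_k in zip(weights, frames[1:]):
        acc += a_k * np.abs(ref - f_k.astype(np.int32))
    return acc

def frame(top):
    """4x4 frame with the 2x3 object in columns 1..2, top row `top`
    (rows below the bottom of the frame are clipped off)."""
    f = np.zeros((4, 4), dtype=np.uint8)
    f[top:top + 3, 1:3] = 1
    return f

frames = [frame(t) for t in range(4)]            # object tops at rows 0..3
print(cumulative_difference(frames, weights=[1, 2, 4]))
# [[0 7 7 0]
#  [0 6 6 0]
#  [0 4 4 0]
#  [0 7 7 0]]
```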

Optical Flow
Generally speaking, while differential motion analysis is well-suited for motion detection, it is not ideal for the analysis of motion characteristics. Optical flow reflects the image changes due to motion during a time interval dt, which must be short enough to guarantee small inter-frame motion changes. The optical flow field is the velocity field that represents the three-dimensional motion of object points across a two-dimensional image.

Optical flow computation is based on two assumptions:
- The observed brightness of any object point is constant over time (brightness constancy).
- Nearby points in the image plane move in a similar manner (the velocity smoothness constraint).

Writing the brightness constancy assumption for the image intensity I(x, y, t) and expanding to first order yields the optical flow constraint Ix·u + Iy·v + It = 0, where (u, v) is the image velocity and Ix, Iy, It are the partial derivatives of I.

The basic idea underlying most algorithms for optical flow computation:
- Regard the image sequence as a three-dimensional (x, y, t) space.
- Determine the x- and y-slopes of equal-brightness pixels along the t-axis.

The computation of actual 3D gradients is usually quite complex and requires substantial computational power for real-time applications.

Let us consider the two-dimensional case (one spatial dimension x and the temporal dimension t), with an object moving to the right. Instead of using gradient methods, one can simply determine those straight lines in the (x, t) plane that have a minimum of variation (standard deviation) in intensity along them; the slope of such a line gives the velocity. In regions of uniform intensity no such line is unique, so the flow is undefined there.

(Figures: intensity traces in the (x, t) plane for an object moving to the right; the flow is undefined in the uniform areas.)

Optical flow computation will be in error if the constant brightness and velocity smoothness assumptions are violated. In real imagery, their violation is quite common. Typically, the optical flow changes dramatically in highly textured regions, around moving boundaries, at depth discontinuities, etc. The resulting errors then propagate across the entire optical flow solution.

Global error propagation is the biggest problem of global optical flow computation schemes, and local optical flow estimation helps overcome the difficulties. However, local flow estimation can introduce large errors in homogeneous areas, i.e., regions of constant intensity. One solution to this problem is to assign confidence values to local flow estimates and to take them into account when integrating local and global optical flow.
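To make the local-estimation idea concrete, here is a minimal local least-squares flow estimator in the spirit of Lucas-Kanade that also returns a per-pixel confidence value: the smaller eigenvalue of the local gradient matrix, which is small exactly in the problematic homogeneous areas. The slides do not prescribe this estimator, window size, or confidence measure; all three are our assumptions.

```python
import numpy as np

def local_flow(f1, f2, win=5, min_conf=1e-2):
    """Local optical flow between two grayscale frames with a per-pixel
    confidence value; a sketch, not a production implementation."""
    f1 = f1.astype(np.float64)
    f2 = f2.astype(np.float64)
    Iy, Ix = np.gradient(f1)              # spatial derivatives
    It = f2 - f1                          # temporal derivative
    r = win // 2
    h, w = f1.shape
    flow = np.zeros((h, w, 2))            # (u, v) per pixel
    conf = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            ix = Ix[i - r:i + r + 1, j - r:j + r + 1].ravel()
            iy = Iy[i - r:i + r + 1, j - r:j + r + 1].ravel()
            it = It[i - r:i + r + 1, j - r:j + r + 1].ravel()
            A = np.array([[ix @ ix, ix @ iy],
                          [ix @ iy, iy @ iy]])
            conf[i, j] = np.linalg.eigvalsh(A)[0]   # small if homogeneous
            if conf[i, j] > min_conf:               # enough local structure
                flow[i, j] = np.linalg.solve(A, -np.array([ix @ it, iy @ it]))
    return flow, conf
```

Pixels with low confidence (homogeneous areas) are left at zero flow; a global scheme could fill them in, weighted by the confidence values, along the lines suggested above.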

Motion Patterns
We can compute the local speed gradient to detect certain basic motion patterns. The speed gradient is defined as the direction of the steepest speed increase, regardless of the direction of motion.

There are four different basic motion patterns: counterclockwise rotation, clockwise rotation, contraction, and expansion. They can be combined to form spiral motion.

(Figure: direction of movement and orientation of the speed gradient for the four basic patterns: counterclockwise rotation, clockwise rotation, contraction, expansion.)

Idea: a consistent angle between the direction of movement and the orientation of the speed gradient indicates a specific motion pattern. Angles other than 0, 90, 180, and 270 degrees correspond to spiral motion patterns.

(Figure: spiral motion, with the speed gradient at an oblique angle to the direction of movement.)
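The angle test can be sketched as follows; the flow field is assumed to be given (for example, from the local estimator above), and the specific angle-to-pattern mapping is our reading of the figures, since the slides present it only graphically.

```python
import numpy as np

def classify_motion_pattern(flow):
    """Classify a flow field by the consistent angle between the movement
    direction and the speed-gradient orientation (a sketch)."""
    u, v = flow[..., 0], flow[..., 1]
    speed = np.hypot(u, v)
    gy, gx = np.gradient(speed)                   # speed gradient field
    ang = np.arctan2(v, u) - np.arctan2(gy, gx)   # movement vs. gradient
    mask = speed > 1e-6                           # ignore static pixels
    mean = np.angle(np.exp(1j * ang[mask]).mean(), deg=True) % 360
    patterns = {0: "expansion", 90: "counterclockwise rotation",
                180: "contraction", 270: "clockwise rotation"}
    for a, name in patterns.items():
        if min(abs(mean - a), 360 - abs(mean - a)) < 15:   # tolerance: ours
            return name
    return "spiral motion"                        # consistent oblique angle

# Demo: a counterclockwise rotation field, v = omega x r.
y, x = np.mgrid[-10:11, -10:11].astype(float)
print(classify_motion_pattern(np.dstack([-y, x])))  # counterclockwise rotation
```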

Feature Point Correspondence
Feature point correspondence is another method for motion field construction. Velocity vectors are determined only for corresponding feature points, and object motion parameters can then be derived from the computed motion field vectors. Motion assumptions can help to localize moving objects; frequently used assumptions include, as discussed above:
- Maximum velocity.
- Small acceleration.
- Common motion.

The idea is to find significant points (interest points, feature points) in all images of the sequence: points least similar to their surroundings, representing object corners, borders, or any other characteristic features in an image that can be tracked over time. Essentially the same interest measures as in stereo matching can be used.

Point detection is followed by a matching procedure, which looks for correspondences between these points over time. The main difference from stereo matching is that we can no longer simply search along an epipolar line; instead, the search area is defined by our motion assumptions. The process results in a sparse velocity field.

Motion detection based on correspondence works even for relatively long inter-frame time intervals.
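Finally, a sketch of such a correspondence-based scheme with OpenCV. The lecture does not name a particular detector or similarity measure; here we use OpenCV's goodFeaturesToTrack (Shi-Tomasi corners) for point detection and SSD patch similarity within a maximum-velocity search radius for the matching step. Grayscale uint8 frames are assumed.

```python
import numpy as np
import cv2

def patch_at(img, x, y, r):
    """Intensity patch around (x, y), or None too close to the border."""
    x, y = int(round(x)), int(round(y))
    if r <= x < img.shape[1] - r and r <= y < img.shape[0] - r:
        return img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    return None

def sparse_velocity_field(f1, f2, v_max=20.0, patch=7):
    """Match interest points of f1 to interest points of f2 inside a search
    radius given by the maximum-velocity assumption; returns a sparse
    velocity field as (position, velocity) pairs. A sketch."""
    pts1 = cv2.goodFeaturesToTrack(f1, maxCorners=200, qualityLevel=0.01, minDistance=5)
    pts2 = cv2.goodFeaturesToTrack(f2, maxCorners=200, qualityLevel=0.01, minDistance=5)
    if pts1 is None or pts2 is None:
        return []
    r = patch // 2
    field = []
    for x1, y1 in pts1.reshape(-1, 2):
        p1 = patch_at(f1, x1, y1, r)
        if p1 is None:
            continue
        best, best_ssd = None, np.inf
        for x2, y2 in pts2.reshape(-1, 2):
            if np.hypot(x2 - x1, y2 - y1) > v_max:   # maximum-velocity assumption
                continue
            p2 = patch_at(f2, x2, y2, r)
            if p2 is not None:
                ssd = np.sum((p1 - p2) ** 2)         # patch dissimilarity
                if ssd < best_ssd:
                    best, best_ssd = (x2, y2), ssd
        if best is not None:
            field.append(((x1, y1), (best[0] - x1, best[1] - y1)))
    return field
```

The small-acceleration and common-motion assumptions could be added as further filters on the matched vectors.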