CHAPTER 5 MOTION DETECTION AND ANALYSIS

5.1. Introduction:

Motion processing is receiving intense attention from researchers as motion studies and processing capability progress. A series of images taken at different times is used as input to the motion analysis system, and such systems serve real-time applications. The objective of motion analysis is to obtain complete data about the static and dynamic objects present in the image.

Motion analysis problems rest on a set of assumptions:
i. Information such as camera motion and the time interval between successive images makes analysis simpler.
ii. The choice of a suitable motion analysis technique is based on this information.

Problems encountered in motion analysis:
i. Motion detection: this group uses a single static camera and registers any detected motion. It is applied in security applications.
ii. Moving object detection and location: here either the camera is stationary and objects in the image move, or the camera moves and the objects are static. Motion-based segmentation techniques are used to detect the moving objects. Related problems are the detection of a moving object, the recognition of its trajectory, and the prediction of its future trajectory; object matching techniques are used to solve them.
iii. Derivation of 3D object properties from the 2D projections of a moving object acquired at different instants of time.

Motion analysis is also termed dynamic image analysis. Approaches to motion analysis fall into three categories:
i. Matching-based motion analysis.
ii. Optical flow computation.
iii. Computation of motion fields and vector fields.

Matching-based motion analysis works with two or three successive images of a sequence. The analysis is performed at a higher level; it is based on measuring the similarity between sets of salient pixels and relies on a set of consecutive images.

To analyse 3D motion, the 3D motion is represented as a 2D motion called a motion field. A velocity vector, corresponding to the motion direction, velocity, and distance from the observer at the appropriate image position, is assigned to each pixel of the motion field. The main objective of optical flow image analysis is to determine this motion field.

Optical flow computation is an advantageous approach to motion analysis; it presumes a small time interval between successive images, so that no significant change occurs between two consecutive images. Optical flow computation determines two parameters for each pixel of the image: motion direction and motion velocity. The main limitations of optical flow computation are:
i. A false motion field may be detected owing to variations in illumination.
ii. The measured optical flow vectors give only the direction and velocity of the apparent object motion; the estimation of optical flow, or of pixel correspondence, is noisy, whereas 3D motion analysis requires high accuracy of the optical flow.

To overcome these problems, approaches that do not depend on optical flow or pixel correspondence can be used: since neither is computed, the associated errors are avoided. The approach based on measuring motion fields and vector fields is one such method. It relies on images acquired at intervals short enough to guarantee only small changes due to motion. Velocity fields can be acquired even if only a few images are present in the sequence. This approach represents object-dependent analysis, in which the similarity between pixels of interest, or between regions, is investigated. The active contour model, called a snake, is a newer method for motion analysis.

If motion analysis is based on the detection of moving objects, the following assumptions about object motion are made to constrain the moving objects:
i. The moving object is scanned at time intervals of dt. The probable position of a given object pixel lies within a circle whose centre is that pixel's location in the previous frame and whose radius is V dt, where V is the maximum velocity of the moving object.
ii. The change of velocity over time dt is bounded by a constant.
iii. All pixels of an object move in the same manner.

iv. Rigid objects show stable patterns of points: each pixel of an object corresponds to exactly one pixel in the next image of the sequence and vice versa, with exceptions caused by occlusion and object rotation.

Two approaches are used for extracting 2D motion from images:
i. Optical flow.
ii. Motion correspondence.

5.2. Differential motion analysis methods:

Motion detection can be performed by simple subtraction of images obtained at different instants of time with a stationary camera and constant illumination. A difference image d(i,j) is a binary image in which 1s represent image regions with motion. Mathematically,

d(i,j) = \begin{cases} 0 & \text{if } |f_1(i,j) - f_2(i,j)| \le \varepsilon \\ 1 & \text{otherwise} \end{cases}    ...(1)

where ε is a small positive number.

The direction of motion can be computed by creating a cumulative difference image. The cumulative difference image d_cum(i,j) contains information on motion direction, displacement, distance, velocity, acceleration, etc.; slow motion and the motion of small objects can also be detected. The cumulative difference image is built from a sequence of n images, with the first image f_1 taken as the reference image. Its values indicate the accumulated difference between each image's gray level and the reference gray level:

d_{cum}(i,j) = \sum_{k=1}^{n} a_k \, |f_1(i,j) - f_k(i,j)|    ...(2)

where the a_k are weight coefficients expressing the significance of the images in the sequence of n images; more recent images are given greater weight to indicate the importance of current motion.

Motion analysis based on a sequence of difference images uses an image of the static scene containing no moving objects as the reference image. The difference image suppresses the stationary areas and detects the motion areas of the scene corresponding to the actual positions of the moving objects. The major limitation of this method is that the static reference image may be unattainable if the motion is continuous and never ends. In such cases a learning stage is used to create the reference image: moving objects are replaced by the corresponding non-moving background taken from other images showing a different phase of the motion.

The above approaches determine motion paths, of which the path of the centre of gravity is the most important. To simplify this task, object segmentation is performed on the first image of the sequence.
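
To make equations (1) and (2) concrete, here is a minimal Python sketch (not code from this chapter); the synthetic frame sequence, the threshold eps, and the linearly increasing weights a_k are assumptions chosen for illustration.

```python
import numpy as np

def difference_image(f1, f2, eps):
    """Binary difference image, eq. (1): 1 where the gray levels of the
    two frames differ by more than the threshold eps."""
    return (np.abs(f1.astype(int) - f2.astype(int)) > eps).astype(np.uint8)

def cumulative_difference_image(frames, weights):
    """Cumulative difference image, eq. (2): weighted sum of absolute
    differences between each frame f_k and the reference frame f_1."""
    ref = frames[0].astype(float)
    d_cum = np.zeros_like(ref)
    for a_k, f_k in zip(weights, frames):
        d_cum += a_k * np.abs(ref - f_k.astype(float))
    return d_cum

# Synthetic sequence: a bright square moving right one pixel per frame.
frames = [np.zeros((64, 64), dtype=np.uint8) for _ in range(5)]
for k, f in enumerate(frames):
    f[20:30, 20 + k:30 + k] = 255

d = difference_image(frames[0], frames[1], eps=10)            # motion mask
d_cum = cumulative_difference_image(frames, weights=[1, 2, 3, 4, 5])
print(d.sum(), d_cum.max())
```

Weighting later frames more heavily, as here, is one way of realizing the rule that current motion should dominate the cumulative image.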

A practical problem is the prediction of the motion trajectory when the object position in the preceding images is known. Other motion properties can also be detected from difference images, e.g., whether the object is approaching or receding, or whether one object overlaps another. Difference images alone do not carry enough information to work reliably in real time. Some problems are common to all motion detection approaches:
i. For a rectangular object moving parallel to its border line, motion analysis can detect the motion of only two sides of the rectangle.
ii. The aperture problem may cause ambiguity of motion information: when only a part of the object border is visible, complete motion detection is impossible.

The motion characteristics acquired from a difference image, which holds only information about the presence of motion, are not reliable. Motion parameters can be estimated more reliably if the intensity characteristics of regions or groups of pixels are compared in pairs of image frames. Such corresponding superpixels are produced by dividing the image into rectangular regions, whose size is derived from the camera aspect ratio; the superpixels are then matched across the compared frames using correlation or likelihood approaches.

The moving-edge detection approach overcomes several disadvantages of differential motion analysis. By combining spatial and temporal image gradients, it can detect slowly moving edges as well as weak edges that move with high speed. The moving edges are detected by a logical AND of the spatial and temporal edge images. The moving edge image d_ed(i,j) is determined as

d_{ed}(i,j) = S(i,j) \wedge D(i,j)    ...(3)

where S(i,j) is the edge magnitude and D(i,j) is the absolute difference image. A sketch of this detector is given below.
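
Below is a minimal sketch of this moving-edge detector, assuming SciPy's Sobel operator for the spatial edge magnitude S(i,j), simple frame differencing for D(i,j), and illustrative threshold values; it is a sketch of equation (3), not a reference implementation.

```python
import numpy as np
from scipy.ndimage import sobel

def moving_edges(f1, f2, s_thresh=100.0, d_thresh=10.0):
    """Moving edge image, eq. (3): logical AND of the spatial edge
    magnitude S(i,j) of the current frame (Sobel operator) and the
    thresholded absolute difference image D(i,j) of consecutive frames."""
    f1 = f1.astype(float)
    f2 = f2.astype(float)
    S = np.hypot(sobel(f1, axis=0), sobel(f1, axis=1)) > s_thresh  # spatial edges
    D = np.abs(f1 - f2) > d_thresh                                 # temporal change
    return (S & D).astype(np.uint8)
```

The logical AND keeps only pixels that are both edge points and points of temporal change, suppressing static edges as well as intensity changes in uniform regions.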

5.3. Optical flow in motion analysis:

Optical flow can be used to study several motion configurations: a moving scene with a static camera, a static scene with a moving camera, or both scene and camera in motion. Optical flow estimation detects motion properties and thus increases the accuracy of real-time image analysis. Motion is a combination of the following fundamental components, shown in Fig. 5.1:
i. Translation at constant distance from the camera (Fig. 5.1a).
ii. Translation in depth relative to the camera (Fig. 5.1b).
iii. Rotation at constant distance about the view axis (Fig. 5.1c).
iv. Rotation of a planar object perpendicular to the view axis (Fig. 5.1d).

Motion identification is based on the following properties:
i. Translation at constant distance is represented by a set of parallel motion vectors.
ii. Translation in depth produces a set of vectors sharing a common focus of expansion (FOE).
iii. Rotation at constant distance yields a set of concentric motion vectors.
iv. Rotation perpendicular to the view axis produces one or more sets of vectors originating from straight line segments.

5.3.1. Motion type recognition:

If the translation has a varying depth component, the optical flow vectors converge and their directions share a single focus of expansion (FOE). If the translation is at constant depth, the FOE lies at infinity. If an image contains several independently moving objects, the optical flow of each object's motion has its own FOE.

Fig. 5.1. Motion type recognition: (a) translation at constant distance, (b) translation in depth, (c) rotation at constant distance, (d) rotation of a planar object perpendicular to the view axis.

The parameters involved are:
i. Mutual velocity: let (x′, y′) be image coordinates, and let (x0, y0, z0) be the position of a point at time t = 0, with velocities u, v, w in the x, y, z directions respectively. Assuming unit focal distance of the camera and constant velocity, the position of the same point at time t is

(x', y') = \left( \frac{x_0 + ut}{z_0 + wt}, \; \frac{y_0 + vt}{z_0 + wt} \right)    ...(4)

ii. FOE determination: the FOE in the 2D image can be computed from

(x'_{FOE}, y'_{FOE}) = \left( \frac{u}{w}, \; \frac{v}{w} \right)    ...(5)

under the assumption that the object moves towards the camera; traced back in time, the motion originates from a point at an unbounded distance from the camera, whose image position is the FOE.
iii. Distance determination: the distance of a moving point can be determined from

\frac{D(t)}{V(t)} = \frac{z(t)}{w(t)}    ...(6)

where D(t) is the distance of the point from the FOE in the 2D image, V(t) = dD/dt is its image velocity, z(t) is its distance from the camera, and w(t) is its velocity in the z direction. If the distance z_1(t) of one moving point in the image is known, the distance z_2(t) of any other point moving with the same velocity w in the z direction can be computed as

z_2(t) = \frac{z_1(t)\, V_1(t)\, D_2(t)}{V_2(t)\, D_1(t)}    ...(7)

where z_1(t) is the known and z_2(t) the unknown distance. The real-world coordinates are related to the image coordinates, the camera position, and the velocity by

x(t) = \frac{x'(t)\, w(t)\, D(t)}{V(t)}, \quad y(t) = \frac{y'(t)\, w(t)\, D(t)}{V(t)}, \quad z(t) = \frac{w(t)\, D(t)}{V(t)}    ...(8)

5.3.2. Collision prediction:

The optical flow approach can detect potential collisions with scene objects. Camera motion determines the FOE coordinates (u/w, v/w). The direction of motion from the origin of the image coordinates is given by s = (u/w, v/w, 1), and the camera follows a straight-line path in real-world coordinates, parameterized at each time instant as

(x, y, z) = t\,s = t \left( \frac{u}{w}, \; \frac{v}{w}, \; 1 \right)    ...(9)

The observer position closest to a world point x during the camera movement is

x_{obs} = s \, \frac{s \cdot x}{s \cdot s}    ...(10)

and the smallest distance d_min between the point x and the camera during the movement is

d_{min} = \sqrt{ x \cdot x - \frac{(x \cdot s)^2}{s \cdot s} }    ...(11)

Thus a camera sweeping a circle of radius r during its movement will collide with an object if d_min < r. A sketch of this collision test is given below.
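
A short Python sketch of the collision test of equations (9)-(11) follows; the function names and the numeric example (camera translating mostly along z, clearance radius r) are assumptions made for illustration.

```python
import numpy as np

def focus_of_expansion(u, v, w):
    """FOE image coordinates, eq. (5), for camera translation (u, v, w), w != 0."""
    return np.array([u / w, v / w])

def closest_approach(x, u, v, w):
    """Closest observer position and minimum distance, eqs. (10) and (11),
    for a camera moving along the straight line (x, y, z) = t*s of eq. (9)."""
    s = np.array([u / w, v / w, 1.0])                   # direction of motion
    x = np.asarray(x, dtype=float)
    x_obs = s * np.dot(s, x) / np.dot(s, s)             # eq. (10)
    d_min = np.sqrt(np.dot(x, x) - np.dot(x, s) ** 2 / np.dot(s, s))  # eq. (11)
    return x_obs, d_min

def collides(x, u, v, w, r):
    """Collision is predicted if the closest approach falls inside radius r."""
    return closest_approach(x, u, v, w)[1] < r

# Camera moving mostly along z; an obstacle sits near the motion line.
print(focus_of_expansion(0.1, 0.0, 1.0))                 # FOE at (0.1, 0.0)
print(collides([0.5, 0.2, 5.0], 0.1, 0.0, 1.0, r=0.5))   # True: d_min = 0.2
```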

5.4. Object tracking:

The simultaneous, independent movements of many objects are analysed with more complex approaches under assumptions such as maximum velocity, small acceleration, common motion, mutual correspondence, and smoothness of motion. These assumptions make it possible to formulate the concept of path coherence, which means that the motion of an object at any point of an image sequence changes only gradually. The path coherence function φ measures the agreement between the observed path of an object and the motion constraints. A path coherence function should obey four principles:
i. The function value must be positive.
ii. The function should reflect the local absolute angular deviations of the path.
iii. The function must respond equally to positive and negative velocity changes.
iv. The function values should be normalized to the interval (0, 1).

A path can be expressed in vector form as T_i = (X_i^1, X_i^2, ..., X_i^n), where X_i^k represents the position of point i in the kth image of the sequence; in image coordinates, T_i = (x_i^1, x_i^2, ..., x_i^n), where x_i^k is the image coordinate vector associated with X_i^k.

Two functions are used in object tracking:
i. The deviation function.
ii. The path coherence function.

The overall path deviation D is computed as

D = \sum_{i=1}^{m} D_i    ...(12)

where D_i is the deviation of the entire path of object i,

D_i = \sum_{k=2}^{n-1} d_i^k    ...(13)

and d_i^k is the local deviation of the path of object i in image k,

d_i^k = \phi\!\left( \overline{x_i^{k-1} x_i^k}, \; \overline{x_i^k x_i^{k+1}} \right) = \phi\!\left( X_i^{k-1}, X_i^k, X_i^{k+1} \right)    ...(14)

where \overline{x_i^{k-1} x_i^k} denotes the motion vector from point x_i^{k-1} to point x_i^k and φ is the path coherence function.

The path coherence function φ is defined as

\phi(X_i^{k-1}, X_i^k, X_i^{k+1}) = w_1 (1 - \cos\theta) + w_2 \left( 1 - \frac{2\sqrt{s_k\, s_{k+1}}}{s_k + s_{k+1}} \right)
  = w_1 \left( 1 - \frac{\overline{x_i^{k-1} x_i^k} \cdot \overline{x_i^k x_i^{k+1}}}{\left| \overline{x_i^{k-1} x_i^k} \right| \left| \overline{x_i^k x_i^{k+1}} \right|} \right) + w_2 \left( 1 - \frac{2 \sqrt{ \left| \overline{x_i^{k-1} x_i^k} \right| \left| \overline{x_i^k x_i^{k+1}} \right| }}{\left| \overline{x_i^{k-1} x_i^k} \right| + \left| \overline{x_i^k x_i^{k+1}} \right|} \right)    ...(15)

where w_1 and w_2 are weights expressing the relative importance of direction coherence and velocity coherence.

When many objects with different motions are tracked, occlusion may occur: objects may partly or completely disappear from some image frames, causing errors in the object trajectories. If equation (12) is minimized using the given path coherence function, the same number of object points must be detected in every image of the sequence, and the detected points must represent the same objects; this is not possible for occluded objects. To overcome the occlusion problem, additional local trajectory constraints must be considered, and if necessary trajectories may be truncated or left incomplete; in particular, a maximum-velocity constraint must be incorporated. The greedy exchange algorithm presented by Sethi and Jain determines the maximum set of complete or partial trajectories while minimizing the sum of local smoothness deviations over all detected trajectories. The local smoothness deviation must stay below a preset maximum φmax, and the distance between two successive trajectory points must stay below a preset threshold dmax. Phantom points are introduced as substitutes for missing points of incomplete trajectories.
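
The deviation measures of equations (12)-(15) translate directly into code. In the sketch below, the weights w1 = w2 = 0.5 and the two toy trajectories are illustrative assumptions; a straight, constant-speed path yields zero deviation, exactly as path coherence requires.

```python
import numpy as np

def path_coherence(x_prev, x_curr, x_next, w1=0.5, w2=0.5):
    """Path coherence phi, eq. (15): the first term penalizes changes of
    direction (1 - cos theta), the second penalizes changes of speed."""
    v1 = np.asarray(x_curr, float) - np.asarray(x_prev, float)  # vector k-1 -> k
    v2 = np.asarray(x_next, float) - np.asarray(x_curr, float)  # vector k -> k+1
    s1, s2 = np.linalg.norm(v1), np.linalg.norm(v2)
    direction_term = 1.0 - np.dot(v1, v2) / (s1 * s2)
    speed_term = 1.0 - 2.0 * np.sqrt(s1 * s2) / (s1 + s2)
    return w1 * direction_term + w2 * speed_term

def trajectory_deviation(points):
    """Deviation D_i of one trajectory, eq. (13): sum of local deviations."""
    return sum(path_coherence(points[k - 1], points[k], points[k + 1])
               for k in range(1, len(points) - 1))

def total_deviation(trajectories):
    """Overall deviation D over all trajectories, eq. (12)."""
    return sum(trajectory_deviation(t) for t in trajectories)

# A straight, constant-speed path has zero deviation; a kinked one does not.
straight = [(0, 0), (1, 1), (2, 2), (3, 3)]
kinked = [(0, 0), (1, 1), (2, 0), (3, 3)]
print(total_deviation([straight]), total_deviation([kinked]))
```

A full tracker in the spirit of Sethi and Jain would search over point-to-trajectory assignments to minimize this total deviation, subject to the φmax and dmax constraints described above.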