Marcel Worring Intelligent Sensory Information Systems

Marcel Worring, worring@science.uva.nl, Intelligent Sensory Information Systems, University of Amsterdam

Information and Communication Technology: archives of documentaries, film, or training material; video conferencing
Surveillance: guarding of gates, parking lot management
Autonomous vehicles: automatically driving a car on the road, Mars exploration
Visual inspection: crop monitoring, car crash analysis
Medical diagnostics: heart wall motion, cell growth

There is a wide range of different applications, hence we have to categorize them into classes of applications. Categorization dimensions: acquisition device, object class, motion class, application class.

Acquired with a camera: camera mounted on a robot, train, car, etc.; professional film equipment; (digital) consumer cameras
Digitized from videotape archives: acquired with a camera, but stored on videotape
Acquired with special medical equipment: MRI, echo
Camera mounted on a microscope: time-lapse microscopy

What is moving? Only the objects, only the camera, or both?
In which space is the movement? 2D or 3D space?
Is there a projection involved? Are the individual frames in the image sequence a 2D projection of a 3D scene, or is a full 2D or 3D space available?

Rigid objects: objects with fixed shapes that do not change when they are moving. Examples: cars, industrial products.

Non-rigid objects: objects which can change their shape over time. Examples: biological cells, organs, rubber objects.

Articulated objects: objects composed of a set of connected rigid sub-objects with limited freedom to change shape. Examples: humans, robots.

Fluid objects: objects which do not have a clearly defined boundary. Examples: moving clouds, explosions, wind in the trees.

Detection of motion presence: detect whether somewhere in the image there is an object moving
Locating motion: detect where in the image this object is moving
Measurement of velocity: measure in which direction and with what speed the object is moving
Tracking: follow the object through the motion
Interpretation of motion: give a semantic interpretation of the movement

Rigid objects, solution method: optic flow and motion segmentation
Non-rigid objects, solution method: optic flow in conjunction with snakes/deformable models
Articulated objects, solution method: optic flow in conjunction with specialized object models
Fluids, solution method: optic flow based methods
Focus: generic issues in optic flow, with an emphasis on analyzing rigid objects observed with a camera

(Figure: an object moving to a new position is observed by a pinhole camera at a fixed position; the movement in the image plane is the projection of the true motion.)

Optic flow: apparent motion of an object in the video
Optic flow field: a 2D vector field where at each position in the image an optic flow vector has been computed
3D motion vector: movement of an object in 3D space
Image flow: 2D projection of the 3D motion vector

Camera effects (ideal case). (Figure: characteristic flow fields for dollying/craning left, zoom in, pan right, tracking, no movement, fast craning in an arbitrary direction, and tilt up.)

General observation: optic flow is often not the same as image flow. Examples of differences: a rotating sphere under a constant light source yields image flow and a corresponding optic flow, whereas a static sphere with a rotating light source has no image flow but can yield the same optic flow as above; also, camera movement and object movement can be indistinguishable.

Bottom-up: estimation of local motion properties in the time sequence, followed by integration of the measures into complete flow fields for the individual objects or for the whole image.
Top-down: first analyze the main motion of the whole frame, i.e. the apparent motion of the static background, using a parameterized motion model, in effect finding camera motion and/or zooms; from there, find objects with their own motion model as the parts of the sequence not conforming to the motion model found above.

Inherent problem of local motion estimation: only normal motion, i.e. motion perpendicular to the edge, can be estimated; at corners there is no problem. Each method for local optic flow estimation has to deal in some way with the aperture problem. (Figure: true image flow versus the flow measured due to local estimation.)

Feature measurement: extraction of basic measurements such as gradients, spatiotemporal derivatives, local correlation surfaces, etc.
Feature integration: integration of the above measures to produce a complete 2D flow field; involves assumptions about the smoothness of the underlying flow field, attempting to solve the aperture problem; often uses a confidence measure to focus propagation on the most reliable measures (i.e. propagating from the corners).
Selection: based on the confidence criterion, only relevant measurements are retained for further processing.

Based on the following equation, with Ω some neighborhood:

v(x) = argmin_v Σ_{w ∈ Ω} ( I_1(x + w) - I_2(x + w + v) )²

Try several motion vectors and select the one for which the difference in intensity values in the second image is minimal; this can also be applied to derived images such as the Laplacian. This yields an estimate of the motion vector at each position.

The SSD (sum of squared differences) surface: the 2D surface that you get when you plot the difference found for each candidate displacement. (Figure: image 1, image 2, and the resulting SSD surface.)
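A minimal NumPy sketch of this matching scheme (function names, window size, and search range are illustrative choices, not from the lecture): it evaluates the SSD for every candidate displacement around a pixel, which is exactly the SSD surface described above, and picks the displacement with the smallest value.

```python
import numpy as np

def ssd_surface(img1, img2, x, y, half_window=3, max_disp=5):
    """SSD surface for pixel (x, y) of img1 over candidate displacements v.

    Evaluates sum_{w in Omega} (I1(x+w) - I2(x+w+v))^2 for every integer
    displacement v with |v_x|, |v_y| <= max_disp; Omega is a square window
    of half-width `half_window`. Candidates whose window falls outside
    img2 are left at +inf.
    """
    h, w = img2.shape
    patch1 = img1[y - half_window:y + half_window + 1,
                  x - half_window:x + half_window + 1].astype(float)
    surface = np.full((2 * max_disp + 1, 2 * max_disp + 1), np.inf)
    for dy in range(-max_disp, max_disp + 1):
        for dx in range(-max_disp, max_disp + 1):
            ys, xs = y + dy, x + dx
            if (half_window <= ys < h - half_window and
                    half_window <= xs < w - half_window):
                patch2 = img2[ys - half_window:ys + half_window + 1,
                              xs - half_window:xs + half_window + 1].astype(float)
                surface[dy + max_disp, dx + max_disp] = np.sum((patch1 - patch2) ** 2)
    return surface

def match_vector(img1, img2, x, y, **kwargs):
    """Motion vector (dx, dy) that minimises the SSD surface at (x, y)."""
    surface = ssd_surface(img1, img2, x, y, **kwargs)
    iy, ix = np.unravel_index(np.argmin(surface), surface.shape)
    max_disp = (surface.shape[0] - 1) // 2
    return ix - max_disp, iy - max_disp
```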

Confidence is derived from the shape of the SSD surface around its minimum:
flat: low confidence (uniform region)
valley: medium confidence (edge)
peaked: high confidence (corner)
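One rough way to turn the shape of the surface into such a label, assuming the surface values around the minimum are finite; the finite-difference curvature estimate and the thresholds are illustrative choices, not part of the lecture.

```python
import numpy as np

def ssd_confidence(surface, flat_thresh=1.0, ratio_thresh=0.1):
    """Classify the SSD surface around its minimum as flat, valley, or peaked.

    Approximates the local curvature (Hessian) at the minimum with finite
    differences and inspects its eigenvalues: both small -> flat (uniform
    region), one large -> valley (edge), both large -> peaked (corner).
    """
    iy, ix = np.unravel_index(np.argmin(surface), surface.shape)
    iy = int(np.clip(iy, 1, surface.shape[0] - 2))
    ix = int(np.clip(ix, 1, surface.shape[1] - 2))
    s = surface
    h_xx = s[iy, ix - 1] - 2 * s[iy, ix] + s[iy, ix + 1]
    h_yy = s[iy - 1, ix] - 2 * s[iy, ix] + s[iy + 1, ix]
    h_xy = (s[iy + 1, ix + 1] - s[iy + 1, ix - 1]
            - s[iy - 1, ix + 1] + s[iy - 1, ix - 1]) / 4.0
    eigs = np.linalg.eigvalsh(np.array([[h_xx, h_xy], [h_xy, h_yy]]))  # ascending
    if eigs[1] < flat_thresh:
        return "low confidence (uniform region)"
    if eigs[0] < ratio_thresh * eigs[1]:
        return "medium confidence (edge)"
    return "high confidence (corner)"
```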

Feature measurement assumption: pure translation of the image with conservation of intensity I, with velocity vector v, at pixel position x. This yields I(x, t) = I(x - v t, 0). Taking the derivative with respect to t on both sides yields the gradient constraint equation:

∇I(x, t) · v + I_t(x, t) = 0
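To make the constraint concrete, here is a small sketch that estimates the spatial and temporal derivatives from two frames and computes the normal flow, the only component the constraint determines at a single pixel. The derivative filters and function names are assumptions for illustration.

```python
import numpy as np

def spatiotemporal_derivatives(frame0, frame1):
    """Estimate I_x, I_y, I_t for the gradient constraint equation.

    Spatial gradients via central differences on frame0, temporal derivative
    as the plain frame difference (time step t = 1 between frames).
    """
    I = frame0.astype(float)
    I_y, I_x = np.gradient(I)          # np.gradient returns d/drow, d/dcol
    I_t = frame1.astype(float) - I
    return I_x, I_y, I_t

def normal_flow(I_x, I_y, I_t, eps=1e-6):
    """Normal flow: the motion component along the image gradient,
    v_n = -I_t * grad(I) / |grad(I)|^2, the only part fixed by the constraint."""
    grad_mag2 = I_x ** 2 + I_y ** 2 + eps
    return -I_t * I_x / grad_mag2, -I_t * I_y / grad_mag2
```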

Feature integration: the gradient constraint equation yields one equation in the two unknown elements of the velocity vector v (due to the aperture problem). The solution is to gather the estimates in a larger neighborhood Ω, assume constant velocity within Ω, and minimize the following expression in the least-squares sense, with W a weighting factor gradually decreasing from the center:

Σ_{x ∈ Ω} W²(x) ( ∇I(x, t) · v + I_t(x, t) )²

Feature selection: based on the eigenvectors of AᵀW²A, where A contains the intensity gradients at every position in the neighborhood, indicating whether:
the gradients align: we are at an edge and can hence compute normal flow only
the gradients do not align: we are at a corner and the true optic flow can be computed
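Putting the three steps together for one neighborhood Ω, a sketch under stated assumptions (the eigenvalue thresholds and names are illustrative, not the lecture's): it solves the weighted least-squares problem above and uses the eigenvalues of AᵀW²A to decide whether the full flow, only normal flow, or nothing can be recovered.

```python
import numpy as np

def local_flow(I_x, I_y, I_t, weights=None, eig_ratio=1e-2):
    """Weighted least-squares flow for one neighbourhood Omega.

    I_x, I_y, I_t are flat arrays with the derivatives at every pixel of
    Omega; `weights` is the factor W of the slides (default: all ones).
    Returns (v, label), where v is None if the full flow cannot be recovered.
    """
    if weights is None:
        weights = np.ones_like(I_x, dtype=float)
    A = np.stack([I_x, I_y], axis=1)       # one row of gradients per pixel
    b = -I_t
    W2 = np.diag(weights.ravel() ** 2)
    M = A.T @ W2 @ A                       # the matrix A^T W^2 A of the slides
    eigvals = np.linalg.eigvalsh(M)        # ascending: eigvals[0] <= eigvals[1]
    if eigvals[1] < 1e-12:
        return None, "uniform region: no flow recoverable"
    if eigvals[0] / eigvals[1] < eig_ratio:
        return None, "edge: gradients align, only normal flow recoverable"
    v = np.linalg.solve(M, A.T @ W2 @ b)   # full flow at a corner-like point
    return v, "corner: full flow"
```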

Feature integration: assume smoothness of the entire flow field; make the local assumption that velocity vectors change very slowly; propagate reliable information from coarse to more detailed levels.
Selection: based directly on the confidence factors.
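One classic way to realize this global smoothness assumption is the Horn-Schunck iteration; the sketch below is a plain single-scale version (the lecture does not prescribe this exact scheme, and a coarse-to-fine pyramid would wrap around it).

```python
import numpy as np

def horn_schunck(I_x, I_y, I_t, alpha=1.0, iterations=100):
    """Horn-Schunck style global flow estimation (single scale).

    Alternates between averaging the current flow field (the smoothness
    assumption) and correcting it towards the gradient constraint.
    """
    u = np.zeros_like(I_x, dtype=float)
    v = np.zeros_like(I_x, dtype=float)
    denom = alpha ** 2 + I_x ** 2 + I_y ** 2

    def neighbour_average(f):
        # simple 4-neighbour average (wrap-around borders for brevity)
        return (np.roll(f, 1, axis=0) + np.roll(f, -1, axis=0) +
                np.roll(f, 1, axis=1) + np.roll(f, -1, axis=1)) / 4.0

    for _ in range(iterations):
        u_bar, v_bar = neighbour_average(u), neighbour_average(v)
        correction = (I_x * u_bar + I_y * v_bar + I_t) / denom
        u = u_bar - I_x * correction
        v = v_bar - I_y * correction
    return u, v
```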

Image alignment: find the correspondence between two subsequent frames in the time sequence; this requires finding the relative position of the two frames (in 3D world space) with respect to each other and with respect to the camera.
Assumptions: camera motion is the dominant motion; there is no scene activity apart from the camera motion, or the activity covers only a small part of the image field; camera motion can be estimated from some dominant plane in the 3D world, in general the background.

(Figure: an object on a plane in the world is observed by a pinhole camera before and after the camera changes position; the movement in the image plane is described by a parameterized transformation.)

Any 2D motion field of a plane moving in 3D space can be written in the form

u(x) = p1 x + p2 y + p5 + p7 x² + p8 xy
v(x) = p3 x + p4 y + p6 + p7 xy + p8 y²

where x = (x, y) and u, v are the components of the motion vector v.
Characteristics: this describes the change of position of each point in the image which is part of the planar surface; each instantiation of the 8 parameters corresponds to a different flow field.

Examples, with the origin at the center of gravity of the object:
u(x) = 1, v(x) = 2, i.e. p5 and p6 set, all others 0
u(x) = 0, v(x) = 0.1 y, i.e. only p4 non-zero
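A direct transcription of the 8-parameter model into code, with the two examples above as a check (function and variable names are illustrative):

```python
import numpy as np

def planar_flow(p, x, y):
    """Evaluate the 8-parameter planar motion model at positions (x, y),
    taken relative to the object's centre of gravity."""
    p1, p2, p3, p4, p5, p6, p7, p8 = p
    u = p1 * x + p2 * y + p5 + p7 * x ** 2 + p8 * x * y
    v = p3 * x + p4 * y + p6 + p7 * x * y + p8 * y ** 2
    return u, v

x, y = np.meshgrid(np.arange(-2.0, 3.0), np.arange(-2.0, 3.0))
# First example: only p5 = 1, p6 = 2 -> pure translation, u == 1 and v == 2 everywhere.
u, v = planar_flow((0, 0, 0, 0, 1, 2, 0, 0), x, y)
# Second example: only p4 = 0.1 -> u == 0, v == 0.1*y (vertical expansion about the centre).
u2, v2 = planar_flow((0, 0, 0, 0.1, 0, 0, 0, 0), x, y)
```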

Align the two image frames based on the motion model; this corresponds to a virtual camera movement. If the two images are properly aligned, i.e. p1-p8 are correct, then there should be no motion in the common part. (Figure: frames I_i and I_{i+1} brought into a common coordinate system by the parameterized transform p1-p8.)

Goal: find the best values of p1-p8. (Figure: I_i and I_{i+1} related through the parameterized transform p1-p8 and a matching function (correlation).) Choose an initial transform, then try to maximize the matching function by choosing different values of p1-p8; use a spatial pyramid: first a coarse alignment, then more precise ones.
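A sketch of the pyramid strategy, restricted for brevity to a pure translation (only p5, p6) and to SSD minimisation instead of correlation maximisation; the pyramid depth, search range, and function names are assumptions for illustration.

```python
import numpy as np

def downsample(img):
    """One pyramid level: halve the resolution by 2x2 block averaging."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def align_translation(I0, I1, levels=3, search=4):
    """Coarse-to-fine estimate of the (dy, dx) translation of I1 relative to I0."""
    pyr0, pyr1 = [I0.astype(float)], [I1.astype(float)]
    for _ in range(levels - 1):
        pyr0.append(downsample(pyr0[-1]))
        pyr1.append(downsample(pyr1[-1]))
    shift = np.zeros(2, dtype=int)                     # (dy, dx)
    for level in reversed(range(levels)):              # coarse -> fine
        if level < levels - 1:
            shift = shift * 2                          # propagate to the finer grid
        best, best_err = shift, np.inf
        A, B = pyr0[level], pyr1[level]
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = shift + np.array([dy, dx])
                err = np.mean((np.roll(B, -cand, axis=(0, 1)) - A) ** 2)
                if err < best_err:
                    best, best_err = cand, err
        shift = best
    return shift
```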

Decomposition: segmentation of the image sequence into objects, using the observation that parts of an object move in a coherent way.
Indexing: we will look at methods that parameterize the motion of an object so that they form a proper basis for further processing.
Methods: top-down or bottom-up.

Compute a global motion model.
Put pixels that fit the model well (small residuals) into one layer.
Collect points with high residuals.
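A sketch of these steps under stated assumptions (nearest-neighbour warping, an arbitrary residual threshold, and illustrative names): pixels whose intensity is well predicted by the global motion model go into the dominant layer, and the rest are collected as high-residual candidates for further layers.

```python
import numpy as np

def split_by_residual(I0, I1, flow_u, flow_v, threshold=10.0):
    """Split pixels into a dominant-motion layer and a high-residual set.

    flow_u, flow_v hold the global motion model (e.g. the planar model)
    evaluated at every pixel of I0; the residual is the absolute intensity
    difference between I0 and I1 sampled at the model-predicted positions.
    """
    h, w = I0.shape
    y, x = np.mgrid[0:h, 0:w]
    xs = np.clip(np.round(x + flow_u).astype(int), 0, w - 1)   # nearest-neighbour warp
    ys = np.clip(np.round(y + flow_v).astype(int), 0, h - 1)
    residual = np.abs(I1[ys, xs].astype(float) - I0.astype(float))
    dominant_layer = residual <= threshold     # pixels that fit the global model
    outliers = ~dominant_layer                 # candidates for their own motion model
    return dominant_layer, outliers, residual
```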

Combine pixels with the same color; merge groups of pixels if they can be described using the same motion model.

Causes of residuals: change of content (object movement, illumination change); inaccuracies (misalignments, interpolation errors during warping, noise).

Optic flow: optic flow forms the basis for almost all applications in image sequence analysis.
From application to method selection: there is a lot of variety in image sequence analysis problems and applications; categorizing your problem according to the above criteria is an important step in selecting an appropriate method.

B.K.P. Horn: Robot Vision, MIT Press, 1986
J.L. Barron et al.: Performance of optical flow techniques, International Journal of Computer Vision 12, 1994
H.T. Nguyen, M. Worring and A. Dev: Detection of moving objects in video using a robust similarity measure, IEEE Transactions on Image Processing 9(1), 2000