Motion Estimation


Members: D 朱威達, R 林聖凱, R 謝俊瑋

Motion Estimation

There are three main types (or applications) of motion estimation:

Parametric motion (image alignment)
The main idea of parametric motion is to combine two or more pictures of the same place, each showing a different small section (such as Fig. 1 and Fig. 2), into one whole picture of the place (such as Fig. 3). In order to combine these photos, we have to find the motion of the camera and bring the photos to the same viewpoint; then we can combine them seamlessly.

Fig. 1   Fig. 2   Fig. 3

Tracking
Tracking follows the motion of specified points from one picture to another. Usually these points have special characteristics that allow us to estimate their motions with higher accuracy.

Optical flow
Optical flow determines the motion of every point from one photo to another, and these motions must be consistent. Therefore their accuracy may be worse than tracking.

For motion estimation, there exist three assumptions that can help us solve this problem:

Brightness consistency
Image measurements (e.g. brightness) in a small region remain the same although their

location may change.

2005/03/23 Motion Estimation by Chu, Lin, and Hsien

Spatial coherence
Neighboring points in the scene typically belong to the same surface and hence typically have similar motions. Since they also project to nearby points in the image, we expect spatial coherence in image flow.

Temporal persistence
The image motion of a surface patch changes gradually over time.

Image registration
Goal: register a template image J(x) and an input image I(x), where x = (x, y)^T. For the three problems above, I and J are defined as follows:
- Image alignment: I(x) and J(x) are two images.
- Tracking: J(x) is the image at time t, and I(x) is a small patch around the point p in the image at time t+1.
- Optical flow: J(x) and I(x) are the images at times t and t+1.

Simple approach
We can solve this problem simply by minimizing the brightness difference, that is, by using the difference of brightness as our measurement.

Simple SSD algorithm:

    for each offset (u, v):
        compute E(u, v)
    choose the (u, v) which minimizes E(u, v)

    E(u, v) = Σ_{x,y} ( I(x+u, y+v) − J(x,y) )²

Problems:
- Not efficient: we have to compute E pixel by pixel and repeat for many offsets.
- No sub-pixel accuracy.

Lucas-Kanade algorithm
This algorithm was proposed in B.D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision," Proceedings of the 1981 DARPA Image Understanding Workshop, 1981.

First we consider Newton's method for finding a root of f(x) = 0. By Taylor expansion, the

function f(x) can be approximated by

    f(x₀ + ε) ≈ f(x₀) + f'(x₀) ε + (1/2) f''(x₀) ε²

So in each iteration we set

    ε_n = − f(x_n) / f'(x_n),   x_{n+1} = x_n + ε_n = x_n − f(x_n) / f'(x_n)

to approximate the root of f(x) = 0. In the two-dimensional case we want to minimize

    E(u, v) = Σ_{x,y} ( I(x+u, y+v) − J(x,y) )²

Using the same technique as Newton's method to find the roots of the derivatives,

    0 = ∂E/∂u = Σ_{x,y} 2 I_x ( I(x,y) − J(x,y) + u I_x + v I_y )
    0 = ∂E/∂v = Σ_{x,y} 2 I_y ( I(x,y) − J(x,y) + u I_x + v I_y )

we obtain the Lucas-Kanade algorithm.

For a parametric motion model, we need to transfer E(u,v) = Σ_{x,y} ( I(x+u, y+v) − J(x,y) )² into

    E(p) = Σ_x ( I(W(x;p)) − J(x) )²

And finally the algorithm becomes:

    iterate:
        warp I with W(x;p)
        compute the error image J(x) − I(W(x;p))
        compute the gradient image ∇I
        evaluate the Jacobian ∂W/∂p at (x;p)
        compute the steepest-descent images ∇I ∂W/∂p
        compute the Hessian H = Σ_x [∇I ∂W/∂p]^T [∇I ∂W/∂p]
        solve Δp = H⁻¹ Σ_x [∇I ∂W/∂p]^T [ J(x) − I(W(x;p)) ]
        update p by p ← p + Δp
    until convergence

Tracking

Goal of tracking
Given two images, we would like to know where the corresponding point of image 1 is in image 2.
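The iterate/warp/solve loop of the Lucas-Kanade algorithm above can be sketched for the simplest warp, a pure translation W(x;p) = x + (u, v), where the Jacobian ∂W/∂p is the identity and the Hessian reduces to a 2×2 matrix. This is a minimal NumPy sketch; the function names and the Gaussian test image are my own, not from the paper.

```python
import numpy as np

def bilinear_shift(img, u, v):
    """Sample img at (x + u, y + v) with bilinear interpolation, clamping at edges."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs = np.clip(xs + u, 0, w - 1.0)
    ys = np.clip(ys + v, 0, h - 1.0)
    x0 = np.floor(xs).astype(int); y0 = np.floor(ys).astype(int)
    x1 = np.minimum(x0 + 1, w - 1); y1 = np.minimum(y0 + 1, h - 1)
    fx = xs - x0; fy = ys - y0
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x1]
            + (1 - fx) * fy * img[y1, x0] + fx * fy * img[y1, x1])

def lucas_kanade_translation(J, I, n_iters=50):
    """Iterative Lucas-Kanade for W(x; p) = x + (u, v):
    warp, error image, gradients, 2x2 Hessian, solve, update."""
    Iy, Ix = np.gradient(I.astype(float))
    u = v = 0.0
    for _ in range(n_iters):
        Iw = bilinear_shift(I, u, v)           # warp I with W(x; p)
        Gx = bilinear_shift(Ix, u, v)          # warped gradient images
        Gy = bilinear_shift(Iy, u, v)
        err = J - Iw                           # error image J(x) - I(W(x; p))
        H = np.array([[np.sum(Gx * Gx), np.sum(Gx * Gy)],
                      [np.sum(Gx * Gy), np.sum(Gy * Gy)]])   # Hessian
        b = np.array([np.sum(Gx * err), np.sum(Gy * err)])
        du, dv = np.linalg.solve(H, b)         # solve for the update Δp
        u, v = u + du, v + dv                  # update p by p + Δp
        if du * du + dv * dv < 1e-10:
            break
    return u, v

# A Gaussian blob translated by a known sub-pixel offset: the loop should
# recover approximately (u, v) = (0.7, -0.4).
ys, xs = np.mgrid[0:32, 0:32].astype(float)
blob = lambda cx, cy: np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / 20.0)
J, I = blob(16.0, 16.0), blob(16.7, 15.6)
u, v = lucas_kanade_translation(J, I)
print(u, v)   # close to 0.7 and -0.4
```

Note the sub-pixel accuracy, which the exhaustive SSD search lacks; the price is that the linearization only holds for small displacements, which is why practical implementations run this inside a coarse-to-fine pyramid.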

Brightness constancy
The basic assumption behind SSD minimization is brightness constancy:

    I(x+u, y+v, t+1) − I(x, y, t) = 0

At a point a small distance away, and a small time later, the intensity is

    I(x+u, y+v, t+1) ≈ I(x,y,t) + u I_x(x,y,t) + v I_y(x,y,t) + I_t(x,y,t)

so brightness constancy gives

    u I_x(x,y,t) + v I_y(x,y,t) + I_t(x,y,t) = 0,   i.e.   I_x u + I_y v + I_t = 0

The derived equation is also called the optical flow constraint equation. If only one image pixel is used (one equation in two unknowns), the result is a line in (u, v) space.

Normal flow can be derived as follows. Writing the velocity vector as v = (u, v)^T, the constraint says ∇I · v = −I_t. The normal speed is the projection of the velocity onto the normal direction n̂ = ∇I / |∇I|:

    s = v · n̂ = −I_t / |∇I|

and the normal velocity is

    v_n = s n̂ = −I_t ∇I / |∇I|²

In order to get an accurate result, multiple constraints are combined to get an estimate of the velocity.
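The "multiple constraints" idea above amounts to stacking one constraint equation per pixel of a window and solving the overdetermined system in the least-squares sense. A small sketch, with synthetic gradients standing in for a real image (all names illustrative):

```python
import numpy as np

def flow_from_constraints(Ix, Iy, It):
    """Stack the per-pixel constraints Ix*u + Iy*v + It = 0 over a window and
    solve them in the least-squares sense.  Also returns the smaller
    eigenvalue of A^T A, which indicates how well-conditioned the estimate is
    (it is near zero on an edge or in a flat region)."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)
    lam_min = np.linalg.eigvalsh(A.T @ A)[0]   # eigenvalues in ascending order
    return uv, lam_min

def normal_flow(Ix, Iy, It):
    """Per-pixel normal flow  v_n = -It * grad(I) / |grad(I)|^2 ; only the
    component along the gradient is observable from a single constraint."""
    g2 = Ix ** 2 + Iy ** 2
    return -It * Ix / g2, -It * Iy / g2

# Constraints synthesized from a known motion (u, v) = (1.5, -0.5):
rng = np.random.default_rng(1)
Ix = rng.standard_normal((5, 5))
Iy = rng.standard_normal((5, 5))
It = -(1.5 * Ix - 0.5 * Iy)
uv, lam_min = flow_from_constraints(Ix, Iy, It)
print(round(float(uv[0]), 6), round(float(uv[1]), 6), lam_min > 0)  # -> 1.5 -0.5 True

# One pixel with a purely horizontal gradient: only the x component is seen.
print(normal_flow(2.0, 0.0, -3.0))  # -> (1.5, 0.0)
```

The random-gradient window is the favorable case; when all gradients in the window point the same way (the aperture problem), only the normal flow is recoverable, which is what lam_min being near zero signals.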

Area-based method
Because of the disadvantages of the aforementioned methods, an area-based method is used. Under the assumption of spatial smoothness, we can use this technique to find the best match. The method is similar to using multiple constraints, and the best match can be found easily. However, the size of the window and the search range are important issues: a larger window reduces ambiguity, but easily violates the spatial smoothness assumption. Besides, where to search is also an important factor.
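The exhaustive search of the Simple SSD algorithm above, which is also the core of the area-based method, can be sketched as follows. The function name and the random test data are illustrative only.

```python
import numpy as np

def ssd_search(J, S, max_offset):
    """Exhaustive search for the offset (u, v) minimizing
    E(u, v) = sum_{x,y} (I(x+u, y+v) - J(x, y))^2.
    J is the template; S is the search window, padded by max_offset pixels on
    every side so that S[m:m+h, m:m+w] is the zero-offset position."""
    h, w = J.shape
    m = max_offset
    best_e, best_uv = np.inf, (0, 0)
    for u in range(-m, m + 1):
        for v in range(-m, m + 1):
            patch = S[m + v : m + v + h, m + u : m + u + w]
            e = np.sum((patch - J) ** 2)      # E(u, v)
            if e < best_e:
                best_e, best_uv = e, (u, v)
    return best_uv

# The template J is the search-window content displaced by (u, v) = (3, -2),
# so the search should recover exactly that offset.
rng = np.random.default_rng(0)
img = rng.random((40, 40))
S = img[5:35, 5:35]        # search window; zero offset corresponds to img[10:30, 10:30]
J = img[8:28, 13:33]       # the same 20x20 patch displaced by (3, -2)
print(ssd_search(J, S, 5))  # -> (3, -2)
```

This illustrates both problems listed earlier: the cost is O(window area × search area) per match, and the recovered offset is integer-valued, with no sub-pixel accuracy.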

In the above graph, the middle image reveals insufficient information about the original image, making it hard to find the best match. Meanwhile, the rightmost image is good enough to find the best match.

In the remainder of this section the math equations are derived. Assuming spatial smoothness,

    E(u, v) = Σ_{x,y} ( I_x u + I_y v + I_t )²

Setting ∂E/∂u = ∂E/∂v = 0 gives a 2×2 linear system whose matrix

    [ Σ I_x I_x   Σ I_x I_y ]
    [ Σ I_x I_y   Σ I_y I_y ]

must be invertible. The eigenvalues of this matrix tell us about the local image structure and how well we can estimate the flow in both directions.

KLT tracking
KLT tracking selects features by min(λ1, λ2) > λ, and the features are monitored by measuring dissimilarity. The details of the KLT algorithm are not covered in the class. KLT is quite similar to SIFT, except that motion is considered. However, in practical use SIFT can get better results than KLT even though SIFT does not take motion into consideration. There are two main reasons:
1. KLT accumulates errors.

2. KLT does not consider affine transformations.

Optical Flow

Multiple motions violate many assumptions
The single-motion assumption underlies the common correlation- and gradient-based approaches and is violated when a region contains transparency, specular reflections, shadows, or fragmented occlusion. The spatial coherence constraint suffers from the same problem: it assumes that the flow within a neighborhood changes gradually, since it is caused by a single motion.

Least-squares estimation is not robust
If two motions are present in a neighborhood, two sets of constraints will be consistent with the different motions. When estimating one of the motions, the constraints from the other motion will appear as gross errors, which we refer to as outliers. Unfortunately, least-squares estimation is known to lack robustness in the presence of outliers. As shown in Figure 1, least-squares estimation performs badly when outliers exist. Figure 2 further shows examples where a least-squares fit compromises between two different motions.

(a) (b)
Figure 1. (a) Least-squares line fit to coherent data. (b) Least-squares line fit to data with some outliers.

(a) (b)
Figure 2. Least-squares estimation in the presence of multiple motions.

Robust statistical methods
The main goals of robust statistics are to recover the structure that best fits the majority of the data while identifying and rejecting outliers or deviating substructures. The problem with the least-squares solution is that outlying measurements are assigned a high weight by the quadratic function, as shown in Figure 3. The influence of a data point, which is proportional to the derivative of the quadratic function, increases linearly and without bound. Therefore, to reduce the influence of outliers, a weighting function like the truncated quadratic or one of its variations (Figure 4) can be used.

Figure 3. Weight and influence functions in least-squares estimation.
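The truncated-quadratic idea can be sketched on the line-fitting example of Figure 1 using iteratively reweighted least squares: points whose residual exceeds a threshold get zero weight and drop out of the fit. The threshold sigma and the synthetic data below are made up for illustration.

```python
import numpy as np

def fit_line(x, y, weights):
    """Weighted least-squares line fit y ~ m*x + b (binary weights here)."""
    A = np.stack([x, np.ones_like(x)], axis=1)
    m, b = np.linalg.lstsq(A * weights[:, None], y * weights, rcond=None)[0]
    return m, b

def robust_fit_line(x, y, sigma=10.0, n_iters=10):
    """Truncated-quadratic line fit via iteratively reweighted least squares:
    points with |residual| > sigma get zero weight (rejected as outliers),
    the rest keep the usual quadratic weight."""
    w = np.ones_like(x)
    for _ in range(n_iters):
        m, b = fit_line(x, y, w)
        r = y - (m * x + b)
        w = (np.abs(r) <= sigma).astype(float)   # truncated quadratic weight
    return m, b

# Exact line y = 2x + 1 with four gross outliers: plain least squares is
# pulled off, while the truncated quadratic recovers the true line.
x = np.arange(20.0)
y = 2.0 * x + 1.0
y[[3, 8, 13, 18]] += 30.0
m_ls, b_ls = fit_line(x, y, np.ones_like(x))
m_r, b_r = robust_fit_line(x, y)
print(round(m_ls, 2), round(b_ls, 2))   # biased fit, roughly 2.18 6.29
print(round(m_r, 2), round(b_r, 2))     # -> 2.0 1.0
```

The zero-weight region is exactly the flat tail of the influence function in Figure 4: beyond sigma, a point's influence on the fit drops to zero instead of growing without bound.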

Figure 4. An example of a weighting function that reduces the influence of outliers.

Sample result of the robust statistical method
In the sequence of Figure 5, the camera is stationary and a person is moving behind a plant, resulting in fragmented occlusion. The robust estimation framework allows the dominant motion (of the plant and the background) to be accurately estimated while ignoring the motion of the person. Figures 6 and 7 show sample results of detecting the major and secondary motions in this image sequence.

Figure 5. Sample image sequence.

Figure 6. Robustly determining the dominant motion. (a) Violations of the single-motion assumption shown as dark regions. (b) Constraint lines corresponding to the dominant motion.

Figure 7. Estimating multiple motions. (a) White regions correspond to a second consistent motion (the person and his shadow). (b) Constraint lines corresponding to the secondary motion.

Regularization and dense optical flow
Assumption 1: Neighboring points in the scene typically belong to the same surface and hence typically have similar motions.
Assumption 2: Since they also project to nearby points in the image, we expect spatial coherence in image flow.

Regularization
First, we formulate the regularization problem. Given a 1-D signal, as shown in Figure 8(a), we want to find a smoothed function that best fits this signal (Figure 8(b)), considering both data conservation and spatial smoothness. The regularization term is written as the formula in Figure 8(c), where λ controls the relative importance of the data conservation and spatial coherence terms. Further, we replace the least-squares form with a robust function ρ to perform robust regularization (Figure 8(d)). That is how we deal robustly with the motion boundary problem.
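The 1-D formulation of Figure 8 can be sketched directly: minimize a data-conservation term plus λ times a smoothness term by gradient descent. In this sketch (parameter values are my own, and for simplicity the robust ρ is applied only to the smoothness term, which is the part that preserves a discontinuity), a quadratic smoothness term blurs a step while a truncated quadratic keeps it sharp.

```python
import numpy as np

def regularize_1d(d, lam=4.0, sigma=None, n_iters=4000, step=0.02):
    """Minimize  sum_i (u_i - d_i)^2 + lam * sum_i rho(u_{i+1} - u_i)
    over a 1-D signal d by gradient descent.  With sigma=None, rho is
    quadratic (the least-squares form of Figure 8(c)); otherwise rho is a
    truncated quadratic with threshold sigma, so jumps larger than sigma
    stop being penalized and the discontinuity survives (Figure 8(d))."""
    def psi(r):                       # derivative of the smoothness rho
        if sigma is None:
            return 2.0 * r            # quadratic
        return np.where(np.abs(r) <= sigma, 2.0 * r, 0.0)  # truncated quadratic
    u = d.astype(float).copy()
    for _ in range(n_iters):
        g = 2.0 * (u - d)             # data conservation term
        s = psi(u[1:] - u[:-1])       # spatial coherence term
        g[:-1] -= lam * s
        g[1:] += lam * s
        u -= step * g
    return u

# A step signal with mild noise: the quadratic smoother blurs the step
# across several samples, the robust one keeps it sharp.
rng = np.random.default_rng(2)
d = np.concatenate([np.zeros(20), np.ones(20)]) + rng.normal(0, 0.05, 40)
u_quad = regularize_1d(d, lam=4.0)
u_rob = regularize_1d(d, lam=4.0, sigma=0.5)
print(u_quad[20] - u_quad[19], u_rob[20] - u_rob[19])  # robust jump stays near 1
```

This is exactly the oversmoothing-versus-discontinuity behavior the motion-boundary discussion below describes, reduced to one dimension.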

(a) (b) (c) (d)
Figure 8. Formulation of regularization.

Problems posed by motion boundaries
Figure 9 illustrates the situation that occurs at a motion boundary. With the least-squares formulation, the local flow vector u_{i,j} is forced to be close to the average of its neighbors. When a motion discontinuity is present, this results in smoothing across the boundary, which reduces the accuracy of the flow field and obscures important structural information about the presence of an object boundary.

Figure 9. Smoothing across a flow discontinuity.

An example of oversmoothing
Figure 10(a) shows a synthetic image sequence in which the left half of the image is stationary and the right half is moving one pixel to the left between frames. The horizontal and vertical components of the flow are shown with the magnitude of the flow represented by intensity, where black indicates motion to the left and up, and gray indicates no motion. The true horizontal and vertical motions are shown in Figures 10(b) and (c), respectively.

(a) (b) (c)
Figure 10. Random noise example. (a) First random-noise image in the sequence. (b) True horizontal motion (black = −1 pixel, white = 1 pixel, gray = 0 pixels). (c) True vertical motion.

Figure 11 shows a plot of the horizontal motion, where the height of the plot corresponds to the recovered image velocity. Figure 11(a) shows that the least-squares formulation produces motion estimates which vary smoothly, thus obscuring the motion boundary. When we recast the problem in the robust estimation framework, the oversmoothing is reduced (Figure 11(b)).

Figure 11. Horizontal displacement. The horizontal component of motion is interpreted as height and plotted. The plots illustrate the oversmoothing of the least-squares solution (a) and the sharp discontinuity preserved by robust estimation (b).

In another kind of presentation, we can see the estimation differences between the quadratic and robust methods in Figure 12. The quadratic method blurs motion boundaries, while the robust method keeps the estimates sharp at boundaries.

Figure 12. Estimation differences between the quadratic and robust methods (panels: horizontal and vertical motion for the quadratic and for the robust method).

Applications of optical flow: impressionist effects
One application of optical flow is to produce an impressionist effect for images or videos. Figure 13(b) is obtained by generating brush strokes whose orientation is 45°. In Figure 13(c), the brush generation method is modified so that brush strokes are clipped to edges. Next, an artist may not want all strokes drawn in the same direction, so the effect is improved by deciding the stroke direction from the gradient (Figure 13(d)). Furthermore, the technique of Figure 13(d) is modified so that stroke directions in regions with vanishing gradient magnitude are interpolated from surrounding regions (Figure 13(e)). Brush strokes may also be rendered with textured brush images that have r, g, b and alpha components; Figure 13(f) shows an example of applying a textured brush. For details of this application, see Peter Litwinowicz, "Processing Images and Video for an Impressionist Effect," SIGGRAPH.

Temporal coherence
In a video sequence, the optical flow vector field is used as a displacement field to move the brush strokes to new locations in the subsequent frame. We must make sure to generate new strokes near the image boundaries to maintain temporal coherence. As shown in Figure 14, new brushes are generated when regions become too sparse. (See the same Litwinowicz paper for details.)
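The move-then-reseed step above can be sketched as follows. This is only an illustration of the idea, not the Litwinowicz paper's actual procedure; all names and the grid-based sparsity test are my own.

```python
import numpy as np

def advect_strokes(strokes, u, v, shape, cell=8):
    """Move stroke positions by the optical-flow displacement field, drop
    strokes that leave the image, then seed one new stroke at the centre of
    every cell x cell block left empty (a simple 'region too sparse' rule)."""
    h, w = shape
    xs = strokes[:, 0].astype(int).clip(0, w - 1)
    ys = strokes[:, 1].astype(int).clip(0, h - 1)
    moved = strokes + np.stack([u[ys, xs], v[ys, xs]], axis=1)
    keep = ((moved[:, 0] >= 0) & (moved[:, 0] < w)
            & (moved[:, 1] >= 0) & (moved[:, 1] < h))
    moved = moved[keep]
    occupied = {(int(px) // cell, int(py) // cell) for px, py in moved}
    new = [(cx * cell + cell / 2.0, cy * cell + cell / 2.0)
           for cy in range(h // cell) for cx in range(w // cell)
           if (cx, cy) not in occupied]
    return np.vstack([moved, np.array(new)]) if new else moved

# A uniform flow of 8 px to the right on a 32x32 frame: the rightmost column
# of strokes exits the image, and the emptied leftmost cells are reseeded.
centers = np.array([(cx * 8 + 4.0, cy * 8 + 4.0)
                    for cy in range(4) for cx in range(4)])
u = np.full((32, 32), 8.0)
v = np.zeros((32, 32))
out = advect_strokes(centers, u, v, (32, 32))
print(len(out), sorted(float(px) for px in set(out[:, 0])))  # -> 16 [4.0, 12.0, 20.0, 28.0]
```

Keeping surviving strokes in place (rather than regenerating them every frame) is what gives the painted video its temporal coherence; only the holes get new strokes.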

(a) Input image. (b) Brush strokes. (c) Edge clipping. (d) Gradient. (e) Smoothed gradient. (f) Textured brush.
Figure 13. The process of producing the impressionist effect for images.

Figure 14. The process of generating new brushes.


CHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION

CHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION In this chapter we will discuss the process of disparity computation. It plays an important role in our caricature system because all 3D coordinates of nodes

Face Tracking. Synonyms. Definition. Main Body Text. Amit K. Roy-Chowdhury and Yilei Xu. Facial Motion Estimation

Face Tracking Amit K. Roy-Chowdhury and Yilei Xu Department of Electrical Engineering, University of California, Riverside, CA 92521, USA {amitrc,yxu}@ee.ucr.edu Synonyms Facial Motion Estimation Definition

Image stitching. Digital Visual Effects Yung-Yu Chuang. with slides by Richard Szeliski, Steve Seitz, Matthew Brown and Vaclav Hlavac

Image stitching Digital Visual Effects Yung-Yu Chuang with slides by Richard Szeliski, Steve Seitz, Matthew Brown and Vaclav Hlavac Image stitching Stitching = alignment + blending geometrical registration

Pedestrian counting in video sequences using optical flow clustering

Pedestrian counting in video sequences using optical flow clustering SHIZUKA FUJISAWA, GO HASEGAWA, YOSHIAKI TANIGUCHI, HIROTAKA NAKANO Graduate School of Information Science and Technology Osaka University

Digital Volume Correlation for Materials Characterization

19 th World Conference on Non-Destructive Testing 2016 Digital Volume Correlation for Materials Characterization Enrico QUINTANA, Phillip REU, Edward JIMENEZ, Kyle THOMPSON, Sharlotte KRAMER Sandia National

Finding 2D Shapes and the Hough Transform

CS 4495 Computer Vision Finding 2D Shapes and the Aaron Bobick School of Interactive Computing Administrivia Today: Modeling Lines and Finding them CS4495: Problem set 1 is still posted. Please read the

Edge and local feature detection - 2. Importance of edge detection in computer vision

Edge and local feature detection Gradient based edge detection Edge detection by function fitting Second derivative edge detectors Edge linking and the construction of the chain graph Edge and local feature

Depth Estimation with a Plenoptic Camera

Depth Estimation with a Plenoptic Camera Steven P. Carpenter 1 Auburn University, Auburn, AL, 36849 The plenoptic camera is a tool capable of recording significantly more data concerning a particular image

Complex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors

Complex Sensors: Cameras, Visual Sensing The Robotics Primer (Ch. 9) Bring your laptop and robot everyday DO NOT unplug the network cables from the desktop computers or the walls Tuesday s Quiz is on Visual

Module 7 VIDEO CODING AND MOTION ESTIMATION

Module 7 VIDEO CODING AND MOTION ESTIMATION Lesson 20 Basic Building Blocks & Temporal Redundancy Instructional Objectives At the end of this lesson, the students should be able to: 1. Name at least five

Texture Sensitive Image Inpainting after Object Morphing

Texture Sensitive Image Inpainting after Object Morphing Yin Chieh Liu and Yi-Leh Wu Department of Computer Science and Information Engineering National Taiwan University of Science and Technology, Taiwan

Computer Vision: Homework 3 Lucas-Kanade Tracking & Background Subtraction

16-720 Computer Vision: Homework 3 Lucas-Kanade Tracking & Background Subtraction Instructor: Deva Ramanan TAs: Achal Dave, Sashank Jujjavarapu Siddarth Malreddy, Brian Pugh See course website for deadline:

Computer Vision I - Basics of Image Processing Part 1

Computer Vision I - Basics of Image Processing Part 1 Carsten Rother 28/10/2014 Computer Vision I: Basics of Image Processing Link to lectures Computer Vision I: Basics of Image Processing 28/10/2014 2

Nonlinear Multiresolution Image Blending

Nonlinear Multiresolution Image Blending Mark Grundland, Rahul Vohra, Gareth P. Williams and Neil A. Dodgson Computer Laboratory, University of Cambridge, United Kingdom October, 26 Abstract. We study

Specular Reflection Separation using Dark Channel Prior

2013 IEEE Conference on Computer Vision and Pattern Recognition Specular Reflection Separation using Dark Channel Prior Hyeongwoo Kim KAIST hyeongwoo.kim@kaist.ac.kr Hailin Jin Adobe Research hljin@adobe.com

Stereo Vision II: Dense Stereo Matching

Stereo Vision II: Dense Stereo Matching Nassir Navab Slides prepared by Christian Unger Outline. Hardware. Challenges. Taxonomy of Stereo Matching. Analysis of Different Problems. Practical Considerations.

3D Computer Vision. Depth Cameras. Prof. Didier Stricker. Oliver Wasenmüller

3D Computer Vision Depth Cameras Prof. Didier Stricker Oliver Wasenmüller Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de

Face Tracking : An implementation of the Kanade-Lucas-Tomasi Tracking algorithm

Face Tracking : An implementation of the Kanade-Lucas-Tomasi Tracking algorithm Dirk W. Wagener, Ben Herbst Department of Applied Mathematics, University of Stellenbosch, Private Bag X1, Matieland 762,

Online Video Registration of Dynamic Scenes using Frame Prediction

Online Video Registration of Dynamic Scenes using Frame Prediction Alex Rav-Acha Yael Pritch Shmuel Peleg School of Computer Science and Engineering The Hebrew University of Jerusalem 91904 Jerusalem,

Structure from Motion. Prof. Marco Marcon

Structure from Motion Prof. Marco Marcon Summing-up 2 Stereo is the most powerful clue for determining the structure of a scene Another important clue is the relative motion between the scene and (mono)

Fast HDR Image-Based Lighting Using Summed-Area Tables

Fast HDR Image-Based Lighting Using Summed-Area Tables Justin Hensley 1, Thorsten Scheuermann 2, Montek Singh 1 and Anselmo Lastra 1 1 University of North Carolina, Chapel Hill, NC, USA {hensley, montek,

Colour Reading: Chapter 6 Light is produced in different amounts at different wavelengths by each light source Light is differentially reflected at each wavelength, which gives objects their natural colours

Other Linear Filters CS 211A

Other Linear Filters CS 211A Slides from Cornelia Fermüller and Marc Pollefeys Edge detection Convert a 2D image into a set of curves Extracts salient features of the scene More compact than pixels Origin