The 2D/3D Differential Optical Flow
1 The 2D/3D Differential Optical Flow Prof. John Barron Dept. of Computer Science University of Western Ontario London, Ontario, Canada, N6A 5B7 Phone: x86896 Canadian Conference on Computer and Robot Vision (CRV2009) Kelowna, British Columbia, May 24th, 2009
2 2D Optical Flow: An Example (a) (b) Figure 1: (a) The middle frame from the Yosemite Fly-Through sequence and (b) its correct flow field.
3 2D Optical Flow: An Overview Figure 2 (side view of the image plane): V is the 3D velocity of the 3D point P(t) = (X, Y, Z) and v = (u, v) is the 2D image of V, i.e. v is the perspective projection of V. If P(t) moves with displacement Vδt to P(t′) from time t to time t′, then its image Y(t) = (x, y, f) moves with displacement vδt to Y(t′) = (x′, y′, f) over the same interval. f is the sensor focal length. v is known as image velocity or optical flow.
4 The Image Velocity Equations The 3D instantaneous velocity $\vec{V} = (U, V, W)$ of a 3D point $\vec{P} = (X, Y, Z)$, where the sensor moves relative to $\vec{P}$, is
$$\vec{V} = (U, V, W) = -\vec{T} - \vec{\omega} \times \vec{P}, \quad (1)$$
where $\vec{T} = (T_0, T_1, T_2)$ is the instantaneous sensor translation and $\vec{\omega} = (\omega_0, \omega_1, \omega_2)$ is the sensor's instantaneous rotation. The components of $\vec{V}$ can be written out in full as:
$$U = -T_0 - \omega_1 Z + \omega_2 Y \quad (2)$$
$$V = -T_1 - \omega_2 X + \omega_0 Z \quad (3)$$
$$W = -T_2 - \omega_0 Y + \omega_1 X \quad (4)$$
For a rigid object under perspective projection we can write
$$x = f\frac{X}{Z}, \quad y = f\frac{Y}{Z} \quad \text{and} \quad z = f\frac{Z}{Z} = f, \quad (5)$$
5 where f is the focal length of the sensor. The time derivatives of (x, y, z), $(\dot{x}, \dot{y}, \dot{z}) = (u, v, 0)$, yield the two non-zero components of instantaneous image velocity, which can be written as
$$\dot{x} = u = f\frac{\dot{X}}{Z} - f\frac{X\dot{Z}}{Z^2} \quad (6)$$
$$\dot{y} = v = f\frac{\dot{Y}}{Z} - f\frac{Y\dot{Z}}{Z^2} \quad (7)$$
Substituting equations (2), (3) and (4) into equations (6) and (7) and using the definition of perspective projection in equation (5) we obtain the standard image velocity equations (see Longuet-Higgins and Prazdny, Proc. R. Soc. London B208, 1981):
$$u = \frac{1}{Z}(-fT_0 + xT_2) + \omega_0\frac{xy}{f} - \omega_1\left(f + \frac{x^2}{f}\right) + \omega_2 y$$
6 and
$$v = \frac{1}{Z}(-fT_1 + yT_2) + \omega_0\left(f + \frac{y^2}{f}\right) - \omega_1\frac{xy}{f} - \omega_2 x$$
Given the sensor's instantaneous 3D translation and 3D rotation, plus an image point's 2D coordinates (x, y) and its 3D depth Z [hence we know X and Y as well, via equations (5)], we can specify the correct 2D image velocity (u, v) for that image point. Conversely, if we can measure (u, v) values at image points, we can recover the 3D sensor translation scaled by 3D depth, and the 3D sensor rotation. We can also recover relative depth at image points in a scene. So, while we can't know from a single monocular sensor that object 1 is 4m away while object 2 is 8m away (absolute depth), we can tell that object 1 is twice as close as object 2 (relative depth). Of course, with a binocular sensor setup we can recover absolute motion and depth parameters.
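As a quick numeric sanity check on the image velocity equations, here is a minimal sketch (NumPy; the function name and test values are illustrative, not from the slides):

```python
import numpy as np

def image_velocity(x, y, f, Z, T, omega):
    """2D image velocity (u, v) at image point (x, y) with depth Z, for
    a sensor translating with T = (T0, T1, T2) and rotating with
    omega = (w0, w1, w2), per the Longuet-Higgins/Prazdny equations."""
    T0, T1, T2 = T
    w0, w1, w2 = omega
    u = (-f * T0 + x * T2) / Z + w0 * x * y / f - w1 * (f + x**2 / f) + w2 * y
    v = (-f * T1 + y * T2) / Z + w0 * (f + y**2 / f) - w1 * x * y / f - w2 * x
    return u, v

# Pure translation along the optical axis (T2 > 0, no rotation) gives a
# flow field expanding radially from the image centre: u = x*T2/Z, v = y*T2/Z.
u, v = image_velocity(10.0, 5.0, 1.0, 100.0, (0.0, 0.0, 1.0), (0.0, 0.0, 0.0))
```

Note the 1/Z scaling of the translational terms: the same translation produces smaller flow at larger depth, which is exactly the depth/translation ambiguity discussed above.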
7 The 2D Aperture Problem We can estimate optical flow locally from spatial and temporal image intensity derivatives measured in local neighbourhoods. The aperture problem (Marr and Ullman 1981) is relevant when image velocities are measured locally. Figure 3: $\hat{n}$ is the unit vector in the normal velocity direction, i.e. it is perpendicular to the local contour structure and has length 1. Thus the normal velocity $\vec{v}_n$ is the component of the full velocity $\vec{v}$ projected in the normal direction $\hat{n}$: $\vec{v}_n = (\vec{v} \cdot \hat{n})\hat{n}$.
8 The Significance of the Aperture Problem So, due to the aperture problem, we can measure only $\vec{v}_n$ locally, and not $\vec{v}_t$ (the tangential velocity) or $\vec{v}$ (the full velocity). The aperture problem does not depend on the size of the aperture but rather on local contour structure: is there enough local structure to recover full image velocity? Figure 4: We can recover full image velocity in neighbourhoods of corner points but not in neighbourhoods where the local contour is straight.
9 The Motion Constraint Equation Assume I(x, y, t) moves by δx, δy in time δt to I(x + δx, y + δy, t + δt). Since I(x, y, t) and I(x + δx, y + δy, t + δt) are the images of the same point:
$$I(x, y, t) = I(x + \delta x, y + \delta y, t + \delta t).$$
We can perform a 1st order Taylor series expansion about I(x, y, t):
$$I(x + \delta x, y + \delta y, t + \delta t) = I(x, y, t) + \frac{\partial I}{\partial x}\delta x + \frac{\partial I}{\partial y}\delta y + \frac{\partial I}{\partial t}\delta t + \text{h.o.t.}$$
10 The Motion Constraint Equation Continued: Because I(x, y, t) = I(x + δx, y + δy, t + δt) we obtain:
$$\frac{\partial I}{\partial x}\delta x + \frac{\partial I}{\partial y}\delta y + \frac{\partial I}{\partial t}\delta t = 0.$$
Dividing by δt:
$$\frac{\partial I}{\partial x}\frac{\delta x}{\delta t} + \frac{\partial I}{\partial y}\frac{\delta y}{\delta t} + \frac{\partial I}{\partial t}\underbrace{\frac{\delta t}{\delta t}}_{=1} = 0,$$
$$I_x u + I_y v + I_t = 0 \quad \text{and} \quad \nabla I \cdot \vec{v} + I_t = 0.$$
Here $u = \frac{\delta x}{\delta t}$ and $v = \frac{\delta y}{\delta t}$ are the x and y components of image velocity, and $I_x = \frac{\partial I}{\partial x}$, $I_y = \frac{\partial I}{\partial y}$ and $I_t = \frac{\partial I}{\partial t}$ are image intensity derivatives at I(x, y, t).
11 Motion Constraint Line $\nabla I \cdot \vec{v} + I_t = 0$ is 1 equation in 2 unknowns (a line); the correct velocity is some unknown point on this line. The velocity with the smallest magnitude is the normal velocity $\vec{v}_n$. Figure 5: The motion constraint line. $\vec{v}_n = (u_n, v_n)$ is the velocity with the smallest magnitude that is on this line.
12 The Relationship between Normal Velocity and the Motion Constraint Equation Consider a straight contour moving up/down and to the right with true full velocity $\vec{v}$. Since it is viewed through an aperture we can only see the motion perpendicular to the contour. Since the normal velocity is the smallest of all potential velocities it
13 is the point on the motion constraint line closest to the origin. We can compute $v_n$ (the magnitude of $\vec{v}_n$) and $\hat{n}$ (its direction), and hence the normal velocity $\vec{v}_n = v_n\hat{n}$, very simply using the motion constraint equation $\nabla I \cdot \vec{v} = -I_t$ and the fact that $\vec{v} \cdot \hat{n} = v_n$:
$$\nabla I \cdot \vec{v} = -I_t \;\implies\; \frac{\nabla I}{\|\nabla I\|_2} \cdot \vec{v} = \frac{-I_t}{\|\nabla I\|_2} \;\implies\; \hat{n} \cdot \vec{v} = v_n,$$
so
$$\hat{n} = \frac{\nabla I}{\|\nabla I\|_2} \quad \text{and} \quad v_n = \frac{-I_t}{\|\nabla I\|_2}.$$
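This computation is a one-liner per pixel; a small sketch (NumPy; names are illustrative):

```python
import numpy as np

def normal_velocity(Ix, Iy, It):
    """Normal velocity vector v_n = v_n * n_hat at a single pixel,
    from the motion constraint equation grad(I) . v = -It."""
    grad = np.array([Ix, Iy])
    mag = np.linalg.norm(grad)   # ||grad I||_2
    n_hat = grad / mag           # unit normal direction
    vn = -It / mag               # normal speed
    return vn * n_hat

# A vertical edge (Iy = 0) whose intensity pattern moves right:
v_n = normal_velocity(2.0, 0.0, -1.0)
# The returned vector always lies on the motion constraint line,
# i.e. Ix*u + Iy*v + It = 0 holds for (u, v) = v_n.
```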
14 Resolving the Aperture Problem We need to impose additional constraints to recover the full velocity $\vec{v}$ everywhere. We could assume that each local image neighbourhood has constant velocity [Lucas and Kanade, IJCAI 1981]; then 2 or more different normal velocities yield the true full velocity (in the least squares sense). Alternatively, we could assume that velocity varies smoothly everywhere [Horn and Schunck, AI 1981] and regularize with a smoothness term across the image.
15 2D Lucas and Kanade 1981 Given $I_x$, $I_y$ and $I_t$ at a single pixel, we can compute the normal speed $v_n = \vec{v} \cdot \hat{n}$, where $v_n = \frac{-I_t}{\|\nabla I\|_2}$ and $\hat{n} = \frac{\nabla I}{\|\nabla I\|_2}$ are as before. Given a $k = n \times n$ neighbourhood with the same velocity $\vec{v}$ we can write
$$\underbrace{\begin{pmatrix} n_{x1} & n_{y1} \\ n_{x2} & n_{y2} \\ \vdots & \vdots \\ n_{xk} & n_{yk} \end{pmatrix}}_{N}\underbrace{\begin{pmatrix} u \\ v \end{pmatrix}}_{\vec{v}} = \underbrace{\begin{pmatrix} v_{n1} \\ v_{n2} \\ \vdots \\ v_{nk} \end{pmatrix}}_{B},$$
which we can write as $N_{k \times 2}\,\vec{v} = B_{k \times 1}$. For $k \geq 2$ we can solve this system as $\vec{v} = (N^T N)^{-1} N^T B$.
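The least-squares solve above can be sketched directly (NumPy; function name and test derivatives are illustrative):

```python
import numpy as np

def lucas_kanade_2d(Ix, Iy, It):
    """Least-squares full velocity v = (N^T N)^-1 N^T B from the
    derivatives of all k pixels in a neighbourhood assumed to share
    one constant velocity.  Ix, Iy, It are length-k arrays."""
    grads = np.stack([np.asarray(Ix), np.asarray(Iy)], axis=1)  # k x 2
    mags = np.linalg.norm(grads, axis=1)
    N = grads / mags[:, None]          # rows are the unit normals n_i
    B = -np.asarray(It) / mags         # normal speeds v_ni
    v, *_ = np.linalg.lstsq(N, B, rcond=None)
    return v

# Two pixels with orthogonal gradients fully constrain the velocity
# (a "corner").  These derivatives are consistent with v = (1, 2):
v = lucas_kanade_2d([1.0, 0.0], [0.0, 1.0], [-1.0, -2.0])
```

If all the gradients in the neighbourhood point the same way (a straight contour), $N^TN$ becomes singular: this is exactly the aperture problem reappearing in the least-squares system.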
16 2D Horn and Schunck 1981 Horn and Schunck combined the motion constraint equation with a global smoothness term to constrain the estimated velocity field $\vec{v} = (u, v)$, minimizing
$$f = \int\!\!\int_D (\nabla I \cdot \vec{v} + I_t)^2 + \alpha^2\left(u_x^2 + u_y^2 + v_x^2 + v_y^2\right)\, dx\, dy$$
defined over a domain D (the image), where the magnitude of $\alpha$ reflects the relative influence of the smoothness term. We use the Euler-Lagrange equations:
$$f_u - \frac{d}{dx}f_{u_x} - \frac{d}{dy}f_{u_y} = 0$$
$$f_v - \frac{d}{dx}f_{v_x} - \frac{d}{dy}f_{v_y} = 0,$$
17 where:
$$f_u = 2I_x(I_x u + I_y v + I_t), \quad f_v = 2I_y(I_x u + I_y v + I_t),$$
$$f_{u_x} = 2\alpha^2 u_x, \quad f_{u_y} = 2\alpha^2 u_y, \quad f_{v_x} = 2\alpha^2 v_x, \quad f_{v_y} = 2\alpha^2 v_y,$$
$$\frac{d f_{u_x}}{dx} = 2\alpha^2 u_{xx}, \quad \frac{d f_{u_y}}{dy} = 2\alpha^2 u_{yy}, \quad \frac{d f_{v_x}}{dx} = 2\alpha^2 v_{xx}, \quad \frac{d f_{v_y}}{dy} = 2\alpha^2 v_{yy}.$$
18 Since $\nabla^2 u = u_{xx} + u_{yy}$ and $\nabla^2 v = v_{xx} + v_{yy}$ we can rewrite the Euler-Lagrange equations as:
$$I_x^2 u + I_x I_y v + I_x I_t = \alpha^2 \nabla^2 u$$
$$I_x I_y u + I_y^2 v + I_y I_t = \alpha^2 \nabla^2 v.$$
Using the approximations $\nabla^2 u \approx \bar{u} - u$ and $\nabla^2 v \approx \bar{v} - v$ we get
$$(\alpha^2 + I_x^2)u + I_x I_y v = \alpha^2 \bar{u} - I_x I_t$$
$$I_x I_y u + (\alpha^2 + I_y^2)v = \alpha^2 \bar{v} - I_y I_t.$$
We can solve for u and v as:
$$(\alpha^2 + I_x^2 + I_y^2)u = (\alpha^2 + I_y^2)\bar{u} - I_x I_y \bar{v} - I_x I_t$$
$$(\alpha^2 + I_x^2 + I_y^2)v = -I_x I_y \bar{u} + (\alpha^2 + I_x^2)\bar{v} - I_y I_t,$$
19 which can be written as:
$$(\alpha^2 + I_x^2 + I_y^2)(u - \bar{u}) = -I_x[I_x \bar{u} + I_y \bar{v} + I_t]$$
$$(\alpha^2 + I_x^2 + I_y^2)(v - \bar{v}) = -I_y[I_x \bar{u} + I_y \bar{v} + I_t].$$
Gauss-Seidel iterative equations that minimize these equations are:
$$u^{k+1} = \bar{u}^k - \frac{I_x[I_x \bar{u}^k + I_y \bar{v}^k + I_t]}{\alpha^2 + I_x^2 + I_y^2} \quad \text{and} \quad v^{k+1} = \bar{v}^k - \frac{I_y[I_x \bar{u}^k + I_y \bar{v}^k + I_t]}{\alpha^2 + I_x^2 + I_y^2}.$$
Here k denotes the iteration number, $u^0$ and $v^0$ denote initial velocity estimates (typically set to zero), and $\bar{u}^k$ and $\bar{v}^k$ denote neighbourhood averages of $u^k$ and $v^k$. Iterations are stopped when k reaches a preset value, e.g. 100, or when the norm of the velocity field differences at iterations k and k + 1 is less than some preset threshold.
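The iteration above can be sketched in a few lines (pure NumPy; a 4-neighbour average stands in for the neighbourhood averaging, and all names are illustrative):

```python
import numpy as np

def neighbour_average(a):
    """4-neighbour average with replicated borders (a simple stand-in
    for the neighbourhood averages u_bar, v_bar)."""
    p = np.pad(a, 1, mode='edge')
    return 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])

def horn_schunck_2d(Ix, Iy, It, alpha=1.0, n_iter=100):
    """Jacobi-style Horn and Schunck iterations over derivative images;
    returns the (u, v) flow images after n_iter iterations."""
    u = np.zeros_like(Ix)
    v = np.zeros_like(Ix)
    denom = alpha**2 + Ix**2 + Iy**2
    for _ in range(n_iter):
        u_bar = neighbour_average(u)
        v_bar = neighbour_average(v)
        t = (Ix * u_bar + Iy * v_bar + It) / denom
        u = u_bar - Ix * t
        v = v_bar - Iy * t
    return u, v
```

On a uniform constraint field (e.g. $I_x = 1$, $I_y = 0$, $I_t = -1$ everywhere) the iteration converges to the constraint-satisfying flow $u = 1$, $v = 0$, since the smoothness penalty vanishes for constant flow.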
20 Intensity Differentiation One way to compute the intensity derivatives $I_x$, $I_y$ and $I_t$ is via convolution with Simoncelli's matched/balanced filters [ICIP 1994]. The coefficients of the 5-point smoothing kernel $p_5$ and differentiation kernel $d_5$ are given in Table 1. Table 1: Simoncelli's 5-point Matched/Balanced Kernels.
21 Convolution Steps for Computing $I_x$ To compute $I_x$ in 2D for frame i in some sequence, we first convolve the smoothing kernel $p_5$ in the t dimension to reduce the 5 images to 1 image, then convolve the smoothing kernel $p_5$ on that result in the y dimension, and finally convolve the differentiation kernel $d_5$ on that 2nd result in the x dimension to obtain $I_x$. $I_y$ is computed in a similar way, with the roles of x and y exchanged.
22 Convolution Steps for Computing $I_t$ To compute $I_t$ in 2D for frame i in some sequence, we first convolve $p_5$ in the x dimension and then on that result in the y dimension, for each of frames i−2, i−1, i, i+1 and i+2. This yields 5 images smoothed in x and y. We then differentiate these images in the t dimension using $d_5$ to get $I_t$ at frame i.
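Both pipelines can be sketched with separable 1D correlations (SciPy; the kernel coefficients below are approximate values for Simoncelli's 5-point filters and should be treated as an assumption, not a transcription of Table 1):

```python
import numpy as np
from scipy.ndimage import correlate1d

# Approximate 5-point matched kernels (assumed values, not from Table 1):
p5 = np.array([0.036, 0.249, 0.431, 0.249, 0.036])   # smoothing
d5 = np.array([-0.108, -0.283, 0.0, 0.283, 0.108])   # differentiation

def derivatives_2d(seq):
    """Ix, Iy, It for the middle frame of a 5-frame sequence
    seq[t, y, x], following the convolution orders in the slides."""
    # Ix, Iy: smooth over t first (collapse the 5 frames to 1) ...
    s = correlate1d(seq, p5, axis=0, mode='nearest')[2]
    # ... then smooth in the other spatial dimension and differentiate.
    Ix = correlate1d(correlate1d(s, p5, axis=0, mode='nearest'),
                     d5, axis=1, mode='nearest')
    Iy = correlate1d(correlate1d(s, d5, axis=0, mode='nearest'),
                     p5, axis=1, mode='nearest')
    # It: smooth every frame in x and y, then differentiate over t.
    sm = correlate1d(correlate1d(seq, p5, axis=2, mode='nearest'),
                     p5, axis=1, mode='nearest')
    It = correlate1d(sm, d5, axis=0, mode='nearest')[2]
    return Ix, Iy, It
```

On a linear intensity ramp the three derivative estimates land close to the true slopes (away from the image borders), which is a useful unit test for any derivative pipeline.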
23 3D Optical Flow Usually, by 3D optical flow we mean 3D volumetric flow: at voxel (x, y, z), what is the 3D velocity (U, V, W)? An example of volumetric data: 20 volumes of gated MRI data of 1 beat of a human heart. The 3D motion constraint equation uses intensity derivatives in x, y and z, as well as t. 3D optical flow on a moving surface (such as a growing leaf) is called range flow and also produces 3D velocity (U, V, W) at all surface points of some scanned object that is moving over time. There the 3D constraint equation uses x and y derivatives of Z, the depth value (either scanned or computed) at each surface point. Range flow is also often called scene flow.
24 The 3D Intensity Motion Constraint Equation is a simple extension of the 2D motion constraint equation. Consider a small 3D n × n × n block at (x, y, z) at time t moving to (x + δx, y + δy, z + δz) at time t + δt. Figure 6: A small n × n × n 3D neighbourhood of voxels centered at (x, y, z) at time t moving to (x + δx, y + δy, z + δz) at time t + δt.
25 3D Intensity Motion Constraint Equation and the 3D Aperture Problem We assume a 3D voxel I(x, y, z, t) at time t moves with a displacement (δx, δy, δz) over time δt. Since I(x, y, z, t) and I(x + δx, y + δy, z + δz, t + δt) are the same, we can perform a 1st order Taylor series expansion and obtain (as in the 2D case):
$$I_x U + I_y V + I_z W + I_t = 0, \quad \text{i.e.} \quad \nabla I \cdot \vec{V} = -I_t.$$
$I_x$, $I_y$, $I_z$ and $I_t$ are 3D spatio-temporal intensity derivatives computed via Simoncelli convolution, and $\vec{V} = (U, V, W)$ is the 3D velocity. This equation describes a plane in 3D velocity space. Any point on that plane is possibly the correct 3D velocity. The velocity on the plane that is
26 closest to the origin is called the plane normal velocity (the velocity normal to a local planar intensity structure). The line normal velocity is the velocity closest to the origin on the line formed by the intersection of 2 such planes. Of course, if three planes intersect at a single point, that point is the full 3D velocity.
27 Figure 7: Graphical illustrations of the 3D plane normal velocity $\vec{V}_P$ and the 3D line normal velocity $\vec{V}_L$.
28 3D Lucas and Kanade From the 3D motion constraint equation we have:
$$I_x U + I_y V + I_z W = -I_t,$$
where $I_x$, $I_y$, $I_z$ and $I_t$ are the 3D intensity derivatives in an n × n × n neighbourhood centered at voxel (x, y, z) and $\vec{V} = (U, V, W)$ is that neighbourhood's (assumed) constant 3D velocity. Given $I_x$, $I_y$, $I_z$ and $I_t$ at a single voxel, we can compute $V_n = \vec{V} \cdot \hat{n}$, where $V_n = \frac{-I_t}{\|\nabla I\|_2}$, $\hat{n} = \frac{\nabla I}{\|\nabla I\|_2}$ and $\nabla I = (I_x, I_y, I_z)$. Given a $k = n \times n \times n$ neighbourhood with the same velocity $\vec{V}$ we
29 can write
$$\underbrace{\begin{pmatrix} n_{x1} & n_{y1} & n_{z1} \\ n_{x2} & n_{y2} & n_{z2} \\ \vdots & \vdots & \vdots \\ n_{xk} & n_{yk} & n_{zk} \end{pmatrix}}_{N}\underbrace{\begin{pmatrix} U \\ V \\ W \end{pmatrix}}_{\vec{V}} = \underbrace{\begin{pmatrix} V_{n1} \\ V_{n2} \\ \vdots \\ V_{nk} \end{pmatrix}}_{B},$$
which we can write as $N_{k \times 3}\,\vec{V} = B_{k \times 1}$. For $k \geq 3$ we can solve this system as $\vec{V} = (N^T N)^{-1} N^T B$.
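The solve is a direct 3D analogue of the 2D case; a sketch (NumPy; rows here use raw gradients rather than the slides' unit normals, which differs only by a per-row scale and so only reweights the least-squares fit):

```python
import numpy as np

def lucas_kanade_3d(Ix, Iy, Iz, It):
    """Least-squares 3D velocity V = (U, V, W) for a neighbourhood of
    k voxels assumed to share one velocity.  Each row of N is a raw
    gradient (unit-normal rows would only rescale each constraint)."""
    N = np.stack([np.asarray(Ix), np.asarray(Iy), np.asarray(Iz)], axis=1)
    B = -np.asarray(It)
    V, *_ = np.linalg.lstsq(N, B, rcond=None)  # V = (N^T N)^-1 N^T B
    return V

# Three voxels with axis-aligned gradients, consistent with V = (1, 2, 3):
V = lucas_kanade_3d([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
                    [-1.0, -2.0, -3.0])
```

With only coplanar gradients in the neighbourhood, $N^TN$ is rank-deficient and only the plane normal velocity is recoverable: the 3D aperture problem in matrix form.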
30 3D Horn and Schunck 3D Horn and Schunck regularization minimizes:
$$F = \int_D (I_x U + I_y V + I_z W + I_t)^2 + \alpha^2\left(U_x^2 + U_y^2 + U_z^2 + U_t^2 + V_x^2 + V_y^2 + V_z^2 + V_t^2 + W_x^2 + W_y^2 + W_z^2 + W_t^2\right).$$
We use the Euler-Lagrange equations:
$$F_U - \frac{d}{dx}F_{U_x} - \frac{d}{dy}F_{U_y} - \frac{d}{dz}F_{U_z} - \frac{d}{dt}F_{U_t} = 0,$$
$$F_V - \frac{d}{dx}F_{V_x} - \frac{d}{dy}F_{V_y} - \frac{d}{dz}F_{V_z} - \frac{d}{dt}F_{V_t} = 0,$$
$$F_W - \frac{d}{dx}F_{W_x} - \frac{d}{dy}F_{W_y} - \frac{d}{dz}F_{W_z} - \frac{d}{dt}F_{W_t} = 0.$$
31 where:
$$F_U = 2I_x(I_x U + I_y V + I_z W + I_t), \quad F_V = 2I_y(I_x U + I_y V + I_z W + I_t), \quad F_W = 2I_z(I_x U + I_y V + I_z W + I_t),$$
$$F_{U_x} = 2\alpha^2 U_x, \quad F_{U_y} = 2\alpha^2 U_y, \quad F_{U_z} = 2\alpha^2 U_z, \quad F_{U_t} = 2\alpha^2 U_t,$$
$$F_{V_x} = 2\alpha^2 V_x, \quad F_{V_y} = 2\alpha^2 V_y, \quad F_{V_z} = 2\alpha^2 V_z, \quad F_{V_t} = 2\alpha^2 V_t,$$
$$F_{W_x} = 2\alpha^2 W_x, \quad F_{W_y} = 2\alpha^2 W_y, \quad F_{W_z} = 2\alpha^2 W_z, \quad F_{W_t} = 2\alpha^2 W_t$$
32 and
$$\frac{d F_{U_x}}{dx} = 2\alpha^2 U_{xx}, \quad \frac{d F_{U_y}}{dy} = 2\alpha^2 U_{yy}, \quad \frac{d F_{U_z}}{dz} = 2\alpha^2 U_{zz}, \quad \frac{d F_{U_t}}{dt} = 2\alpha^2 U_{tt},$$
$$\frac{d F_{V_x}}{dx} = 2\alpha^2 V_{xx}, \quad \frac{d F_{V_y}}{dy} = 2\alpha^2 V_{yy}, \quad \frac{d F_{V_z}}{dz} = 2\alpha^2 V_{zz}, \quad \frac{d F_{V_t}}{dt} = 2\alpha^2 V_{tt},$$
$$\frac{d F_{W_x}}{dx} = 2\alpha^2 W_{xx}, \quad \frac{d F_{W_y}}{dy} = 2\alpha^2 W_{yy}, \quad \frac{d F_{W_z}}{dz} = 2\alpha^2 W_{zz}, \quad \frac{d F_{W_t}}{dt} = 2\alpha^2 W_{tt}.$$
Since $\nabla^2 U = U_{xx} + U_{yy} + U_{zz} + U_{tt}$, $\nabla^2 V = V_{xx} + V_{yy} + V_{zz} + V_{tt}$ and $\nabla^2 W = W_{xx} + W_{yy} + W_{zz} + W_{tt}$ we can rewrite the
33 Euler-Lagrange equations as:
$$I_x^2 U + I_x I_y V + I_x I_z W + I_x I_t = \alpha^2 \nabla^2 U$$
$$I_x I_y U + I_y^2 V + I_y I_z W + I_y I_t = \alpha^2 \nabla^2 V$$
$$I_x I_z U + I_y I_z V + I_z^2 W + I_z I_t = \alpha^2 \nabla^2 W.$$
Using $\nabla^2 U \approx \bar{U} - U$, $\nabla^2 V \approx \bar{V} - V$ and $\nabla^2 W \approx \bar{W} - W$ we can write:
$$(\alpha^2 + I_x^2)U + I_x I_y V + I_x I_z W = \alpha^2 \bar{U} - I_x I_t$$
$$I_x I_y U + (\alpha^2 + I_y^2)V + I_y I_z W = \alpha^2 \bar{V} - I_y I_t$$
$$I_x I_z U + I_y I_z V + (\alpha^2 + I_z^2)W = \alpha^2 \bar{W} - I_z I_t.$$
34 The Gauss-Seidel iterative equations can be written as:
$$U^{k+1} = \bar{U}^k - \frac{I_x[I_x \bar{U}^k + I_y \bar{V}^k + I_z \bar{W}^k + I_t]}{\alpha^2 + I_x^2 + I_y^2 + I_z^2},$$
$$V^{k+1} = \bar{V}^k - \frac{I_y[I_x \bar{U}^k + I_y \bar{V}^k + I_z \bar{W}^k + I_t]}{\alpha^2 + I_x^2 + I_y^2 + I_z^2} \quad \text{and}$$
$$W^{k+1} = \bar{W}^k - \frac{I_z[I_x \bar{U}^k + I_y \bar{V}^k + I_z \bar{W}^k + I_t]}{\alpha^2 + I_x^2 + I_y^2 + I_z^2}.$$
$\bar{U}^k$, $\bar{V}^k$ and $\bar{W}^k$ are n × n × n neighbourhood averages of the velocities at iteration k. $\bar{U}^0$, $\bar{V}^0$ and $\bar{W}^0$ are typically set to 0.0.
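One update of these iterations, vectorized over a volume, can be sketched as follows (pure NumPy; a 6-neighbour average stands in for the n × n × n neighbourhood average, and all names are illustrative):

```python
import numpy as np

def hs3d_step(U, V, W, Ix, Iy, Iz, It, alpha=1.0):
    """One Jacobi-style update of the 3D Horn and Schunck equations
    over equal-shaped 3D arrays of velocities and derivatives."""
    def bar(a):  # 6-neighbour volume average with replicated borders
        p = np.pad(a, 1, mode='edge')
        return (p[:-2, 1:-1, 1:-1] + p[2:, 1:-1, 1:-1] +
                p[1:-1, :-2, 1:-1] + p[1:-1, 2:, 1:-1] +
                p[1:-1, 1:-1, :-2] + p[1:-1, 1:-1, 2:]) / 6.0
    Ub, Vb, Wb = bar(U), bar(V), bar(W)
    t = (Ix * Ub + Iy * Vb + Iz * Wb + It) / (alpha**2 + Ix**2 + Iy**2 + Iz**2)
    return Ub - Ix * t, Vb - Iy * t, Wb - Iz * t
```

Iterating from zero on a uniform constraint field converges to the flow that zeroes the 3D motion constraint, mirroring the 2D behaviour.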
35 3D L&K OF for Gated MRI Cardiac Data Figure 8: The Lucas and Kanade XY and XZ flow fields superimposed on the 36th slice of the 9th and 16th volumes of the 5phase and 10phase datasets. The eigenvalue threshold λ₁ was 1.0.
36 3D H&S OF for Gated MRI Cardiac Data Figure 9: The Horn and Schunck XY and XZ flow fields superimposed on the 36th slice of the 9th and 16th volumes of the 5phase and 10phase datasets for 100 iterations. α = 1.0.
37 3D Range Motion Constraint Equation We can compute 3D range flow (3D optical flow on a 3D surface) from 3D depth data measured by a ShapeGrabber range scanner, on rigid and non-rigid surfaces [CVIU 2002]. One example is the range flow for a plant leaf (the surface can be deformable) [ECCV 2000]. The 3D range constraint equation is
$$Z_x U + Z_y V - W + Z_t = 0.$$
Here $Z_x$ and $Z_y$ are depth derivatives, not intensity derivatives. Lucas and Kanade like and Horn and Schunck like 3D surface optical flow can be computed [DAGM 1999]. Range flow (from time-varying depth data) can be fused with 2D optical flow (from time-varying intensity data) to produce full 3D velocity on moving flat surfaces, where neither range flow nor optical flow by itself can do this [ICIP 2000].
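A Lucas-Kanade style neighbourhood solve transfers directly to the range constraint $Z_xU + Z_yV - W + Z_t = 0$: the coefficient on W is a constant −1 in every row (NumPy sketch; all names and test values are illustrative):

```python
import numpy as np

def range_flow_lk(Zx, Zy, Zt):
    """Least-squares range flow (U, V, W) from depth derivatives at k
    surface points assumed to share one velocity, using the range
    constraint Zx*U + Zy*V - W + Zt = 0 at each point."""
    Zx = np.asarray(Zx); Zy = np.asarray(Zy); Zt = np.asarray(Zt)
    N = np.stack([Zx, Zy, -np.ones_like(Zx)], axis=1)  # k x 3 rows
    B = -Zt
    Vel, *_ = np.linalg.lstsq(N, B, rcond=None)
    return Vel

# Three surface points whose depth derivatives are consistent with
# a shared velocity (U, V, W) = (1, 0, 2):
Vel = range_flow_lk([1.0, 0.0, 2.0], [0.0, 1.0, 1.0], [1.0, 2.0, 0.0])
```

Note the structural difference from intensity-based 3D flow: on a locally planar depth patch ($Z_x$, $Z_y$ constant) all rows are identical, so only one component of the range flow is determined, the surface analogue of the aperture problem.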
38 Figure 10: Example 3D range flow for Castor oil plant leaves.
39 3D Scene Flow A typical approach: in a stereo image sequence, compute the time-varying depth maps using a stereo algorithm, then compute range/scene flow from these time-varying depth maps. One point of view: optical flow is not practical on real imagery but stereo is much better, so use stereo to get depth maps and then compute range flow on these depth maps and their derivative values. One example of such work: Efficient Dense Scene Flow from Sparse or Dense Stereo Data, Andreas Wedel, Clemens Rabe, Tobi Vaudrey, Thomas Brox, Uwe Franke and Daniel Cremers, ECCV, October 2008.
More informationMotion detection Computing image motion Motion estimation Egomotion and structure from motion Motion classification. Time-varying image analysis- 1
Time varying image analysis Motion detection Computing image motion Motion estimation Egomotion and structure from motion Motion classification Time-varying image analysis- 1 The problems Visual surveillance
More informationRegularised Range Flow
Regularised Range Flow Hagen Spies 1,2, Bernd Jähne 1, and John L. Barron 2 1 Interdisciplinary Center for Scientific Computing, University of Heidelberg, INF 368, 69120 Heidelberg, Germany, {Hagen.Spies,Bernd.Jaehne}@iwr.uni-heidelberg.de
More informationDominant plane detection using optical flow and Independent Component Analysis
Dominant plane detection using optical flow and Independent Component Analysis Naoya OHNISHI 1 and Atsushi IMIYA 2 1 School of Science and Technology, Chiba University, Japan Yayoicho 1-33, Inage-ku, 263-8522,
More informationLecture 19: Motion. Effect of window size 11/20/2007. Sources of error in correspondences. Review Problem set 3. Tuesday, Nov 20
Lecture 19: Motion Review Problem set 3 Dense stereo matching Sparse stereo matching Indexing scenes Tuesda, Nov 0 Effect of window size W = 3 W = 0 Want window large enough to have sufficient intensit
More informationImage Processing 1 (IP1) Bildverarbeitung 1
MIN-Fakultät Fachbereich Informatik Arbeitsbereich SAV/BV (KOGS) Image Processing 1 (IP1) Bildverarbeitung 1 Lecture 20: Shape from Shading Winter Semester 2015/16 Slides: Prof. Bernd Neumann Slightly
More informationEXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,
School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45
More information3D Motion from Image Derivatives Using the Least Trimmed Square Regression
3D Motion from Image Derivatives Using the Least Trimmed Square Regression Fadi Dornaika and Angel D. Sappa Computer Vision Center Edifici O, Campus UAB 08193 Bellaterra, Barcelona, Spain {dornaika, sappa}@cvc.uab.es
More informationLOCAL-GLOBAL OPTICAL FLOW FOR IMAGE REGISTRATION
LOCAL-GLOBAL OPTICAL FLOW FOR IMAGE REGISTRATION Ammar Zayouna Richard Comley Daming Shi Middlesex University School of Engineering and Information Sciences Middlesex University, London NW4 4BT, UK A.Zayouna@mdx.ac.uk
More informationZhongquan Wu* Hanfang Sun** Larry S. Davis. computer Vision Laboratory Computer Science Cente-.r University of Maryland College Park, MD 20742
........ TR-11BY December 1981 Zhongquan Wu* Hanfang Sun** Larry S. Davis computer Vision Laboratory Computer Science Cente-.r University of Maryland College Park, MD 20742 %Debto COMPUTER SCIENCE TECHNICAL
More informationImage processing and features
Image processing and features Gabriele Bleser gabriele.bleser@dfki.de Thanks to Harald Wuest, Folker Wientapper and Marc Pollefeys Introduction Previous lectures: geometry Pose estimation Epipolar geometry
More informationComparison between Motion Analysis and Stereo
MOTION ESTIMATION The slides are from several sources through James Hays (Brown); Silvio Savarese (U. of Michigan); Octavia Camps (Northeastern); including their own slides. Comparison between Motion Analysis
More informationComputing Slow Optical Flow By Interpolated Quadratic Surface Matching
Computing Slow Optical Flow By Interpolated Quadratic Surface Matching Takashi KUREMOTO Faculty of Engineering Yamaguchi University Tokiwadai --, Ube, 755-8 Japan wu@csse.yamaguchi-u.ac.jp Kazutoshi KOGA
More informationOptical Flow Estimation
Optical Flow Estimation Goal: Introduction to image motion and 2D optical flow estimation. Motivation: Motion is a rich source of information about the world: segmentation surface structure from parallax
More informationOptical Flow. Adriana Bocoi and Elena Pelican. 1. Introduction
Proceedings of the Fifth Workshop on Mathematical Modelling of Environmental and Life Sciences Problems Constanţa, Romania, September, 200, pp. 5 5 Optical Flow Adriana Bocoi and Elena Pelican This paper
More informationComputer Vision Lecture 20
Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing
More informationComputer Vision Lecture 20
Computer Perceptual Vision and Sensory WS 16/76 Augmented Computing Many slides adapted from K. Grauman, S. Seitz, R. Szeliski, M. Pollefeys, S. Lazebnik Computer Vision Lecture 20 Motion and Optical Flow
More informationAutonomous Navigation for Flying Robots
Computer Vision Group Prof. Daniel Cremers Autonomous Navigation for Flying Robots Lecture 7.1: 2D Motion Estimation in Images Jürgen Sturm Technische Universität München 3D to 2D Perspective Projections
More informationComputer Vision. Coordinates. Prof. Flávio Cardeal DECOM / CEFET- MG.
Computer Vision Coordinates Prof. Flávio Cardeal DECOM / CEFET- MG cardeal@decom.cefetmg.br Abstract This lecture discusses world coordinates and homogeneous coordinates, as well as provides an overview
More informationELEC Dr Reji Mathew Electrical Engineering UNSW
ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion
More informationQ-warping: Direct Computation of Quadratic Reference Surfaces
Q-warping: Direct Computation of Quadratic Reference Surfaces Y. Wexler A. Shashua wexler@cs.umd.edu Center for Automation Research University of Maryland College Park, MD 20742-3275 shashua@cs.huji.ac.il
More informationLecture 16: Computer Vision
CS442/542b: Artificial ntelligence Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field Methods
More informationOptical Flow-Based Motion Estimation. Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides.
Optical Flow-Based Motion Estimation Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides. 1 Why estimate motion? We live in a 4-D world Wide applications Object
More informationGeneral Principles of 3D Image Analysis
General Principles of 3D Image Analysis high-level interpretations objects scene elements Extraction of 3D information from an image (sequence) is important for - vision in general (= scene reconstruction)
More informationVisual Tracking. Image Processing Laboratory Dipartimento di Matematica e Informatica Università degli studi di Catania.
Image Processing Laboratory Dipartimento di Matematica e Informatica Università degli studi di Catania 1 What is visual tracking? estimation of the target location over time 2 applications Six main areas:
More informationEfficient Dense Scene Flow from Sparse or Dense Stereo Data
Efficient Dense Scene Flow from Sparse or Dense Stereo Data Andreas Wedel 1,2, Clemens Rabe 1, Tobi Vaudrey 3, Thomas Brox 4, Uwe Franke 1, and Daniel Cremers 2 1 Daimler Group Research {wedel,rabe,franke}@daimler.com
More informationMulti-Frame Scene-Flow Estimation Using a Patch Model and Smooth Motion Prior
POPHAM, BHALERAO, WILSON: MULTI-FRAME SCENE-FLOW ESTIMATION 1 Multi-Frame Scene-Flow Estimation Using a Patch Model and Smooth Motion Prior Thomas Popham tpopham@dcs.warwick.ac.uk Abhir Bhalerao abhir@dcs.warwick.ac.uk
More informationVisual motion. Many slides adapted from S. Seitz, R. Szeliski, M. Pollefeys
Visual motion Man slides adapted from S. Seitz, R. Szeliski, M. Pollefes Motion and perceptual organization Sometimes, motion is the onl cue Motion and perceptual organization Sometimes, motion is the
More informationMulti-Frame Correspondence Estimation Using Subspace Constraints
International Journal of Computer ision 483), 173 194, 2002 c 2002 Kluwer Academic Publishers. Manufactured in The Netherlands. Multi-Frame Correspondence Estimation Using Subspace Constraints MICHAL IRANI
More informationOptical Flow at Occlusion
Optical Flow at Occlusion Jieyu Zhang and John L. Barron ept. of Computer Science University of Western Ontario London, Ontario, Canada, N6A 5B7 barron@csd.uwo.ca Abstract We implement and quantitatively/qualitatively
More informationNon-Rigid Image Registration III
Non-Rigid Image Registration III CS6240 Multimedia Analysis Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore Leow Wee Kheng (CS6240) Non-Rigid Image Registration
More informationThe Lucas & Kanade Algorithm
The Lucas & Kanade Algorithm Instructor - Simon Lucey 16-423 - Designing Computer Vision Apps Today Registration, Registration, Registration. Linearizing Registration. Lucas & Kanade Algorithm. 3 Biggest
More informationMotion Estimation. There are three main types (or applications) of motion estimation:
Members: D91922016 朱威達 R93922010 林聖凱 R93922044 謝俊瑋 Motion Estimation There are three main types (or applications) of motion estimation: Parametric motion (image alignment) The main idea of parametric motion
More informationAbstract. Keywords. Computer Vision, Geometric and Morphologic Analysis, Stereo Vision, 3D and Range Data Analysis.
Morphological Corner Detection. Application to Camera Calibration L. Alvarez, C. Cuenca and L. Mazorra Departamento de Informática y Sistemas Universidad de Las Palmas de Gran Canaria. Campus de Tafira,
More information3D Computer Vision. Dense 3D Reconstruction II. Prof. Didier Stricker. Christiano Gava
3D Computer Vision Dense 3D Reconstruction II Prof. Didier Stricker Christiano Gava Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de
More informationInternational Journal of Advance Engineering and Research Development
Scientific Journal of Impact Factor (SJIF): 4.72 International Journal of Advance Engineering and Research Development Volume 4, Issue 11, November -2017 e-issn (O): 2348-4470 p-issn (P): 2348-6406 Comparative
More informationCan Lucas-Kanade be used to estimate motion parallax in 3D cluttered scenes?
Can Lucas-Kanade be used to estimate motion parallax in 3D cluttered scenes? V. Couture and M. S. Langer McGill University (School of Computer Science), Montreal, Quebec, H3A 2A7 Canada email: vincent.couture@mail.mcgill.ca
More informationComputer Vision Lecture 20
Computer Vision Lecture 2 Motion and Optical Flow Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de 28.1.216 Man slides adapted from K. Grauman, S. Seitz, R. Szeliski,
More informationObtaining Feature Correspondences
Obtaining Feature Correspondences Neill Campbell May 9, 2008 A state-of-the-art system for finding objects in images has recently been developed by David Lowe. The algorithm is termed the Scale-Invariant
More information