A Quantitative Comparison of 4 Algorithms for Recovering Dense Accurate Depth


Baozhong Tian and John L. Barron
Dept. of Computer Science
University of Western Ontario
London, Ontario, Canada
{btian,barron}@csd.uwo.ca

Abstract: We report on 4 algorithms for recovering dense depth maps from long image sequences, where the camera motion is known a priori. All methods use a Kalman filter to integrate intensity derivatives or optical flow over time to increase accuracy.

1 Introduction

This comparison work is motivated by a real-world application: we wish to play records (SP and LP) by computing dynamic depth maps of the groove walls of a record and then converting the derived time-varying wall orientations into sound. In this way, we can play old vinyl records in a non-contact manner, minimizing further deterioration of the records. We look at 4 prominent dense depth algorithms in the literature that appear to give good results and perform a quantitative error analysis on them using ray-traced textured objects. The first 2 algorithms use time-varying intensity derivative data, I_x, I_y and I_t, computed by Simoncelli's lowpass and highpass filters [8], while the last 2 algorithms use optical flow computed by Lucas and Kanade's least squares method [6]. The idea here is that our quantitative analysis of the algorithms may provide us with a framework to compute dense depth maps for optical record playing.

2 The Algorithms

We did a survey of recent algorithms for dense depth maps (from image velocity or intensity derivatives); the 4 algorithms we present below appeared to give the best results. A 5th algorithm, by Xiong and Shafer [11], is under implementation. All of these algorithms assume known camera translation and rotation (or can be made to have this assumption). We first present brief summaries of the 4 algorithms by Heel [3], Matthies et al. [7], Hung and Ho [4] and Barron et al. [2], followed by experimental results and conclusions.
2.1 Heel's Algorithm

Heel [3] proposed the recovery of motion and dense depth maps using intensity gradients. In our work, we assume the 3D sensor translation U = (U_1, U_2, U_3) and 3D sensor rotation ω = (ω_1, ω_2, ω_3) are known. Heel assumed that Z is constant within a small neighbourhood around a given pixel and that the recovery of Z could be posed as a least squares problem:

min_Z Σ_x Σ_y ( (s·U)/Z + q·ω + I_t )²,   (1)

where s = (−I_x, −I_y, xI_x + yI_y) and q = (xyI_x + (1 + y²)I_y, −xyI_y − (1 + x²)I_x, yI_x − xI_y), which yields the following Z:

Z = − Σ_x Σ_y (s·U)² / Σ_x Σ_y (q·ω + I_t)(s·U).   (2)

The job of the update stage is to compute a new depth Z⁺_k and variance p⁺_k using a new measurement Z_k with variance p_k and the current predicted estimate Z⁻_k with variance p⁻_k. The update equation is:

Z⁺_k = (p⁻_k Z_k + p_k Z⁻_k) / (p_k + p⁻_k),   (3)

which is a weighted average of the depth values using the variances as weights. The new estimated depth variance is:

p⁺_k = p_k p⁻_k / (p_k + p⁻_k).   (4)
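Under the known-motion assumption, equations (1)-(4) reduce to a few lines of code. The sketch below (function names and the array-window layout are ours, not Heel's) computes the closed-form Z of equation (2) over a window of derivatives, and fuses a measured depth with a predicted one via equations (3)-(4):

```python
import numpy as np

def heel_depth(Ix, Iy, It, x, y, U, omega):
    # s and q from equation (1); all arguments are same-shape arrays
    # over a small window, with x, y the pixel coordinates.
    s = np.stack([-Ix, -Iy, x * Ix + y * Iy], axis=-1)
    q = np.stack([x * y * Ix + (1 + y ** 2) * Iy,
                  -x * y * Iy - (1 + x ** 2) * Ix,
                  y * Ix - x * Iy], axis=-1)
    sU = s @ U          # s . U at every pixel
    qw = q @ omega      # q . omega at every pixel
    # Closed-form least-squares solution, equation (2).
    return -np.sum(sU ** 2) / np.sum((qw + It) * sU)

def fuse(Z_meas, p_meas, Z_pred, p_pred):
    # Variance-weighted Kalman update, equations (3) and (4).
    Z = (p_pred * Z_meas + p_meas * Z_pred) / (p_meas + p_pred)
    p = p_meas * p_pred / (p_meas + p_pred)
    return Z, p
```

With noise-free derivatives that exactly satisfy the brightness constraint, heel_depth recovers the true window depth; fuse then pulls the running estimate toward whichever of the two values has the smaller variance.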

Heel computes the measured variance as:

p = c / Σ_x Σ_y [(I_t + q·ω)(s·U)]²,   (5)

where c is a scaling constant.

2.2 Hung and Ho's Algorithm

Hung and Ho's approach [4] is a dense depth calculation from intensity derivatives with known sensor motion. They assume that the image intensity of corresponding points in the 3D scene is not changed by motion over the image sequence. The standard image velocity equation can be written as:

(s·U)/Z + q·ω = −I_t.   (6)

Then Z can be computed as:

Z = −(s·U) / (q·ω + I_t).   (7)

Over space and time, Hung and Ho assume Z varies as:

Z_{k+1} = G_k Z_k + u_k + θ_k,   (8)

where G_k = 1 − ω_1 Δt y + ω_2 Δt x, u_k = U_3 Δt − (∂Z/∂x)Δx − (∂Z/∂y)Δy, and θ_k is taken to include e(x, y, k+1), the error in the Taylor series expansion used in the derivation of this equation, as well as the error generated when estimating the terms ∂Z/∂x and ∂Z/∂y in u_k. The terms ∂Z/∂x and ∂Z/∂y can only be estimated after the depth map has attained some degree of smoothness. θ is approximately Gaussian random noise with zero mean and variance Q. By introducing a measurement noise n with variance R_1, equation (6) can be re-written as:

Y_{1k} = H_{1k} Z_k + n_k,   (9)

where Y_{1k} = s·U and H_{1k} = −(q·ω + I_t).

Incorporating Surface Structure and Smoothing

Hung and Ho's approach assumes that the depth Z(x, y) for every particular point (x, y) in the image has some local structural property among its neighbouring pixels. They model depth as:

Y_2 = Z(x, y) + n_2,   (10)

where n_2 is an error term. Hung and Ho compute Y_2(x, y) for a pixel (x, y) as follows:

Y_2(x, y) = ½ [Z_e(x−1, y) + Z_e(x, y−1)],   (11)

where Z_e is an estimated Z value. They note this measurement is spatially biased and may produce propagation effects due to the diagonalization of Y_2 values.
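A minimal sketch of this structural measurement, including the four-corner filtering of equation (12) used to counter the directional bias (the handling of the first row/column and the exact pass order are our assumptions, not Hung and Ho's):

```python
import numpy as np

def y2_measurement(Z):
    # One causal pass of equation (11) started from each of the four
    # image corners, then the average of equation (12).  Border pixels
    # are left at their input values.
    h, w = Z.shape
    passes = []
    for flip_y in (False, True):
        for flip_x in (False, True):
            A = Z[::-1, :] if flip_y else Z
            A = A[:, ::-1] if flip_x else A
            Y = A.astype(float).copy()
            for i in range(1, h):
                for j in range(1, w):
                    Y[i, j] = 0.5 * (Y[i - 1, j] + Y[i, j - 1])
            if flip_x:
                Y = Y[:, ::-1]
            if flip_y:
                Y = Y[::-1, :]
            passes.append(Y)
    return sum(passes) / 4.0   # equation (12)
```

A single pass drags information diagonally from one corner; averaging the four opposing passes cancels that bias, which is the point of equation (12).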
To overcome this, each image frame is filtered four times, top to bottom or bottom to top and left to right or right to left, according to which of the four possible corners of the image the calculation is started at each time, producing four different versions of Y_2: Y_2^1, Y_2^2, Y_2^3 and Y_2^4. The final smoothed estimate for Y_2 is then taken to be:

Y_2 = ¼ [Y_2^1 + Y_2^2 + Y_2^3 + Y_2^4].   (12)

The Kalman Filter Equations

With the extra measurement Y_2, equations (9) and (10) can be combined to give:

Y = H Z + n,   (13)

where

Y = [Y_1; Y_2], H = [H_1; 1] and n = [n_1; n_2].   (14)

Based on equation (13), a set of standard Kalman filter equations for generating an estimate of the depth Z_k in Hung and Ho's approach can then be determined. Please refer to Barron, Ngai and Spies [2] for details.

2.3 Matthies, Kanade and Szeliski

Matthies, Kanade and Szeliski [7] proposed a pixel-based (iconic) algorithm that estimates depth and depth uncertainty at each pixel and incrementally refines these estimates over time using a Kalman filter. This algorithm has four main stages: disparity measurement, disparity updating, smoothing and disparity prediction.

Measuring Disparity

First we must compute a displacement vector (disparity) at each pixel. Matthies et al. suggested a simple correlation-based matching algorithm (the sum of squared differences (SSD)) plus cubic interpolation of scan lines to compute disparities. However, since our image motions are small, we replaced correlation by Lucas and Kanade's optical flow to obtain more accurate displacements [1]. Indeed, since the sensor motion is left to right, we used the x component of the optical flow as the scalar value of the disparity d.

Thus, for this setup, d is the inverse depth 1/Z. The variance of the disparity measurement is computed as:

var(e) = 2σ_n²/a,   (15)

where σ_n² is the variance of the image noise process. Since we used σ_n = 1 and a = 1, the measurement variance var(e) is always 2.

Updating the Disparity Map

As in the other algorithms, the updated disparity estimate is a linear combination of the predicted and measured values, inversely weighted by their respective variances. To update a disparity, the disparity variance is computed as:

p⁺_k = [(p⁻_k)⁻¹ + (σ_d²)⁻¹]⁻¹ = p⁻_k σ_d² / (p⁻_k + σ_d²),   (16)

where σ_d² is the measured variance from equation (15) and p⁻_k is the previously predicted variance. The Kalman filter gain K is computed as:

K = p⁺_k / σ_d² = p⁻_k / (p⁻_k + σ_d²).   (17)

Then the disparity is updated as:

u⁺_k = u⁻_k + K (d − u⁻_k),   (18)

where u⁻_k and u⁺_k are the predicted and updated disparity estimates and d is the new disparity measurement.

Smoothing the Map

Matthies et al. use a generalized piecewise spline under tension in a finite element framework to smooth their correlation fields. Since we are using the x component of optical flow, we replace this smoothing by an application of a 5×5 median filter.

Predicting the Next Disparity Map

In the prediction stage of the Kalman filter, both the disparity and its uncertainty must be predicted. Given the x component of optical flow, we can predict the new pixel location in the next frame as x_{k+1} = x_k + Δx_k and y_{k+1} = y_k + Δy_k. Combining this with the computed depth and camera motion, we can predict the new disparity as:

u⁻_{k+1} = α u⁺_k / (1 − U_1 u⁺_k),   (19)

where α = 1 − ω_x y_i + ω_y x_i, and ω_x, ω_y and U_1 are the camera rotation (zero in our case) and translation speed (U_2 = U_3 = 0 in our case). Since the new positions normally fall at subpixel locations in the next image, we need to resample them to obtain predicted disparities at integer pixel locations. The predicted variance is inflated by a multiplicative factor (1 + ε), for a small constant ε:

p⁻_{k+1} = (1 + ε) p⁺_k.   (20)
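Equations (16)-(18) amount to a scalar Kalman update per pixel. A sketch (function name ours), with the measurement variance defaulting to the var(e) = 2 of equation (15):

```python
def update_disparity(u_pred, p_pred, d, var_d=2.0):
    # var_d defaults to var(e) = 2*sigma_n^2/a = 2 for sigma_n = a = 1.
    K = p_pred / (p_pred + var_d)              # gain, equation (17)
    u_new = u_pred + K * (d - u_pred)          # update, equation (18)
    p_new = p_pred * var_d / (p_pred + var_d)  # variance, equation (16)
    return u_new, p_new
```

When the prediction is as uncertain as the measurement (p⁻_k = σ_d²), the gain is ½ and the update is a plain average; as the prediction variance shrinks over frames, new measurements are weighted less.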
2.4 Barron, Ngai and Spies

Barron, Ngai and Spies [2] proposed a Kalman filter framework for recovering dense depth maps from the time-varying optical flow fields generated by a camera translating over a scene by a known amount. They assumed local neighbourhood planarity to avoid having to compute non-pixel correspondences. That is, surface orientation (of a plane) is what is tracked over time. The standard image velocity equations [5] relate a velocity vector measured at image location Y = (y_1, y_2, f) = fP/X_3 to the 3D sensor translation U and 3D sensor rotation ω:

v(Y, t) = v_T(Y, t) + v_R(Y, t),   (21)

where v_T and v_R are the translational and rotational components of image velocity:

v_T(Y, t) = A_1(Y) U / X_3 and v_R(Y, t) = A_2(Y) ω(t),   (22)

with

A_1(Y) = [ −f 0 y_1 ; 0 −f y_2 ] and A_2(Y) = [ y_1y_2/f −(f + y_1²/f) y_2 ; f + y_2²/f −y_1y_2/f −y_1 ].   (23)

We define the depth-scaled camera translation as

u(Y, t) = U(t) / ‖P(t)‖₂ = û μ(Y, t),   (24)

where û = U/‖U‖₂ = (u_1, u_2, u_3) is the normalized direction of translation and μ(Y, t) = ‖U‖₂/‖P‖₂ = f‖U‖₂ / (X_3 ‖Y‖₂) is the depth-scaled sensor speed at Y at time t. The focal length f is assumed to be known. If we define 2 vectors:

r(Y) = (r_1, r_2) = v − A_2(Y) ω and   (25)

d(Y) = (d_1, d_2) = |A_1(Y) û| ‖Y‖₂ / f,   (26)

Figure 1: Synthetic test data: (a) a marble-textured cube, (b) a marble-textured cylinder and (c) a marble-textured sphere.

where |A| means each element in the vector is replaced by its absolute value, then we can solve for μ as:

μ = ( |v_1| r_1/d_1 + |v_2| r_2/d_2 ) / ( |v_1| + |v_2| ).   (27)

Planar Orientation from Relative Depth

We compute the local surface orientation as a unit normal vector α̂ = (α_1, α_2, α_3) from μ values as:

α̂·Y = c μ ‖Y‖₂ / ‖U‖₂.   (28)

We can solve for α̂/c by setting up a linear system of equations, one for each pixel in an n×n neighbourhood where planarity has been assumed, and using a standard least squares solution method.

The Overall Calculation

At the initial time, t = 1:

1. We compute all the μ's as described in equation (27).
2. In each n×n neighbourhood centered at a pixel (i, j) we compute (α̂/c)_(i,j) at that pixel using equation (28). We call these computed α̂/c's the measurements and denote them as g_M(i,j).
3. Given these measurements, we use the g_M(i,j) to recompute the μ_(i,j)'s as:

μ(i, j) = ( g_M(i,j) · Y_(i,j) ) ‖U‖₂ / ‖Y_(i,j)‖₂.   (29)

We apply a median filter to the μ(i, j) within 5×5 neighbourhoods to remove outliers. We repeat step 2 with these values.

At time t = 2:

1. We compute μ at each pixel location and then compute all g_M(i,j)'s in the same way described above for the new optical flow field. Using the image velocity measurements at time t = i, we use the best estimate of surface orientation at time t = i − 1 at location Y − vΔt, plus the measurement at Y and its covariance matrix, to obtain a new best estimate at Y at time t = i. We do this at all Y locations (where possible), recompute the μ values via equation (29) and output these as the 3D shape of the scene.
At time t = i, we proceed as for time t = 2, except we use the best μ estimates from time t = i − 1 instead of time t = 1 in the Kalman filter updating.

The Kalman Filter Equations

Note that the components of α̂/c in equation (28) are not independent; thus we have a covariance matrix with non-zero off-diagonal elements in the Kalman filter equations. We use a standard set of Kalman filter equations to integrate the surface orientations (and hence depth) over time. Please refer to Barron, Ngai and Spies [2] for details of the Kalman filter equations.
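Equations (27) and (28) can be sketched as follows (function names are ours; we assume r and d have already been formed from the flow and the known motion, and that the image rays Y = (y_1, y_2, f) are stacked row-wise for the neighbourhood fit):

```python
import numpy as np

def mu_from_flow(v, r, d):
    # Equation (27): |v|-weighted average of the component ratios r_i/d_i.
    w = np.abs(np.asarray(v, float))
    return (w[0] * r[0] / d[0] + w[1] * r[1] / d[1]) / (w[0] + w[1])

def fit_alpha_over_c(Ys, mus, normU):
    # Equation (28): one row per neighbourhood pixel,
    # (alpha/c) . Y = mu * ||Y|| / ||U||, solved by least squares.
    A = np.asarray(Ys, float)
    b = np.array([m * np.linalg.norm(Y) / normU for m, Y in zip(mus, Ys)])
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g   # the measurement g_M = alpha/c
```

Given g_M, equation (29) is just the same relation read in the other direction, which is why steps 2 and 3 of the overall calculation can alternate.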

3 Experimental Technique

We generated ray-traced cube, cylinder and sphere image sequences with the camera translating to the left by (−1, 0, 0), as shown in Figure 1. We marble-textured this sequence so that optical flow could be used. The texture is kept fixed to the object. We also generated a second set of image sequences with the same objects but with sinusoidal patterns instead of marble texture. These sequences allowed the correct derivatives to be computed; we use these for the correct optical flow to confirm the correctness of our implementations. We compute error distributions for a number of depth error ranges (below 5%, between 5% and 15%, and above 15%) for 4 frames: the 7th frame at the beginning of the sequences, the 19th (just before Hung and Ho turn on their smoothing), and the 27th and 36th frames near the end of the sequences. We also compute the average error (as a percentage) and its standard deviation for the 4 frames. Finally, we show raw (un-textured, unsmoothed) depth maps computed for frame 27 for the 4 methods.

4 Error Distributions and Depth Maps

Table 1: The percentage of the estimated depth values in each error range for the cube, cylinder and sphere experiments using Heel's algorithm. In the table, cu = cube, cy = cylinder and sp = sphere. The last column shows the mean error and its standard deviation σ.

Table 2: The percentage of the estimated depth values in each error range for the cube, cylinder and sphere experiments using Hung and Ho's algorithm. In the table, cu = cube, cy = cylinder and sp = sphere. The last column shows the mean error and its standard deviation σ.

Tables 1 to 4 show the error distributions for the 4 methods, while Table 5 shows the error distribution for Hung and Ho with their smoothing calculation for Y_2 turned off. Figures 2 to 4 show the raw depth maps for the 3 objects for the 4 methods.
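The per-frame statistics reported in Tables 1 to 5 can be reproduced from an estimated and a ground-truth depth map with a routine like the following (a sketch; the function name and the inclusive treatment of the bin boundaries are our assumptions):

```python
import numpy as np

def depth_error_stats(Z_est, Z_true):
    # Relative depth error in percent, binned as in Tables 1-5
    # (below 5%, 5% to 15%, above 15%), plus mean error +/- sigma.
    err = 100.0 * np.abs(np.asarray(Z_est, float) - Z_true) / np.abs(Z_true)
    bins = (float(np.mean(err < 5.0)),
            float(np.mean((err >= 5.0) & (err <= 15.0))),
            float(np.mean(err > 15.0)))
    return bins, float(err.mean()), float(err.std())
```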
Table 3: The percentage of the estimated depth values in each error range for the cube, cylinder and sphere experiments using Matthies et al.'s algorithm. In the table, cu = cube, cy = cylinder and sp = sphere. The last column shows the mean error and its standard deviation σ.

Figure 2: (a) Heel, (b) Barron et al., (c) Matthies et al., (d) Hung and Ho depth maps for the marble cube at the 27th step in the Kalman filtering.

Table 4: The percentage of the estimated depth values in each error range for the cube, cylinder and sphere experiments using Barron et al.'s algorithm. In the table, cu = cube, cy = cylinder and sp = sphere. The last column shows the mean error and its standard deviation σ.

Table 5: The percentage of the estimated depth values in each error range for the cube, cylinder and sphere experiments using Hung and Ho's algorithm with smoothing in the Kalman filter turned off. In the table, cu = cube, cy = cylinder and sp = sphere. The last column shows the mean error and its standard deviation σ.

Figure 3: (a) Heel, (b) Barron et al., (c) Matthies et al., (d) Hung and Ho depth maps for the marble cylinder at the 27th step in the Kalman filtering.

5 Discussion and Conclusion

Quantitative results in Tables 4 and 3 show that the methods of Barron et al. [2] and Heel [3] are about the same and the best overall. Interestingly, results for Hung and Ho with their smoothing in Table 2 are worse than those when their smoothing is turned off in Table 5. However, the smoothed depth maps always look better than the unsmoothed depth maps (not shown here due to space limitations): there is an obvious bias in the smoothed values. Overall, the recovered depth maps look quite good, with the possible exception of Heel's, where there are some outliers at the object boundaries (simple filtering could remove these). This leaves Barron et al.'s as the best algorithm. We are currently testing our better algorithms on synthetic record groove images and on real groove images, with encouraging results. Because the groove wall orientation can be described by 2 angles, one of which is constrained, and because the vertical component of image velocity is always very small, we anticipate these constraints will yield even better results. For example, Barron et al.'s method could be modified to use only horizontal velocities, like Matthies et al. [7], and effectively have one angle to track in the Kalman filter. We finally note that we are using a 1X microscope to obtain our images and that there is sufficient texture in the images to allow for a good optical flow calculation. The results in this paper are only preliminary; analysis of time-varying synthetic and real imagery of a record groove wall is now our priority.

References

[1] Barron J.L., D.J. Fleet and S.S. Beauchemin (1994), Performance of Optical Flow Techniques, Int. Journal of Computer Vision, 12:1, pp. 43-77.
[2] Barron J.L., W.K.J. Ngai and H. Spies (2003), Quantitative Depth Recovery from Time-Varying Optical Flow in a Kalman Filter Framework, LNCS 2616, Theoretical Foundations of Computer Vision: Geometry, Morphology, and Computational Imaging (Editors: T. Asano, R. Klette and C. Ronse).

Figure 4: (a) Heel, (b) Barron et al., (c) Matthies et al., (d) Hung and Ho depth maps for the marble sphere at the 27th step in the Kalman filtering.

[3] Heel J. (1990), Direct Dynamic Motion Vision, Proc. IEEE Conf. Robotics and Automation.
[4] Hung Y.S. and Ho H.T. (1999), A Kalman Filter Approach to Direct Depth Estimation Incorporating Surface Structure, IEEE PAMI, June.
[5] Longuet-Higgins H.C. and K. Prazdny (1980), The Interpretation of a Moving Retinal Image, Proceedings of the Royal Society of London, B208.
[6] Lucas B. and Kanade T. (1981), An Iterative Image Registration Technique with an Application to Stereo Vision, Proc. DARPA IU Workshop, pp. 121-130.
[7] Matthies L., R. Szeliski and T. Kanade (1989), Kalman Filter-Based Algorithms for Estimating Depth from Image Sequences, International Journal of Computer Vision, 3:3.
[8] Simoncelli E.P. (1994), Design of Multi-Dimensional Derivative Filters, IEEE Int. Conf. Image Processing, Vol. 1.
[9] Tian B., J. Barron, W.K.J. Ngai and H. Spies (2003), A Comparison of 2 Methods for Recovering Dense Accurate Depth Using Known 3D Camera Motion, Vision Interface.
[10] Weng J., T.S. Huang and N. Ahuja (1989), Motion and Structure from Two Perspective Views: Algorithms, Error Analysis, and Error Estimation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(5).
[11] Xiong Y. and Shafer S. (1995), Dense Structure from a Dense Optical Flow Sequence, Int. Symposium on Computer Vision, Coral Gables, Florida, pp. 1-6.


CS664 Lecture #18: Motion CS664 Lecture #18: Motion Announcements Most paper choices were fine Please be sure to email me for approval, if you haven t already This is intended to help you, especially with the final project Use

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

3D Model Acquisition by Tracking 2D Wireframes

3D Model Acquisition by Tracking 2D Wireframes 3D Model Acquisition by Tracking 2D Wireframes M. Brown, T. Drummond and R. Cipolla {96mab twd20 cipolla}@eng.cam.ac.uk Department of Engineering University of Cambridge Cambridge CB2 1PZ, UK Abstract

More information

Dense 3D Reconstruction. Christiano Gava

Dense 3D Reconstruction. Christiano Gava Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Today: dense 3D reconstruction The matching problem

More information

Comparison Between The Optical Flow Computational Techniques

Comparison Between The Optical Flow Computational Techniques Comparison Between The Optical Flow Computational Techniques Sri Devi Thota #1, Kanaka Sunanda Vemulapalli* 2, Kartheek Chintalapati* 3, Phanindra Sai Srinivas Gudipudi* 4 # Associate Professor, Dept.

More information

Perceptual Grouping from Motion Cues Using Tensor Voting

Perceptual Grouping from Motion Cues Using Tensor Voting Perceptual Grouping from Motion Cues Using Tensor Voting 1. Research Team Project Leader: Graduate Students: Prof. Gérard Medioni, Computer Science Mircea Nicolescu, Changki Min 2. Statement of Project

More information

C18 Computer vision. C18 Computer Vision. This time... Introduction. Outline.

C18 Computer vision. C18 Computer Vision. This time... Introduction. Outline. C18 Computer Vision. This time... 1. Introduction; imaging geometry; camera calibration. 2. Salient feature detection edges, line and corners. 3. Recovering 3D from two images I: epipolar geometry. C18

More information

Feature Tracking and Optical Flow

Feature Tracking and Optical Flow Feature Tracking and Optical Flow Prof. D. Stricker Doz. G. Bleser Many slides adapted from James Hays, Derek Hoeim, Lana Lazebnik, Silvio Saverse, who 1 in turn adapted slides from Steve Seitz, Rick Szeliski,

More information

Uncertainties: Representation and Propagation & Line Extraction from Range data

Uncertainties: Representation and Propagation & Line Extraction from Range data 41 Uncertainties: Representation and Propagation & Line Extraction from Range data 42 Uncertainty Representation Section 4.1.3 of the book Sensing in the real world is always uncertain How can uncertainty

More information

1-2 Feature-Based Image Mosaicing

1-2 Feature-Based Image Mosaicing MVA'98 IAPR Workshop on Machine Vision Applications, Nov. 17-19, 1998, Makuhari, Chibq Japan 1-2 Feature-Based Image Mosaicing Naoki Chiba, Hiroshi Kano, Minoru Higashihara, Masashi Yasuda, and Masato

More information

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45

More information

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press, ISSN

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press,   ISSN ransactions on Information and Communications echnologies vol 6, 996 WI Press, www.witpress.com, ISSN 743-357 Obstacle detection using stereo without correspondence L. X. Zhou & W. K. Gu Institute of Information

More information

Stereo Observation Models

Stereo Observation Models Stereo Observation Models Gabe Sibley June 16, 2003 Abstract This technical report describes general stereo vision triangulation and linearized error modeling. 0.1 Standard Model Equations If the relative

More information

Neighbourhood Operations

Neighbourhood Operations Neighbourhood Operations Neighbourhood operations simply operate on a larger neighbourhood o piels than point operations Origin Neighbourhoods are mostly a rectangle around a central piel Any size rectangle

More information

Capturing, Modeling, Rendering 3D Structures

Capturing, Modeling, Rendering 3D Structures Computer Vision Approach Capturing, Modeling, Rendering 3D Structures Calculate pixel correspondences and extract geometry Not robust Difficult to acquire illumination effects, e.g. specular highlights

More information

Direct Plane Tracking in Stereo Images for Mobile Navigation

Direct Plane Tracking in Stereo Images for Mobile Navigation Direct Plane Tracking in Stereo Images for Mobile Navigation Jason Corso, Darius Burschka,Greg Hager Computational Interaction and Robotics Lab 1 Input: The Problem Stream of rectified stereo images, known

More information

High Accuracy Depth Measurement using Multi-view Stereo

High Accuracy Depth Measurement using Multi-view Stereo High Accuracy Depth Measurement using Multi-view Stereo Trina D. Russ and Anthony P. Reeves School of Electrical Engineering Cornell University Ithaca, New York 14850 tdr3@cornell.edu Abstract A novel

More information

Adaptive Multi-Stage 2D Image Motion Field Estimation

Adaptive Multi-Stage 2D Image Motion Field Estimation Adaptive Multi-Stage 2D Image Motion Field Estimation Ulrich Neumann and Suya You Computer Science Department Integrated Media Systems Center University of Southern California, CA 90089-0781 ABSRAC his

More information

Dense 3D Reconstruction. Christiano Gava

Dense 3D Reconstruction. Christiano Gava Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction

More information

CS4733 Class Notes, Computer Vision

CS4733 Class Notes, Computer Vision CS4733 Class Notes, Computer Vision Sources for online computer vision tutorials and demos - http://www.dai.ed.ac.uk/hipr and Computer Vision resources online - http://www.dai.ed.ac.uk/cvonline Vision

More information

Motion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures

Motion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures Now we will talk about Motion Analysis Motion analysis Motion analysis is dealing with three main groups of motionrelated problems: Motion detection Moving object detection and location. Derivation of

More information

A Robust Two Feature Points Based Depth Estimation Method 1)

A Robust Two Feature Points Based Depth Estimation Method 1) Vol.31, No.5 ACTA AUTOMATICA SINICA September, 2005 A Robust Two Feature Points Based Depth Estimation Method 1) ZHONG Zhi-Guang YI Jian-Qiang ZHAO Dong-Bin (Laboratory of Complex Systems and Intelligence

More information

Ruch (Motion) Rozpoznawanie Obrazów Krzysztof Krawiec Instytut Informatyki, Politechnika Poznańska. Krzysztof Krawiec IDSS

Ruch (Motion) Rozpoznawanie Obrazów Krzysztof Krawiec Instytut Informatyki, Politechnika Poznańska. Krzysztof Krawiec IDSS Ruch (Motion) Rozpoznawanie Obrazów Krzysztof Krawiec Instytut Informatyki, Politechnika Poznańska 1 Krzysztof Krawiec IDSS 2 The importance of visual motion Adds entirely new (temporal) dimension to visual

More information

Optical Flow. Adriana Bocoi and Elena Pelican. 1. Introduction

Optical Flow. Adriana Bocoi and Elena Pelican. 1. Introduction Proceedings of the Fifth Workshop on Mathematical Modelling of Environmental and Life Sciences Problems Constanţa, Romania, September, 200, pp. 5 5 Optical Flow Adriana Bocoi and Elena Pelican This paper

More information

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of

More information

Particle Filtering. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore

Particle Filtering. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore Particle Filtering CS6240 Multimedia Analysis Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore (CS6240) Particle Filtering 1 / 28 Introduction Introduction

More information

CHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION

CHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION CHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION In this chapter we will discuss the process of disparity computation. It plays an important role in our caricature system because all 3D coordinates of nodes

More information

Image Transfer Methods. Satya Prakash Mallick Jan 28 th, 2003

Image Transfer Methods. Satya Prakash Mallick Jan 28 th, 2003 Image Transfer Methods Satya Prakash Mallick Jan 28 th, 2003 Objective Given two or more images of the same scene, the objective is to synthesize a novel view of the scene from a view point where there

More information

Feature Tracking and Optical Flow

Feature Tracking and Optical Flow Feature Tracking and Optical Flow Prof. D. Stricker Doz. G. Bleser Many slides adapted from James Hays, Derek Hoeim, Lana Lazebnik, Silvio Saverse, who in turn adapted slides from Steve Seitz, Rick Szeliski,

More information

Computing Slow Optical Flow By Interpolated Quadratic Surface Matching

Computing Slow Optical Flow By Interpolated Quadratic Surface Matching Computing Slow Optical Flow By Interpolated Quadratic Surface Matching Takashi KUREMOTO Faculty of Engineering Yamaguchi University Tokiwadai --, Ube, 755-8 Japan wu@csse.yamaguchi-u.ac.jp Kazutoshi KOGA

More information

CS485/685 Computer Vision Spring 2012 Dr. George Bebis Programming Assignment 2 Due Date: 3/27/2012

CS485/685 Computer Vision Spring 2012 Dr. George Bebis Programming Assignment 2 Due Date: 3/27/2012 CS8/68 Computer Vision Spring 0 Dr. George Bebis Programming Assignment Due Date: /7/0 In this assignment, you will implement an algorithm or normalizing ace image using SVD. Face normalization is a required

More information

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier Computer Vision 2 SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung Computer Vision 2 Dr. Benjamin Guthier 1. IMAGE PROCESSING Computer Vision 2 Dr. Benjamin Guthier Content of this Chapter Non-linear

More information

Optical Flow-Based Motion Estimation. Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides.

Optical Flow-Based Motion Estimation. Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides. Optical Flow-Based Motion Estimation Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides. 1 Why estimate motion? We live in a 4-D world Wide applications Object

More information

Stereo Vision. MAN-522 Computer Vision

Stereo Vision. MAN-522 Computer Vision Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in

More information

Multi-scale 3D Scene Flow from Binocular Stereo Sequences

Multi-scale 3D Scene Flow from Binocular Stereo Sequences Boston University OpenBU Computer Science http://open.bu.edu CAS: Computer Science: Technical Reports 2004-11-02 Multi-scale 3D Scene Flow from Binocular Stereo Sequences Li, Rui Boston University Computer

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

521466S Machine Vision Exercise #1 Camera models

521466S Machine Vision Exercise #1 Camera models 52466S Machine Vision Exercise # Camera models. Pinhole camera. The perspective projection equations or a pinhole camera are x n = x c, = y c, where x n = [x n, ] are the normalized image coordinates,

More information

Lecture 16: Computer Vision

Lecture 16: Computer Vision CS4442/9542b: Artificial Intelligence II Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field

More information

SURVEY OF LOCAL AND GLOBAL OPTICAL FLOW WITH COARSE TO FINE METHOD

SURVEY OF LOCAL AND GLOBAL OPTICAL FLOW WITH COARSE TO FINE METHOD SURVEY OF LOCAL AND GLOBAL OPTICAL FLOW WITH COARSE TO FINE METHOD M.E-II, Department of Computer Engineering, PICT, Pune ABSTRACT: Optical flow as an image processing technique finds its applications

More information

Computer Vision Lecture 20

Computer Vision Lecture 20 Computer Perceptual Vision and Sensory WS 16/76 Augmented Computing Many slides adapted from K. Grauman, S. Seitz, R. Szeliski, M. Pollefeys, S. Lazebnik Computer Vision Lecture 20 Motion and Optical Flow

More information

Ninio, J. and Stevens, K. A. (2000) Variations on the Hermann grid: an extinction illusion. Perception, 29,

Ninio, J. and Stevens, K. A. (2000) Variations on the Hermann grid: an extinction illusion. Perception, 29, Ninio, J. and Stevens, K. A. (2000) Variations on the Hermann grid: an extinction illusion. Perception, 29, 1209-1217. CS 4495 Computer Vision A. Bobick Sparse to Dense Correspodence Building Rome in

More information

Video Mosaics for Virtual Environments, R. Szeliski. Review by: Christopher Rasmussen

Video Mosaics for Virtual Environments, R. Szeliski. Review by: Christopher Rasmussen Video Mosaics for Virtual Environments, R. Szeliski Review by: Christopher Rasmussen September 19, 2002 Announcements Homework due by midnight Next homework will be assigned Tuesday, due following Tuesday.

More information

Colorado School of Mines. Computer Vision. Professor William Hoff Dept of Electrical Engineering &Computer Science.

Colorado School of Mines. Computer Vision. Professor William Hoff Dept of Electrical Engineering &Computer Science. Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ 1 Stereo Vision 2 Inferring 3D from 2D Model based pose estimation single (calibrated) camera > Can

More information

Motion and Optical Flow. Slides from Ce Liu, Steve Seitz, Larry Zitnick, Ali Farhadi

Motion and Optical Flow. Slides from Ce Liu, Steve Seitz, Larry Zitnick, Ali Farhadi Motion and Optical Flow Slides from Ce Liu, Steve Seitz, Larry Zitnick, Ali Farhadi We live in a moving world Perceiving, understanding and predicting motion is an important part of our daily lives Motion

More information

Recap: Features and filters. Recap: Grouping & fitting. Now: Multiple views 10/29/2008. Epipolar geometry & stereo vision. Why multiple views?

Recap: Features and filters. Recap: Grouping & fitting. Now: Multiple views 10/29/2008. Epipolar geometry & stereo vision. Why multiple views? Recap: Features and filters Epipolar geometry & stereo vision Tuesday, Oct 21 Kristen Grauman UT-Austin Transforming and describing images; textures, colors, edges Recap: Grouping & fitting Now: Multiple

More information

ME/CS 132: Introduction to Vision-based Robot Navigation! Low-level Image Processing" Larry Matthies"

ME/CS 132: Introduction to Vision-based Robot Navigation! Low-level Image Processing Larry Matthies ME/CS 132: Introduction to Vision-based Robot Navigation! Low-level Image Processing" Larry Matthies" lhm@jpl.nasa.gov, 818-354-3722" Announcements" First homework grading is done! Second homework is due

More information

Augmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit

Augmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit Augmented Reality VU Computer Vision 3D Registration (2) Prof. Vincent Lepetit Feature Point-Based 3D Tracking Feature Points for 3D Tracking Much less ambiguous than edges; Point-to-point reprojection

More information

Stereo and Epipolar geometry

Stereo and Epipolar geometry Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka

More information

Real-Time Scene Reconstruction. Remington Gong Benjamin Harris Iuri Prilepov

Real-Time Scene Reconstruction. Remington Gong Benjamin Harris Iuri Prilepov Real-Time Scene Reconstruction Remington Gong Benjamin Harris Iuri Prilepov June 10, 2010 Abstract This report discusses the implementation of a real-time system for scene reconstruction. Algorithms for

More information

Comparison between Motion Analysis and Stereo

Comparison between Motion Analysis and Stereo MOTION ESTIMATION The slides are from several sources through James Hays (Brown); Silvio Savarese (U. of Michigan); Octavia Camps (Northeastern); including their own slides. Comparison between Motion Analysis

More information

Midterm Exam Solutions

Midterm Exam Solutions Midterm Exam Solutions Computer Vision (J. Košecká) October 27, 2009 HONOR SYSTEM: This examination is strictly individual. You are not allowed to talk, discuss, exchange solutions, etc., with other fellow

More information

Robot Mapping. Least Squares Approach to SLAM. Cyrill Stachniss

Robot Mapping. Least Squares Approach to SLAM. Cyrill Stachniss Robot Mapping Least Squares Approach to SLAM Cyrill Stachniss 1 Three Main SLAM Paradigms Kalman filter Particle filter Graphbased least squares approach to SLAM 2 Least Squares in General Approach for

More information

Graphbased. Kalman filter. Particle filter. Three Main SLAM Paradigms. Robot Mapping. Least Squares Approach to SLAM. Least Squares in General

Graphbased. Kalman filter. Particle filter. Three Main SLAM Paradigms. Robot Mapping. Least Squares Approach to SLAM. Least Squares in General Robot Mapping Three Main SLAM Paradigms Least Squares Approach to SLAM Kalman filter Particle filter Graphbased Cyrill Stachniss least squares approach to SLAM 1 2 Least Squares in General! Approach for

More information

Announcements. Computer Vision I. Motion Field Equation. Revisiting the small motion assumption. Visual Tracking. CSE252A Lecture 19.

Announcements. Computer Vision I. Motion Field Equation. Revisiting the small motion assumption. Visual Tracking. CSE252A Lecture 19. Visual Tracking CSE252A Lecture 19 Hw 4 assigned Announcements No class on Thursday 12/6 Extra class on Tuesday 12/4 at 6:30PM in WLH Room 2112 Motion Field Equation Measurements I x = I x, T: Components

More information

Flow Estimation. Min Bai. February 8, University of Toronto. Min Bai (UofT) Flow Estimation February 8, / 47

Flow Estimation. Min Bai. February 8, University of Toronto. Min Bai (UofT) Flow Estimation February 8, / 47 Flow Estimation Min Bai University of Toronto February 8, 2016 Min Bai (UofT) Flow Estimation February 8, 2016 1 / 47 Outline Optical Flow - Continued Min Bai (UofT) Flow Estimation February 8, 2016 2

More information

Moving Object Tracking in Video Using MATLAB

Moving Object Tracking in Video Using MATLAB Moving Object Tracking in Video Using MATLAB Bhavana C. Bendale, Prof. Anil R. Karwankar Abstract In this paper a method is described for tracking moving objects from a sequence of video frame. This method

More information

first order approx. u+d second order approx. (S)

first order approx. u+d second order approx. (S) Computing Dierential Properties of 3-D Shapes from Stereoscopic Images without 3-D Models F. Devernay and O. D. Faugeras INRIA. 2004, route des Lucioles. B.P. 93. 06902 Sophia-Antipolis. FRANCE. Abstract

More information