A Specialized Multibaseline Stereo Technique for Obstacle Detection

A Specialized Multibaseline Stereo Technique for Obstacle Detection

Todd Williamson and Charles Thorpe
Robotics Institute, Carnegie Mellon University, Pittsburgh PA

Abstract

This paper presents a multibaseline stereo technique specially suited to detecting obstacles. A method is described for weakly calibrating a set of multibaseline stereo cameras with high accuracy. This method is then used to tailor the stereo search space to the special case where the world in front of the cameras consists mainly of nearly horizontal planar surfaces (the ground), and where we are interested in deviations from those planar surfaces (obstacles). The resulting disparity maps are presented and compared to the output of a traditional stereo algorithm.

1. Introduction

Obstacle detection is one of the fundamental problems of mobile robotics. In order to navigate in the world, it is necessary to detect those portions of the world that are dangerous or impossible to traverse. For a large class of such navigation problems, the world in front of the robot can be modeled as a flat plane, and any points that deviate from the planar model can be considered to be obstacles. Examples of problems in this class are Automated Highway Systems (AHS) vehicles and practically all indoor mobile robots.

Many different sensors have been used for obstacle detection, with varying degrees of success. Many groups (e.g. [3][4]) have chosen stereo vision because of the low cost and high reliability of the sensors and the fact that the sensing is passive. We have chosen multibaseline stereo for those reasons and for the improved reliability and accuracy of the recovered structure of the environment achievable with more cameras.

Conventional stereo systems for navigation often perform three steps. The first step is to find matching points between images. From these matches, it is then possible to compute the 3D coordinates of each point in the image. Finally, these points are arranged in a map so that possible vehicle motions can be evaluated. In this paper we present a method, based on weakly calibrated stereo, that attempts to identify obstacles directly from the stereo imagery.

The issue of camera calibration for multibaseline stereo has been addressed in a variety of ways. In this paper we first present a weak calibration technique that produces very accurate results in the general case. Then we present two variations on this technique that tailor the stereo search space for the obstacle detection problem. Finally, we compare the output of all three methods and discuss implementation issues.

Copyright 1998 IEEE. Published in the Proceedings of CVPR 98, June 1998, Santa Barbara, CA. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE. Contact: Manager, Copyrights and Permissions / IEEE Service Center / 445 Hoes Lane / P.O. Box 1331 / Piscataway, NJ, USA.

2. Stereo Calibration

It is a well-known result of camera geometry that images from different pinhole cameras of the same planar surface, represented in 3D homogeneous coordinates, are related by a 3x3 homography matrix. Furthermore, this homography matrix has a specific structure:

    x' = H x = A_1 (R + t n^T) A_0^{-1} x    (1)

where the A matrices are 3x3 matrices containing the intrinsic parameters of the two cameras (focal length, aspect ratio, and image center), R is the 3x3 rotation matrix between the two camera axes, t is the 3D translation vector between the two camera focal points, and n = n̂/d is the 3D normal vector to the plane being observed, divided by the normal distance of the plane from the focal point of camera 0 [6][7][8].

Thus H encodes the relationship between the points in the image from camera 0, represented by x, and the points in the image from camera 1, represented by x'. Given such homography matrices (H_1 and H_2) for two such planes, it is possible to weakly calibrate the set of cameras by simply taking the difference between the two matrices:

    H_2 - H_1 = A_1 (R + t n_2^T - R - t n_1^T) A_0^{-1}
              = A_1 (t (n_2 - n_1)^T) A_0^{-1}
              = (A_1 t) [(n_2 - n_1)^T A_0^{-1}]    (2)

This matrix is rank 1, being the outer product of two vectors. Intuitively, A_1 t is the position of the focal point of camera 0 in the image of camera 1 (which simply means that it is the epipole of camera 1). The other half of the expression encodes the information about how the two planes being observed differ. To perform the stereo computation using this result, we can simply compute the set of homographies:

    H_α = H_1 + α (H_2 - H_1)    (3)

where α is a scalar that controls which plane is actually represented by the homography. The normal vector n_α of the plane for H_α is given by:

    n_α = n_1 + α (n_2 - n_1)    (4)

In practice, it is often convenient to aim the camera toward points that are so far away that their disparity is effectively zero. The distance at which this is the case depends on the focal length of the lenses and the length of the longest baseline. For most configurations, the sky near the horizon is sufficiently far away. In this case, n_1 goes to zero, and equation (4) becomes

    n_α = α n_2    (5)

and α has the normal stereo relationship to 1/d. For the remainder of this paper, we will derive equations for the general case, but in the actual experiments we use the homography of the plane at infinity for H_1.

Thus, for any of the planes n_α we can compute the corresponding point in camera 1 for any point in camera 0. By stepping α through a range of values, we can get a series of planes and their point correspondences to use in the heart of the stereo matching algorithm. The extension of this technique to multiple baselines is straightforward. If the same two planes are viewed by multiple cameras, the corresponding homography matrices can be interpolated as above. No measurements of camera geometry are necessary.
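To make equations (1)–(3) concrete, the following sketch (Python with NumPy; the intrinsics, baseline, and plane depths are made-up illustrative values, not the paper's actual rig) builds two plane-induced homographies and checks the rank-1 property of their difference:

```python
import numpy as np

def plane_homography(A1, A0, R, t, n_hat, d):
    # H = A1 (R + t n^T) A0^{-1} with n = n_hat / d, as in equation (1)
    return A1 @ (R + np.outer(t, n_hat / d)) @ np.linalg.inv(A0)

def interpolated_homography(H1, H2, alpha):
    # H_alpha = H1 + alpha (H2 - H1), equation (3)
    return H1 + alpha * (H2 - H1)

# Hypothetical rig: identical intrinsics, pure sideways translation
A = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # no rotation between the cameras
t = np.array([0.3, 0.0, 0.0])      # 30 cm baseline
n_hat = np.array([0.0, 0.0, 1.0])  # two fronto-parallel planes

H1 = plane_homography(A, A, R, t, n_hat, d=10.0)
H2 = plane_homography(A, A, R, t, n_hat, d=2.0)

# The difference is the outer product (A1 t)(n2 - n1)^T A0^{-1},
# so it has rank 1 (equation (2))
rank_of_difference = np.linalg.matrix_rank(H2 - H1)
```

Stepping `alpha` through a range of values in `interpolated_homography` then sweeps out the family of search planes used by the matcher.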
In the following sections, we will first describe how we can compute homography matrices from image data, and then we will discuss three different methods for choosing the planes to use to perform the calibration.

2.1. Computing Homography Matrices

Since any given point in a 2D image can be represented by any of an infinite number of homogeneous coordinates, all scalar multiples of each other, we cannot expect to solve equation (1) for H directly. One way to express this is to take the cross-product of the two homogeneous coordinates that are supposed to be equal and set it to zero, which constrains the two coordinates to be scalar multiples of each other. Thus equation (1) becomes:

    x' × H x = 0    (6)

Since, given any solution H to this problem, all scalar multiples of H are also solutions, we can arbitrarily set one element of H to whatever value we like and solve for the other 8. Doing this, we get two linear equations per image point and a total of 8 unknowns, so four point correspondences are required to compute H.

Ideally, though, we would like to know the parameters of H to very high precision so that we can accurately compute depth using sub-pixel interpolation template matching. For an AHS vehicle, the goal is to recognize small obstacles (as small as 20 cm or so) at long range in front of the vehicle. In order to accomplish this, a combination of telephoto lenses and a large baseline becomes necessary. In this situation, small inaccuracies in the calibration can cause large errors. In one particular situation that we have studied, using a 1 m baseline and 35 mm lenses, a 1 mm error (1 part in 1000) in the computed position of the camera can cause the epipolar line to be off by as much as 2 pixels in certain parts of the image at extreme disparities. Accurate computation of homography matrices is therefore essential.
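The linear four-point solution described above can be sketched as follows (Python/NumPy; fixing the element H[2,2] = 1 is one of several possible normalizations and assumes that element is not actually zero):

```python
import numpy as np

def homography_from_points(src, dst):
    # Solve x' = H x from four correspondences: fixing H[2,2] = 1
    # leaves 8 unknowns, and each correspondence contributes the two
    # linear equations obtained from the cross-product form (6).
    M, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        M.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        M.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(M), np.array(b))
    return np.append(h, 1.0).reshape(3, 3)

# Round-trip check with a made-up homography
H_true = np.array([[1.1, 0.02, 5.0],
                   [0.01, 0.95, -3.0],
                   [1e-4, 2e-4, 1.0]])
src = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
dst = []
for x, y in src:
    p = H_true @ np.array([x, y, 1.0])
    dst.append((p[0] / p[2], p[1] / p[2]))

H_est = homography_from_points(src, dst)
```

With exactly four points the 8x8 system is solved directly; with more points a least-squares solve over the same equations would be the natural extension.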
The most obvious way to do this is to minimize the residual error between one image and the other image when warped by the homography. Since the relationship between a pixel's location in one image and its position after transformation by the homography is a rational function, some type of nonlinear optimization must be used. For this, we use a program that has been in use at Carnegie Mellon for several years [4][6]. It asks the user to select four matching points to compute a starting point for a Levenberg-Marquardt nonlinear optimization. The results of this computation are shown in Figure 1. For this case, the residual error was reduced by around 50%.
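The residual being minimized can be sketched as follows (Python/NumPy; a bilinear inverse-warp working in (row, col) coordinates, not the actual Carnegie Mellon program, and the Levenberg-Marquardt step itself is omitted):

```python
import numpy as np

def warp_image(img, H):
    # Inverse-warp img through the homography H using bilinear
    # interpolation; coordinates are homogeneous (row, col, 1), and
    # pixels that map outside the source are set to 0.
    h, w = img.shape
    cols, rows = np.meshgrid(np.arange(w), np.arange(h))
    dest = np.stack([rows, cols, np.ones_like(rows)], -1).astype(float)
    src = dest @ np.linalg.inv(H).T
    yi = src[..., 0] / src[..., 2]
    xi = src[..., 1] / src[..., 2]
    y0 = np.floor(yi).astype(int)
    x0 = np.floor(xi).astype(int)
    fy, fx = yi - y0, xi - x0
    valid = (y0 >= 0) & (x0 >= 0) & (y0 < h - 1) & (x0 < w - 1)
    y0c = np.clip(y0, 0, h - 2)
    x0c = np.clip(x0, 0, w - 2)
    out = ((1 - fy) * (1 - fx) * img[y0c, x0c]
           + (1 - fy) * fx * img[y0c, x0c + 1]
           + fy * (1 - fx) * img[y0c + 1, x0c]
           + fy * fx * img[y0c + 1, x0c + 1])
    return np.where(valid, out, 0.0)

def photometric_residual(img0, img1, H):
    # Mean absolute intensity difference between img1 and the H-warp
    # of img0, over pixels where the warp is defined (a warp of an
    # all-ones image serves as the validity mask).
    warped = warp_image(img0, H)
    mask = warp_image(np.ones_like(img0), H) > 0.5
    return np.abs(warped - img1)[mask].mean()

# Identity homography: the warp reproduces the image away from borders
img = np.arange(48.0).reshape(6, 8)
warped = warp_image(img, np.eye(3))
```

A nonlinear optimizer would then adjust the 8 free parameters of H to drive `photometric_residual` down, starting from the four-point linear estimate.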

[Figure 1: results of homography computation (residual after choosing 4 points; residual after optimization).]
[Figure 2: output with depth calibration (left image, right image, intensity image, disparity output).]

After accurately computing the homography matrices of two planes, one issue remains. Since we were able to compute the homographies only up to a scale factor, the cancellation of R in the second line of equation (2) is not possible unless we compute the relative scale factors of the two matrices. To accomplish this, we use the fact that the difference between the two homographies is rank 1 for the correct scale factor. So we simply have to find β such that H_1 - βH_2 is rank 1. In general, because of rounding errors and imperfect assumptions made by our model, there will be no β that accomplishes this exactly. We evaluate how good any given β is by computing the singular value decomposition of H_1 - βH_2 and taking the ratio of the largest and second-largest singular values. Finding the best value of β then becomes a simple 1D optimization problem. In the following sections, we will assume that all homography matrices have already been normalized by β as computed by the above method.

2.2. Depth Calibration Method

If H_1 and H_2 are computed by viewing planes that are parallel to each other, then equation (4) becomes

    n_α = n̂/d_1 + α (n̂/d_2 - n̂/d_1) = (1/d_1 + α (1/d_2 - 1/d_1)) n̂    (7)

where n̂ is the common normal vector to both planes. So we get a set of planes, all of which are parallel to the original planes. If the initial planes are chosen such that they are roughly perpendicular to the camera axis, then n̂ will be parallel to the camera axis, and the set of planes from equation (7) will be planes of constant depth. The geometry that is recovered allows us to compute the distance of points from camera 0.

[Figure 3: depth calibration method applied to a road surface (right image, left image, difference).]
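The β normalization can be sketched as follows (Python/NumPy; a coarse grid search stands in for the 1D optimization, and the matrices are synthetic stand-ins constructed so the true β is known):

```python
import numpy as np

def rank1_ratio(H1, H2, beta):
    # Ratio of the second-largest to largest singular value of
    # H1 - beta * H2; it approaches zero as the difference
    # approaches rank 1.
    s = np.linalg.svd(H1 - beta * H2, compute_uv=False)
    return s[1] / s[0]

def find_beta(H1, H2, lo=0.1, hi=10.0, steps=2000):
    # Coarse 1D grid search for the relative scale factor; a proper
    # 1D optimizer (e.g. golden section) could refine the answer.
    betas = np.linspace(lo, hi, steps)
    ratios = [rank1_ratio(H1, H2, b) for b in betas]
    return betas[int(np.argmin(ratios))]

# Synthetic check: build H1 so that H1 - 2.5 * H2 is exactly rank 1
H2 = np.array([[1.0, 0.2, 0.0],
               [0.1, 1.0, 0.3],
               [0.0, 0.1, 1.0]])
H1 = 2.5 * H2 + np.outer([1.0, 2.0, 3.0], [0.1, 0.2, 0.3])
beta = find_beta(H1, H2)
```

The singular-value ratio is a scale-invariant measure of "how rank 1" the difference is, which is why it makes a convenient 1D objective here.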
This corresponds exactly to the most common stereo vision method. The multibaseline stereo output from a set of 3 cameras calibrated with this technique is shown in Figure 2.

Unfortunately, this method runs into problems in an environment consisting mostly of surfaces whose normals are not parallel to the camera axis. This is illustrated in Figure 3. The regions being matched in the left image have a different geometry, arising from the fact that points that are higher in the image are actually farther away. This leads to a different shape within the image, which in turn leads to imprecise matching. The method described here derives pixel-by-pixel differences which can then be summed over a window in order to compute the disparity. However, the same effect could be achieved when computing SSD (sum of squared differences) over templates, simply by warping the templates using the homography for a given disparity level before computing the SSD.

2.3. Height Calibration Method

[Figure 4: output with height calibration (intensity image, disparity output).]

An interesting property of equation (7) is that the direction of the normal vectors of the planes being viewed does not have to be parallel to the camera axis. We can use this property to get around the problem with depth calibration described in the previous section. An obvious adaptation would be to choose two planes that are parallel to the ground (note that the plane at infinity can be used for H_1 in this case as well). In this configuration, the output of the stereo algorithm would be the height of each point in the image. The output of this height calibration is shown in Figure 4. The triangular section on the lower right represents pixels for which all possible matches lay outside the image for this calibration method. The scene being viewed in this figure is different from the one used for calibration.

The results of this simple height calibration are reasonably good, but it seems to have problems at long distances. To get at the core of why this is, it is helpful to go back to equations (1) and (2):

    x' = [H_1 + α (H_2 - H_1)] x = H_1 x + α A_1 t (n_2 - n_1)^T A_0^{-1} x    (8)

Note that the quantity (n_2 - n_1)^T A_0^{-1} x is a scalar. This scalar is the dot product of the direction in world coordinates corresponding to the pixel x, A_0^{-1} x, with the direction of the ground plane. If the horizon appears in the image, pixels at the horizon will be perpendicular to the direction of the ground plane, and this scalar will be zero. The effect of this will be that pixels on the horizon all map to the same location, regardless of α. Another effect is that pixels that are close to the horizon will move very little as α increases, while pixels that are far from the horizon will move much more. Pixels that are above the horizon will actually move backwards (as if looking at the ground behind the cameras).

It is clear from this that no single choice of step size for α will give good results for this method, and the result of that can be seen in the upper portion of Figure 4. On the other hand, the method has several good features:

- the regions being compared between the images automatically have the correct shape if the surface being viewed is a plane parallel to the ground
- the mapping from the output of this method (height) to whether a pixel should be considered an obstacle or not is obvious: pixels that are too high (or too low) should be considered to be obstacles

2.4. Combined Method

Although the height calibration method is promising, the problems mentioned above seem to be too extreme to make it practical. Ideally, there would be some combined method which captures the strong points of the height method with none of the drawbacks. The problems associated with the height calibration method mostly seem to stem from the fact that the direction of n̂ in equation (7) is far from the camera axis. The benefits, on the other hand, stem from the fact that the planes involved are similar to the ground plane. Thus, we define a new set of homographies:

    H_α = H_3 + α (H_2 - H_1)    (9)

where H_3 is computed for the ground plane, but H_2 and H_1 are computed from planes such that the difference between their normal vectors lies near the camera axis (as with the depth calibration method). With this set of homographies, the corresponding planes look like:

    n_α = n_3 + α (1/d_2 - 1/d_1) n̂    (10)

where n̂ is the common normal of H_2 and H_1, lying near the camera axis. An example of this set of planes for a simple camera geometry is shown in Figure 5. The output of this scheme is shown in Figure 6. Again, the scene used for output is significantly different from that used for calibration. This sampling method has a number of good properties:
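The combined search space of equation (9) can be sketched as follows (Python/NumPy; `H_ground`, `H1`, and `H2` are made-up stand-ins for the calibrated H_3, H_1, and H_2, chosen only to exercise the construction):

```python
import numpy as np

def combined_homographies(H_ground, H1, H2, alphas):
    # Disparity search space of equation (9): anchored at the ground
    # plane homography (H_3 in the text) and stepped along the depth
    # direction encoded by H2 - H1.
    return [H_ground + a * (H2 - H1) for a in alphas]

def apply_homography(H, pt):
    # Map one pixel (row, col) through H and normalize the
    # homogeneous result.
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[0] / p[2], p[1] / p[2]

# Made-up homographies standing in for H_3, H_1, H_2
H_ground = np.array([[1.0, 0.0, 2.0],
                     [0.0, 1.0, -1.0],
                     [0.0, 0.001, 1.0]])
H1 = np.eye(3)
H2 = np.eye(3) + 0.01 * np.outer([1.0, 0.5, 0.0], [0.0, 0.0, 1.0])
family = combined_homographies(H_ground, H1, H2, [0.0, 0.5, 1.0])
```

At α = 0 the family reproduces the ground-plane homography exactly, which is why a planar road stays roughly planar in the (row, column, disparity) output space.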

[Figure 5: planes of constant disparity for the second calibration method; axes show height (m) vs. distance (m). Parameters are a 1 m baseline, 35 mm lenses, 1/2" CCD, cameras aligned perfectly and 2 m above the ground.]
[Figure 6: output with combined calibration (intensity image, disparity output).]
[Figure 7: combined calibration method output (right image, left image, difference) for the situation shown in Figure 3.]
[Figure 8: error for the cases shown in Figure 3 (depth vs. combined, plotted against disparity).]

- the template shape is correct for objects whose surface corresponds to one of these planes
- the different planes are all distinct everywhere in the image
- the allocation of the limited number of disparity search levels is optimized for road geometry (since we do not expect the pixels at the bottom of the image to be 200 m away, nor do we expect objects at the top of the image to be close)
- even if the ground plane does not match up with one of the constant-disparity planes exactly (because of the curvature of the road and the fact that the bottom of the image may in fact be very far away from the vehicle), a planar road surface must also be roughly planar in the (row, column, disparity) space of the output of this algorithm (this is not true for the height calibration case)

In Figure 7, we present the regions used by this method, for comparison with those used by the depth method (shown in Figure 3). Since the regions are transformed appropriately for the expected surface normal direction, the shape of features in the image is correct, and matching is much more likely to be accurate.

3. Comparison of Results

In Figure 8 we have plotted the error as a function of disparity for the case considered in Figure 3 and Figure 7. Here the benefit of this method becomes clear. The depth calibration method (shown with a solid line) achieves a local minimum of error at the correct value, but it is not less than the value attained in regions where the road surface has no texture.
Thus, the result in this case is an incorrect disparity value. The combined method properly and strongly identifies the correct minimum.

Another means of evaluating the results of these methods is to look closely at the shapes of the regions that are identified by each algorithm as having a particular disparity value. We can compare the output of the two methods by mapping the output shown in Figure 2 into the space of the output in Figure 6. The results of this mapping are shown in Figure 9, with the regions corresponding to four disparity levels of the ground displayed as four different shades of gray. As can be seen in the figure, the combined method maps the plane of the ground into around five regions, representing five different planes of the type shown in Figure 5. Details like the fact that the ground slopes downward from left to right (relative to the original calibration plane) are clearly visible.

[Figure 9: comparison of results for ground plane pixels (depth method results converted to combined method space; combined method) -- four disparity levels for the ground are shown in levels of gray.]

Given the results shown in Figure 8, it is not surprising that the depth method does not do as well on the road surface. Although the general trends are present, the accuracy is affected strongly by the incorrect feature shape used in matching.

4. Implementation

The algorithms described in this paper have been implemented both using special-purpose hardware, the Video-Rate Multi-baseline Stereo Machine, and in C code on a Sun workstation.

4.1. Video-Rate Multi-baseline Stereo Machine

The stereo machine consists of a number of custom-built 9U VME boards connected in a system. The system is described in more detail in [2]. The algorithm used by the stereo machine works by first digitizing the images from each of the cameras (up to 6 in the current design), and then passing each image through an 11x11 LOG (Laplacian of Gaussian) filter. The resulting data is then compressed to 4 bits by a nonlinear weighting function. All subsequent processing is done with 4-bit pixel data.

The 4-bit LOG-filtered data is then passed on to the geometry compensation unit. This unit is of particular importance because it performs a very general transformation on each of the input images, to rectify these images before performing the SAD computation which comes next. For each postulated distance from the cameras, all of the images are effectively reprojected as if all objects in the scene were at that distance. This reprojection is accomplished via a 3-dimensional lookup table whose inputs are the postulated disparity ζ and the image coordinates (i,j), and which outputs the corresponding image coordinate for each camera (I(i,j,ζ), J(i,j,ζ)).
These output image coordinates are given to sub-pixel accuracy (in 1/16ths of a pixel), and the four adjacent pixels are linearly interpolated to produce the reprojected result pixel. Note that the lookup table can contain any values whatsoever, so it is possible to correct for lens distortion, to operate with one camera upside down, or to use cameras with lenses of different focal lengths. The primary limitation of the geometry compensation circuit is that it assumes that each 4x4 block of pixels in the base image will be offset by the same amount. For the kind of extreme geometry that has been presented in this paper, this assumption is often not a very good one, and the quality of the output suffers.

Calibration for the stereo machine consists entirely of computing the values to load into these lookup tables. These values can be computed directly from the homography matrices by the simple formula

    [a I(i,j,ζ), a J(i,j,ζ), a]^T = H_ζ [i, j, 1]^T    (11)

followed by normalization by a to convert from homogeneous coordinates to 2D coordinates.

In the next stage of the stereo machine, the absolute value of difference (AD) is computed pixel-by-pixel for the base camera (camera #0) paired with each of the other cameras. The results of the AD computation are summed over all of the camera pairs, resulting in a sum of absolute differences (SAD) value for each pixel at each disparity level. The resulting SAD values are then smoothed by summing over a window, producing the SSAD. In the final stage, for each pixel, the disparity level with the minimum SSAD value is found, and the SSAD values of the minimum and its neighbors are sent to the C40 DSP processing board, where the disparity levels can be interpolated for higher accuracy. The stereo machine processes images at a constant rate of roughly 30 million pixel-disparities per second (counting pixels processed in camera #0), regardless of the number of cameras in use.
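Filling the reprojection table from equation (11) can be sketched as follows (Python/NumPy; the image size is a made-up small example, and the 1/16-pixel quantization mirrors the hardware's sub-pixel representation):

```python
import numpy as np

def geometry_lut(H_list, height, width, subpixel=16):
    # Fill the (zeta, i, j) -> (I, J) reprojection table used by the
    # geometry compensation unit: map every base-image pixel through
    # H_zeta (equation (11)), normalize the homogeneous result, and
    # quantize to 1/subpixel-pixel accuracy.
    cols, rows = np.meshgrid(np.arange(width), np.arange(height))
    pix = np.stack([rows, cols, np.ones_like(rows)], -1).astype(float)
    lut = np.empty((len(H_list), height, width, 2))
    for z, H in enumerate(H_list):
        out = pix @ H.T
        coords = out[..., :2] / out[..., 2:]
        lut[z] = np.round(coords * subpixel) / subpixel
    return lut

# With the identity homography every pixel maps to itself
lut = geometry_lut([np.eye(3)], height=3, width=4)
```

Because the table is indexed only by (ζ, i, j), any per-camera correction (lens distortion, inverted cameras, mixed focal lengths) can be folded into the stored coordinates, as the text notes.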
Thus the frame rate depends on the number of pixels processed and the number of disparities searched. When using the maximum values for each (a 256x200 image and 60 disparity levels searched), the frame rate is roughly 6 Hz. For the experiments presented in this paper, the stereo machine was mounted on a vehicle and driven to a variety of different locations.
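The AD → SAD → SSAD → argmin stages can be emulated in software as follows (Python/NumPy; floating-point images rather than the hardware's 4-bit data, and the reprojected images are assumed to be given):

```python
import numpy as np

def box_filter(img, r):
    # Window sum over a (2r+1) x (2r+1) neighborhood (zero padded),
    # computed with an integral image.
    p = np.pad(img, ((r + 1, r), (r + 1, r)))
    s = p.cumsum(axis=0).cumsum(axis=1)
    k = 2 * r + 1
    return s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]

def ssad_disparity(base, warped, radius=2):
    # base: (h, w) base-camera image. warped: (n_disp, n_cams, h, w),
    # the other cameras' images after reprojection for each postulated
    # disparity level. Mirrors the stereo machine stages: per-pixel
    # absolute differences, summed over camera pairs (SAD), smoothed
    # over a window (SSAD), then a per-pixel argmin over disparity.
    n_disp = warped.shape[0]
    ssad = np.empty((n_disp,) + base.shape)
    for z in range(n_disp):
        sad = np.abs(warped[z] - base).sum(axis=0)
        ssad[z] = box_filter(sad, radius)
    return ssad.argmin(axis=0)

# Tiny synthetic check: disparity level 1 matches the base image exactly
base = np.linspace(0.0, 1.0, 80).reshape(8, 10)
warped = np.stack([np.stack([base + 1.0, base + 0.5]),
                   np.stack([base, base]),
                   np.stack([base + 0.3, base + 0.7])])
disparity_map = ssad_disparity(base, warped)
```

The hardware additionally interpolates around the SSAD minimum for sub-level accuracy; that refinement is omitted here.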

4.2. C Program Implementation

The implementation in C code emulates the stereo machine algorithm completely, except for the geometry compensation. Instead of using a lookup table, the geometry compensation is done by computing image coordinates directly from the homography matrices, for maximum accuracy. With the exception of Figure 4, all of the output shown in this paper was computed with this program.

5. Conclusion

A system has been presented that allows very precise calibration of multi-baseline stereo cameras. Such calibration is necessary in the case where long focal lengths and large baselines are needed to observe objects at long range, or when maximum depth precision is required. Building on this method, we have shown two possible modifications that allow stereo vision to be used to accurately detect planar surfaces in front of a vehicle. This in turn allows us to detect pixels that do not lie on those planar surfaces, which can therefore be classified as obstacles.

A key principle of these methods is that distances to objects whose surface normals are very different from the camera axis direction cannot be measured accurately with traditional stereo vision algorithms. If these surface normals are different enough, then the distance to the object may vary significantly over the areas being considered, thus violating a key assumption of traditional stereo algorithms. Other methods to cope with similar situations have been presented in [1] and [7]. The former presents a method intended to reduce stereo vision processing while accounting for vehicle motion, and adaptively adjusts the parameters of the ground plane as the vehicle moves. The latter attempts to compute the height of objects in the scene using a weakly calibrated stereo system by making some simplifying assumptions. The methods we have described in this paper have been tested in a variety of real-world environments and appear to be very robust across different situations.
In future work we will be building a new frame-rate stereo machine that is capable of computing point correspondences directly, instead of relying on a sparse, imprecise lookup table.

6. Acknowledgment

This research was sponsored by the collaborative agreement between Toyota Motor Corporation and Carnegie Mellon University. Most of the work described in section 2.1 was performed by the stereo machine group at Carnegie Mellon, primarily Kazuo Oda and Tak Yoshigahara. In addition, we want to thank Martial Hébert for many hours of constructive conversation about the mathematics of stereo imaging.

7. References

[1] P. Burt, P. Anandan, K. Hanna, G. van der Wal, and R. Bassman. A Front-End Vision Processor for Vehicle Navigation. Proceedings of the International Conference on Intelligent Autonomous Systems (IAS-3).
[2] T. Kanade, A. Yoshida, K. Oda, H. Kano, and M. Tanaka. A Stereo Machine for Video-Rate Dense Depth Mapping and its New Applications. Computer Vision and Pattern Recognition Conference (CVPR 96), June 1996.
[3] Q.T. Luong, J. Weber, D. Koller, and J. Malik. An Integrated Stereo-Based Approach to Automatic Vehicle Guidance. Fifth International Conference on Computer Vision (ICCV 95), Cambridge, Mass., June 1995.
[4] L. Matthies and P. Grandjean. Stereo Vision for Planetary Rovers: Stochastic Modeling to Near Real-Time Implementation. International Journal of Computer Vision, 8:1.
[5] K. Oda. Personal correspondence.
[6] K. Oda. Calibration Method for Multi-Camera Stereo Head for NavLab II. Internal Carnegie Mellon Document.
[7] L. Robert, M. Buffa, and M. Hébert. Weakly-Calibrated Stereo Perception for Rover Navigation. Proceedings of the International Conference on Computer Vision (ICCV 95).
[8] L. Robert, C. Zeller, O. Faugeras, and M. Hébert. Applications of Nonmetric Vision to Some Visually Guided Robotics Tasks. Chapter from Visual Navigation: From Biological Systems to Unmanned Ground Vehicles, Lawrence Erlbaum Associates, 1997.


More information

1 Projective Geometry

1 Projective Geometry CIS8, Machine Perception Review Problem - SPRING 26 Instructions. All coordinate systems are right handed. Projective Geometry Figure : Facade rectification. I took an image of a rectangular object, and

More information

Flexible Calibration of a Portable Structured Light System through Surface Plane

Flexible Calibration of a Portable Structured Light System through Surface Plane Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured

More information

Quasi-Euclidean Uncalibrated Epipolar Rectification

Quasi-Euclidean Uncalibrated Epipolar Rectification Dipartimento di Informatica Università degli Studi di Verona Rapporto di ricerca Research report September 2006 RR 43/2006 Quasi-Euclidean Uncalibrated Epipolar Rectification L. Irsara A. Fusiello Questo

More information

Camera Calibration. Schedule. Jesus J Caban. Note: You have until next Monday to let me know. ! Today:! Camera calibration

Camera Calibration. Schedule. Jesus J Caban. Note: You have until next Monday to let me know. ! Today:! Camera calibration Camera Calibration Jesus J Caban Schedule! Today:! Camera calibration! Wednesday:! Lecture: Motion & Optical Flow! Monday:! Lecture: Medical Imaging! Final presentations:! Nov 29 th : W. Griffin! Dec 1

More information

CS 664 Slides #9 Multi-Camera Geometry. Prof. Dan Huttenlocher Fall 2003

CS 664 Slides #9 Multi-Camera Geometry. Prof. Dan Huttenlocher Fall 2003 CS 664 Slides #9 Multi-Camera Geometry Prof. Dan Huttenlocher Fall 2003 Pinhole Camera Geometric model of camera projection Image plane I, which rays intersect Camera center C, through which all rays pass

More information

Dense 3D Reconstruction. Christiano Gava

Dense 3D Reconstruction. Christiano Gava Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Today: dense 3D reconstruction The matching problem

More information

Computer Vision Lecture 17

Computer Vision Lecture 17 Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics 13.01.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Announcements Seminar in the summer semester

More information

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45

More information

Camera model and multiple view geometry

Camera model and multiple view geometry Chapter Camera model and multiple view geometry Before discussing how D information can be obtained from images it is important to know how images are formed First the camera model is introduced and then

More information

Computer Vision Lecture 17

Computer Vision Lecture 17 Announcements Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics Seminar in the summer semester Current Topics in Computer Vision and Machine Learning Block seminar, presentations in 1 st week

More information

3D Sensing and Reconstruction Readings: Ch 12: , Ch 13: ,

3D Sensing and Reconstruction Readings: Ch 12: , Ch 13: , 3D Sensing and Reconstruction Readings: Ch 12: 12.5-6, Ch 13: 13.1-3, 13.9.4 Perspective Geometry Camera Model Stereo Triangulation 3D Reconstruction by Space Carving 3D Shape from X means getting 3D coordinates

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

CS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching

CS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching Stereo Matching Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix

More information

CS5670: Computer Vision

CS5670: Computer Vision CS5670: Computer Vision Noah Snavely, Zhengqi Li Stereo Single image stereogram, by Niklas Een Mark Twain at Pool Table", no date, UCR Museum of Photography Stereo Given two images from different viewpoints

More information

Step-by-Step Model Buidling

Step-by-Step Model Buidling Step-by-Step Model Buidling Review Feature selection Feature selection Feature correspondence Camera Calibration Euclidean Reconstruction Landing Augmented Reality Vision Based Control Sparse Structure

More information

A Novel Stereo Camera System by a Biprism

A Novel Stereo Camera System by a Biprism 528 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 16, NO. 5, OCTOBER 2000 A Novel Stereo Camera System by a Biprism DooHyun Lee and InSo Kweon, Member, IEEE Abstract In this paper, we propose a novel

More information

The end of affine cameras

The end of affine cameras The end of affine cameras Affine SFM revisited Epipolar geometry Two-view structure from motion Multi-view structure from motion Planches : http://www.di.ens.fr/~ponce/geomvis/lect3.pptx http://www.di.ens.fr/~ponce/geomvis/lect3.pdf

More information

55:148 Digital Image Processing Chapter 11 3D Vision, Geometry

55:148 Digital Image Processing Chapter 11 3D Vision, Geometry 55:148 Digital Image Processing Chapter 11 3D Vision, Geometry Topics: Basics of projective geometry Points and hyperplanes in projective space Homography Estimating homography from point correspondence

More information

Structure from Motion. Prof. Marco Marcon

Structure from Motion. Prof. Marco Marcon Structure from Motion Prof. Marco Marcon Summing-up 2 Stereo is the most powerful clue for determining the structure of a scene Another important clue is the relative motion between the scene and (mono)

More information

Agenda. Rotations. Camera models. Camera calibration. Homographies

Agenda. Rotations. Camera models. Camera calibration. Homographies Agenda Rotations Camera models Camera calibration Homographies D Rotations R Y = Z r r r r r r r r r Y Z Think of as change of basis where ri = r(i,:) are orthonormal basis vectors r rotated coordinate

More information

Structure from motion

Structure from motion Structure from motion Structure from motion Given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates?? R 1,t 1 R 2,t 2 R 3,t 3 Camera 1 Camera

More information

Cameras and Stereo CSE 455. Linda Shapiro

Cameras and Stereo CSE 455. Linda Shapiro Cameras and Stereo CSE 455 Linda Shapiro 1 Müller-Lyer Illusion http://www.michaelbach.de/ot/sze_muelue/index.html What do you know about perspective projection? Vertical lines? Other lines? 2 Image formation

More information

Epipolar Geometry and Stereo Vision

Epipolar Geometry and Stereo Vision Epipolar Geometry and Stereo Vision Computer Vision Shiv Ram Dubey, IIIT Sri City Many slides from S. Seitz and D. Hoiem Last class: Image Stitching Two images with rotation/zoom but no translation. X

More information

CS223b Midterm Exam, Computer Vision. Monday February 25th, Winter 2008, Prof. Jana Kosecka

CS223b Midterm Exam, Computer Vision. Monday February 25th, Winter 2008, Prof. Jana Kosecka CS223b Midterm Exam, Computer Vision Monday February 25th, Winter 2008, Prof. Jana Kosecka Your name email This exam is 8 pages long including cover page. Make sure your exam is not missing any pages.

More information

Lecture 9: Epipolar Geometry

Lecture 9: Epipolar Geometry Lecture 9: Epipolar Geometry Professor Fei Fei Li Stanford Vision Lab 1 What we will learn today? Why is stereo useful? Epipolar constraints Essential and fundamental matrix Estimating F (Problem Set 2

More information

Vision-based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy

Vision-based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy Vision-based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy Gideon P. Stein Ofer Mano Amnon Shashua MobileEye Vision Technologies Ltd. MobileEye Vision Technologies Ltd. Hebrew University

More information

calibrated coordinates Linear transformation pixel coordinates

calibrated coordinates Linear transformation pixel coordinates 1 calibrated coordinates Linear transformation pixel coordinates 2 Calibration with a rig Uncalibrated epipolar geometry Ambiguities in image formation Stratified reconstruction Autocalibration with partial

More information

Stereo Vision Computer Vision (Kris Kitani) Carnegie Mellon University

Stereo Vision Computer Vision (Kris Kitani) Carnegie Mellon University Stereo Vision 16-385 Computer Vision (Kris Kitani) Carnegie Mellon University What s different between these two images? Objects that are close move more or less? The amount of horizontal movement is

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

Computer Vision Projective Geometry and Calibration. Pinhole cameras

Computer Vision Projective Geometry and Calibration. Pinhole cameras Computer Vision Projective Geometry and Calibration Professor Hager http://www.cs.jhu.edu/~hager Jason Corso http://www.cs.jhu.edu/~jcorso. Pinhole cameras Abstract camera model - box with a small hole

More information

CS201 Computer Vision Camera Geometry

CS201 Computer Vision Camera Geometry CS201 Computer Vision Camera Geometry John Magee 25 November, 2014 Slides Courtesy of: Diane H. Theriault (deht@bu.edu) Question of the Day: How can we represent the relationships between cameras and the

More information

CS231A Course Notes 4: Stereo Systems and Structure from Motion

CS231A Course Notes 4: Stereo Systems and Structure from Motion CS231A Course Notes 4: Stereo Systems and Structure from Motion Kenji Hata and Silvio Savarese 1 Introduction In the previous notes, we covered how adding additional viewpoints of a scene can greatly enhance

More information

A Factorization Method for Structure from Planar Motion

A Factorization Method for Structure from Planar Motion A Factorization Method for Structure from Planar Motion Jian Li and Rama Chellappa Center for Automation Research (CfAR) and Department of Electrical and Computer Engineering University of Maryland, College

More information

Unit 3 Multiple View Geometry

Unit 3 Multiple View Geometry Unit 3 Multiple View Geometry Relations between images of a scene Recovering the cameras Recovering the scene structure http://www.robots.ox.ac.uk/~vgg/hzbook/hzbook1.html 3D structure from images Recover

More information

Matching. Compare region of image to region of image. Today, simplest kind of matching. Intensities similar.

Matching. Compare region of image to region of image. Today, simplest kind of matching. Intensities similar. Matching Compare region of image to region of image. We talked about this for stereo. Important for motion. Epipolar constraint unknown. But motion small. Recognition Find object in image. Recognize object.

More information

Long-term motion estimation from images

Long-term motion estimation from images Long-term motion estimation from images Dennis Strelow 1 and Sanjiv Singh 2 1 Google, Mountain View, CA, strelow@google.com 2 Carnegie Mellon University, Pittsburgh, PA, ssingh@cmu.edu Summary. Cameras

More information

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching

More information

Midterm Exam Solutions

Midterm Exam Solutions Midterm Exam Solutions Computer Vision (J. Košecká) October 27, 2009 HONOR SYSTEM: This examination is strictly individual. You are not allowed to talk, discuss, exchange solutions, etc., with other fellow

More information

Dense 3D Reconstruction. Christiano Gava

Dense 3D Reconstruction. Christiano Gava Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction

More information

Image Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania

Image Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania Image Formation Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 18/03/2014 Outline Introduction; Geometric Primitives

More information

Computer Vision I. Announcements. Random Dot Stereograms. Stereo III. CSE252A Lecture 16

Computer Vision I. Announcements. Random Dot Stereograms. Stereo III. CSE252A Lecture 16 Announcements Stereo III CSE252A Lecture 16 HW1 being returned HW3 assigned and due date extended until 11/27/12 No office hours today No class on Thursday 12/6 Extra class on Tuesday 12/4 at 6:30PM in

More information

Model-Based Stereo. Chapter Motivation. The modeling system described in Chapter 5 allows the user to create a basic model of a

Model-Based Stereo. Chapter Motivation. The modeling system described in Chapter 5 allows the user to create a basic model of a 96 Chapter 7 Model-Based Stereo 7.1 Motivation The modeling system described in Chapter 5 allows the user to create a basic model of a scene, but in general the scene will have additional geometric detail

More information

Stereo vision. Many slides adapted from Steve Seitz

Stereo vision. Many slides adapted from Steve Seitz Stereo vision Many slides adapted from Steve Seitz What is stereo vision? Generic problem formulation: given several images of the same object or scene, compute a representation of its 3D shape What is

More information

Camera Geometry II. COS 429 Princeton University

Camera Geometry II. COS 429 Princeton University Camera Geometry II COS 429 Princeton University Outline Projective geometry Vanishing points Application: camera calibration Application: single-view metrology Epipolar geometry Application: stereo correspondence

More information

A Robust Two Feature Points Based Depth Estimation Method 1)

A Robust Two Feature Points Based Depth Estimation Method 1) Vol.31, No.5 ACTA AUTOMATICA SINICA September, 2005 A Robust Two Feature Points Based Depth Estimation Method 1) ZHONG Zhi-Guang YI Jian-Qiang ZHAO Dong-Bin (Laboratory of Complex Systems and Intelligence

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

An Embedded Calibration Stereovision System

An Embedded Calibration Stereovision System 2012 Intelligent Vehicles Symposium Alcalá de Henares, Spain, June 3-7, 2012 An Embedded Stereovision System JIA Yunde and QIN Xiameng Abstract This paper describes an embedded calibration stereovision

More information

STEREO-VISION SYSTEM PERFORMANCE ANALYSIS

STEREO-VISION SYSTEM PERFORMANCE ANALYSIS STEREO-VISION SYSTEM PERFORMANCE ANALYSIS M. Bertozzi, A. Broggi, G. Conte, and A. Fascioli Dipartimento di Ingegneria dell'informazione, Università di Parma Parco area delle Scienze, 181A I-43100, Parma,

More information

STEREO BY TWO-LEVEL DYNAMIC PROGRAMMING

STEREO BY TWO-LEVEL DYNAMIC PROGRAMMING STEREO BY TWO-LEVEL DYNAMIC PROGRAMMING Yuichi Ohta Institute of Information Sciences and Electronics University of Tsukuba IBARAKI, 305, JAPAN Takeo Kanade Computer Science Department Carnegie-Mellon

More information

Stereo. 11/02/2012 CS129, Brown James Hays. Slides by Kristen Grauman

Stereo. 11/02/2012 CS129, Brown James Hays. Slides by Kristen Grauman Stereo 11/02/2012 CS129, Brown James Hays Slides by Kristen Grauman Multiple views Multi-view geometry, matching, invariant features, stereo vision Lowe Hartley and Zisserman Why multiple views? Structure

More information

Stereo Image Rectification for Simple Panoramic Image Generation

Stereo Image Rectification for Simple Panoramic Image Generation Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,

More information

Camera Calibration for Video See-Through Head-Mounted Display. Abstract. 1.0 Introduction. Mike Bajura July 7, 1993

Camera Calibration for Video See-Through Head-Mounted Display. Abstract. 1.0 Introduction. Mike Bajura July 7, 1993 Camera Calibration for Video See-Through Head-Mounted Display Mike Bajura July 7, 1993 Abstract This report describes a method for computing the parameters needed to model a television camera for video

More information

Self-calibration of a pair of stereo cameras in general position

Self-calibration of a pair of stereo cameras in general position Self-calibration of a pair of stereo cameras in general position Raúl Rojas Institut für Informatik Freie Universität Berlin Takustr. 9, 14195 Berlin, Germany Abstract. This paper shows that it is possible

More information

Index. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 263

Index. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 263 Index 3D reconstruction, 125 5+1-point algorithm, 284 5-point algorithm, 270 7-point algorithm, 265 8-point algorithm, 263 affine point, 45 affine transformation, 57 affine transformation group, 57 affine

More information

Perception II: Pinhole camera and Stereo Vision

Perception II: Pinhole camera and Stereo Vision Perception II: Pinhole camera and Stereo Vision Davide Scaramuzza Margarita Chli, Paul Furgale, Marco Hutter, Roland Siegwart 1 Mobile Robot Control Scheme knowledge, data base mission commands Localization

More information

CS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching

CS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching Stereo Matching Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix

More information

ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW

ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW Thorsten Thormählen, Hellward Broszio, Ingolf Wassermann thormae@tnt.uni-hannover.de University of Hannover, Information Technology Laboratory,

More information

Index. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 253

Index. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 253 Index 3D reconstruction, 123 5+1-point algorithm, 274 5-point algorithm, 260 7-point algorithm, 255 8-point algorithm, 253 affine point, 43 affine transformation, 55 affine transformation group, 55 affine

More information

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth Common Classification Tasks Recognition of individual objects/faces Analyze object-specific features (e.g., key points) Train with images from different viewing angles Recognition of object classes Analyze

More information

Projective Rectification from the Fundamental Matrix

Projective Rectification from the Fundamental Matrix Projective Rectification from the Fundamental Matrix John Mallon Paul F. Whelan Vision Systems Group, Dublin City University, Dublin 9, Ireland Abstract This paper describes a direct, self-contained method

More information

Announcements. Stereo

Announcements. Stereo Announcements Stereo Homework 2 is due today, 11:59 PM Homework 3 will be assigned today Reading: Chapter 7: Stereopsis CSE 152 Lecture 8 Binocular Stereopsis: Mars Given two images of a scene where relative

More information

Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction

Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction Carsten Rother 09/12/2013 Computer Vision I: Multi-View 3D reconstruction Roadmap this lecture Computer Vision I: Multi-View

More information

Image Rectification (Stereo) (New book: 7.2.1, old book: 11.1)

Image Rectification (Stereo) (New book: 7.2.1, old book: 11.1) Image Rectification (Stereo) (New book: 7.2.1, old book: 11.1) Guido Gerig CS 6320 Spring 2013 Credits: Prof. Mubarak Shah, Course notes modified from: http://www.cs.ucf.edu/courses/cap6411/cap5415/, Lecture

More information

EXPERIMENTAL RESULTS ON THE DETERMINATION OF THE TRIFOCAL TENSOR USING NEARLY COPLANAR POINT CORRESPONDENCES

EXPERIMENTAL RESULTS ON THE DETERMINATION OF THE TRIFOCAL TENSOR USING NEARLY COPLANAR POINT CORRESPONDENCES EXPERIMENTAL RESULTS ON THE DETERMINATION OF THE TRIFOCAL TENSOR USING NEARLY COPLANAR POINT CORRESPONDENCES Camillo RESSL Institute of Photogrammetry and Remote Sensing University of Technology, Vienna,

More information

Lecture'9'&'10:'' Stereo'Vision'

Lecture'9'&'10:'' Stereo'Vision' Lecture'9'&'10:'' Stereo'Vision' Dr.'Juan'Carlos'Niebles' Stanford'AI'Lab' ' Professor'FeiAFei'Li' Stanford'Vision'Lab' 1' Dimensionality'ReducIon'Machine'(3D'to'2D)' 3D world 2D image Point of observation

More information

Camera Calibration Using Line Correspondences

Camera Calibration Using Line Correspondences Camera Calibration Using Line Correspondences Richard I. Hartley G.E. CRD, Schenectady, NY, 12301. Ph: (518)-387-7333 Fax: (518)-387-6845 Email : hartley@crd.ge.com Abstract In this paper, a method of

More information

Development of a Video-Rate Stereo Machine. Abstract

Development of a Video-Rate Stereo Machine. Abstract Thi d d i h F M k 4 2 Proceedings of 94 ARPA Image Understanding Workshop, Nov 14-16, 1994, Monttey Ca. Development of a Video-Rate Stereo Machine Takeo Kanade School of Computer Science, Carnegie Mellon

More information

Recovering structure from a single view Pinhole perspective projection

Recovering structure from a single view Pinhole perspective projection EPIPOLAR GEOMETRY The slides are from several sources through James Hays (Brown); Silvio Savarese (U. of Michigan); Svetlana Lazebnik (U. Illinois); Bill Freeman and Antonio Torralba (MIT), including their

More information

Epipolar geometry. x x

Epipolar geometry. x x Two-view geometry Epipolar geometry X x x Baseline line connecting the two camera centers Epipolar Plane plane containing baseline (1D family) Epipoles = intersections of baseline with image planes = projections

More information

DD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication

DD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication DD2423 Image Analysis and Computer Vision IMAGE FORMATION Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 8, 2013 1 Image formation Goal:

More information

Correspondence and Stereopsis. Original notes by W. Correa. Figures from [Forsyth & Ponce] and [Trucco & Verri]

Correspondence and Stereopsis. Original notes by W. Correa. Figures from [Forsyth & Ponce] and [Trucco & Verri] Correspondence and Stereopsis Original notes by W. Correa. Figures from [Forsyth & Ponce] and [Trucco & Verri] Introduction Disparity: Informally: difference between two pictures Allows us to gain a strong

More information

Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera

Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Tomokazu Sato, Masayuki Kanbara and Naokazu Yokoya Graduate School of Information Science, Nara Institute

More information

cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry

cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry Steven Scher December 2, 2004 Steven Scher SteveScher@alumni.princeton.edu Abstract Three-dimensional

More information

CIS 580, Machine Perception, Spring 2015 Homework 1 Due: :59AM

CIS 580, Machine Perception, Spring 2015 Homework 1 Due: :59AM CIS 580, Machine Perception, Spring 2015 Homework 1 Due: 2015.02.09. 11:59AM Instructions. Submit your answers in PDF form to Canvas. This is an individual assignment. 1 Camera Model, Focal Length and

More information

Structure from motion

Structure from motion Structure from motion Structure from motion Given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates?? R 1,t 1 R 2,t R 2 3,t 3 Camera 1 Camera

More information

There are many cues in monocular vision which suggests that vision in stereo starts very early from two similar 2D images. Lets see a few...

There are many cues in monocular vision which suggests that vision in stereo starts very early from two similar 2D images. Lets see a few... STEREO VISION The slides are from several sources through James Hays (Brown); Srinivasa Narasimhan (CMU); Silvio Savarese (U. of Michigan); Bill Freeman and Antonio Torralba (MIT), including their own

More information

3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera

3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera 3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera Shinichi GOTO Department of Mechanical Engineering Shizuoka University 3-5-1 Johoku,

More information

Lecture 9 & 10: Stereo Vision

Lecture 9 & 10: Stereo Vision Lecture 9 & 10: Stereo Vision Professor Fei- Fei Li Stanford Vision Lab 1 What we will learn today? IntroducEon to stereo vision Epipolar geometry: a gentle intro Parallel images Image receficaeon Solving

More information