DEPTH ESTIMATION USING STEREO FISH-EYE LENSES


Shishir Shah and J. K. Aggarwal
Computer and Vision Research Center
Department of Electrical and Computer Engineering, ENS 520
The University of Texas at Austin, Austin, TX

This research was supported in part by the DoD Joint Services Electronics Program through the Air Force Office of Scientific Research (AFSC) Contract F C-0027, and in part by the Army Research Office under contract DAAL03-91-G-0050.

ABSTRACT

This paper presents the estimation of depth in an indoor, structured environment based on a stereo setup consisting of two fish-eye lenses, with parallel optical axes, mounted on a robot platform. The fish-eye lenses provide a large field of view, allowing better depth estimates for features very close to the lens. To extract significant information from the fish-eye lens images, we first correct for the distortion before using a special line detector, based on vanishing points, to extract significant features. We use a relaxation procedure to achieve correspondence between features in the left and right images, with prediction and recursive verification of hypotheses used to find a one-to-one correspondence. Experimental results obtained on several stereo images are presented, and an accuracy analysis is performed. Further, the algorithm is tested with a pair of wide-angle lenses, and the accuracy and the difference in the spatial information obtained are compared.

1. INTRODUCTION

Stereopsis is a common technique for passive computation of range information about a scene. The objective of scene analysis by stereopsis is to recover the three-dimensional (3D) location of features from their projections in two-dimensional (2D) images. The central problem in realizing this objective is accurately determining the correspondence between the left and right images, and researchers have proposed various solutions to it. A common stereo system is a nonconvergent imaging system with two fixed cameras, separated by some baseline distance and having parallel optical axes, whose parameters are calculated in a calibration stage. With this setup, a left and a right image are obtained. By computing the displacement, or disparity, between two corresponding features in the left and right images, the 3D coordinates of an image point in the scene are found. The computational stereo paradigm thus consists of three major steps: stereo camera modeling, feature detection, and matching. The major problem lies in establishing the correspondence between the two stereo images. Generally, two broad classes of techniques have been used: feature-based and area-based. Feature-based solutions employ simple geometric primitives such as line segments and planar patches [1]. Such models are appropriate for simple, well-structured environments consisting of man-made objects. These techniques have generally been more successful overall, as the matching process is much faster than in area-based techniques and there are fewer feature points to consider [2]. Area-based algorithms represent depth at each pixel in the image. These techniques promise more generality; however, much remains to be done on both the mathematical and system aspects of this approach [3, 4]. So far, many techniques have proved unsatisfactory: they produce poorly defined matches, making it difficult to determine when a match has been established.
They are also highly sensitive to distortions in gray level and geometry, and are computationally very expensive. A survey of stereo algorithms is found in [5]. Other techniques that have been used include dynamic programming [6] and relaxation methods [2, 7, 8, 9]. In this paper we present a stereo vision procedure for the guidance and navigation of autonomous mobile robots. We use a parallel stereo geometry consisting of two fish-eye lenses mounted on a robot platform. Fish-eye lenses provide a large field of view and are important in applications where an accurate estimate of the distance of features very close to the lens is crucial; this would not be possible with a normal lens. Figure 1 contrasts a scene as imaged by the fish-eye lens, on the left, and by a wide-angle lens, on the right, from the same position. The distortion seen in the fish-eye lens image has to be corrected before linear edge segments can be detected. In this paper, we discuss methods for distortion correction of fish-eye lens images, segmentation, and stereo correspondence to estimate the spatial information of a scene. The paper is organized as follows: Section 2 briefly describes the stereo vision setup and geometry used. Section 3 discusses the algorithms for distortion correction, segmentation, and stereo correspondence. Section 4 presents the experimental results obtained on several pairs of images, and Section 5 discusses the accuracy recorded in estimating the depth of detected features in the scene, along with a comparison between the spatial information obtained using the fish-eye lenses and the wide-angle lenses. Finally, Section 6 summarizes the key points of this paper and offers conclusions.

Figure 1: (a) fish-eye and (b) wide-angle lens images

Figure 2: Stereo setup

2. STEREO SYSTEM

In an image, a detected 2D feature is the perspective projection of a 3D feature in the scene. A number of 3D points can project onto the same 2D point, which results in the loss of depth information. To recover this lost information, two images taken from different perspectives are required. The first step in recovering 3D spatial information is to establish the perspective transformation between the scene and its projection onto the left and right images. As shown in Figure 2, a point P, defined by its coordinates (x, y, z) in the real world, projects onto the corresponding 2D image coordinates (x_l, y_l) and (x_r, y_r) in the left and right images, respectively. The two cameras are separated by a fixed baseline distance D and have a known focal length f. Taking the origin O to coincide with the image center of the left camera, the perspective projections can be derived through simple algebra:

    x = x_l · D / d    (1)
    y = y_l · D / d    (2)
    z = f · D / d      (3)

where d is the disparity between the two corresponding features in the left and right images:

    d = x_l - x_r    (4)

These equations provide the basis for deriving 3D depth from stereo images.
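To make Eqs. (1)-(4) concrete, the following minimal sketch (ours, not code from the paper) triangulates one matched feature pair; image coordinates are assumed to be measured from the left image center in the same metric units as the focal length, so the result comes out in the units of the baseline.

```python
def triangulate(x_l, y_l, x_r, D, f):
    """Recover (x, y, z) for a feature matched across a parallel-axis
    stereo pair, per Eqs. (1)-(4)."""
    d = x_l - x_r  # disparity, Eq. (4)
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return (x_l * D / d,  # Eq. (1)
            y_l * D / d,  # Eq. (2)
            f * D / d)    # Eq. (3)
```

Larger disparities correspond to closer points, which is why a wide field of view helps when nearby features must be ranged accurately.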
3. IMAGE ANALYSIS

In the stereo setup described above, a pair of fish-eye lens cameras with a calculated focal length of 3.8 is used. The baseline distance is set to 398, and the parallel optical axes geometry is calibrated. The procedure for calculating spatial information from this setup is presented in the following sections.

3.1 Distortion Correction

In order to take advantage of the large field of view of the fish-eye lens, the inherent distortion has to be corrected so that the data can be used with minimal loss of information. The acquired images exhibit a combination of radial and tangential distortion. To remove it, we use a polynomial transform, a non-linear mapping of points in the image plane, together with an inverse mapping technique to recover a complete gray-scale image. The result is an undistorted image with minimal loss of information; the corrected image contains blank areas in the vertical direction because the field of view is larger in the diagonal direction. The distortion correction is achieved by determining a mapping between points in the world coordinate system and their corresponding locations in the image plane. A fifth-order polynomial is employed, and its coefficients are determined using a Lagrangian minimization technique. A detailed procedure for calibration and distortion correction is given in [10]. Figure 3 shows a distorted image acquired with the fish-eye lens and the corresponding corrected image.

Figure 3: (a) distorted and (b) undistorted image pair
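The sketch below illustrates the inverse-mapping idea under simplifying assumptions: a purely radial fifth-order polynomial with already-known coefficients stands in for the full calibrated model of [10], which also handles tangential distortion and fits its coefficients by Lagrangian minimization.

```python
import numpy as np

def undistort(image, coeffs, center):
    """Correct a fish-eye image by inverse mapping (illustrative sketch).

    For each pixel of the corrected output, a fifth-order polynomial in
    the radial distance gives the location in the distorted input from
    which to sample the gray value, so the output image has no holes.
    coeffs = (k1, k3, k5): hypothetical radial coefficients from calibration.
    """
    h, w = image.shape
    cy, cx = center
    ys, xs = np.mgrid[0:h, 0:w]
    dx = xs - cx
    dy = ys - cy
    r = np.hypot(dx, dy)
    k1, k3, k5 = coeffs
    r_src = k1 * r + k3 * r**3 + k5 * r**5  # radius in the distorted image
    scale = np.ones_like(r, dtype=float)
    nz = r > 0
    scale[nz] = r_src[nz] / r[nz]
    src_x = np.rint(cx + dx * scale).astype(int)
    src_y = np.rint(cy + dy * scale).astype(int)
    out = np.zeros_like(image)
    ok = (src_x >= 0) & (src_x < w) & (src_y >= 0) & (src_y < h)
    out[ok] = image[src_y[ok], src_x[ok]]
    return out
```

Output pixels whose source location falls outside the sensor stay blank, matching the blank vertical bands noted above.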

3.2 Segmentation

Important line segments are extracted from the images using a line detector based on vanishing points. The vanishing points are determined from the heading information obtained from the calibration parameters and are updated based on the horizontal lines in the image. As the goal is to navigate through a structured environment, we look for lines in only three directions: horizontal lines, vertical lines, and lines going into the image plane. The lines going into the image plane are not very significant for the correspondence process: the exact beginning and end of such lines is not known, which makes 3D recovery difficult. In our calculations we have ignored these lines, although they can prove important in reconstructing a model of the scene or object. For further details, refer to [11] and [12]. The detected lines are grouped according to their orientations and can be processed separately. Figure 4 shows the segmented images.

Figure 4: Detected segments in an image

3.3 Stereo Correspondence

Having grouped the detected lines according to their orientations, the correspondence problem is greatly simplified. Since parallel stereo geometry is used, the depth extraction process is based on triangulation, the determination of a 3D point from the intersection of two rays. This procedure makes use of parallax, the displacement of the perspective projection of a point due to a translational change in the position of the lens [13]. For navigation purposes, it is important to know the depths of the vertical and the horizontal lines in the scene, so we consider only these two orientations of lines in the matching algorithm; they provide the most significant information. The horizontal lines matter when the robot has to change its direction of motion: we do not seek accurate depth from horizontal matches, only a guideline for making the transition in direction. Correspondence between the two images is achieved by associating weights with the probable matches, calculated from the disparity estimate, the length estimate, the edge intensity value, and the disparity gradient value. The feature with the largest weight is chosen as the corresponding match.

Estimating the disparity prior to matching speeds up feature matching by limiting the search to a certain section of the image. To achieve this, the baseline length of the lenses is calculated from the hardware setup and the lens calibration. The maximum search area in the stereo algorithm, which is the maximum disparity that can be encountered, is

    disp_max = D · f / Z_min    (5)

where D is the calculated baseline, f is the focal length, and Z_min is the closest possible depth that can be detected. The search area is further restricted by the epipolar line constraint, and since the orientation of each line is known, the search is limited to lines of similar orientation. We use an iterative search in our stereo algorithm: based on the value of the maximum disparity, the search is iterated until a match is established.

Knowing the end points, we can easily calculate the length of each line. Corresponding lines in the left and right images are very likely to have almost the same length, so the search is also weighted according to the lengths recorded for each match. The intensity around the edge point of each line is used as a further weight. An eight-neighborhood of pixels around the edge point of the line is chosen and a summed intensity value is calculated:

    I = Σ_i Σ_j a_ij    (6)

where a_ij is a pixel in the neighborhood of the edge pixel. This process is repeated for the edge of the lines in the other image, and a value is associated with the absolute difference of the intensity values between the two lines:

    I_d = | I_l - I_r |    (7)

where I_l and I_r are the summed edge intensities of the left and right lines, respectively. Weight is assigned to the matched pair according to this absolute intensity difference, and the minimum-valued pair is chosen as a probable match. This process is repeated for each line in the database, and weights are associated with each match.
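A minimal sketch of the edge-intensity weight of Eqs. (6) and (7), assuming grayscale images stored as 2D NumPy arrays and edge points away from the image border; the function names are ours.

```python
def summed_edge_intensity(image, edge_point):
    """Sum of gray levels over the edge pixel and its eight
    neighbors, i.e. a 3 x 3 window (Eq. 6)."""
    r, c = edge_point
    return float(image[r - 1:r + 2, c - 1:c + 2].sum())

def intensity_difference(left, right, p_l, p_r):
    """Absolute difference of the two summed intensities (Eq. 7).
    Among the candidates, the pair with the minimum value is kept
    as the probable match, i.e. it receives the largest weight."""
    return abs(summed_edge_intensity(left, p_l) -
               summed_edge_intensity(right, p_r))
```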
Figure 5: Disparity gradient

The final criterion for associating weights is based on the disparity gradient between two lines. From the probable matches, two lines near each other are selected, as shown in Figure 5, and one edge of each line is taken into account per iteration. The disparity gradient value is then calculated from the following equations. A_l, B_l, A_r, and B_r are the edge points of the selected lines, with coordinates

    A_l = (a_xl, a_yl);  A_r = (a_xr, a_yr)    (8)
    B_l = (b_xl, b_yl);  B_r = (b_xr, b_yr)    (9)

The average coordinates of the two edge points between the two images are

    A = ( (a_xl + a_xr)/2, (a_yl + a_yr)/2 )    (10)
    B = ( (b_xl + b_xr)/2, (b_yl + b_yr)/2 )    (11)

A separation value is then calculated as the distance between these averaged points:

    S(A, B) = sqrt( (A_x - B_x)^2 + (A_y - B_y)^2 )    (12)

The disparity gradient is the ratio of the difference in disparity between the two edge points to the calculated separation value:

    DG = | d_A - d_B | / S(A, B)    (13)

where d_A and d_B are the disparities of edge points A and B. The minimum-valued match is associated with the greatest weight.
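A direct transcription of Eqs. (8)-(13) as a small function; a sketch of ours, applied per pair of edge points exactly as described above.

```python
import math

def disparity_gradient(A_l, A_r, B_l, B_r):
    """Disparity gradient between two nearby edge points, Eqs. (8)-(13).
    Each argument is an (x, y) coordinate: A_l/A_r are the left/right
    image positions of edge point A, and likewise for B."""
    d_A = A_l[0] - A_r[0]  # disparity of A, per Eq. (4)
    d_B = B_l[0] - B_r[0]  # disparity of B
    # Averaged coordinates of each edge point, Eqs. (10)-(11).
    A = ((A_l[0] + A_r[0]) / 2, (A_l[1] + A_r[1]) / 2)
    B = ((B_l[0] + B_r[0]) / 2, (B_l[1] + B_r[1]) / 2)
    S = math.hypot(A[0] - B[0], A[1] - B[1])  # separation, Eq. (12)
    if S == 0:
        raise ValueError("coincident averaged edge points")
    return abs(d_A - d_B) / S  # Eq. (13)
```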

The overall criterion for a match is the sum of the weights associated with each probable match; the candidate with the highest total weight is chosen as the correct match. The overall aim of the algorithm is to minimize the difference in the endpoints of the line in the left and right images, and then to reinforce the match by minimizing the differences in segment length, edge gradient, and disparity gradient. Thus, the approach of prediction and recursive verification of hypotheses is used [14]. We have found that this procedure results in 98% true matches. From the matches we can then calculate the depth of each segment and construct a map.

4. RESULTS

The algorithm was tested on several sets of images. A left and right image pair as imaged through the fish-eye lens camera is shown in Figure 6. Using the distortion correction algorithm, the images were corrected, and based on the calculated vanishing points, the significant line segments were detected, as shown in Figure 7. Stereo correspondence was then established, and the matched line segments from the left and right pair were extracted, as shown in Figure 8. Through the triangulation process, the three-dimensional information is recovered by projecting the two-dimensional line segments to their respective spatial coordinates, as seen in Figure 9(a): lines are plotted according to their spatial coordinates and joined by lines going into the image plane, with lines at the same depth joined by horizontal lines. To determine the accuracy of the estimated depth for each line segment, we physically measured the depths of over 100 detected line segments in several image pairs. The averages of these measurements, together with the average error, are shown in Table 1.

Figure 6: Left and right image pair
Figure 7: Segmented left and right image pair
Figure 8: Matched left and right image pair

5. ACCURACY

Results obtained using the fish-eye lenses were compared with those obtained using wide-angle lenses to establish the advantage of fish-eye lenses for autonomous navigation. A similar stereo setup was built with a pair of wide-angle lenses; the difference in the information perceived has already been shown. A pair of images was acquired from the same position as with the fish-eye lenses, the significant lines were detected, and stereo correspondence was established. The reconstructed model is shown in Figure 9(b); for further details refer to [15]. The spatial information for the segments was calculated and compared to that previously obtained with the fish-eye stereo setup. We also compared the estimates for segments in two regions, considering lines closer than 4.3 meters and those further away. This break point is the calculated maximum distance that a lens can see with our setup, given by

    Z = f · D / d_min

where f is the focal length, D is the baseline distance, and d_min is the minimum disparity observed in the image pair. The results of the comparison are shown in Table 2. We found that lines closer to the lens are estimated with higher accuracy by the fish-eye lens, while lines further away are better estimated by the wide-angle lens.
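Equation (5) and the viewing-range limit above are the same triangulation relation read in opposite directions; a minimal sketch of ours, assuming mutually consistent units for baseline, focal length, disparity, and depth:

```python
def max_disparity(D, f, Z_min):
    """Eq. (5): the largest disparity the matcher must search,
    fixed by the closest depth Z_min that can be detected."""
    return D * f / Z_min

def viewing_range(D, f, d_min):
    """The farthest recoverable depth for the minimum observed
    disparity d_min; this is the break point (about 4.3 m for the
    setup reported here) used to split Table 2 into near and far
    segments."""
    return f * D / d_min
```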

Figure 9: Spatial maps; (a) fish-eye and (b) wide-angle lens

Table 1: Estimated distance distribution (average estimated and measured distances with percent error, for vertical, horizontal, and all detected segments)

Table 2: Lens error distribution (average estimated and measured distances with percent error, for segments nearer and further than 4.3 m)

6. CONCLUSION

In this paper we have presented a simple and effective procedure for estimating depth in a structured, indoor scene using a parallel stereo setup with two fish-eye lenses. The procedure includes distortion correction of the fish-eye lens images, a segmentation procedure based on vanishing points that extracts only the most significant line segments, and, finally, a correspondence algorithm based on recursive hypothesis formulation and verification. Triangulation is then used to estimate the depth of each line segment in the scene. The system has been implemented on real scene images and its accuracy determined. A comparison is made between the fish-eye lenses and a similar setup of wide-angle lenses: the viewing range of the lenses is calculated from the hardware setup, and the accuracy in estimating the depth of segments near to and far from the lens is evaluated. It is thus established that the fish-eye lens provides more information and higher accuracy when the application requires spatial perception of objects close to the lens.

7. REFERENCES

[1] W. E. L. Grimson. Computational Experiments with a Feature Based Stereo Algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 7, 17-34, 1985.
[2] G. Medioni and R. Nevatia. Segment-based Stereo Matching. Computer Vision, Graphics, and Image Processing, 2-18, 1985.
[3] S. T. Barnard. Stochastic Stereo Matching over Scale. International Journal of Computer Vision, vol. 3, 17-31, 1989.
[4] L. H. Matthies, R. Szeliski, and T. Kanade. Kalman Filter-based Algorithms for Estimating Depth from Image Sequences. International Journal of Computer Vision, vol. 3, 1989.
[5] U. R. Dhond and J. K. Aggarwal. Structure from Stereo-A Review. IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, 1989.
[6] H. H. Baker and T. O. Binford. Depth from Edge and Intensity Based Stereo. Proc. 7th Int. Joint Conf. on Artificial Intelligence, 1981.
[7] S. B. Marapane and M. M. Trivedi. Edge Segment Based Stereo Analysis. SPIE Applications of Artificial Intelligence VIII, vol. 1293, 1990.
[8] S. T. Barnard and W. B. Thompson. Disparity Analysis of Images. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 2, 1980.
[9] Y. C. Kim and J. K. Aggarwal. Positioning Three-Dimensional Objects Using Stereo Images. IEEE Journal of Robotics and Automation, vol. 3, 1987.
[10] S. Shah and J. K. Aggarwal. A Simple Calibration Procedure for Fish-Eye (High Distortion) Lens Camera. Proc. IEEE Int. Conf. on Robotics and Automation, 1994.
[11] X. Lebègue and J. K. Aggarwal. Extraction and Interpretation of Semantically Significant Line Segments for a Mobile Robot. Proc. IEEE Int. Conf. on Robotics and Automation, 1992.
[12] X. Lebègue and J. K. Aggarwal. Detecting 3-D Parallel Lines for Perceptual Organization. Proc. Second European Conf. on Computer Vision, 1992.
[13] R. M. Haralick and L. G. Shapiro. Computer and Robot Vision, Volume II. Addison-Wesley Publishing Company, 1993.
[14] N. Ayache and B. Faverjon. A Fast Stereo Vision Matcher Based on Prediction and Recursive Verification of Hypotheses. Proc.
of the Third Workshop on Computer Vision: Representation and Control, 27-37, 1985.
[15] S. Shah and J. K. Aggarwal. Segment-based Stereo Correspondence using Fish-Eye Lens. Technical Report, Computer and Vision Research Center, The University of Texas at Austin.
