An Embedded Calibration Stereovision System
2012 Intelligent Vehicles Symposium, Alcalá de Henares, Spain, June 3-7, 2012

JIA Yunde and QIN Xiameng

Abstract — This paper describes an embedded calibration stereovision system that can be strongly auto-calibrated online without placing a calibration device in front of the stereo camera; instead, the device is hidden inside the cavity of the system behind a half-mirror. The stereo camera simultaneously observes a scene through the half-mirror and the calibration device reflected by it, forming an embedded calibration stereo pair that contains both the scene and the calibration device. Features of the calibration patterns are extracted from the embedded calibration stereo pair to estimate the stereo camera's parameters. We use a polyhedral-mirror to generate multiple virtual images of the calibration device so that they occupy a large part of the scene image, for accurate estimation, and we use several mirrors to extend the optical path from the calibration device to the stereo camera for depth recovery of distant objects. The system can be used in a wide range of applications without concern for variation of the camera parameters.

I. INTRODUCTION

A stereovision system is an effective means of 3D scene understanding and obstacle avoidance for autonomous robotic vehicles, and has traditionally been one of the most attractive topics in computer vision and robotics. Camera calibration is a necessary step in stereo vision for extracting metric information from 2D images. Conventional camera calibration [1], [2], [3] is generally done by placing a known calibration device (whose geometry in 3D space is known with very good precision) in front of a camera; the imaged positions of features on this device are then used to determine the camera parameters.
The advantages of conventional methods are robust estimation of the camera parameters and the ability to recover the absolute scale of a scene, owing to explicit 3D information. Such a calibration must be done at the beginning of a working session, and it often requires human interaction to place and position the calibration device, and then to remove it after calibration. In many applications the vision system changes due to mechanical and thermal variation, so such a one-off calibration is of little use. In industrial robot vision, re-calibration has been integrated into operation using data available during normal use [4]. In unmanned vehicle navigation, the calibration tool is carried with the vehicle so that its vision system can be re-calibrated on site at intervals; in planetary exploration, for example, the calibration device was mounted on the vehicle for on-site stereo calibration [5]. To develop practical and efficient vision systems, many researchers have contributed highly efficient procedures for conventional calibration. Zhang [6] and Bouguet [7] have developed popular toolboxes with well-designed user interfaces that allow others to implement calibration in their own research. But all these operations are labor-intensive, expensive, or require halting operation. Moreover, the conventional method cannot be used in applications where focusing and zooming of the cameras are needed. Recently, Katayama [8] proposed a calibration method that uses a transparent calibration tool with dot patterns made of color filters. This tool is installed in front of the camera and requires no resetting or removal, which greatly simplifies the calibration process.

(JIA Yunde and QIN Xiameng are with the Beijing Lab of Intelligent Information Technology, School of Computer Science, Beijing Institute of Technology, Beijing, PR China. jiayunde@bit.edu.cn, qxm0405@bit.edu.cn)
This is an online, automatic, and robust calibration technique, but its accuracy is not comparable with the conventional method because the calibration tool lies far from the plane of focus. Another problem is the size and location of the calibration tool, which occupies part of the field of view and makes the system less compact. In this paper, we propose an embedded calibration stereovision method that strongly auto-calibrates a stereovision system online with a calibration device hidden inside the cavity of the system behind a half-mirror, instead of placing the device in front of the system. The half-mirror reflects the calibration device hidden inside the system to the stereo camera while allowing scene rays to pass through it to the same camera. The stereo camera is thus able to simultaneously observe a scene and the calibration device, i.e. to form an embedded calibration stereo pair containing both the scene and the calibration device. We extract the features of the calibration patterns from the embedded calibration images to estimate the camera intrinsic and extrinsic parameters, and we use the same images to compute dense depth maps. In our system, a polyhedral-mirror generates multiple virtual images of the calibration device so that they occupy a large part of the images, for accurate estimation of the camera parameters, and several mirrors extend the distance from the calibration device to the camera for depth recovery of distant objects. The system can be used in a wide range of applications without concern for variation of the camera parameters. Another online auto-calibration approach is self-calibration, which has been explored intensively in the last two decades [9], [10]. This approach uses only a number of image correspondences to estimate the camera parameters.
But even under ideal conditions, when self-calibration methods are most reliable, they are routinely outperformed by conventional calibration [11].
Fig. 1. The configuration of the embedded calibration stereo system.

Fig. 2. A photograph of our system.

II. SYSTEM OVERVIEW

Fig. 1 shows the configuration of our embedded calibration stereovision system, which consists of a stereo head, a mirror group, a calibration device, a light source, and a stereo computing device; Fig. 2 is a photograph of the system. The stereo head comprises two cameras with fixed-focal-length or zoom lenses. The calibration device is made of multiple planes, each covered with a chess-board pattern. The mirror group is composed of a half-mirror, a polyhedral-mirror, and several plain mirrors that set up the required optical paths from the calibration device to the cameras. The half-mirror is the fundamental element of the system: it reflects the calibration device, hidden inside the system, to the stereo camera while allowing scene rays to pass through it to the same camera. The stereo camera therefore observes a scene and the calibration device simultaneously, forming an embedded calibration stereo pair, as shown in Fig. 3. To estimate the camera parameters more accurately, the conventional method requires fabricating the calibration device as big as possible so that it occupies a large part of the images. In practice, the size of the calibration device is ultimately limited, so most researchers have to move the device manually over a wide area of interest, or move it with a robot arm to different positions in the working space, and then use all the data to estimate the camera parameters [6]. In our system, the polyhedral-mirror generates multiple virtual images of the calibration device, which are projected to different regions of the field of view. Fig. 4 illustrates the formation of virtual calibration devices with a polyhedral-mirror.
In practice, the calibration device has to be placed in the common visible area within the depth of field, and this distance is usually a few meters in front of the camera, e.g. 2 meters. We use several mirrors to fold this distance and reduce the system volume. The stereo computing device is an FPGA-based high-performance computer developed in our lab. It can simultaneously acquire images from the two color cameras, pre-process embedded calibration stereo pairs, estimate the camera parameters, and compute dense depth maps with a resolution of … pixels and 32 disparity levels at video rate (30 fps). The stereo computing device is based on our previous work [12], [13]. Different from a conventional stereovision system, our system works automatically even when the camera parameters change, and it runs without any human interaction. Moreover, the system is able to recover stereo pairs with zoom lenses; indeed, it is a real zoom stereo camera.

Fig. 3. An embedded calibration stereo pair (the virtual images of the calibration device are visible in both views).

Fig. 4. The formation of virtual calibration devices with a polyhedral-mirror.

III. CAMERA CALIBRATION

The system performs camera calibration and depth recovery using the same embedded calibration stereo pair. We follow the traditional stereo vision pipeline: camera calibration first, then computation of the corresponding depth map.

A. Pattern extraction

The features of the calibration patterns must be extracted robustly and accurately to estimate accurate camera parameters. In our system, feature detection of the calibration patterns from an embedded calibration image is a challenging task because the calibration process is online and fully automatic, and the calibration device is embedded in a scene that may be complicated. Fortunately, the virtual positions of the calibration device in the stereo pairs do not change, so we can set up a prior set of features with accurate coordinates as initial values for online auto-calibration. Before online auto-calibration, we use PCA (Principal Component Analysis) to train the positions of the calibration planes and the four initial corners of the chess-board in each plane. During calibration, a homography matrix is first computed from these four corners, and the positions of the remaining corners are then predicted by projective transformation as initial values for iteration. The positions of all calibration corners are re-detected with a Harris detector in small windows around the initial corners, and the homography matrix is updated. In this way, we iteratively detect all corners with accurate coordinates. Fig. 5 shows the positions of the corners in the calibration planes.
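The homography-guided corner prediction step can be sketched as follows. This is a minimal NumPy sketch under the paper's description (DLT estimation of a homography from the four trained corners, then projective prediction of the remaining grid corners); the Harris re-detection loop is omitted and the function names are illustrative, not the authors' implementation:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of the 3x3 homography H with dst ~ H * src (>= 4 points)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    """Apply homography H to an Nx2 array of points (projective transform)."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    q = homog @ H.T
    return q[:, :2] / q[:, 2:3]
```

Given the four trained outer corners, `estimate_homography` recovers the board-to-image mapping exactly, and `project` predicts the remaining chess-board corners as initial positions for the Harris-based refinement.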
Conventional methods use a simple background for ease of feature detection, or often require human interaction to initialize it; for example, the best-known calibration toolboxes [6], [7] available on the Internet require manually pointing out the four chess-board corners in every stereo pair.

B. Camera model

The pixel location of a feature is denoted by $m = (u, v)^T$, and the corresponding 3D point by $M = (X, Y, Z)^T$. We use $\tilde{m} = (u, v, 1)^T$ and $\tilde{M} = (X, Y, Z, 1)^T$ to represent their homogeneous coordinates. A pinhole camera model is used, and the projective relationship between $m$ and $M$ is

$$s\,\tilde{m} = \begin{bmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix} [R \;\; t]\,\tilde{M} = A\,[R \;\; t]\,\tilde{M}, \qquad (1)$$

where $\alpha, \beta, \gamma, u_0, v_0$ are the camera intrinsic parameters and $A$ is the intrinsic matrix. $R$ and $t$ are the extrinsic parameters, i.e. the rotation and translation between the world coordinate system and the camera coordinate system. Suppose an $m$-polyhedral-mirror is used to generate $m$ virtual images of the calibration device in an embedded calibration stereo pair, and $n$ features with known 3D coordinates are given on each calibration pattern image. The maximum likelihood estimate is obtained by minimizing the following functional [6]:

$$\sum_{i=1}^{m} \sum_{j=1}^{n} \left\| m_{ij} - \hat{m}(A, k_{i1}, k_{i2}, R_i, t_i, M_j) \right\|^2, \qquad (2)$$

where $\hat{m}(A, k_{i1}, k_{i2}, R_i, t_i, M_j)$ is the image point corresponding to the 3D point $M_j$, and $k_{i1}, k_{i2}$ are distortion factors.

C. Image rectification

The perspective-projection-matrix approach to image rectification [14], [15] is used in our system. A new 3D affine coordinate system is established to make the epipolar lines parallel to the image scan lines. The new X axis and Y axis are parallel to the baseline of the stereo camera and to the intersection line of the two imaging planes, respectively, and the new Z axis is perpendicular to the XY plane.
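The pinhole model (1) and the reprojection objective (2) can be sketched in NumPy. This is a minimal sketch: the distortion factors $k_{i1}, k_{i2}$ and the per-view indexing are omitted, and the function names are illustrative:

```python
import numpy as np

def intrinsic(alpha, beta, gamma, u0, v0):
    """Intrinsic matrix A with focal terms (alpha, beta), skew gamma,
    and principal point (u0, v0), as in Eq. (1)."""
    return np.array([[alpha, gamma, u0],
                     [0.0,   beta,  v0],
                     [0.0,   0.0,  1.0]])

def project_point(A, R, t, M):
    """Pinhole projection s * m~ = A [R | t] M~ ; returns the pixel (u, v)."""
    p = A @ (R @ np.asarray(M, dtype=float) + np.asarray(t, dtype=float))
    return p[:2] / p[2]

def reprojection_error(A, R, t, pts3d, pts2d):
    """Sum of squared reprojection residuals, the quantity minimized in
    Eq. (2) (distortion terms omitted in this sketch)."""
    return sum(np.sum((project_point(A, R, t, M) - np.asarray(m)) ** 2)
               for M, m in zip(pts3d, pts2d))
```

With identity rotation and zero translation, a point on the optical axis projects to the principal point, and the residual of a perfectly consistent point set is zero; the full estimator iterates over all virtual views $i$ and features $j$.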
Fig. 5. Close-up of features extracted from the embedded calibration images in Fig. 3; red crosses are corners detected by the Harris detector, and blue crosses are the refined results after iteration.

It is easy to state the original camera model in terms of the original intrinsic parameter matrix $A$ and rotation matrix $R$:

$$\lambda\,[c \;\; r \;\; 1]^T = A R (X - T), \qquad (3)$$

where $c$ and $r$ are the column and row pixel coordinates in the original camera coordinate system. Using the new camera parameters, the rectified camera model is

$$\lambda'\,[c' \;\; r' \;\; 1]^T = A' R' (X - T), \qquad (4)$$

where $A'$ and $R'$ are the rectified intrinsic parameter matrix and rotation matrix, and $c'$ and $r'$ are the column and row pixel coordinates in the rectified camera coordinate system. Combining (3) and (4) yields the rectification transform:

$$\lambda''\,[c' \;\; r' \;\; 1]^T = A' R' R^{-1} A^{-1} [c \;\; r \;\; 1]^T. \qquad (5)$$

IV. EXPERIMENTS

Our system uses two FFMV-03MTC color cameras (Point Grey Inc.) with U-Tron 4 mm lenses to simultaneously acquire stereo pairs, as shown in Fig. 2. The calibration device is composed of three planes with different orientations, each covered with a chess-board with a grid size of 1 cm × 1 cm. We used a polyhedral-mirror with only two planar mirrors, forming two mirror images (two virtual calibration devices) of the calibration device. We also used another mirror together with the polyhedral-mirror to extend the optical path from the calibration device to the stereo camera to 1.2 meters. We tested our system in real scenes and obtained good results. Fig. 6(a) shows an example of an embedded calibration stereo pair with the calibration device lit, for system calibration. Fig. 6(b) is an example of a stereo pair without lighting the calibration device (i.e. a conventional stereo pair), and Fig. 6(c) is the dense depth map computed from Fig. 6(b) using only a simple SAD matching method. In most applications, such as robot navigation and automatic machinery, the focal length of the lenses is fixed and the calibrated system should keep working well for a long time. Our system can therefore work in a calibration mode, with the calibration device lit, for system calibration, and in a depth recovery mode, with the device unlit, for dense depth mapping of scenes, like a conventional stereo system.

A. Dense depth recovery from embedded calibration stereo pairs

Stereo cameras with zoom lenses are often used in object tracking and gazing applications, such as humanoid robots and surveillance systems. In this case the system has to work in calibration mode frequently, with the calibration device lit, so recovering dense depth maps directly from the embedded calibration stereo pairs is an interesting task. A challenging problem is separating the calibration device from the embedded calibration stereo pairs. Techniques [16], [17] have been proposed to separate reflection components from images; here we employ only a simple physical method to demonstrate feasibility. The method exploits the accumulation of charge in the imaging sensor, which is related to the illumination intensity and the illumination (exposure) time. Let $I_E$ denote an embedded calibration image and $I_C$ an image containing only the calibration object. The image separation model is $I_S = I_E - \alpha I_C$, where $I_S$ is the separated scene image and $\alpha$ is a coefficient defined as $\alpha = F(\mathrm{mean}(I_E - I_C))$, where $\mathrm{mean}(\cdot)$ denotes the mean gray value and $F(\cdot)$ is a cubic function fitted by least squares. Fig. 7(a) shows the stereo pair after separating the calibration device from the embedded calibration stereo pair of Fig. 6(a); the separated images closely resemble the real scene images (Fig. 6(b)). Fig. 7(b) is the dense depth map recovered from the stereo pair of Fig. 7(a).
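The separation model can be sketched in NumPy. This is a minimal sketch under stated assumptions: images are gray values scaled to [0, 1], the cubic $F(\cdot)$ is fitted by ordinary least squares with `np.polyfit` from sample pairs of mean differences and known coefficients (the paper's sampling procedure is not specified), and the function names are illustrative:

```python
import numpy as np

def fit_alpha_model(mean_diffs, alphas):
    """Fit the cubic F(.) mapping mean gray difference -> alpha by
    least squares (degree-3 polynomial)."""
    return np.polyfit(mean_diffs, alphas, 3)

def separate(I_E, I_C, coeffs):
    """Separate the scene image: I_S = I_E - alpha * I_C,
    with alpha = F(mean(I_E - I_C)); result clipped to [0, 1]."""
    alpha = np.polyval(coeffs, float(np.mean(I_E - I_C)))
    return np.clip(I_E - alpha * I_C, 0.0, 1.0)
```

Once the coefficients are fitted offline, `separate` is applied independently to the left and right images of each embedded calibration stereo pair before depth recovery.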
As can be seen, the result is acceptable and comparable with the result from the conventional stereo pair (Fig. 6(c)).

Fig. 6. An example of dense depth maps from our stereovision system with an embedded calibration device. (a) Stereo pair with the embedded calibration device lit, for camera calibration. (b) Stereo pair without lighting the embedded calibration device, for 3D scene recovery. (c) Dense depth map computed from (b).

Fig. 7. Depth map recovery from an embedded calibration stereo pair. (a) Gray stereo pair after separation of the embedded calibration device from the images of Fig. 6(a). (b) Dense depth map from the stereo pair.

V. CONCLUSIONS

This paper has presented an embedded calibration stereovision method that is able to strongly calibrate a camera without placing a calibration device in front of it. We used a half-mirror to reflect the calibration device hidden inside the system to the camera while allowing scene rays to pass through. The stereo camera simultaneously observes a scene and the calibration device, forming an embedded calibration stereo pair. A polyhedral-mirror generates multiple virtual images of the calibration device so that they occupy a large part of the images, for accurate estimation of the camera parameters, and several mirrors extend the optical path from the calibration device to the camera for depth recovery of distant objects. The embedded calibration stereovision system can be easily used in a wide range of applications without concern for changing parameters of cameras and zoom lenses. It is a complete standalone machine and can be easily used by anyone outside the field.

VI. ACKNOWLEDGMENTS

This work was supported in part by the 973 Program of China under Grant No. 2012CB… and by the Natural Science Foundation of China (NSFC).

REFERENCES

[1] O. Faugeras and G. Toscani, "The calibration problem for stereo," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Miami Beach, FL, 1986.
[2] R. Y. Tsai, "An efficient and accurate camera calibration technique for 3D machine vision," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Miami Beach, FL, 1986.
[3] J. Weng, P. Cohen, and M. Herniou, "Camera calibration with distortion models and accuracy evaluation," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 14, 1992.
[4] N. A. Thacker and J. E. W. Mayhew, "Optimal combination of stereo camera calibration from arbitrary stereo images," Image and Vision Computing, vol. 9, 1991.
[5] Z. Zhang, "A stereovision system for a planetary rover: calibration, correlation, registration, and fusion," Machine Vision and Applications, vol. 10, 1997.
[6] Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 11, 2000.
[7] J.-Y. Bouguet, "Camera Calibration Toolbox for Matlab."
[8] Y. Katayama, "Camera calibration with a transparent tool using color filters: application to … for a distant object," in Proc. 18th Int. Conf. Pattern Recognition, 2006.
[9] O. Faugeras, Three-Dimensional Computer Vision, MIT Press.
[10] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press.
[11] S. Bougnoux, "From projective to Euclidean space under any practical situation, a criticism of self-calibration," in Proc. 6th Int. Conf. Computer Vision, 1998.
[12] L. Chen, Y. Jia, and M. Li, "An FPGA-based RGBD imager," Machine Vision and Applications, vol. 23, no. 3, 2012.
[13] Y. Jia, X. Zhang, M. Li, and L. An, "A miniature stereo vision machine (MSVM-III) for dense disparity mapping," in Proc. 17th Int. Conf. Pattern Recognition, 2004.
[14] N. Ayache and C. Hansen, "Rectification of images for binocular and trinocular stereovision," in Proc. 9th Int. Conf. Pattern Recognition, vol. 1, 1988.
[15] A. Fusiello, E. Trucco, and A. Verri, "A compact algorithm for rectification of stereo pairs," Machine Vision and Applications, vol. 12, 2000.
[16] R. Szeliski, S. Avidan, and P. Anandan, "Layer extraction from multiple images containing reflections and transparency," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2000.
[17] A. Levin and Y. Weiss, "User assisted separation of reflections from a single image using a sparsity prior," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 9, 2007.
More informationEECS 4330/7330 Introduction to Mechatronics and Robotic Vision, Fall Lab 1. Camera Calibration
1 Lab 1 Camera Calibration Objective In this experiment, students will use stereo cameras, an image acquisition program and camera calibration algorithms to achieve the following goals: 1. Develop a procedure
More informationCS201 Computer Vision Camera Geometry
CS201 Computer Vision Camera Geometry John Magee 25 November, 2014 Slides Courtesy of: Diane H. Theriault (deht@bu.edu) Question of the Day: How can we represent the relationships between cameras and the
More informationMOTION STEREO DOUBLE MATCHING RESTRICTION IN 3D MOVEMENT ANALYSIS
MOTION STEREO DOUBLE MATCHING RESTRICTION IN 3D MOVEMENT ANALYSIS ZHANG Chun-sen Dept of Survey, Xi an University of Science and Technology, No.58 Yantazhonglu, Xi an 710054,China -zhchunsen@yahoo.com.cn
More informationRange Estimation in Disparity Mapping for Navigation of Stereo Vision Autonomous Vehicle Using Curve Fitting Tool
International Journal of Video& Image Processing and Network Security IJVIPNS-IJENS Vol:09 No:09 5 Range Estimation in Disparity Mapping for Navigation of Stereo Vision Autonomous Vehicle Using Curve Fitting
More informationTransactions on Information and Communications Technologies vol 16, 1996 WIT Press, ISSN
ransactions on Information and Communications echnologies vol 6, 996 WI Press, www.witpress.com, ISSN 743-357 Obstacle detection using stereo without correspondence L. X. Zhou & W. K. Gu Institute of Information
More informationStereo. 11/02/2012 CS129, Brown James Hays. Slides by Kristen Grauman
Stereo 11/02/2012 CS129, Brown James Hays Slides by Kristen Grauman Multiple views Multi-view geometry, matching, invariant features, stereo vision Lowe Hartley and Zisserman Why multiple views? Structure
More informationDepth from two cameras: stereopsis
Depth from two cameras: stereopsis Epipolar Geometry Canonical Configuration Correspondence Matching School of Computer Science & Statistics Trinity College Dublin Dublin 2 Ireland www.scss.tcd.ie Lecture
More informationMultiple View Image Rectification
GS6-B- 20 st International Symposium on Access Spaces (ISAS), IEEE-ISAS 20 Multiple View Image Rectification Vincent Nozick Gaspard Monge Institute, UMR 8049 Paris-Est Marne-la-Vallee University, France
More informationCalibration of a Different Field-of-view Stereo Camera System using an Embedded Checkerboard Pattern
Calibration of a Different Field-of-view Stereo Camera System using an Embedded Checkerboard Pattern Pathum Rathnayaka, Seung-Hae Baek and Soon-Yong Park School of Computer Science and Engineering, Kyungpook
More informationA COMPREHENSIVE SIMULATION SOFTWARE FOR TEACHING CAMERA CALIBRATION
XIX IMEKO World Congress Fundamental and Applied Metrology September 6 11, 2009, Lisbon, Portugal A COMPREHENSIVE SIMULATION SOFTWARE FOR TEACHING CAMERA CALIBRATION David Samper 1, Jorge Santolaria 1,
More informationCamera Model and Calibration
Camera Model and Calibration Lecture-10 Camera Calibration Determine extrinsic and intrinsic parameters of camera Extrinsic 3D location and orientation of camera Intrinsic Focal length The size of the
More informationStereo imaging ideal geometry
Stereo imaging ideal geometry (X,Y,Z) Z f (x L,y L ) f (x R,y R ) Optical axes are parallel Optical axes separated by baseline, b. Line connecting lens centers is perpendicular to the optical axis, and
More informationThere are many cues in monocular vision which suggests that vision in stereo starts very early from two similar 2D images. Lets see a few...
STEREO VISION The slides are from several sources through James Hays (Brown); Srinivasa Narasimhan (CMU); Silvio Savarese (U. of Michigan); Bill Freeman and Antonio Torralba (MIT), including their own
More informationGeometric camera models and calibration
Geometric camera models and calibration http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 13 Course announcements Homework 3 is out. - Due October
More informationEpipolar Geometry and Stereo Vision
Epipolar Geometry and Stereo Vision Computer Vision Jia-Bin Huang, Virginia Tech Many slides from S. Seitz and D. Hoiem Last class: Image Stitching Two images with rotation/zoom but no translation. X x
More informationComputer Vision I. Announcement. Stereo Vision Outline. Stereo II. CSE252A Lecture 15
Announcement Stereo II CSE252A Lecture 15 HW3 assigned No class on Thursday 12/6 Extra class on Tuesday 12/4 at 6:30PM in WLH Room 2112 Mars Exploratory Rovers: Spirit and Opportunity Stereo Vision Outline
More informationFeature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies
Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of
More informationFinal Exam Study Guide
Final Exam Study Guide Exam Window: 28th April, 12:00am EST to 30th April, 11:59pm EST Description As indicated in class the goal of the exam is to encourage you to review the material from the course.
More informationMachine vision. Summary # 11: Stereo vision and epipolar geometry. u l = λx. v l = λy
1 Machine vision Summary # 11: Stereo vision and epipolar geometry STEREO VISION The goal of stereo vision is to use two cameras to capture 3D scenes. There are two important problems in stereo vision:
More informationCamera Calibration with a Simulated Three Dimensional Calibration Object
Czech Pattern Recognition Workshop, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 4, Czech Pattern Recognition Society Camera Calibration with a Simulated Three Dimensional Calibration Object Hynek
More informationMultichannel Camera Calibration
Multichannel Camera Calibration Wei Li and Julie Klein Institute of Imaging and Computer Vision, RWTH Aachen University D-52056 Aachen, Germany ABSTRACT For the latest computer vision applications, it
More information55:148 Digital Image Processing Chapter 11 3D Vision, Geometry
55:148 Digital Image Processing Chapter 11 3D Vision, Geometry Topics: Basics of projective geometry Points and hyperplanes in projective space Homography Estimating homography from point correspondence
More informationCameras and Stereo CSE 455. Linda Shapiro
Cameras and Stereo CSE 455 Linda Shapiro 1 Müller-Lyer Illusion http://www.michaelbach.de/ot/sze_muelue/index.html What do you know about perspective projection? Vertical lines? Other lines? 2 Image formation
More informationRectification and Disparity
Rectification and Disparity Nassir Navab Slides prepared by Christian Unger What is Stereo Vision? Introduction A technique aimed at inferring dense depth measurements efficiently using two cameras. Wide
More informationarxiv: v1 [cs.cv] 28 Sep 2018
Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,
More informationMultiple View Geometry
Multiple View Geometry CS 6320, Spring 2013 Guest Lecture Marcel Prastawa adapted from Pollefeys, Shah, and Zisserman Single view computer vision Projective actions of cameras Camera callibration Photometric
More informationDRC A Multi-Camera System on PC-Cluster for Real-time 3-D Tracking. Viboon Sangveraphunsiri*, Kritsana Uttamang, and Pongsakon Pedpunsri
The 23 rd Conference of the Mechanical Engineering Network of Thailand November 4 7, 2009, Chiang Mai A Multi-Camera System on PC-Cluster for Real-time 3-D Tracking Viboon Sangveraphunsiri*, Kritsana Uttamang,
More informationImage formation. Thanks to Peter Corke and Chuck Dyer for the use of some slides
Image formation Thanks to Peter Corke and Chuck Dyer for the use of some slides Image Formation Vision infers world properties form images. How do images depend on these properties? Two key elements Geometry
More informationAssignment 2: Stereo and 3D Reconstruction from Disparity
CS 6320, 3D Computer Vision Spring 2013, Prof. Guido Gerig Assignment 2: Stereo and 3D Reconstruction from Disparity Out: Mon Feb-11-2013 Due: Mon Feb-25-2013, midnight (theoretical and practical parts,
More informationSoftware Calibration for Stereo Camera on Stereo Vision Mobile Robot using Tsai s Method
Software Calibration for Stereo Camera on Stereo Vision Mobile Robot using Tsai s Method Rostam Affendi Hamzah and Sani Irwan Md Salim Abstract - An improvement for image distortion in cameras has been
More informationThree-Dimensional Sensors Lecture 2: Projected-Light Depth Cameras
Three-Dimensional Sensors Lecture 2: Projected-Light Depth Cameras Radu Horaud INRIA Grenoble Rhone-Alpes, France Radu.Horaud@inria.fr http://perception.inrialpes.fr/ Outline The geometry of active stereo.
More informationPerspective Projection [2 pts]
Instructions: CSE252a Computer Vision Assignment 1 Instructor: Ben Ochoa Due: Thursday, October 23, 11:59 PM Submit your assignment electronically by email to iskwak+252a@cs.ucsd.edu with the subject line
More informationComputer Vision. Coordinates. Prof. Flávio Cardeal DECOM / CEFET- MG.
Computer Vision Coordinates Prof. Flávio Cardeal DECOM / CEFET- MG cardeal@decom.cefetmg.br Abstract This lecture discusses world coordinates and homogeneous coordinates, as well as provides an overview
More informationEECS 442: Final Project
EECS 442: Final Project Structure From Motion Kevin Choi Robotics Ismail El Houcheimi Robotics Yih-Jye Jeffrey Hsu Robotics Abstract In this paper, we summarize the method, and results of our projective
More informationComputer Vision I - Algorithms and Applications: Multi-View 3D reconstruction
Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction Carsten Rother 09/12/2013 Computer Vision I: Multi-View 3D reconstruction Roadmap this lecture Computer Vision I: Multi-View
More informationRectification. Dr. Gerhard Roth
Rectification Dr. Gerhard Roth Problem Definition Given a pair of stereo images, the intrinsic parameters of each camera, and the extrinsic parameters of the system, R, and, compute the image transformation
More informationA linear algorithm for Camera Self-Calibration, Motion and Structure Recovery for Multi-Planar Scenes from Two Perspective Images
A linear algorithm for Camera Self-Calibration, Motion and Structure Recovery for Multi-Planar Scenes from Two Perspective Images Gang Xu, Jun-ichi Terai and Heung-Yeung Shum Microsoft Research China 49
More informationCamera Geometry II. COS 429 Princeton University
Camera Geometry II COS 429 Princeton University Outline Projective geometry Vanishing points Application: camera calibration Application: single-view metrology Epipolar geometry Application: stereo correspondence
More informationCS6670: Computer Vision
CS6670: Computer Vision Noah Snavely Lecture 7: Image Alignment and Panoramas What s inside your fridge? http://www.cs.washington.edu/education/courses/cse590ss/01wi/ Projection matrix intrinsics projection
More informationPattern Feature Detection for Camera Calibration Using Circular Sample
Pattern Feature Detection for Camera Calibration Using Circular Sample Dong-Won Shin and Yo-Sung Ho (&) Gwangju Institute of Science and Technology (GIST), 13 Cheomdan-gwagiro, Buk-gu, Gwangju 500-71,
More information3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera
3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera Shinichi GOTO Department of Mechanical Engineering Shizuoka University 3-5-1 Johoku,
More informationCIS 580, Machine Perception, Spring 2015 Homework 1 Due: :59AM
CIS 580, Machine Perception, Spring 2015 Homework 1 Due: 2015.02.09. 11:59AM Instructions. Submit your answers in PDF form to Canvas. This is an individual assignment. 1 Camera Model, Focal Length and
More informationStereo II CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz
Stereo II CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Camera parameters A camera is described by several parameters Translation T of the optical center from the origin of world
More informationCIS 580, Machine Perception, Spring 2016 Homework 2 Due: :59AM
CIS 580, Machine Perception, Spring 2016 Homework 2 Due: 2015.02.24. 11:59AM Instructions. Submit your answers in PDF form to Canvas. This is an individual assignment. 1 Recover camera orientation By observing
More informationCamera Calibration With One-Dimensional Objects
Camera Calibration With One-Dimensional Objects Zhengyou Zhang December 2001 Technical Report MSR-TR-2001-120 Camera calibration has been studied extensively in computer vision and photogrammetry, and
More informationApplication questions. Theoretical questions
The oral exam will last 30 minutes and will consist of one application question followed by two theoretical questions. Please find below a non exhaustive list of possible application questions. The list
More informationWide-Baseline Stereo Vision for Mars Rovers
Proceedings of the 2003 IEEE/RSJ Intl. Conference on Intelligent Robots and Systems Las Vegas, Nevada October 2003 Wide-Baseline Stereo Vision for Mars Rovers Clark F. Olson Habib Abi-Rached Ming Ye Jonathan
More informationWeek 2: Two-View Geometry. Padua Summer 08 Frank Dellaert
Week 2: Two-View Geometry Padua Summer 08 Frank Dellaert Mosaicking Outline 2D Transformation Hierarchy RANSAC Triangulation of 3D Points Cameras Triangulation via SVD Automatic Correspondence Essential
More informationHartley - Zisserman reading club. Part I: Hartley and Zisserman Appendix 6: Part II: Zhengyou Zhang: Presented by Daniel Fontijne
Hartley - Zisserman reading club Part I: Hartley and Zisserman Appendix 6: Iterative estimation methods Part II: Zhengyou Zhang: A Flexible New Technique for Camera Calibration Presented by Daniel Fontijne
More informationInstance-level recognition I. - Camera geometry and image alignment
Reconnaissance d objets et vision artificielle 2011 Instance-level recognition I. - Camera geometry and image alignment Josef Sivic http://www.di.ens.fr/~josef INRIA, WILLOW, ENS/INRIA/CNRS UMR 8548 Laboratoire
More informationA Specialized Multibaseline Stereo Technique for Obstacle Detection
A Specialized Multibaseline Stereo Technique for Obstacle Detection Todd Williamson and Charles Thorpe Robotics Institute Carnegie Mellon University Pittsburgh PA 15213 Abstract This paper presents a multibaseline
More informationBIL Computer Vision Apr 16, 2014
BIL 719 - Computer Vision Apr 16, 2014 Binocular Stereo (cont d.), Structure from Motion Aykut Erdem Dept. of Computer Engineering Hacettepe University Slide credit: S. Lazebnik Basic stereo matching algorithm
More informationFusion of Stereo Vision and Time-of-Flight Imaging for Improved 3D Estimation. Sigurjón Árni Guðmundsson, Henrik Aanæs and Rasmus Larsen
Int. J. Intelligent Systems Technologies and Applications, Vol. x, No. x, xxxx 1 Fusion of Stereo Vision and Time-of-Flight Imaging for Improved 3D Estimation Sigurjón Árni Guðmundsson, Henrik Aanæs and
More informationStructure from Motion and Multi- view Geometry. Last lecture
Structure from Motion and Multi- view Geometry Topics in Image-Based Modeling and Rendering CSE291 J00 Lecture 5 Last lecture S. J. Gortler, R. Grzeszczuk, R. Szeliski,M. F. Cohen The Lumigraph, SIGGRAPH,
More information