STEREO VISION AND LASER STRIPERS FOR THREE-DIMENSIONAL SURFACE MEASUREMENTS
XVI Congreso Internacional de Ingeniería Gráfica

BARONE, Sandro; BRUNO, Andrea
University of Pisa, Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione
s.barone@ing.unipi.it, andrea.bruno@ing.unipi.it

ABSTRACT

In this paper, a 3D scanner based on tracking a laser stripe with a stereo vision sensor is presented. The system is composed of a pair of digital cameras, a laser stripe generator and software to control the hardware and process the images. The measurement procedure is based on detecting laser stripe locations in the digital images with subpixel resolution. Stereo vision principles are then used to correlate the data between the pair of images. The measurement process does not require the calibration of the laser light source. The use of a stereo vision system allows integration with a photogrammetric procedure, developed in a previous research activity, to align multiple scans. Experimental tests have been conducted with nominal models to analyse and verify the usability and accuracy of the methodology.

Key words: Reverse engineering, stereo vision, surface modelling.

1. Introduction

During the last decades, innovative optical technologies have proven to be very effective in providing non-contact techniques for measuring three-dimensional shapes [1]. The selection of a vision system is highly dependent on the type of application and its practical limitations. Widely accepted measurement techniques are based on scanning a surface with controlled beams of radiation, typically laser [2] or white light [3], which interact with capturing devices (active systems). Geometric data about the objects are extracted by processing grey-intensity images. These methodologies require complicated calibration procedures to evaluate the parameters of the optical devices (i.e., sensors and light source) needed for the measurement process.
Although very accurate, these techniques cannot be used as general-purpose tools, because optical adjustments are required for every specific application and, consequently, tremendous effort must be devoted to calibration aspects. This paper presents a research activity concerning the development of a versatile non-contact scanning system, which overcomes some of the limitations described above. The system uses a laser light source and a stereo image acquisition sensor, which is calibrated by simple procedures [4]. The measurement process is based on laser stripe tracking using principles of stereo vision. In particular, a procedure has been developed to detect peak positions with sub-pixel accuracy on the basis of light intensity constraints and a fuzzy decision maker. The method comprises algorithms for camera calibration, laser stripe detection and analysis, and 3D range-map reconstruction. In the paper, the methodology is described and laboratory applications are illustrated.

2. The experimental setup

The optical system (Figure 1) is composed of two conventional monochrome digital cameras and a laser source (laser diode, wavelength 650 nm, output power 35 mW). For this work, all images were sampled on a rectangular pixel matrix and quantized with 256 grey levels (I(x,y) = 0: black; I(x,y) = 255: white). The laser light is a beam of circular cross-section with a Gaussian intensity profile. The laser beam is spread by a cylindrical lens into a thin plane, so that the intensity profiles in the image planes of the cameras remain practically Gaussian across the stripes. This property is used to detect the stripe centres in the camera images with subpixel resolution. The light beam generated by the laser is deflected by a mirror and scanned over the object. Generally, all types of laser-based 3D range finders are based on calibrating an optical configuration that includes both the sensors and the light source [2].
In this paper, the 3D stripes are reconstructed by using 2D imagery only, so the laser light source is not directly involved in the measurement process. This approach has certain advantages. In particular, because knowledge of the moving part positions is not needed in the triangulation process, the method is simple, inexpensive and mechanically very robust. Moreover, the calibration of a stereo vision system is now a well-established procedure, which can easily be carried out to adapt the stereo vision configuration to specific contexts.

Figure 1: The stereo-vision system.
The calibration procedure [4] allows the calculation of the intrinsic parameters (focal distances, coordinates of the principal points, radial and tangential distortions) and extrinsic parameters (positions and orientations with respect to an absolute reference system) of the cameras. In particular, the parameters of each camera are obtained by correlating the coordinates of known markers located on a calibration sample, acquired in different positions, with the corresponding coordinates on the image planes. The procedure provides the translation vector and the rotation matrix of the reference system of the camera with respect to an absolute datum system.

3. Laser stripe detection

The images acquired through the instrumentation described above are evaluated and the results are processed in order to obtain 3D point maps. Figure 2 shows a pair of images acquired by the stereo-vision system. An image processing procedure is applied to detect the locations of the laser stripe centres in the camera images. Naidu and Fisher [5] have proposed an interesting comparative analysis of five algorithms for determining the peak position of a laser stripe in an image with subpixel accuracy. In this work, the Blais and Rioux detector [6], which has been verified to be more noise-tolerant while equally computationally efficient, has been used. In the original image, the grey level transition across a laser line is gradual (Figure 3-a), in theory Gaussian, and this permits a sub-pixel evaluation of the line centres. The algorithm is based on the differentiation of the signal and the elimination of high-frequency noise by using linear filters. In particular, the filtered signal at a pixel (x, y) is given by a 2n-th order linear operator defined as:

I_f(x, y) = Σ_{i = -n..n} -sign(i) · I(x + i, y)

where I(x, y) is the grey level measured in the acquired images at pixel (x, y) and sign(i) is the sign operator. Typically, 6th and 8th order operators have provided good performance with stripe widths of at least 3 pixels.
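The 2n-th order operator above can be sketched as follows; the function name and the synthetic Gaussian stripe are illustrative, not taken from the paper:

```python
import numpy as np

def blais_rioux_filter(row, n=3):
    """2n-th order Blais-Rioux operator applied to one image row:
    I_f(x) = sum over i = -n..n of -sign(i) * I(x + i),
    i.e. left-hand neighbours are added and right-hand ones subtracted.
    Border pixels where the window does not fit keep partial sums."""
    row = row.astype(float)
    out = np.zeros_like(row)
    for i in range(1, n + 1):
        out[i:] += row[:-i]   # + I(x - i): sign(i) = -1 for negative i
        out[:-i] -= row[i:]   # - I(x + i): sign(i) = +1 for positive i
    return out

# A synthetic Gaussian stripe: the filtered signal vanishes at the centre
# and changes sign there, which is what the zero-crossing search exploits.
x = np.arange(21)
stripe = 200.0 * np.exp(-((x - 10.0) ** 2) / 8.0)
f = blais_rioux_filter(stripe, n=3)
```

For a symmetric stripe the filtered signal is exactly zero at the centre column and of opposite sign on the two flanks, behaving like a (negated) numerical derivative.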
It is remarkable that these operators act like a form of numerical derivative operator (Figure 3-b). Figure 4 shows the full-field map of the filtered signals obtained from Figure 2-a. Linear interpolation is then used to increase the precision of the zero crossing. In particular, the peak position is estimated with subpixel accuracy as:

X = x_c + δ = x_c + I_f(x_c, y) / (I_f(x_c, y) - I_f(x_c + 1, y))  (1)

Y = y  (2)

where I_f(x_c, y) and I_f(x_c + 1, y) are the filtered signals at the zero-crossing zone. By this procedure, the centres of the lines are determined with a resolution that depends on the line width, the grey level profile, the grey level difference and the system noise. In this paper, a resolution of 0.08 pixels was obtained by applying a 6th order operator to lines 10 pixels wide with a signal amplitude of 130 grey levels.
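The zero-crossing interpolation of Eq. (1) can be sketched per row as below; the strongest-swing selection rule is an assumption for the sketch, not the paper's criterion:

```python
import numpy as np

def subpixel_centre(f_row):
    """Locate the stripe centre in one filtered row with subpixel accuracy.
    At the zero crossing between columns x_c and x_c + 1, Eq. (1) gives
    X = x_c + delta with delta = f(x_c) / (f(x_c) - f(x_c + 1))."""
    s = np.sign(f_row)
    crossings = np.where(s[:-1] * s[1:] < 0)[0]
    if crossings.size == 0:
        return None   # no stripe found in this row
    # keep the crossing with the largest swing, assumed to be the stripe
    xc = crossings[np.argmax(np.abs(f_row[crossings] - f_row[crossings + 1]))]
    delta = f_row[xc] / (f_row[xc] - f_row[xc + 1])
    return xc + delta

# Filtered signal crossing zero between columns 2 and 3:
centre = subpixel_centre(np.array([0.5, 1.5, 2.0, -1.0, -2.0, -0.5]))
```

Here delta = 2.0 / (2.0 - (-1.0)) = 2/3, so the estimated centre lies at column 2.667.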
The detection of laser stripe centres can be disturbed by the presence of undesirable elements in the scene. Figure 5 shows the plots of the intensity levels and the filtered signals measured across three different lines of the acquired image (Figure 2-a) and the filtered map (Figure 4). It is apparent that the signal peaks are not always due to the presence of a laser stripe; spurious elements in the scene can also induce similar signal variations. The use of band-pass interference filters can partially avoid these drawbacks. However, this solution does not allow the use of a multi-scan procedure based on photogrammetric routines integrated into the measurement process, as proposed in [3]. In this paper, a fuzzy-based algorithm has been developed and applied to the filtered maps (Figure 4) in order to optimise the peak detection process. In particular, the filtered maps are scanned row-wise; for each row, the filtered signal is measured and a fuzzy decision maker [7] is employed to resolve whether the light variations are due to the laser stripe or to spurious image features. The fuzzy algorithm is an ordered set of conditional fuzzy statements which, upon execution, yield an approximate solution to the recognition problem, reproducing the human way of inference. Each statement deals with fuzzy variables, which are defined to distinguish peaks of intensity. The fuzzy procedure is based on detecting the highest (I_M) and smallest (I_m) values of each filtered signal distribution (Figure 4). Then, five variables, named a, b, c, d, e, are defined as:

a = I_M + I_m  (3)
b = I_M - I_m  (4)
c = I_M · I_m  (5)
d = x_M - x_m  (6)
e = x_M + x_m  (7)

where x_M and x_m are the pixels where I_M and I_m occur, respectively. The variable a takes into account the difference of intensity between the peak and its neighbourhood, b takes into account the symmetry of the peak distribution, and c, d and e control the distribution and the position of the signal peak.
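The five row-wise features can be computed as below; note that some operators in Eqs. (3)-(7) are garbled in the source, so the combinations used here are assumptions that merely respect the stated intent (amplitude, symmetry, spread and position of the swing):

```python
import numpy as np

def fuzzy_inputs(f_row):
    """Five candidate features of one filtered row, built from the highest
    value I_M, the smallest value I_m and their pixel positions x_M, x_m.
    The operator choices below are assumptions (cf. Eqs. 3-7)."""
    I_M, I_m = float(f_row.max()), float(f_row.min())
    x_M, x_m = int(np.argmax(f_row)), int(np.argmin(f_row))
    a = I_M + I_m    # near zero for a symmetric positive/negative swing
    b = I_M - I_m    # overall amplitude of the swing
    c = I_M * I_m    # negative only if a true sign change occurs
    d = x_M - x_m    # signed distance between the two extrema (width cue)
    e = x_M + x_m    # (twice the) mid position of the swing along the row
    return a, b, c, d, e

# A symmetric filtered swing: positive lobe at column 2, negative at column 4.
feats = fuzzy_inputs(np.array([0.0, 1.0, 3.0, 0.0, -3.0, -1.0, 0.0]))
```

For this symmetric swing the symmetry feature a is zero, the amplitude b is 6, and c is negative, which is the behaviour a stripe-versus-spurious classifier would key on.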
The five variables are modelled by means of fuzzy membership functions (Figure 5) with different shapes, chosen on the basis of a trial-and-error method. The modelling procedure provides the basis for the set of fuzzy rules, which are processed by Sugeno-style inference with the weighted sum as the defuzzification method. The fuzzy algorithm, coupled with camera adjustments (i.e., gain and shutter time), provides high robustness to the peak detection process, even in non-controlled working environments.
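A zero-order Sugeno-style decision with weighted-sum defuzzification can be sketched as follows; the membership shapes, breakpoints and rule outputs are invented placeholders (the paper tunes its own by trial and error):

```python
def tri(x, lo, mid, hi):
    """Triangular membership function with support (lo, hi) and apex at mid."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (mid - lo) if x <= mid else (hi - x) / (hi - mid)

def peak_score(a, b):
    """Zero-order Sugeno sketch on two of the five inputs: each rule fires
    with strength w_i and emits a crisp constant z_i; the weighted-sum
    defuzzification is score = sum(w_i * z_i). Positive score: laser
    stripe; negative score: spurious feature."""
    rules = [
        (tri(a, -20.0, 0.0, 20.0),   +1.0),  # symmetric swing -> stripe-like
        (tri(b, 50.0, 200.0, 400.0), +1.0),  # large amplitude -> stripe-like
        (tri(b, -1.0, 0.0, 50.0),    -1.0),  # tiny amplitude  -> spurious
    ]
    return sum(w * z for w, z in rules)
```

A symmetric, high-amplitude swing (a = 0, b = 200) scores positive, while an asymmetric, weak one (a = 40, b = 10) scores negative, mimicking the stripe/spurious decision.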
Figure 2: Example of acquired left (a) and right (b) images: measurement of a rectified plane.

Figure 3: Grey level distributions (a) and filtered signals (b) across a laser stripe.

Figure 4: Full-field map of the filtered signals obtained from the image of Figure 2-a.
Figure 5: Plots of intensity levels and filtered signals measured in three different rows of the acquired filtered images.

4. Stereo image correlation

Figure 6: Scheme of the fuzzy inference system.

3D measurement using stereo vision requires the matching of conjugate image points. The correlation process is typically based on similarity constraints (feature-based or intensity-based). However, a point in image I_1 can be correlated with many points in image I_2 (false matches), and this problem is particularly evident when measuring free-form surfaces, due to the absence of peculiarities in the scene to be used as similarity constraints. The measurement system presented in this paper solves the matching problem by combining the laser stripe detection (intensity-based constraint) with the epipolar geometry (stereo vision constraint) (Figure 7). In particular, given a point m_1 on the image I_1 (camera 1), its conjugate m_2 on the image I_2 (camera 2) belongs to the epipolar line, defined as the intersection of the image plane with the plane through m_1 and the optical centres of the two devices. On the basis of perspective projection theory, the coordinates of the conjugate points m_1 and m_2 are defined by the following equations:

m_i = P_i · w,  i = 1, 2  (8)

where P_i (i = 1, 2) is the perspective projection matrix and w the coordinate vector of a generic 3D point. The mathematical expression of the optical ray through w and the optical centre c_1 (or c_2) can be written as:

w_λ = c_i + λ · w_i = c_i + λ · P_i^+ · m_i,  i = 1, 2  (9)

where λ is a scalar parameter, w_i represents the direction of an optical ray and P_i^+ is the pseudo-inverse of the perspective matrix. The epipolar line l_2 corresponds to the projection of the optical ray w - c_1 onto the image plane of the second camera (I_2) with respect to the optical centre c_2. Its expression can be determined considering the projection of c_1 onto I_2 (corresponding to the epipole e_2) and the projection of the infinite point of the optical ray w - c_1. These points are defined as:

e_2 = P_2 · c_1  (10)

e_∞ = P_2 · P_1^+ · m_1  (11)

The equation of the epipolar line of m_1 can then be written (in homogeneous coordinates) as:

l_2 = e_2 + λ_e · e_∞  (12)

Considering that an equation in homogeneous coordinates of a line through two points can be written as the external product of the points, the epipolar line of m_1 can be expressed as:

l_2 = e_2 × e_∞ = e_2 × (P_2 · P_1^+ · m_1) = F · m_1  (13)

The matrix F, named the Fundamental Matrix (a 3×3 square matrix), depends on the epipolar parameters and the perspective projection matrices of the optical devices, which can be obtained by the calibration routines. Considering that the point m_2, conjugate of m_1, belongs to the epipolar line l_2, the following relation (the Longuet-Higgins equation) can be written:

m_2^T · F · m_1 = 0  (14)
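Eqs. (10)-(14) can be checked numerically as below; the two toy projection matrices (identity rotations, a pure translation along x) are illustrative, not the paper's calibration:

```python
import numpy as np

def fundamental_from_projections(P1, P2, c1):
    """F = [e_2]_x P_2 P_1^+ (Eq. 13): e_2 = P_2 c_1 is the epipole of
    Eq. (10), and the cross product with it is written as the usual
    skew-symmetric matrix; c1 is the first camera centre in homogeneous
    coordinates (so that P1 @ c1 = 0)."""
    e2 = P2 @ c1
    e2x = np.array([[0.0, -e2[2], e2[1]],
                    [e2[2], 0.0, -e2[0]],
                    [-e2[1], e2[0], 0.0]])
    return e2x @ P2 @ np.linalg.pinv(P1)

# Toy setup: camera 1 at the origin, camera 2 translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
c1 = np.array([0.0, 0.0, 0.0, 1.0])   # first camera centre: P1 @ c1 = 0
F = fundamental_from_projections(P1, P2, c1)

# Conjugate projections of any 3D point satisfy the Longuet-Higgins
# equation m_2^T F m_1 = 0 (Eq. 14).
w = np.array([1.0, 2.0, 5.0, 1.0])
m1, m2 = P1 @ w, P2 @ w
```

The residual m_2^T F m_1 vanishes for any conjugate pair, which is exactly the constraint used to intersect the epipolar line with the detected laser stripe.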
The Fundamental Matrix is obtained on the basis of an optimisation procedure, considering a number of known conjugate points higher than the number of unknowns. In the method developed in this paper, the epipolar constraint is applied to the points belonging to the laser stripes. The matching problem of conjugate points is then straightforward, considering that, in each image plane, a conjugate point belongs both to a laser line and to an epipolar line (Figure 7): given a laser line point m_1 on image I_1, its conjugate m_2 is obtained as the intersection between the epipolar line and the laser stripe on image I_2. Then, from the knowledge of the conjugate pair m_1, m_2, the corresponding 3D point w is determined by a triangulation procedure. The correlation procedure does not require any operator intervention. The combination of the two constraints, i.e. belonging to the laser line and to the epipolar line, guarantees that the matching problem always has a unique solution. Moreover, this methodology can be efficiently used to measure free-form shapes, which typically do not present peculiarities usable by classical correlation procedures.

Figure 7: Schematic diagram of the stereo vision set-up.

5. Laboratory applications

Experimental tests have been carried out on nominal shapes to verify the accuracy of the proposed methodology. In particular, two nominal samples, i.e. a rectified stepped surface and a cylinder (radius of 125 mm), have been captured at working distances of 1800 mm and 3000 mm. Figure 8-a shows the three-dimensional points determined for the stepped surface. The experimental points have been compared with best-fit planes. The analysis of the deviations has yielded an average error of 0.15 mm and a standard deviation of 0.07 mm at a working distance of 1800 mm, and an average error of 0.27 mm and a standard deviation of 0.18 mm at a working distance of 3000 mm. Figure 8-b shows the three-dimensional points determined for the cylindrical surface. The experimental points have been compared with a best-fit cylinder. The analysis of the deviations has yielded an average error of 0.1 mm and a standard deviation of 0.05 mm at a working distance of 1800 mm, and an average error of 0.19 mm and a standard deviation of 0.1 mm at a working distance of 3000 mm. The results for the cylinder are more accurate than those obtained for the stepped surface. This is probably due to the better surface quality of the cylinder, which showed negligible reflection phenomena. It is interesting to note that the procedure directly yields sets of ordered points, which can better support the subsequent surface modelling process.

Figure 8: Acquisition of a rectified stepped surface (a) and a cylindrical surface (b).

Figure 9: Deviations from the nominal geometries of Figure 8 as obtained by the proposed method for the rectified stepped surface (a) and the cylindrical surface (b).
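The triangulation step of Section 4 (recovering w from a matched pair m_1, m_2) can be sketched with the standard linear method; the toy camera matrices are illustrative, not the paper's calibration:

```python
import numpy as np

def triangulate(P1, P2, m1, m2):
    """Linear triangulation of Eq. (8): each view contributes two rows of
    a homogeneous system A w = 0 (m_i given as inhomogeneous image
    coordinates); the 3D point is the null vector of A, taken from the
    last row of V^T in the SVD."""
    A = np.vstack([
        m1[0] * P1[2] - P1[0],
        m1[1] * P1[2] - P1[1],
        m2[0] * P2[2] - P2[0],
        m2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    w = Vt[-1]
    return w[:3] / w[3]   # de-homogenise

# Toy cameras: camera 1 at the origin, camera 2 shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = triangulate(P1, P2, (0.2, 0.4), (0.0, 0.4))
```

These image coordinates are the projections of the 3D point (1, 2, 5) in the two views, so the triangulation recovers it exactly; with noisy stripe detections the SVD gives the least-squares solution instead.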
6. Conclusions

In this paper, a stereo vision system for shape measurement applications has been developed. The methodology is based on a light stripe, generated by a laser beam deflected by a mirror and scanned over the object, and a stereo vision system. A procedure has been developed to detect peak positions with sub-pixel accuracy on the basis of light intensity constraints and a fuzzy decision maker. The 3D stripes are reconstructed by using 2D image information only, so the laser light source is not directly involved in the measurement process. This solution reduces the influence of the mechanical moving parts on the accuracy and the stability of the system. Moreover, the optical configuration can be easily adapted to specific applications by calibrating the stereo vision system with well-established procedures. Experimental applications concerning shape measurements of nominal models have demonstrated the accuracy and the ease of use of the proposed technique.

References

[1] VÁRADY T., MARTIN R.R., COX J., Reverse engineering of geometric models: an introduction, Computer-Aided Design, 29(4), 1997.
[2] ANGELOPOULOU E., WRIGHT J.R., Laser Scanner Technology, GRASP Laboratory, University of Pennsylvania, Philadelphia, PA.
[3] BARONE S., CURCIO A., RAZIONALE A., A structured light stereo system for reverse engineering applications, Proceedings of the IV Italo-Español Seminar on Reverse Engineering Techniques and Applications, Naples, 2003.
[4] GRUEN A., HUANG T.S., Calibration and Orientation of Cameras in Computer Vision, Springer, Berlin.
[5] TRUCCO E., FISHER R.B., FITZGIBBON A.W., NAIDU D.K., Calibration, data consistency and model acquisition with laser stripers, International Journal of Computer Integrated Manufacturing, 11(9), 1998.
[6] BLAIS F., RIOUX M., Real-Time Numerical Peak Detector, Signal Processing, 11, 1986.
[7] KASABOV N.K., Foundations of Neural Networks, Fuzzy Systems, and Knowledge Engineering, The MIT Press, Cambridge, Massachusetts, 1996.
Creating a distortion characterisation dataset for visual band cameras using fiducial markers. Robert Jermy Council for Scientific and Industrial Research Email: rjermy@csir.co.za Jason de Villiers Council
More informationEpipolar Geometry Prof. D. Stricker. With slides from A. Zisserman, S. Lazebnik, Seitz
Epipolar Geometry Prof. D. Stricker With slides from A. Zisserman, S. Lazebnik, Seitz 1 Outline 1. Short introduction: points and lines 2. Two views geometry: Epipolar geometry Relation point/line in two
More information3D Measurement of Transparent Vessel and Submerged Object Using Laser Range Finder
3D Measurement of Transparent Vessel and Submerged Object Using Laser Range Finder Hirotoshi Ibe Graduate School of Science and Technology, Shizuoka University 3-5-1 Johoku, Naka-ku, Hamamatsu-shi, Shizuoka
More informationThere are many cues in monocular vision which suggests that vision in stereo starts very early from two similar 2D images. Lets see a few...
STEREO VISION The slides are from several sources through James Hays (Brown); Srinivasa Narasimhan (CMU); Silvio Savarese (U. of Michigan); Bill Freeman and Antonio Torralba (MIT), including their own
More informationDD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication
DD2423 Image Analysis and Computer Vision IMAGE FORMATION Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 8, 2013 1 Image formation Goal:
More informationPART A Three-Dimensional Measurement with iwitness
PART A Three-Dimensional Measurement with iwitness A1. The Basic Process The iwitness software system enables a user to convert two-dimensional (2D) coordinate (x,y) information of feature points on an
More informationROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW
ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW Thorsten Thormählen, Hellward Broszio, Ingolf Wassermann thormae@tnt.uni-hannover.de University of Hannover, Information Technology Laboratory,
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Today: dense 3D reconstruction The matching problem
More informationEpipolar Geometry and Stereo Vision
Epipolar Geometry and Stereo Vision Computer Vision Jia-Bin Huang, Virginia Tech Many slides from S. Seitz and D. Hoiem Last class: Image Stitching Two images with rotation/zoom but no translation. X x
More informationEXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,
School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45
More informationReal-time surface tracking with uncoded structured light
Real-time surface tracking with uncoded structured light Willie Brink Council for Scientific and Industrial Research, South Africa wbrink@csircoza Abstract A technique for tracking the orientation and
More informationPartial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems
Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Abstract In this paper we present a method for mirror shape recovery and partial calibration for non-central catadioptric
More informationProduct information. Hi-Tech Electronics Pte Ltd
Product information Introduction TEMA Motion is the world leading software for advanced motion analysis. Starting with digital image sequences the operator uses TEMA Motion to track objects in images,
More information5LSH0 Advanced Topics Video & Analysis
1 Multiview 3D video / Outline 2 Advanced Topics Multimedia Video (5LSH0), Module 02 3D Geometry, 3D Multiview Video Coding & Rendering Peter H.N. de With, Sveta Zinger & Y. Morvan ( p.h.n.de.with@tue.nl
More informationCh 22 Inspection Technologies
Ch 22 Inspection Technologies Sections: 1. Inspection Metrology 2. Contact vs. Noncontact Inspection Techniques 3. Conventional Measuring and Gaging Techniques 4. Coordinate Measuring Machines 5. Surface
More information3D Computer Vision. Structured Light I. Prof. Didier Stricker. Kaiserlautern University.
3D Computer Vision Structured Light I Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction
More informationExterior Orientation Parameters
Exterior Orientation Parameters PERS 12/2001 pp 1321-1332 Karsten Jacobsen, Institute for Photogrammetry and GeoInformation, University of Hannover, Germany The georeference of any photogrammetric product
More informationEpipolar Geometry in Stereo, Motion and Object Recognition
Epipolar Geometry in Stereo, Motion and Object Recognition A Unified Approach by GangXu Department of Computer Science, Ritsumeikan University, Kusatsu, Japan and Zhengyou Zhang INRIA Sophia-Antipolis,
More informationAccurate and Dense Wide-Baseline Stereo Matching Using SW-POC
Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC Shuji Sakai, Koichi Ito, Takafumi Aoki Graduate School of Information Sciences, Tohoku University, Sendai, 980 8579, Japan Email: sakai@aoki.ecei.tohoku.ac.jp
More informationCalibrating a Structured Light System Dr Alan M. McIvor Robert J. Valkenburg Machine Vision Team, Industrial Research Limited P.O. Box 2225, Auckland
Calibrating a Structured Light System Dr Alan M. McIvor Robert J. Valkenburg Machine Vision Team, Industrial Research Limited P.O. Box 2225, Auckland New Zealand Tel: +64 9 3034116, Fax: +64 9 302 8106
More informationA COMPREHENSIVE SIMULATION SOFTWARE FOR TEACHING CAMERA CALIBRATION
XIX IMEKO World Congress Fundamental and Applied Metrology September 6 11, 2009, Lisbon, Portugal A COMPREHENSIVE SIMULATION SOFTWARE FOR TEACHING CAMERA CALIBRATION David Samper 1, Jorge Santolaria 1,
More informationMultiple Views Geometry
Multiple Views Geometry Subhashis Banerjee Dept. Computer Science and Engineering IIT Delhi email: suban@cse.iitd.ac.in January 2, 28 Epipolar geometry Fundamental geometric relationship between two perspective
More informationAnnouncements. Stereo
Announcements Stereo Homework 2 is due today, 11:59 PM Homework 3 will be assigned today Reading: Chapter 7: Stereopsis CSE 152 Lecture 8 Binocular Stereopsis: Mars Given two images of a scene where relative
More informationFAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES
FAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES Jie Shao a, Wuming Zhang a, Yaqiao Zhu b, Aojie Shen a a State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing
More informationSIMULATION AND VISUALIZATION IN THE EDUCATION OF COHERENT OPTICS
SIMULATION AND VISUALIZATION IN THE EDUCATION OF COHERENT OPTICS J. KORNIS, P. PACHER Department of Physics Technical University of Budapest H-1111 Budafoki út 8., Hungary e-mail: kornis@phy.bme.hu, pacher@phy.bme.hu
More informationReminder: Lecture 20: The Eight-Point Algorithm. Essential/Fundamental Matrix. E/F Matrix Summary. Computing F. Computing F from Point Matches
Reminder: Lecture 20: The Eight-Point Algorithm F = -0.00310695-0.0025646 2.96584-0.028094-0.00771621 56.3813 13.1905-29.2007-9999.79 Readings T&V 7.3 and 7.4 Essential/Fundamental Matrix E/F Matrix Summary
More informationBall detection and predictive ball following based on a stereoscopic vision system
Research Collection Conference Paper Ball detection and predictive ball following based on a stereoscopic vision system Author(s): Scaramuzza, Davide; Pagnottelli, Stefano; Valigi, Paolo Publication Date:
More informationCentre for Digital Image Measurement and Analysis, School of Engineering, City University, Northampton Square, London, ECIV OHB
HIGH ACCURACY 3-D MEASUREMENT USING MULTIPLE CAMERA VIEWS T.A. Clarke, T.J. Ellis, & S. Robson. High accuracy measurement of industrially produced objects is becoming increasingly important. The techniques
More informationMeasurement of 3D Foot Shape Deformation in Motion
Measurement of 3D Foot Shape Deformation in Motion Makoto Kimura Masaaki Mochimaru Takeo Kanade Digital Human Research Center National Institute of Advanced Industrial Science and Technology, Japan The
More information3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera
3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera Shinichi GOTO Department of Mechanical Engineering Shizuoka University 3-5-1 Johoku,
More informationImage Processing Fundamentals. Nicolas Vazquez Principal Software Engineer National Instruments
Image Processing Fundamentals Nicolas Vazquez Principal Software Engineer National Instruments Agenda Objectives and Motivations Enhancing Images Checking for Presence Locating Parts Measuring Features
More informationDESIGN AND EVALUATION OF A PHOTOGRAMMETRIC 3D SURFACE SCANNER
DESIGN AND EVALUATION OF A PHOTOGRAMMETRIC 3D SURFACE SCANNER A. Prokos 1, G. Karras 1, L. Grammatikopoulos 2 1 Department of Surveying, National Technical University of Athens (NTUA), GR-15780 Athens,
More informationComputer Vision I. Dense Stereo Correspondences. Anita Sellent 1/15/16
Computer Vision I Dense Stereo Correspondences Anita Sellent Stereo Two Cameras Overlapping field of view Known transformation between cameras From disparity compute depth [ Bradski, Kaehler: Learning
More informationEpipolar Geometry and the Essential Matrix
Epipolar Geometry and the Essential Matrix Carlo Tomasi The epipolar geometry of a pair of cameras expresses the fundamental relationship between any two corresponding points in the two image planes, and
More informationCOMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION
COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA
More informationCamera Calibration Using Line Correspondences
Camera Calibration Using Line Correspondences Richard I. Hartley G.E. CRD, Schenectady, NY, 12301. Ph: (518)-387-7333 Fax: (518)-387-6845 Email : hartley@crd.ge.com Abstract In this paper, a method of
More informationPattern Feature Detection for Camera Calibration Using Circular Sample
Pattern Feature Detection for Camera Calibration Using Circular Sample Dong-Won Shin and Yo-Sung Ho (&) Gwangju Institute of Science and Technology (GIST), 13 Cheomdan-gwagiro, Buk-gu, Gwangju 500-71,
More informationEpipolar geometry. x x
Two-view geometry Epipolar geometry X x x Baseline line connecting the two camera centers Epipolar Plane plane containing baseline (1D family) Epipoles = intersections of baseline with image planes = projections
More informationEpipolar Geometry and Stereo Vision
Epipolar Geometry and Stereo Vision Computer Vision Shiv Ram Dubey, IIIT Sri City Many slides from S. Seitz and D. Hoiem Last class: Image Stitching Two images with rotation/zoom but no translation. X
More informationStereo CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz
Stereo CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Why do we perceive depth? What do humans use as depth cues? Motion Convergence When watching an object close to us, our eyes
More informationOPTI-521 Graduate Report 2 Matthew Risi Tutorial: Introduction to imaging, and estimate of image quality degradation from optical surfaces
OPTI-521 Graduate Report 2 Matthew Risi Tutorial: Introduction to imaging, and estimate of image quality degradation from optical surfaces Abstract The purpose of this tutorial is to introduce the concept
More informationAnd. Modal Analysis. Using. VIC-3D-HS, High Speed 3D Digital Image Correlation System. Indian Institute of Technology New Delhi
Full Field Displacement And Strain Measurement And Modal Analysis Using VIC-3D-HS, High Speed 3D Digital Image Correlation System At Indian Institute of Technology New Delhi VIC-3D, 3D Digital Image Correlation
More informationarxiv: v1 [cs.cv] 28 Sep 2018
Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,
More information