
An open-source toolbox in Matlab for fully automatic calibration of close-range digital cameras based on images of chess-boards

FAUCCAL (Fully Automatic Camera Calibration)

Implemented by Valsamis Douskos (mdouskos@googlemail.com)
Laboratory of Photogrammetry, Dept. of Surveying, National Technical University of Athens

DOCUMENTATION and TIPS

Introduction

The concept behind this open-source toolbox, implemented in Matlab, relies on the Camera Calibration Toolbox for Matlab (http://www.vision.caltech.edu/bouguetj/calib_doc/) of Jean-Yves Bouguet. Our aim was to develop a toolbox for camera calibration which runs fully automatically. The single assumption is that several images of a typical chess-board pattern (alternating dark and light squares), acquired with the same camera, are available. The algorithm then extracts object corner points; discards blunders; sorts the valid nodes into pattern rows and columns; and finally calibrates the camera by bundle adjustment. Care has been taken to help the user avoid frequently occurring errors. The algorithm is documented in more detail in our publications, available on the FAUCCAL home page. Here a short documentation of the software is given, along with some useful tips. Comments, suggestions and criticism from users are most welcome.

Installation and Starting

To install the toolbox, copy the FAUCCAL folder (contained in the Fauccal_code.rar archive) to your hard drive and use the File → Set Path menu command in Matlab to add this folder to the Matlab path. To start the FAUCCAL toolbox, type fauccal in the Matlab command window. The following window will appear on the screen:

Figure 1

Mathematical Model, Parameters and Options

This GUI is divided into two sections. On the left, one selects the combination of camera interior orientation elements which will be estimated. These parameters include: the camera constant c, the image coordinates (xo, yo) of the principal point, the aspect ratio α, the skewness sk, the radial-symmetric lens distortion parameters (k1, k2) and the decentering lens distortion parameters (p1, p2). The adopted camera model is the following:

x = xo − c·U/W − sk·V/W + Δx
y = yo − α·c·V/W + Δy

with:

U = r11(X − Xo) + r21(Y − Yo) + r31(Z − Zo)
V = r12(X − Xo) + r22(Y − Yo) + r32(Z − Zo)
W = r13(X − Xo) + r23(Y − Yo) + r33(Z − Zo)

Δx = x̄(k1·r² + k2·r⁴) + p1·(r² + 2x̄²) + 2p2·x̄·ȳ
Δy = ȳ(k1·r² + k2·r⁴) + 2p1·x̄·ȳ + p2·(r² + 2ȳ²),  with r² = x̄² + ȳ²

(as usual, Xo, Yo and Zo denote the object space coordinates of the projection centre, while rij are the elements of the image rotation matrix).

The distortion parameters may be calculated either (i) by referring distortion to the centrally projected ground coordinates of points on the image plane (Ground-based), i.e. by adopting the rigorous model of the image function, or (ii) by referring distortion to the measured image coordinates of the points (Image-based). The choice is made in the last option of the left section of the menu (Type of distortion):

i. When the Ground-based option is selected, the quantities x̄ and ȳ which appear in the model described above are expressed by the following equations:

x̄ = −c·U/W
ȳ = −c·V/W

(*note the absence of the α and sk parameters).

ii. When the Image-based option is selected, x̄ and ȳ are simply the measured image coordinates of the point.

In the right section of the GUI, users may set and modify certain parameters concerning the bundle adjustment. In the first textbox, one may fix the convergence limit for the corrections of the values of the c, xo, yo parameters (in pixels). When the corrections for these three parameters fall below this given limit, the toolbox exits the bundle adjustment loop. In the second textbox, users can set the maximum number of iterations allowed for the bundle adjustment. If this limit is exceeded, the adjustment is considered to have failed to converge.

Next, if the user wishes to regard the coordinates of all pattern nodes not as fixed (error-free) control points but rather as observed control points (i.e. subject to uncertainty), the checkbox Uncertain control points in the GUI must be checked. The two textboxes below will then be enabled, and one has to insert sigma values (standard deviations) for the image coordinates and the ground coordinates of the pattern nodes, which allow calculation of the corresponding weight matrix. It is noted that scale is irrelevant here; hence, the squares of the pattern could be given any arbitrary size. In this toolbox, the ground unit is fixed by the horizontal dimension of the chess-board: the distance between the first and the last column of the pattern (or, in case the pattern has not been wholly recorded on any image, the largest distance between columns appearing on the images) equals one ground unit.

If the second checkbox Perform back-projection in the right section of the GUI has been checked, then after the first solution has been reached the algorithm will automatically back-project all missing pattern nodes, plus 3 additional outer rows and columns on all sides of the chess-board, in order to detect any nodes that might have been missed or discarded in the initial bundle adjustment and introduce them into the final adjustment.

Finally, if the last checkbox Resize images is checked in order to increase speed, then, if the images are larger than 920 pixels in width or 720 pixels in height, and provided that the user has installed the Matlab Image Processing Toolbox, the toolbox automatically resizes the images to a width of 640 pixels (the aspect ratio of the original image is retained).
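The radial-symmetric (k1, k2) and decentering (p1, p2) distortion terms described earlier follow the standard Brown polynomial. Since the toolbox itself is Matlab code, the following Python function is only a sketch of the model for illustration (its name and signature are ours, not part of FAUCCAL):

```python
def distortion(xb, yb, k1, k2, p1, p2):
    """Radial-symmetric (k1, k2) and decentering (p1, p2) lens
    distortion, where (xb, yb) stand for the reduced coordinates
    (x̄, ȳ) of the camera model."""
    r2 = xb * xb + yb * yb                  # r² = x̄² + ȳ²
    radial = k1 * r2 + k2 * r2 * r2         # k1·r² + k2·r⁴
    dx = xb * radial + p1 * (r2 + 2 * xb * xb) + 2 * p2 * xb * yb
    dy = yb * radial + 2 * p1 * xb * yb + p2 * (r2 + 2 * yb * yb)
    return dx, dy                           # (Δx, Δy)
```

At the principal point (x̄ = ȳ = 0) both corrections vanish, as expected for this model.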
The Harris operator is applied initially to the down-sized images; the extracted points are transferred to the original image; the Harris operator is then applied on the original image only in a small area around these positions. In this way, the point extraction algorithm runs approximately 10 times faster on a 7 Mpixel image. By default this option is set to on. However, if in some cases the Harris operator is not able to extract a sufficient number of points in the down-sized image, the user is advised to disable this option and apply the operator directly to the original image. This, of course, will increase computation time, but it will also result in extracting as many points as possible from the images.

Running the Program

After all these options and parameters have been set, the Run button initializes the camera calibration process. A dialog box (Fig. 2) will appear, in which one can select the images to be used for calibration. At least two images must be selected in this dialog box for a solution (but please consult the tips at the end of this text for details about advisable image numbers and configurations). After all desired images have been selected, Open must be clicked, and the toolbox will continue with the estimation of the camera interior orientation parameters. Throughout the initialization phase (for details regarding initialization please consult Douskos et al., 2007 and 2008), a wait bar will appear on the screen, showing which image is being processed at the moment (Figure 3).
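The coarse-to-fine strategy for corner extraction (detect on the down-sized image, then re-run the Harris operator only in a small area at full resolution) can be sketched as follows; this helper and its 11 × 11-pixel window are hypothetical illustrations, not the toolbox's actual parameters:

```python
def refine_window(x_small, y_small, scale, half=5):
    """Map a corner detected on the down-sized image back to full
    resolution and return the (x_min, x_max, y_min, y_max) bounds of
    the small window in which the Harris operator is re-applied.
    `scale` is original_width / downsized_width; `half` (the window
    half-size in pixels) is an assumed value for illustration."""
    x = round(x_small * scale)
    y = round(y_small * scale)
    return (x - half, x + half, y - half, y + half)
```

For example, a point found at (100, 50) on a 640-pixel-wide version of a 3072-pixel-wide image maps to a window around (480, 240) at full resolution.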

Figure 2

Figure 3

After the final bundle adjustment (which follows the back-projection step) has converged, two windows (Fig. 4 and Fig. 5) will appear on the screen. The first displays all image points with residuals more than 3 times larger than the standard error s0 of the adjustment. From this window one can also select and exclude all, or some, of these points from the solution; in this case the bundle adjustment is re-run. In the second window one may review all images of the dataset, and also visualize on each image all nodes involved in the bundle adjustment as well as the corresponding residuals (the residual vectors are enlarged by a factor of 100). Nodes which took part in the initial solution appear in yellow, nodes recovered by back-projection appear in green, and points which belong to the additional outer rows and columns appear in red (no such points are present in Fig. 5). A calculator at the bottom of the window allows finding the point closest to coordinates provided by the user in the corresponding textboxes, or shows the coordinates and the residuals of a point given by its number. One can also visualize this point with the Show point button.

Figure 4
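The screening criterion of the point-exclusion window (residual magnitude larger than 3 times the standard error of the adjustment) can be sketched as a hypothetical helper, not the toolbox's own code:

```python
def flag_blunders(residuals, s0, factor=3.0):
    """Return indices of image points whose residual vector (vx, vy)
    has a magnitude larger than `factor` times the standard error s0
    of the adjustment."""
    return [i for i, (vx, vy) in enumerate(residuals)
            if (vx * vx + vy * vy) ** 0.5 > factor * s0]
```

The flagged points are the candidates the user may exclude before the bundle adjustment is re-run.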

Figure 5

If the user does not choose to exclude any points, the toolbox first shows a window with the estimated values of the camera interior orientation parameters (Fig. 6). It then displays a dialog box (Fig. 7) where one can give a name for the text file in which the results will be saved. If Cancel is chosen, all results are still saved in a file named results.txt in the current directory. The results include the standard error of the adjustment, the values of all camera parameters with their standard errors and correlations, the values of the exterior orientation parameters, and all image residuals. Finally, when this dialog box closes, the toolbox also plots the radial distortion curve (calibrated after Karras et al., 1998), as seen in Fig. 8.

Figure 6
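The plotted radial distortion curve corresponds to the radial profile Δr(r) = k1·r³ + k2·r⁵ of the distortion model. A Python sketch for sampling this profile (illustrative only; the balancing of the curve after Karras et al., 1998 is not reproduced here):

```python
def radial_distortion_curve(k1, k2, r_max, n=50):
    """Sample the radial distortion profile dr(r) = k1*r^3 + k2*r^5
    (the radial component of the distortion model) at n equally
    spaced radial distances in [0, r_max]."""
    rs = [i * r_max / (n - 1) for i in range(n)]
    return [(r, k1 * r ** 3 + k2 * r ** 5) for r in rs]
```

Plotting these (r, Δr) pairs reproduces the shape of the curve shown in Fig. 8 for the estimated k1, k2 values.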

Figure 7

Figure 8

The results shown above and all examples in the previous windows refer to a camera calibration process based on the image set shown below in Table 1. All options had been set to default, except for the back-projection option, which was set to on. This dataset, including the results file, is available for download from the FAUCCAL site.

Table 1

Requirements and Tips

Image acquisition

1. The pattern must be planar. All tiles should be square and equally spaced (including those at the pattern edges). The pattern should consist of a sufficiently large number of rows and columns (e.g. 8), but not too large a number (e.g. 20), since this might result in imaged tiles which are too small and confused with their neighbours (cf. Fig. 9).
2. All images must be acquired with constant camera focus (preferably at infinity, if suitable for the application).
3. In principle, a dataset of at least two images must be used to obtain a solution. The use of a larger number of images (> 5) is suggested for accurate results.
4. Image quality must be good, i.e. images should not be blurry or too noisy. Also, all images must have the same dimensions (in pixels).
5. The imaged pattern must occupy a large portion (or the whole) of each image frame. It is highly recommended that the point corresponding to the intersection of the left-most column appearing on the image with the row lying nearest to the image bottom is actually recorded on the image.
6. Care must be taken to avoid imaging the pattern from symmetric view-points (this might produce a singularity). In general, it is strongly suggested to take images which depict the pattern with significant differences in perspective.

Other tips

1. Please keep in mind that the exterior orientation of the images, included in the results file, is irrelevant. The images refer to their own object systems, which may differ by in-plane shifts and rotations.
2. To obtain a solution for the DLR (http://www.dlr.de/rm/desktopdefault.aspx/tabid-4853/) or the Bouguet datasets, the corresponding block of code in the relpnt.m file must be uncommented. These blocks of code compensate for the special types and/or the irregularities (at the edges) of the patterns used in these datasets.

Finally, an example of cases which should be avoided is shown below. This pattern (33 × 27) includes a very large number of nodes, i.e. the squares are small. This, combined with the strong perspective, leads to difficulties in distinguishing nodes from each other, mainly at the far end of the pattern (especially if the Resize images option is used). Besides, the pattern is not planar (see arrows).

Figure 9

References

Douskos V., Kalisperakis I., Karras G., 2007. Automatic calibration of digital cameras using planar chess-board patterns. Optical 3-D Measurement Techniques VIII, Wichmann, vol. 1, pp. 132-140.
Douskos V., Kalisperakis I., Karras G., Petsa E., 2008. Fully automatic camera calibration using regular planar patterns. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B5, pp. 21-26.
Karras G., Mountrakis G., Patias P., Petsa E., 1998. Modeling distortion of super-wide-angle lenses for architectural and archaeological applications. International Archives of Photogrammetry & Remote Sensing, Vol. XXXII, Part B5, pp. 570-573.