Chapter 7: Computation of the Camera Matrix P

Chapter 7: Computation of the Camera Matrix P
Arco Nederveen (Eagle Vision)
March 18, 2008

Outline:
- Basic equations
- Geometric error
- Restricted camera estimation
- Radial distortion

Basic equations
Chapter 7 covers numerical methods for estimating the camera matrix. Given point correspondences X_i <-> x_i between 3D points X_i and 2D image points x_i, find the camera matrix P: a 3 x 4 matrix such that x_i = P X_i for all i. Writing x_i = (x_i, y_i, w_i)^T, each correspondence gives

\begin{bmatrix} 0^T & -w_i X_i^T & y_i X_i^T \\ w_i X_i^T & 0^T & -x_i X_i^T \\ -y_i X_i^T & x_i X_i^T & 0^T \end{bmatrix} \begin{pmatrix} P^1 \\ P^2 \\ P^3 \end{pmatrix} = 0    (1)

where each P^j is a 4-vector, the j-th row of P.

Basic equations
Alternatively, since only two of the three equations in (1) are linearly independent, just the first two rows are used:

\begin{bmatrix} 0^T & -w_i X_i^T & y_i X_i^T \\ w_i X_i^T & 0^T & -x_i X_i^T \end{bmatrix} \begin{pmatrix} P^1 \\ P^2 \\ P^3 \end{pmatrix} = 0

A set of n correspondences results in a 2n x 12 matrix A, and the projection matrix P is computed by solving Ap = 0, where p contains the entries of P.
Minimal solution: P has 12 entries and 11 degrees of freedom, so 5 1/2 correspondences suffice (from the sixth point only one of the two equations is used).
Overdetermined solution: minimize the algebraic or geometric error.
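The two-rows-per-point system can be assembled and solved directly with an SVD. A minimal numpy sketch of this step (the function name and data layout are mine, not from the chapter):

```python
import numpy as np

def dlt_camera_matrix(X, x):
    """Estimate P from n >= 6 correspondences X_i <-> x_i.

    X: (n, 4) homogeneous world points; x: (n, 3) homogeneous image points.
    Stacks the two independent rows per correspondence into A (2n x 12),
    then takes the right singular vector of A with the smallest singular
    value as p (the solution of Ap = 0 with ||p|| = 1).
    """
    rows = []
    for Xi, (xi, yi, wi) in zip(X, x):
        zero = np.zeros(4)
        rows.append(np.hstack([zero, -wi * Xi, yi * Xi]))
        rows.append(np.hstack([wi * Xi, zero, -xi * Xi]))
    A = np.array(rows)                 # shape (2n, 12)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)        # unit-norm p reshaped to 3 x 4
```

With exact (noise-free) correspondences this recovers P up to scale and sign.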

Basic equations
Algebraic error: minimize ||Ap|| subject to a normalization constraint, such as ||p|| = 1 or ||\hat{p}^3|| = 1, where \hat{p}^3 = (p_31, p_32, p_33)^T is formed from the third row of P. The algebraic error is the residual ||Ap||. The DLT algorithm can be used to compute P in the same manner as computing the homography H.

Basic equations
Degenerate configurations:
- The camera and the points all lie on a twisted cubic.
- The points lie on the union of a plane and a single straight line containing the camera centre.
Data normalization:
- 2D points: translate the centroid of the data to the origin and scale so that the RMS distance from the origin is sqrt(2).
- 3D points: if the variation in depth of the points is relatively small, translate their centroid to the origin and scale so that the RMS distance from the origin is sqrt(3).
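The normalization step is a similarity transform; a sketch (the helper name and return convention are mine):

```python
import numpy as np

def normalize_points(pts, target_rms):
    """Translate the centroid to the origin and scale so the RMS distance
    from the origin equals target_rms (sqrt(2) for 2D, sqrt(3) for 3D).

    pts: (n, d) inhomogeneous points. Returns (pts_norm, T), where T is the
    (d+1) x (d+1) homogeneous similarity with pts_norm = T * pts.
    """
    centroid = pts.mean(axis=0)
    rms = np.sqrt(np.mean(np.sum((pts - centroid) ** 2, axis=1)))
    s = target_rms / rms
    d = pts.shape[1]
    T = np.eye(d + 1)
    T[:d, :d] *= s
    T[:d, d] = -s * centroid
    return s * (pts - centroid), T
```

The returned T is what gets undone at the end of the DLT (P = T_2D^{-1} P_norm T_3D).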

Basic equations
Using line correspondences in the DLT algorithm:
- A line in 3D can be represented by two points X_0 and X_1 on it.
- The plane formed by back-projecting the image line l is P^T l.
- The point X_j lies on this plane if l^T P X_j = 0 for j = 0, 1.
Each choice of j gives a single linear equation in the entries of P, which may be added to the equations from (1).

Geometric error
If the world points X_i are known exactly and only the 2D image points contain noise, the geometric error is

\sum_i d(x_i, \hat{x}_i)^2

where x_i is the measured point and \hat{x}_i = P X_i the projected point. If the measurement noise is Gaussian, then

\min_P \sum_i d(x_i, P X_i)^2

is the Maximum Likelihood estimate of P. Minimizing the geometric error requires an iterative method, such as the one used in the Gold Standard algorithm (algorithm 7.1).
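As a sketch of the iterative step (the Gold Standard algorithm proper uses Levenberg-Marquardt on normalized data; the plain Gauss-Newton loop and numerical Jacobian below are my simplification):

```python
import numpy as np

def refine_P_geometric(P0, X, x_meas, iters=20):
    """Refine an initial estimate P0 (e.g. from the DLT) by minimizing the
    reprojection error sum_i d(x_i, P X_i)^2 with Gauss-Newton steps.

    X: (n, 4) homogeneous world points; x_meas: (n, 2) measured image points.
    """
    def residuals(p):
        P = p.reshape(3, 4)
        proj = (P @ X.T).T
        return (proj[:, :2] / proj[:, 2:3] - x_meas).ravel()

    p = P0.ravel() / np.linalg.norm(P0)    # fix the overall scale roughly
    for _ in range(iters):
        r = residuals(p)
        eps = 1e-8
        J = np.empty((r.size, 12))         # forward-difference Jacobian
        for j in range(12):
            dp = np.zeros(12)
            dp[j] = eps
            J[:, j] = (residuals(p + dp) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # Gauss-Newton step
        p = p + step
    return p.reshape(3, 4)
```

The least-squares solve handles the rank deficiency caused by the scale ambiguity of p.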

Geometric error
Example 7.1: camera estimation from a calibration object; the DLT algorithm compared to the Gold Standard algorithm.
Rule of thumb: the number of point measurements should exceed the number of unknowns by a factor of five.
Table 7.1 shows the results: a slight improvement with the Gold Standard algorithm.

Errors in the world points
It is possible that the world points cannot be determined with absolute accuracy. In that case the 3D geometric error is

\sum_i d(X_i, \hat{X}_i)^2

where \hat{X}_i is the closest point in space to X_i that maps exactly onto x_i via x_i = P \hat{X}_i. If both the world and image coordinates contain errors, the sum of world and image errors is minimized:

\sum_i d_{Mah}(x_i, P \hat{X}_i)^2 + d_{Mah}(X_i, \hat{X}_i)^2

where d_{Mah} denotes the Mahalanobis distance with respect to the known error covariances.

Geometric interpretation of algebraic error
The points in the DLT algorithm are normalized such that X_i = (X_i, Y_i, Z_i, 1)^T and x_i = (x_i, y_i, 1)^T. The quantity minimized is then

\sum_i (\hat{w}_i d(x_i, \hat{x}_i))^2    where    \hat{w}_i (\hat{x}_i, \hat{y}_i, 1)^T = P X_i

Given that \hat{w}_i = \pm ||\hat{p}^3|| depth(X_i; P), we see that if ||\hat{p}^3|| = 1 then \hat{w}_i can be interpreted as the depth of the point X_i from the camera in the direction of the principal ray.
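This depth interpretation is easy to check numerically. The synthetic camera below (the K, R, t values are mine) happens to satisfy ||\hat{p}^3|| = 1 after the explicit rescaling:

```python
import numpy as np

# If P is scaled so that ||(p31, p32, p33)|| = 1, the third coordinate w of
# P X (for X = (X, Y, Z, 1)^T) is, up to sign, the depth of X from the
# camera along the principal ray.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
R = np.eye(3)
t = np.array([[0.0], [0.0], [5.0]])
P = K @ np.hstack([R, t])
P = P / np.linalg.norm(P[2, :3])      # enforce ||p^3_hat|| = 1
X = np.array([0.2, -0.1, 3.0, 1.0])
w = (P @ X)[2]
# For this camera (R = I, camera centre at (0, 0, -5)) the depth is Z + 5.
assert np.isclose(w, 3.0 + 5.0)
```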

Geometric interpretation of algebraic error
As seen in figure 7.2, \hat{w}_i d(x_i, \hat{x}_i) is proportional to f d(X_i, X'_i), where X'_i is the point on the ray through the measured point at the same depth as X_i. The algebraic distance being minimized is therefore equal to

f^2 \sum_i d(X_i, X'_i)^2

(Figure 7.2 illustrated this relation, showing the points X_i and X'_i, the image point x_i, the camera centre C, the focal length f, and the distances d and \hat{w}.)

Transformation invariance
Scaling and translation of either the image or the world coordinates have no influence on the minimization of ||Ap||.

Estimation of an affine camera
An affine camera is one for which the projection matrix has (0, 0, 0, 1) as its last row. Given X_i = (X_i, Y_i, Z_i, 1)^T and x_i = (x_i, y_i, 1)^T, equation (1) reduces to

\begin{bmatrix} 0^T & -X_i^T \\ X_i^T & 0^T \end{bmatrix} \begin{pmatrix} P^1 \\ P^2 \end{pmatrix} + \begin{pmatrix} y_i \\ -x_i \end{pmatrix} = 0

which shows that the squared algebraic error in this case equals the squared geometric error:

||Ap||^2 = \sum_i (x_i - P^{1T} X_i)^2 + (y_i - P^{2T} X_i)^2 = \sum_i d(x_i, \hat{x}_i)^2

A linear estimation algorithm is given in algorithm 7.2.
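Because algebraic and geometric error coincide, the two rows P^1 and P^2 can be found by two independent linear least-squares solves. A sketch of this idea (algorithm 7.2 in the book additionally normalizes the data first; the function name is mine):

```python
import numpy as np

def fit_affine_camera(X, x):
    """Linear least-squares estimate of an affine camera matrix, whose last
    row is fixed to (0, 0, 0, 1).

    X: (n, 4) homogeneous world points; x: (n, 2) inhomogeneous image points.
    Solves X P^{1T} = x-coords and X P^{2T} = y-coords independently.
    """
    P1, *_ = np.linalg.lstsq(X, x[:, 0], rcond=None)
    P2, *_ = np.linalg.lstsq(X, x[:, 1], rcond=None)
    return np.vstack([P1, P2, [0.0, 0.0, 0.0, 1.0]])
```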

Restricted camera estimation
A camera matrix with centre at a finite point decomposes as P = K[R | -R \tilde{C}], with calibration matrix

K = \begin{bmatrix} \alpha_x & s & x_0 \\ & \alpha_y & y_0 \\ & & 1 \end{bmatrix}

Common assumptions:
- The skew s is zero.
- The pixels are square: \alpha_x = \alpha_y.
- The principal point (x_0, y_0) is known.
- The complete camera calibration matrix K is known.

Restricted camera estimation
Some assumptions make it possible to estimate the restricted camera matrix with a linear algorithm. Example: fit a pinhole camera model (s = 0 and \alpha_x = \alpha_y) to a set of point measurements.
Minimizing geometric error:
- Parameterize the camera by the remaining 9 parameters (x_0, y_0, \alpha, and 6 for R and \tilde{C}), denoted collectively by q.
- Minimize the geometric error iteratively over q.
Minimizing algebraic error:
- The iterative minimization problem becomes much smaller: it is equivalent to minimizing ||A g(q)||, where g maps the parameters q to the entries of P.

Restricted camera estimation
The reduced measurement matrix: it is possible to replace the 2n x 12 matrix A with a square 12 x 12 matrix \hat{A} such that ||Ap||^2 = p^T A^T A p = ||\hat{A}p||^2 for all p. Using the SVD: let A = U D V^T and define \hat{A} = D V^T. Then

A^T A = (V D U^T)(U D V^T) = (V D)(D V^T) = \hat{A}^T \hat{A}

Given a set of n world-to-image correspondences X_i <-> x_i, the problem of finding a constrained camera matrix P that minimizes the sum of algebraic distances \sum_i d_{alg}(x_i, P X_i)^2 thus reduces to the minimization of a function R^9 -> R^12, independent of the number of correspondences n.
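The identity ||Ap|| = ||\hat{A}p|| can be verified directly on a random measurement matrix:

```python
import numpy as np

# For any A, the 12 x 12 reduced matrix A_hat = D V^T from the SVD
# A = U D V^T satisfies ||A p|| = ||A_hat p|| for every p.
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 12))              # a 2n x 12 matrix, n = 20
U, d, Vt = np.linalg.svd(A, full_matrices=False)
A_hat = np.diag(d) @ Vt                        # square 12 x 12
p = rng.standard_normal(12)
assert np.isclose(np.linalg.norm(A @ p), np.linalg.norm(A_hat @ p))
```

This is why the cost evaluation during the iteration no longer depends on n.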

Restricted camera estimation
Initialization of the iteration:
- Use a linear algorithm such as the DLT to find an initial camera matrix.
- Clamp the fixed parameters to their given values.
- Set the variable parameters to the values obtained by decomposing the initial camera matrix.
The values obtained from the DLT can differ significantly from the true values. Therefore, use soft constraints by adding extra terms to the cost function. E.g., if s = 0 and \alpha_x = \alpha_y, the geometric error becomes

\sum_i d(x_i, P X_i)^2 + w s^2 + w(\alpha_x - \alpha_y)^2

Restricted camera estimation
Exterior orientation:
- All internal parameters of the camera are known; only the position and orientation of the camera are unknown.
- Six degrees of freedom remain, which can be determined from three points.
Experimental evaluation (table 7.2):
- Both methods give essentially the same results.
- The algebraic method is quicker (12 error terms vs. 2n = 396).

Restricted camera estimation
Covariance estimation is similar to the 2D homography case. Assuming all errors are in the image measurements, the expected residual error is

\epsilon_{res} = \sigma (1 - d/(2n))^{1/2}

with d the number of camera parameters being fitted. (Example 7.1: \epsilon_{res} = 0.365 results in \sigma = 0.37.)
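Solving the formula for \sigma reproduces the quoted figure, taking d = 11 for a full projective camera and 2n = 396 measurements (the count mentioned on the exterior-orientation slide; these specific values are my reading of the example):

```python
# eps_res = sigma * (1 - d/(2n))^(1/2), solved for sigma
n, d, eps_res = 198, 11, 0.365
sigma = eps_res / (1 - d / (2 * n)) ** 0.5
assert round(sigma, 2) == 0.37
```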

Restricted camera estimation
Covariance ellipsoid for an estimated camera: back-propagate the covariance of the point measurements to the camera parameters, which gives

\Sigma_{camera} = (J^T \Sigma_{points}^{-1} J)^{-1}

with J the Jacobian of the measured points with respect to the camera parameters. Error bounds for the camera centre can then be computed from the ellipsoid

(C - \bar{C})^T \Sigma_C^{-1} (C - \bar{C}) = k^2

where k^2 = F_n^{-1}(\alpha) is obtained from the inverse cumulative \chi_n^2 distribution, with confidence level \alpha and n the number of variables.

Radial distortion
So far it was assumed that a linear model accurately describes the imaging process. This is not the case for real lenses, which typically exhibit radial distortion.
(Figure: the original slide showed an image with radial distortion next to its linearly corrected version.)

Radial distortion
Radial distortion is modelled by

\begin{pmatrix} x_d \\ y_d \end{pmatrix} = L(\tilde{r}) \begin{pmatrix} \tilde{x} \\ \tilde{y} \end{pmatrix}

where
- (\tilde{x}, \tilde{y}) is the ideal (undistorted) image position,
- (x_d, y_d) is the actual image position after radial distortion,
- \tilde{r} = \sqrt{\tilde{x}^2 + \tilde{y}^2} is the radial distance from the centre of distortion,
- L(\tilde{r}) is a distortion factor.

Radial distortion
The correction in pixel coordinates is given by

\hat{x} = x_c + L(r)(x - x_c)
\hat{y} = y_c + L(r)(y - y_c)

where (x, y) are the measured coordinates, (\hat{x}, \hat{y}) the corrected coordinates, (x_c, y_c) the centre of radial distortion, and r^2 = (x - x_c)^2 + (y - y_c)^2. L(r) is approximated by a Taylor expansion:

L(r) = 1 + \kappa_1 r + \kappa_2 r^2 + \kappa_3 r^3 + ...
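The correction formula above is a one-liner per point; a sketch (function name and argument layout are mine):

```python
import numpy as np

def undistort_points(pts, center, kappas):
    """Apply x_hat = x_c + L(r)(x - x_c) with the Taylor approximation
    L(r) = 1 + k1*r + k2*r^2 + ...

    pts: (n, 2) measured pixel coordinates; center: (x_c, y_c);
    kappas: sequence of coefficients (k1, k2, ...).
    """
    center = np.asarray(center, dtype=float)
    d = pts - center                               # offsets from the centre
    r = np.linalg.norm(d, axis=1, keepdims=True)   # radial distances
    L = 1.0 + sum(k * r ** (i + 1) for i, k in enumerate(kappas))
    return center + L * d
```

With all \kappa_i = 0 the mapping is the identity, and the centre of distortion is always a fixed point.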

Radial distortion
- The principal point is taken as the centre of radial distortion.
- L(r) may be computed by minimizing a cost based on the deviation from a linear mapping.
- The parameters \kappa_i can be computed together with P during minimization of the geometric error.
- Alternatively, known straight lines in the scene can be used to guide the minimization process.
Examples 7.3 and 7.4 show radial distortion compensation and its effect on the residual error (table 7.3).