HW 1: Project Report (Camera Calibration)


ABHISHEK KUMAR (abhik@sci.utah.edu)

1 Problem

The problem is to calibrate a camera with a fixed focal length using two orthogonal checkerboard planes, and to find its intrinsic and extrinsic parameters.

2 Experimental Procedure

2.1 Data Capture

Two checkerboard patterns are pasted on a wall corner at an angle of 90 degrees to each other, as shown in Fig. 1. The world frame axes are chosen as follows: the origin is at the lower end of the corner where the two patterns meet; the Z axis points upward from the origin; the X axis points along the left wall; and the Y axis points along the right wall. The origin and the three axes are shown in the figure.

Figure 1: Checkerboard pattern on the wall corner and the world frame coordinate axes

The points to be measured are shown with red dots in the image. A total of 60 points are chosen, 30 on each checkerboard. The real-world coordinates of these 60 points are measured with a ruler, with reference to the world frame axes shown in Fig. 1. All world frame coordinates are measured in centimeters.

A picture of the checkerboard patterns is captured with the camera, and the coordinates of the same 60 points are measured in the image coordinate frame (with the origin at the top left corner of the image). The image resolution is 2048 × 1536 pixels. This step gives us 60 pairs of points, each pair consisting of a point in the world frame and the corresponding point in the image frame. We need to estimate 11 free parameters for camera calibration, so this number of points is more than sufficient. We choose a large number of points so that human measurement errors (in marking points in the image and the world frame) are averaged out and we get a robust estimate of the camera parameters.

2.2 Estimation of the Calibration Matrix

The least squares method is used to estimate the calibration matrix. Each point pair contributes two homogeneous linear equations, one per image coordinate, so the 60 pairs give 120 homogeneous linear equations in twelve variables, the coefficients of the calibration matrix M. Let us denote this system of linear equations as

    Pm = 0,    m := [m_1  m_2  m_3]^T,                         (1)

where m_1, m_2, m_3 are the first, second, and third rows of the matrix M, respectively; m is a 12 × 1 vector and P is a 120 × 12 matrix. The least squares estimation problem is

    min ||Pm||^2,    subject to ||m||^2 = 1.                   (2)

As it turns out, the solution of this problem is the eigenvector of the matrix P^T P with the least eigenvalue. The eigenvectors of P^T P can also be computed by performing the singular value decomposition (SVD) of P: the 12 right singular vectors of P are the eigenvectors of P^T P. The SVD is used here to get the eigenvector corresponding to the least eigenvalue; this eigenvector is the solution to the problem above. Reorganizing the 12 × 1 vector m into a 3 × 4 matrix gives us M. The first three elements of the third row of this matrix form one of the three rotation vectors. Since rotation vectors have unit norm, the matrix is scaled by the norm of the third rotation vector to obtain the calibration matrix in canonical form. We take this as the final calibration matrix M.

2.3 Computation of Intrinsic and Extrinsic Parameters

The intrinsic and extrinsic parameters are computed from the calibration matrix M using the method given in the book. This method uses two properties of the rotation vectors: each has unit norm, and the dot product of any two of them is zero. Once the calibration matrix is estimated, we pick 4 different points on the checkerboard pattern (other than the points picked before) and record their coordinates in the world frame. These points are then transformed to the image frame using the calibration matrix obtained from the least squares estimation, and the results are compared with the measured image coordinates of these points. We report these errors in Sec. 4.3.

3 Results

The estimated calibration matrix is

    M = [ 2872.0   1744.3    138.7   77270.0
           273.9    174.8   3287.5   94116.0
          0.7380   0.6480   0.1883    75.633 ]
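For reference, the estimation step of Sec. 2.2 reduces to a few lines of MATLAB (a minimal sketch, assuming the 120 × 12 measurement matrix P has already been built from the point correspondences as in the script at the end of this report; the overall sign of m is arbitrary):

    % Minimal sketch of the estimation step of Sec. 2.2: the right singular
    % vector of P with the smallest singular value minimizes ||P*m|| subject
    % to ||m|| = 1.
    [U, S, V] = svd(P);
    m = V(:, end);             % singular values are sorted in decreasing order
    M = reshape(m, 4, 3)';     % stack m1, m2, m3 as the rows of a 3x4 matrix
    M = M / norm(M(3,1:3));    % canonical form: unit third rotation vector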

The intrinsic parameters are tabulated in Table 1.

    Parameter    Value
    θ            1.5697 rad (≈ π/2)
    u_0          1015.5
    v_0          934.4519
    α            3206.1
    β            3168.6

    Table 1: Intrinsic Parameters

The extrinsic parameters consist of the rotation matrix and the translation vector. The rotation matrix is

    R = [ 0.6619   0.7494   0.0153
          0.1312   0.1359   0.9820
          0.7380   0.6480   0.1883 ]

The translation vector t is estimated as K^-1 b, where K is the intrinsic parameter matrix and b is the last column of the calibration matrix M. The t obtained in our experiments is [0.1519  7.3976  75.6330]^T.

4 Discussion

4.1 Intrinsic Parameters

We get θ almost equal to π/2 (Table 1), which means that the camera coordinates are not skewed much: the X and Y axes of the image frame are almost at 90 degrees to each other. As mentioned in Sec. 2, the image resolution is 2048 × 1536, so the center of the image lies at (1024, 768). We obtain u_0 = 1015.5 and v_0 = 934.4, so the image center does not coincide with the principal point C_0 but is offset from it by (8.5, 166.4), as estimated in the experiments. α and β are magnifications equal to kf and lf respectively, where k and l denote the number of pixels per centimeter and f is the distance of the physical image plane from the pinhole (or equivalent lens). The magnitudes of α and β are reasonable from this point of view.

4.2 Extrinsic Parameters

The three rows of the rotation matrix have norm equal to 1, and their pairwise dot products are very small numbers (of the order of 10^-18) rather than exactly zero, which is expected given the possible errors in a purely manual measurement process. The translation vector also approximately matches the real values: we measured the offsets between the world origin and the camera along the three coordinate axes, and the values match approximately. The straight-line distance from the world origin to the camera was measured to be almost 75 centimeters, which is approximately equal to the norm of the translation vector (76 centimeters). The experiment, using a total of 60 points, gives very good estimates of the camera parameters.
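These properties can be checked numerically once the MATLAB script at the end of this report has been run (a minimal sketch; R and t are the variables computed there):

    % Sanity checks on the estimated extrinsics
    R * R'      % should be close to the 3x3 identity matrix
    det(R)      % should be close to +1 for a proper rotation
    norm(t)     % camera distance from the world origin, ~76 cm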

4.3 Image Coordinates Reconstruction

To reconfirm the validity of the estimated parameters, we identify 4 new test points, marked as light green dots in Fig. 2.

Figure 2: Checkerboard pattern on the wall corner: 4 new test points

The coordinates of these points are measured in the world coordinate system, and the camera calibration matrix estimated in the previous sections is used to get the corresponding coordinates in the image frame. We also measure the true coordinates of these points in the image using MATLAB; these measurements are themselves prone to error, since they are made manually with mouse clicks. The results of this experiment are tabulated below.

    World coordinates         Image coordinates    Image coordinates    Error
    (measured, in cm)         (estimated)          (measured)           norm
    (14.9   0     22.4   1)   (519.2    271.3)     (517.7    274.0)     3.0545
    (9.3    0     22.4   1)   (735.1    277.7)     (736.5    279.4)     2.1762
    (0      6.5    5.6   1)   (1248.2  1059.7)     (1248.5  1060.9)     1.2177
    (0      9.3    5.6   1)   (1352.5  1080.6)     (1351.7  1079.7)     1.2075

    Table 2: Test of the calibration parameters: error between measured and estimated image coordinates

The error norm for each point is computed by subtracting the corresponding coordinates to get the errors along the X and Y axes; these errors are squared and added, and the square root of the sum gives the error norm. We note that the reconstruction errors are small and can be justified given the amount of error inherent in the measuring process itself.
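For concreteness, the reprojection and error computation for the first test point of Table 2 amount to the following (a minimal sketch; M is the calibration matrix estimated by the script below):

    % Reproject one test point and compute its error norm
    p_world = [14.9; 0; 22.4; 1];    % homogeneous world point (cm)
    p = M * p_world;                 % homogeneous image point
    u = p(1) / p(3);                 % perspective division gives
    v = p(2) / p(3);                 % pixel coordinates
    err = norm([u v] - [517.7 274.0])   % distance to the measured pixel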

MATLAB Code

% world coordinates in cm
% z coordinates
wz(1:6)   = 7*2.8;  wz(31:36) = 7*2.8;
wz(7:12)  = 6*2.8;  wz(37:42) = 6*2.8;
wz(13:18) = 5*2.8;  wz(43:48) = 5*2.8;
wz(19:24) = 4*2.8;  wz(49:54) = 4*2.8;
wz(25:30) = 3*2.8;  wz(55:60) = 3*2.8;

% x coordinates
wx(1:60) = 0;
wx(1:6:30) = 0.9 + 6*2.8;
wx(2:6:30) = 0.9 + 5*2.8;
wx(3:6:30) = 0.9 + 4*2.8;
wx(4:6:30) = 0.9 + 3*2.8;
wx(5:6:30) = 0.9 + 2*2.8;
wx(6:6:30) = 0.9 + 1*2.8;

% y coordinates
wy(1:60) = 0;
wy(31:6:60) = 0.9 + 1*2.8;
wy(32:6:60) = 0.9 + 2*2.8;
wy(33:6:60) = 0.9 + 3*2.8;
wy(34:6:60) = 0.9 + 4*2.8;
wy(35:6:60) = 0.9 + 5*2.8;
wy(36:6:60) = 0.9 + 6*2.8;

% image coordinates (pixels)
ix = [0.4020 0.5177 0.6333 0.7365 0.8294 0.9202 0.4062 0.5218 0.6333 0.7386 ...
      0.8294 0.9202 0.4124 0.5239 0.6333 0.7365 0.8273 0.9202 0.4144 0.5300 ...
      0.6312 0.7324 0.8315 0.9202 0.4144 0.5280 0.6353 0.7324 0.8294 0.9161 ...
      1.1700 1.2712 1.3724 1.4942 1.6119 1.7419 1.1659 1.2650 1.3724 1.4859 ...
      1.6057 1.7337 1.1618 1.2588 1.3662 1.4818 1.5995 1.7275 1.1577 1.2547 ...
      1.3620 1.4735 1.5912 1.7171 1.1535 1.2506 1.3538 1.4694 1.5850 1.7130] * 1000;
iy = [0.4222 0.4168 0.4168 0.4141 0.4168 0.4195 0.5731 0.5677 0.5570 0.5543 ...
      0.5489 0.5462 0.7213 0.7079 0.6998 0.6863 0.6782 0.6701 0.8696 0.8534 ...
      0.8345 0.8210 0.8076 0.7941 1.0124 0.9908 0.9720 0.9504 0.9342 0.9181 ...
      0.4195 0.4222 0.4276 0.4276 0.4330 0.4384 0.5435 0.5516 0.5597 0.5704 ...
      0.5785 0.5866 0.6728 0.6836 0.6917 0.7052 0.7186 0.7348 0.7968 0.8103 ...
      0.8237 0.8399 0.8588 0.8722 0.9181 0.9342 0.9531 0.9720 0.9908 1.0151] * 1000;

n = 60;                % number of points
P(1:2*n, 1:12) = 0;
j = 1;

% construct matrix P: each point pair contributes two rows
for i = 1:2:120
    P(i,1) = wx(j);   P(i+1,5) = wx(j);
    P(i,2) = wy(j);   P(i+1,6) = wy(j);
    P(i,3) = wz(j);   P(i+1,7) = wz(j);
    P(i,4) = 1;       P(i+1,8) = 1;
    P(i,9:12)   = P(i,1:4) * -1 * ix(j);
    P(i+1,9:12) = P(i,1:4) * -1 * iy(j);
    j = j + 1;
end

% perform SVD of P
[U, S, V] = svd(P);
[min_val, min_index] = min(diag(S(1:12,1:12)));

% m is given by the right singular vector of the min. singular value
m = V(1:12, min_index);

% normalize m to make the norm of the third rotation vector unity
norm_31 = norm(m(9:11));
m_canonical = m / norm_31;
M(1,1:4) = m_canonical(1:4);
M(2,1:4) = m_canonical(5:8);
M(3,1:4) = m_canonical(9:12);
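% Decompose the canonical M = [A b] (rows a1, a2, a3, with |a3| = 1) into
% intrinsics and extrinsics, using the rotation-vector properties of Sec. 2.3:
%   u_0 = a1*a3',  v_0 = a2*a3'
%   alpha = |a1 x a3|*sin(theta),  beta = |a2 x a3|*sin(theta)
%   r3 = a3,  r1 = (a2 x a3)/|a2 x a3|,  r2 = r3 x r1,  t = K^-1 * b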

a1 = M(1,1:3);
a2 = M(2,1:3);
a3 = M(3,1:3);
b  = M(1:3,4);
r3 = a3;

% compute the intrinsic parameters
u_0 = a1 * a3';
v_0 = a2 * a3';
cross_a1a3 = cross(a1, a3);
cross_a2a3 = cross(a2, a3);
theta = acos(-1 * cross_a1a3 * cross_a2a3' / (norm(cross_a1a3) * norm(cross_a2a3)));
alpha = norm(cross_a1a3) * sin(theta);
beta  = norm(cross_a2a3) * sin(theta);

% compute the extrinsic parameters
r1 = cross_a2a3 / norm(cross_a2a3);
r2 = cross(r3, r1);
K = [alpha  -1*alpha*cot(theta)  u_0;
     0       beta/sin(theta)     v_0;
     0       0                   1];
t = inv(K) * b;    % translation vector

% rotation matrix
R(1,1:3) = r1;
R(2,1:3) = r2;
R(3,1:3) = r3;

%% Test of calibration estimates: reconstruction error of 4 new points
% image coordinates (measured)
ix_test_measured = 1.0e+003 * [0.5177 0.7365 1.2485 1.3517];
iy_test_measured = 1.0e+003 * [0.2740 0.2794 1.0609 1.0797];
% world coordinates (measured)
wx_test_measured = [0.9+5*2.8  0.9+3*2.8  0  0];
wy_test_measured = [0  0  0.9+2*2.8  0.9+3*2.8];
wz_test_measured = [8*2.8  8*2.8  2*2.8  2*2.8];

% reconstruct image coordinates and calculate the estimation error
for i = 1:4
    temp(1:4) = [wx_test_measured(i) wy_test_measured(i) wz_test_measured(i) 1];
    image_reconstructed = M * temp';
    image_recons_x(i) = image_reconstructed(1) / image_reconstructed(3);
    image_recons_y(i) = image_reconstructed(2) / image_reconstructed(3);
    error(i) = norm([image_recons_x(i)-ix_test_measured(i) image_recons_y(i)-iy_test_measured(i)]);
end

% print all the values
% intrinsic
theta
u_0
v_0
alpha
beta
% extrinsic
R
t
% reconstruction error
error