3D Geometry and Camera Calibration

3D Coordinate Systems
Right-handed vs. left-handed.

2D Coordinate Systems
y axis up vs. y axis down; origin at center vs. corner. We will often write (u, v) for image coordinates.

3D Geometry Basics
3D points are column vectors; transformations are pre-multiplied matrices.

Rotation
Rotation about the z axis; rotation about the x and y axes is similar (cyclically permute x, y, z).

Arbitrary Rotation
Any rotation is a composition of rotations about x, y, and z. Composition of transformations = matrix multiplication (watch the order!). The result is an orthonormal matrix: each row and column has unit length, the dot product of any two distinct rows (or columns) is 0, and the inverse of the matrix is its transpose.
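To make the rotation facts above concrete, here is a small numpy sketch (not part of the original slides; the angle and point values are arbitrary) that builds rotations about z and x, composes them by pre-multiplication, and checks the orthonormality properties:

import numpy as np

def rot_z(theta):
    """Rotation about the z axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rot_x(theta):
    """Rotation about the x axis (cyclic permutation of rot_z)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s,  c]])

# Compose transformations by matrix multiplication -- watch the order!
# (arbitrary example angles)
R = rot_z(np.deg2rad(30)) @ rot_x(np.deg2rad(45))

# Orthonormal result: rows/columns have unit length and are mutually
# perpendicular, so the inverse equals the transpose.
print(np.allclose(R @ R.T, np.eye(3)))      # True
print(np.allclose(np.linalg.inv(R), R.T))   # True

# 3D points are column vectors, pre-multiplied by the matrix.
p = np.array([1.0, 2.0, 3.0])
print(R @ p)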

Arbitrary Rotation
To build an arbitrary rotation, rotate around x, then y, then z. Don't expand the product by hand! Compute the simple matrices and multiply them.

Scale
Scale in x, y, z: a diagonal matrix diag(s_x, s_y, s_z).

Shear
Shear parallel to the xy plane: points are shifted in x and y by amounts proportional to z.

Translation
Can translation be represented by multiplying by a 3×3 matrix? No. Proof: a 3×3 matrix always maps the origin to the origin (A 0 = 0), so it cannot shift every point by a fixed nonzero offset.

Homogeneous Coordinates
Add a fourth dimension to each point: (x, y, z) becomes (x, y, z, 1), or more generally (wx, wy, wz, w). To get real (3D) coordinates back, divide by w: (x, y, z, w) maps to (x/w, y/w, z/w).

Translation in Homogeneous Coordinates
Translation becomes a 4×4 matrix: the identity with (t_x, t_y, t_z) in the last column. After the divide by w, this is just a translation by (t_x, t_y, t_z).
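A minimal numpy sketch of the homogeneous-coordinate translation described above (the matrix layout is the standard one; the numbers are arbitrary):

import numpy as np

t = np.array([2.0, -1.0, 3.0])    # arbitrary translation (t_x, t_y, t_z)

# 4x4 translation matrix: identity with t in the last column.
T = np.eye(4)
T[:3, 3] = t

p = np.array([1.0, 2.0, 3.0])     # 3D point
p_h = np.append(p, 1.0)           # add a fourth coordinate, w = 1

q_h = T @ p_h                     # pre-multiply, just like a rotation
q = q_h[:3] / q_h[3]              # divide by w to recover real 3D coordinates

print(q)                          # [3. 1. 6.], i.e. p + t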

Perspective Projection
What does the 4th row of the matrix do? It makes the homogeneous divide perform projection onto the z = 1 plane: a point (x, y, z), viewed from the center of projection (0, 0, 0), maps after the divide to (x/z, y/z, 1). Add scaling, translation, etc., and this becomes the pinhole camera model.

Putting It All Together: A Camera Model
A 3D point (in homogeneous coordinates) is multiplied, from right to left, by: the camera location (translation), the camera orientation (rotation), the perspective projection, a scale to pixel size, and a translation to the image center. The last two are the intrinsics; the location and orientation are the extrinsics. Then perform the homogeneous divide to get (u, v) coordinates. Read as coordinate systems, the chain takes world coordinates through eye (camera) coordinates and normalized device coordinates to image (pixel) coordinates.

More General Camera Model
Multiply all these matrices together. We don't care about z after the transformation, so the result is a single 3×4 matrix. Because of the overall scale ambiguity of a homogeneous matrix, it has 12 - 1 = 11 free parameters.
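A compact numpy sketch of this pipeline (intrinsics times extrinsics times the homogeneous point, then the divide); the matrices and values here are hypothetical, not the lecture's:

import numpy as np

# Intrinsics: scale to pixel size (focal lengths) and translate to the image
# center (hypothetical values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Extrinsics: camera orientation R and location t, packed as a 3x4 matrix [R | t].
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.1, -0.2, 2.0])
Rt = np.hstack([R, t[:, None]])

# Multiply everything together: one 3x4 camera matrix, defined only up to scale
# (hence 12 - 1 = 11 free parameters).
P = K @ Rt

X = np.array([0.5, 0.3, 4.0, 1.0])   # 3D point in homogeneous world coordinates
x = P @ X                            # homogeneous image point
u, v = x[:2] / x[2]                  # homogeneous divide -> pixel coordinates
print(u, v)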

Radial Distortion
Radial distortion cannot be represented by a matrix. Here (c_u, c_v) is the image center, u*_img = u_img - c_u and v*_img = v_img - c_v are coordinates relative to that center, and κ is the first-order radial distortion coefficient governing how strongly (u*_img, v*_img) are rescaled with distance from the center.

Camera Calibration
Camera calibration means determining values for the camera parameters. It is necessary for any algorithm that requires a 3D-to-2D mapping. The method used depends on what data is available, on whether intrinsics only, extrinsics only, or both are needed, and on the form of the camera model.

Given: 3D-to-2D correspondences and the general perspective camera model (11 parameters, no radial distortion). Writing out the projection equations gives a system that is linear in the parameters, overconstrained (more equations than unknowns), and yet underconstrained (the matrix is rank deficient: any multiple of a solution, including 0, is also a solution). Standard linear least squares methods for Ax = 0 will return the trivial solution x = 0. Instead, look for a solution with ||x|| = 1; that is, minimize ||Ax||^2 subject to ||x||^2 = 1.

To minimize ||Ax||^2 subject to ||x||^2 = 1, note that
||Ax||^2 = (Ax)^T (Ax) = (x^T A^T)(Ax) = x^T (A^T A) x.
Expand x in terms of the eigenvectors of A^T A, x = μ_1 e_1 + μ_2 e_2 + ..., so that
x^T (A^T A) x = λ_1 μ_1^2 + λ_2 μ_2^2 + ...   and   ||x||^2 = μ_1^2 + μ_2^2 + ...
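The derivation concludes in the next paragraph that the minimizer is the eigenvector of A^T A with the smallest eigenvalue; numerically that is usually taken as the right singular vector of A with the smallest singular value. The numpy sketch below applies this to the 11-parameter calibration problem using the standard two-equations-per-correspondence DLT system (the slides' exact matrix A was not transcribed, and all data here is synthetic and hypothetical):

import numpy as np

def build_A(points_3d, points_2d):
    """Stack two linear equations per 3D-2D correspondence for the 3x4 camera P."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    return np.array(rows)

# Synthetic ground-truth camera (same form as the model above) and correspondences.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [0.5]])])
P_true = K @ Rt

rng = np.random.default_rng(1)
points_3d = rng.uniform([-1.0, -1.0, 3.0], [1.0, 1.0, 6.0], size=(8, 3))
proj = np.c_[points_3d, np.ones(8)] @ P_true.T
points_2d = proj[:, :2] / proj[:, 2:]

# Minimize ||Ax||^2 subject to ||x|| = 1: take the smallest right singular vector.
A = build_A(points_3d, points_2d)
_, _, Vt = np.linalg.svd(A)
P_est = Vt[-1].reshape(3, 4)

# P is recovered only up to scale (11 free parameters); align scales and compare.
scale = (P_true.ravel() @ P_est.ravel()) / (P_est.ravel() @ P_est.ravel())
print(np.allclose(scale * P_est, P_true))   # True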

To minimize λ_1 μ_1^2 + λ_2 μ_2^2 + ... subject to μ_1^2 + μ_2^2 + ... = 1, set the coefficient of the eigenvector with the minimum eigenvalue to 1 and all other μ_i to 0. Thus the least squares solution is the eigenvector e_1 corresponding to the minimum (non-zero) eigenvalue λ_1 of A^T A.

Camera Calibration Example 2
Incorporate additional constraints into the camera model: no shear, no scale (rigid-body motion), square pixels, and so on. These impose nonlinear constraints on the camera parameters. Option 1: solve for the general perspective model, then find the closest solution that satisfies the constraints. Option 2: nonlinear least squares, usually with gradient descent techniques; common implementations are available (e.g. the Matlab optimization toolbox). A sketch of Option 2 appears after Example 4 below.

Camera Calibration Example 3
Incorporate radial distortion. Option 1: find the distortion first (using straight lines in the calibration target), warp the image to eliminate the distortion, then run the (simpler) perspective calibration. Option 2: nonlinear least squares.

Camera Calibration Example 4
What if the 3D points are not known? This is the structure from motion problem! As we saw last time, it can often be solved, since the number of knowns exceeds the number of unknowns.
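As an illustration of "Option 2: nonlinear least squares" from Examples 2 and 3, here is a sketch that fits a reduced camera model (square pixels, no shear, rigid-body extrinsics) directly to 3D-2D correspondences with scipy. The parameterization, data, and starting point are all hypothetical; this is not the lecture's implementation.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, X):
    """Project 3D points X with a square-pixel, no-shear camera.

    params = [f, c_u, c_v, r_x, r_y, r_z, t_x, t_y, t_z], where (r_x, r_y, r_z)
    is an axis-angle (Rodrigues) rotation vector and (t_x, t_y, t_z) a translation.
    (Hypothetical parameterization, chosen to match the constraints of Example 2.)
    """
    f, cu, cv = params[0:3]
    R = Rotation.from_rotvec(params[3:6]).as_matrix()
    t = params[6:9]
    Xc = X @ R.T + t                       # world -> camera coordinates
    return np.column_stack([f * Xc[:, 0] / Xc[:, 2] + cu,
                            f * Xc[:, 1] / Xc[:, 2] + cv])

def residuals(params, X, uv):
    """Reprojection error, flattened for least_squares."""
    return (project(params, X) - uv).ravel()

# Hypothetical calibration data: 3D target points and noisy pixel observations.
rng = np.random.default_rng(0)
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], size=(20, 3))
true = np.array([800.0, 320.0, 240.0, 0.05, -0.02, 0.01, 0.1, -0.2, 0.3])
uv = project(true, X) + rng.normal(scale=0.5, size=(20, 2))

x0 = np.array([700.0, 300.0, 250.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
fit = least_squares(residuals, x0, args=(X, uv))
print(np.round(fit.x, 2))   # should be close to the true parameters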

Multi-Camera Geometry
Epipolar geometry describes the relationship between the observed positions of points in multiple cameras. Assume two cameras with known intrinsics and extrinsics.

Epipolar Line
Goal: derive the equation for the epipolar line. Observation: the observed point and the two camera centers C_1, C_2 determine a plane (the epipolar plane).

Epipoles
The epipoles are the images of each camera's center in the other camera's image; every epipolar line passes through the corresponding epipole.

Work in the coordinate frame of C_1, and write p_1, p_2 for the point's coordinates in the two camera frames. The normal of the epipolar plane is T × (R p_2), where T is the relative translation and R is the relative rotation. p_1 is perpendicular to this normal:
p_1 · (T × R p_2) = 0
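A quick numeric check of this coplanarity statement, under the assumed convention that a point with coordinates p_2 in camera 2's frame has coordinates p_1 = R p_2 + T in camera 1's frame (all values are hypothetical):

import numpy as np

# Hypothetical relative pose between the two cameras.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],   # relative rotation
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([0.2, 0.0, 0.05])                        # relative translation

p2 = np.array([0.3, -0.1, 4.0])   # the point in camera 2 coordinates
p1 = R @ p2 + T                   # the same point in camera 1 coordinates

normal = np.cross(T, R @ p2)      # normal of the epipolar plane, in C_1's frame
print(np.dot(p1, normal))         # ~0: p1 lies in the plane spanned by T and R p2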

Write the cross product as matrix multiplication, T × a = [T]_× a, where [T]_× is the skew-symmetric matrix built from T. Then
p_1^T ([T]_× R) p_2 = 0,   i.e.   p_1^T E p_2 = 0,
where E = [T]_× R is the essential matrix.

Essential Matrix
E depends only on the camera geometry (the relative rotation and translation). Given E, we can derive the equation for the epipolar line.

Fundamental Matrix
We can define the fundamental matrix F analogously, operating on pixel coordinates instead of camera coordinates:
u_1^T F u_2 = 0
Advantage: F can sometimes be estimated without knowing the camera calibration.
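A short sketch tying these together: build [T]_× and E from the pose used above, then convert E to F using intrinsic matrices K_1, K_2. The relation u_i = K_i p_i and the conversion F = K_1^{-T} E K_2^{-1} are standard assumptions consistent with the constraint above, not something stated in the slides, and all numeric values are hypothetical.

import numpy as np

def skew(t):
    """Return [t]_x, the matrix with skew(t) @ a == np.cross(t, a)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Same hypothetical relative pose and point as the previous sketch.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([0.2, 0.0, 0.05])

E = skew(T) @ R                   # essential matrix: p_1^T E p_2 = 0

p2 = np.array([0.3, -0.1, 4.0])
p1 = R @ p2 + T
print(p1 @ E @ p2)                # ~0

# Fundamental matrix: the same constraint on (homogeneous) pixel coordinates.
K1 = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
K2 = np.array([[750.0, 0.0, 330.0], [0.0, 750.0, 250.0], [0.0, 0.0, 1.0]])
F = np.linalg.inv(K1).T @ E @ np.linalg.inv(K2)

u1, u2 = K1 @ p1, K2 @ p2         # pixel coordinates, up to scale
print(u1 @ F @ u2)                # ~0: u_1^T F u_2 = 0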