Camera Parameters and Calibration


Camera parameters (from last time):

$$\begin{pmatrix} U \\ V \\ W \end{pmatrix} = \big(\text{transformation representing intrinsic parameters}\big)\big(\text{transformation representing extrinsic parameters}\big)\begin{pmatrix} X \\ Y \\ Z \\ T \end{pmatrix}$$

Homogeneous Coordinates (Again)

Extrinsic Parameters: Characterizing camera position. Chasles's theorem: any motion of a solid body can be composed of a translation and a rotation.

3D Rotation Matrices

3D Rotation Matrices, cont'd. Euler's Theorem: an arbitrary rotation can be described by 3 rotation parameters, for example as a product of rotations about the coordinate axes:

$$R = R_z(\alpha)\, R_y(\beta)\, R_x(\gamma)$$

Rodrigues' Formula. Take any rotation axis $a$ and angle $\theta$: what is the matrix? With $A = [a]_\times$, the skew-symmetric cross-product matrix of $a$,

$$R = e^{A\theta} = I + \sin\theta\, A + (1 - \cos\theta)\, A^2$$
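As a sanity check, the formula above can be sketched in numpy (the axis and angle values below are illustrative choices, not from the slides):

```python
import numpy as np

def rodrigues(axis, theta):
    """Rotation matrix via R = e^{A theta} = I + sin(t) A + (1 - cos(t)) A^2,
    where A is the skew-symmetric cross-product matrix of the unit axis."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    A = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(theta) * A + (1.0 - np.cos(theta)) * (A @ A)

# 90 degrees about z should carry the x-axis onto the y-axis.
R = rodrigues([0, 0, 1], np.pi / 2)
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))
```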

Rotations can be represented as points in space (with care): turn the vector's length into the angle and its direction into the axis. Useful for generating random rotations, understanding angular errors, characterizing angular position, etc. Problems: the representation is not unique, and it is not commutative.

Other properties of rotations: NOT commutative, $R_1 R_2 \neq R_2 R_1$.
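Non-commutativity is easy to demonstrate numerically; a small numpy sketch (the two 90-degree rotations are illustrative choices):

```python
import numpy as np

def rot_z(t):
    """Rotation by angle t about the z axis."""
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0, 0.0, 1.0]])

def rot_x(t):
    """Rotation by angle t about the x axis."""
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(t), -np.sin(t)],
                     [0.0, np.sin(t),  np.cos(t)]])

R1, R2 = rot_z(np.pi / 2), rot_x(np.pi / 2)
print(np.allclose(R1 @ R2, R2 @ R1))  # the two orderings give different rotations
```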

Rotations. To form a rotation matrix, you can plug in, as columns, the new coordinates of the basis vectors. For example, the unit x-vector goes to $x' = (x_1, x_2, x_3)^T$:

$$R = \begin{pmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{pmatrix}$$

Other Properties of Rotations. Inverse: $R^{-1} = R^T$; rows and columns are orthonormal: $r_i^T r_j = 0$ if $i \neq j$, and $r_i^T r_i = 1$. Determinant: $\det(R) = 1$. The effect of a coordinate rotation on a function: if $x' = R x$, then $F'(x) = F(R^{-1} x)$.
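These properties can be verified numerically; a small numpy sketch (the angle 0.7 is an arbitrary illustrative value):

```python
import numpy as np

theta = 0.7  # arbitrary rotation angle about the z axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])

# Orthonormal rows/columns imply R^T R = I, i.e. R^{-1} = R^T.
print(np.allclose(R.T @ R, np.eye(3)))
# A proper rotation has determinant +1.
print(np.isclose(np.linalg.det(R), 1.0))
```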

Extrinsic Parameters. $p' = R p + t$, where $R$ is a rotation matrix and $t$ a translation vector. In homogeneous coordinates, $p' = R p + t$ becomes

$$\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}$$
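A minimal numpy sketch of applying the $[R \mid t]$ matrix to a homogeneous point (the pose values are illustrative, not from the slides):

```python
import numpy as np

def extrinsic_matrix(R, t):
    """Stack a 3x3 rotation R and a translation t into the 3x4 matrix [R | t]."""
    return np.hstack([R, np.asarray(t, dtype=float).reshape(3, 1)])

R = np.eye(3)           # illustrative pose: no rotation
t = [1.0, 2.0, 3.0]     # illustrative translation
E = extrinsic_matrix(R, t)

p = np.array([0.0, 0.0, 0.0, 1.0])  # world origin in homogeneous coordinates
print(E @ p)            # -> [1. 2. 3.]
```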

Intrinsic Parameters. Differential scaling:

$$\begin{pmatrix} u\, S_u \\ v\, S_v \end{pmatrix} = \begin{pmatrix} S_u & 0 \\ 0 & S_v \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix}$$

Camera origin offset:

$$\begin{pmatrix} u + u_0 \\ v + v_0 \end{pmatrix} = \begin{pmatrix} u \\ v \end{pmatrix} + \begin{pmatrix} u_0 \\ v_0 \end{pmatrix}$$
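The two intrinsic operations, differential scaling followed by origin offset, can be sketched in numpy (all numeric values below are illustrative assumptions):

```python
import numpy as np

S = np.diag([2.0, 3.0])             # differential scaling S_u, S_v (illustrative)
offset = np.array([320.0, 240.0])   # camera origin offset u_0, v_0 (illustrative)

uv = np.array([10.0, 5.0])          # image-plane coordinates before pixelization
pixel = S @ uv + offset             # scale each axis, then shift the origin
print(pixel)                        # -> [340. 255.]
```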

2D Example: house points $p = (x, y)^T$, transformed by a rotation of 90°, a scaling by 2, and a translation.

The Whole (Linear) Transformation:

$$\begin{pmatrix} U \\ V \\ W \end{pmatrix} = \begin{pmatrix} -f s_u & 0 & u_0 \\ 0 & -f s_v & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ T \end{pmatrix}$$

that is, $(U, V, W)^T = (\text{intrinsic parameters})(\text{extrinsic parameters})(X, Y, Z, T)^T$. Final image coordinates: $u = U/W$, $v = V/W$.
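Putting the pipeline together as a numpy sketch (every parameter value below is an illustrative assumption, not from the slides):

```python
import numpy as np

# Intrinsics: focal length f, pixel scales s_u, s_v, principal point (u0, v0).
f, s_u, s_v, u0, v0 = 1.0, 500.0, 500.0, 320.0, 240.0
K = np.array([[-f * s_u, 0.0, u0],
              [0.0, -f * s_v, v0],
              [0.0, 0.0, 1.0]])

# Extrinsics [R | t]: identity rotation, scene 5 units in front of the camera.
Rt = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])

X = np.array([0.1, 0.2, 0.0, 1.0])  # homogeneous world point
U, V, W = K @ (Rt @ X)              # apply extrinsics, then intrinsics
u, v = U / W, V / W                 # final image coordinates
print(u, v)
```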

Non-linear distortions (not handled by our treatment)

Camera Calibration. You need to know something beyond what you get out of the camera:

- Known world points (object): traditional camera calibration, with linear and non-linear parameter estimation.
- Known camera motion: camera attached to a robot, executing several different motions.
- Multiple cameras.

Classical Calibration

Camera Calibration. Take a known set of points, typically on 3 orthogonal planes. Treat a point on the object as the world origin. Points $x_1, x_2, x_3, \ldots$ project to $y_1, y_2, y_3, \ldots$

Calibration Patterns

Classical Calibration. Put the known object points $x_i$ into a matrix $X = [x_1 \cdots x_n]$ and the projected points $u_i$ into a matrix $U = [u_1 \cdots u_n]$, then solve for the transformation matrix $P$. A quick derivation of the least-squares solution (note: this is only for instructional purposes; it does not work as a real procedure):

$$U = P X$$
$$U X^T = P (X X^T)$$
$$U X^T (X X^T)^{-1} = P (X X^T)(X X^T)^{-1}$$
$$U X^T (X X^T)^{-1} = P$$

Next, extract the extrinsic and intrinsic parameters from $P$.
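The instructional identity above can be exercised on synthetic, noise-free data; a numpy sketch (the ground-truth P and the world points are randomly fabricated for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
P_true = rng.normal(size=(3, 4))            # fabricated ground-truth projection
X = np.vstack([rng.normal(size=(3, 20)),    # 20 random world points ...
               np.ones((1, 20))])           # ... in homogeneous coordinates
U = P_true @ X                              # their un-normalized projections

# The slide's least-squares identity: P = U X^T (X X^T)^{-1}
P_est = U @ X.T @ np.linalg.inv(X @ X.T)
print(np.allclose(P_est, P_true))
```

On real data the observed image points come only after perspective division, and with noise, which is exactly why the slide warns that this is not a working calibration procedure.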

Real Calibration. Real calibration procedures look quite different:

- weight points by correspondence quality
- nonlinear optimization
- linearizations
- non-linear distortions
- etc.

Camera Motion

Calibration Example (Zhang, Microsoft). 8 point matches manually picked; a motion algorithm is used to calibrate the camera.

Applications Image based rendering: Light field -- Hanrahan (Stanford)

Virtualized Reality

Projector-based VR UNC Chapel Hill

Shader Lamps

Shape recovery without calibration (Fitzgibbon, Zisserman). Fully automatic procedure: converting the images to 3D models is a black-box filter: video in, VRML out.

* We don't require that the motion be regular: the angle between views can vary, and it doesn't have to be known. Recovery of the angle is automatic, with an accuracy of about 4 millidegrees standard deviation. In golfing terms, that's an even chance of a hole in one.
* We don't use any calibration targets: features on the objects themselves are used to determine where the camera is relative to the turntable. Aside from being easier, this means there is no problem with the setup changing between calibration and acquisition, and anyone can use the software without special equipment. For example, this dinosaur sequence was supplied to us by the University of Hannover without any other information. (Actually, we do have the ground-truth angles, so that we can make the accuracy claims above, but of course these are not used in the reconstruction.)