Camera Models and Image Formation Srikumar Ramalingam School of Computing University of Utah srikumar@cs.utah.edu

Reference Most slides are adapted from the following notes: Some lecture notes on geometric computer vision (available online) by Peter Sturm

3D Street Art Image courtesy: Julian Beaver (VisualFunHouse.com)

Is this a camera? [Figure: a scene and an output image in which the scene's pixels are permuted and replicated]

Perspective Imaging [Figure: viewpoint, image, detector] From the ECCV'06 Tutorial on General Imaging Design, Calibration and Applications (Sturm, Swaminathan, Ramalingam).

Many Types of Imaging Systems From the ECCV'06 Tutorial on General Imaging Design, Calibration and Applications (Sturm, Swaminathan, Ramalingam).

Pinhole model A camera maps the 3D world to a 2D image. Many such mappings or camera models exist. A pinhole model is a good approximation for many existing cameras.

Perspective projection [Figure: camera center (optical center), optical axis, image plane with pixel origin (0,0), principal point] A pinhole model can be expressed using an optical center and an image plane. We take the optical center to be the origin of the camera coordinate frame. A 3D point is projected along the line of sight connecting it to the optical center; its image point is the intersection of this line with the image plane.

Perspective projection [Figure: camera center (optical center), optical axis, image plane, principal point] We are given the 3D point X^c = (X, Y, Z, 1)^T in the camera coordinate system (the superscript c denotes the camera coordinate frame). Why do we have the trailing 1?

Why homogeneous coordinates? They allow us to express common transformations in matrix form, and they simplify concepts such as points at infinity.
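
As a small illustration of the matrix-form point (a sketch, not transcribed from the slides; t_x and t_y denote translation components introduced here): in inhomogeneous coordinates a 2D translation x' = x + t_x, y' = y + t_y is not a linear map, but after appending a 1 it becomes a single matrix-vector product, and vectors whose last coordinate is 0 represent points at infinity (pure directions).

    \begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix}
    =
    \begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix}
    \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}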

Perspective projection [Figure: optical axis, image plane, principal point] The image point can be computed using similar triangles: a 3D point (X, Y, Z) in the camera frame maps to x = f X / Z and y = f Y / Z on the image plane, where f is the focal length.

Perspective projection [Figure: camera center (optical center), optical axis, image plane, principal point] In homogeneous coordinates (obtained by appending a 1 to the end of a vector), these equations can be written as a matrix-vector product that holds up to scale, since scalar multiplication does not change the equivalence class of a homogeneous vector.
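
Written out explicitly (a standard reconstruction of the equation that the slide shows as an image, using the focal length f):

    \lambda \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
    =
    \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}
    \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix},
    \qquad \lambda = Z

Dividing by the third coordinate recovers x = f X / Z and y = f Y / Z.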

From image plane to pixel coordinates [Figure: image plane with pixel origin (0,0) and principal point] Pixels need not be square (especially in older cameras), so the conversion uses two scale factors: k_u, the density of pixels along the u direction, and k_v, the density of pixels along the v direction.

From image plane to pixel coordinates [Figure: image plane with pixel origin (0,0) and principal point] In homogeneous coordinates, the pixel coordinates of the image point are u = k_u x + u_0 and v = k_v y + v_0, where (u_0, v_0) is the principal point in pixels.

3D-to-2D projection The complete mapping is the composition of two steps: the 3D-to-image-plane projection, followed by the image-plane-to-pixel conversion.
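
A minimal numpy sketch of this two-step composition; the focal length, pixel densities, and principal point below are placeholder values chosen only for illustration, not parameters from the slides.

    import numpy as np

    # Illustrative intrinsic quantities (assumed values).
    f, k_u, k_v, u0, v0 = 0.035, 20000.0, 20000.0, 320.0, 240.0

    # Step 1: 3D point (camera frame) -> image plane, in homogeneous coordinates.
    P_proj = np.array([[f, 0, 0, 0],
                       [0, f, 0, 0],
                       [0, 0, 1, 0]])

    # Step 2: image-plane coordinates -> pixel coordinates.
    K_pix = np.array([[k_u, 0.0, u0],
                      [0.0, k_v, v0],
                      [0.0, 0.0, 1.0]])

    X_c = np.array([0.1, 0.2, 2.0, 1.0])   # homogeneous 3D point in the camera frame
    uvw = K_pix @ P_proj @ X_c              # homogeneous pixel coordinates
    u, v = uvw[:2] / uvw[2]                 # divide out the scale factor
    print(u, v)                             # pixel coordinates of the projection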

World coordinate frame We assume that the 3D point is given in the world coordinate system. We model the pose of the camera using a 3x1 translation vector t and a 3x3 rotation matrix R. Using the superscript m to denote 3D points in the world coordinate frame, the transformation to the camera frame is X^c = R X^m + t.

World coordinate frame Rewriting the above equation in homogeneous coordinates, the mapping from the world to the camera coordinate frame becomes a single 4x4 matrix.
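
In block form (a standard reconstruction, assuming the convention X^c = R X^m + t from the previous slide):

    \begin{pmatrix} \mathbf{X}^{c} \\ 1 \end{pmatrix}
    =
    \begin{pmatrix} R & \mathbf{t} \\ \mathbf{0}^{\top} & 1 \end{pmatrix}
    \begin{pmatrix} \mathbf{X}^{m} \\ 1 \end{pmatrix}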

Cross-checking We want to ensure that the optical center is the origin of the camera coordinate system. In the world coordinate system, the optical center is given by C^m = -R^T t.
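
The check itself is one line under the same convention X^c = R X^m + t:

    R\,\mathbf{C}^{m} + \mathbf{t} \;=\; R\,(-R^{\top}\mathbf{t}) + \mathbf{t} \;=\; -\mathbf{t} + \mathbf{t} \;=\; \mathbf{0}

so the optical center is indeed mapped to the origin of the camera coordinate frame.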

Rotation matrices Rotations are orthonormal matrices: their columns are mutually orthogonal 3-vectors of unit norm. A rotation matrix has determinant +1, whereas a reflection matrix has determinant -1. The inverse of a rotation matrix is its transpose: R^{-1} = R^T.
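
A small numpy sketch that verifies these properties on an example rotation (the angle and axis are arbitrary, chosen only for illustration):

    import numpy as np

    theta = 0.3                                 # example angle in radians
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])   # rotation about the z-axis

    print(np.allclose(R.T @ R, np.eye(3)))      # columns are orthonormal
    print(np.isclose(np.linalg.det(R), 1.0))    # determinant is +1
    print(np.allclose(np.linalg.inv(R), R.T))   # inverse equals transpose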

Rotation matrices Each rotation can be decomposed into three base rotations about the coordinate axes; the three Euler angles are associated with the x, y, and z axes respectively.
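
A sketch of such a decomposition in numpy; the composition order R = Rz(gamma) Ry(beta) Rx(alpha) below is one common convention and may differ from the one on the slide.

    import numpy as np

    def rot_x(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_y(b):
        c, s = np.cos(b), np.sin(b)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def rot_z(g):
        c, s = np.cos(g), np.sin(g)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    alpha, beta, gamma = 0.1, 0.2, 0.3           # example Euler angles (radians)
    R = rot_z(gamma) @ rot_y(beta) @ rot_x(alpha)
    print(np.isclose(np.linalg.det(R), 1.0))     # the composition is still a rotation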

Complete Model The following 3x3 matrix is the camera calibration matrix, combining the focal length, the pixel densities, and the principal point: K = [ f k_u, 0, u_0 ; 0, f k_v, v_0 ; 0, 0, 1 ].

Projection Matrix The 3x4 matrix P = K [R | t] is called the projection matrix. It maps 3D points to 2D image points, both expressed in homogeneous coordinates.
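
Putting the pieces together, a minimal end-to-end sketch under the conventions above (all numeric values are placeholders, not calibration results):

    import numpy as np

    # Assumed calibration matrix (placeholder intrinsics).
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    # Assumed pose: a rotation about the z-axis and a translation.
    theta = 0.1
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    t = np.array([0.2, -0.1, 3.0])

    P = K @ np.hstack([R, t.reshape(3, 1)])      # 3x4 projection matrix P = K [R | t]

    X_world = np.array([0.5, 0.3, 1.0, 1.0])     # homogeneous 3D point in the world frame
    uvw = P @ X_world
    u, v = uvw[:2] / uvw[2]                      # perspective division
    print(u, v)                                  # projected pixel coordinates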

Camera Parameters Extrinsic parameters: the rotation matrix R and the translation vector t. Intrinsic parameters: those that describe what happens inside the camera, collected in the calibration matrix K.

Calibration matrix Older cameras sometimes required a skew parameter, which is unnecessary for modern cameras. Without skew we have 4 intrinsic parameters, defined by 5 physical coefficients (f, k_u, k_v, u_0, v_0).

Reparameterization The intrinsics are reparameterized as the focal lengths in pixels, alpha_u = f k_u and alpha_v = f k_v, and the principal point in pixels, (u_0, v_0).
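
With this reparameterization the calibration matrix takes its usual form (matching the matrix given on the Complete Model slide):

    K = \begin{pmatrix} \alpha_{u} & 0 & u_{0} \\ 0 & \alpha_{v} & v_{0} \\ 0 & 0 & 1 \end{pmatrix},
    \qquad \alpha_{u} = f\,k_{u}, \quad \alpha_{v} = f\,k_{v}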

Calibration Parameters The focal length typically ranges from a few millimeters up to several hundred millimeters. The photosensitive area of a camera is typically a rectangle a few millimeters on a side, and the pixel density is usually on the order of one hundred to several hundred pixels per mm. For cameras with well-mounted optics, the principal point is usually very close to the center of the photosensitive area.

What is Camera Calibration? The task refers to the problem of computing the calibration matrix. We compute the focal length, principal point, and aspect ratio.

Why do you need calibration? For a given pixel, the camera calibration allows you to know the light ray along which the camera samples the world.

Calibration Toolbox MATLAB: https://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html OpenCV: http://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html

Visual SFM http://ccwu.me/vsfm/ https://www.youtube.com/watch?v=sha_lbizdac

Forward and backward projection Forward projection: the projection of a 3D point onto the image, obtained by applying the projection matrix P to the homogeneous 3D point (up to scale).

Forward and backward projection Backward projection: given a pixel in the image, we determine the set of points in space that map to this pixel, namely the ray through the optical center along the pixel's viewing direction.
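
A sketch of backprojection under the conventions used above: the set of points is the ray through the optical center C = -R^T t with world-frame direction R^T K^{-1} u. The function name and numeric values below are illustrative only.

    import numpy as np

    def backproject(u, v, K, R, t):
        """Camera center and world-frame ray direction for pixel (u, v),
        assuming the convention X^c = R X^m + t used earlier."""
        C = -R.T @ t                                        # optical center in world frame
        d = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing direction in world frame
        return C, d / np.linalg.norm(d)

    # Example usage with placeholder parameters.
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    R, t = np.eye(3), np.array([0.0, 0.0, 3.0])
    C, d = backproject(320.0, 240.0, K, R, t)
    print(C, d)                    # the 3D points mapping to the pixel are C + lambda * d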

Pose estimation Given an image of an object whose structure is perfectly known, is it possible to find the position and orientation of the camera?

Pose estimation [Figure: camera center and orientation] We compute the pose (camera center and orientation) using constraints from the known 3D points.

Where do you find applications for pose estimation? Image courtesy: Oculus Rift, Google Tango, Microsoft HoloLens.

Where do you find applications for pose estimation? Magic Leap

Pose estimation [Figure: camera center] What other constraints can you use to find the pose?

Pose estimation [Figure: camera center] We are given 3 points on the image and their corresponding 3D points. We compute the 2D distances between pairs of the 2D points on the image.
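
For context, one standard way to turn three 2D-3D correspondences into a pose is the classical three-point (P3P) formulation; the transcription stops here, so the equations below are background rather than part of the slides. Each calibrated image point gives a viewing ray with direction K^{-1} u_i; the angle theta_ij between rays i and j, together with the known 3D distance d_ij between the corresponding object points, constrains the unknown depths s_i along the rays by the law of cosines:

    d_{ij}^{2} = s_{i}^{2} + s_{j}^{2} - 2\,s_{i}\,s_{j}\cos\theta_{ij},
    \qquad
    \cos\theta_{ij} = \frac{(K^{-1}\mathbf{u}_{i})^{\top}(K^{-1}\mathbf{u}_{j})}
                           {\lVert K^{-1}\mathbf{u}_{i}\rVert\,\lVert K^{-1}\mathbf{u}_{j}\rVert}

Solving the three such equations for s_1, s_2, s_3 places the points in the camera frame, from which the rotation and translation follow.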

Thank You!