
Multiple View Geometry in Computer Vision, Chapter 8: More Single View Geometry. Olaf Booij, Intelligent Systems Lab Amsterdam, University of Amsterdam, The Netherlands. HZClub, 29-02-2008

Overview clubje Part 0, The Background. Part 1, Single View Geometry: Chap 6, defined the camera matrix P and the internal/external camera parameters; Chap 7, estimated P using correspondences X_i ↔ x_i; Chap 8, estimates P using x_i and various pieces of information about the X_i. Part 2, Two View Geometry: Chap 9, defines the fundamental matrix F using P and P'; Chap 11, estimates F using correspondences x_i ↔ x'_i.

Overview Chap 8 If the internal parameters K are (partially) known, Euclidean properties of the scene can be measured in the image. K can be computed from the image of the absolute conic ω. ω can be estimated from lines/points in the image with known geometric properties in the scene (geometric properties such as coplanarity/orthogonality). Finally... some clear applications (see Fig. 8.21).

Basics The mapping from points on a scene plane π to the image plane is a homography: x = H x_π. [Figure: camera center C, scene plane π with point x_π, image point x.] Extra: P has 11 dof, whereas H has only 8 dof, thus P cannot be computed from planar points only.
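Not on the slide, just a tiny numpy sketch to make x = H x_π concrete; the homography H and the plane point below are made-up example values.

```python
import numpy as np

# Map a point on a world plane to the image via a homography (made-up H).
H = np.array([[800.0,   5.0, 320.0],
              [  0.0, 805.0, 240.0],
              [  0.0,   0.0,   1.0]])

x_pi = np.array([0.1, 0.2, 1.0])   # homogeneous point on the scene plane
x = H @ x_pi                       # homogeneous image point, x = H x_pi
x = x / x[2]                       # overall scale is irrelevant; normalize
print(x[:2])
```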

Basics (Points on) a line L in the scene map to (points on) a line l in the image. (Points on) a line l in the image back-project to (points on) a plane π in the scene: π = P^T l. [Figure: scene line L, image line l, back-projected plane π through the camera center C.]
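Again not from the slide, a toy check of π = P^T l, with P = [I | 0] and an arbitrary image line:

```python
import numpy as np

# The plane back-projected from an image line l is pi = P^T l (toy values).
P = np.hstack([np.eye(3), np.zeros((3, 1))])   # P = [I | 0]
l = np.array([1.0, -1.0, 0.0])                 # image line x = y
pi = P.T @ l                                   # scene plane, as a 4-vector

X = np.array([2.0, 2.0, 5.0, 1.0])             # scene point that projects onto l
x = P @ X
print(l @ x, pi @ X)                           # both 0: x on l  <=>  X on pi
```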

Basics Plücker lines anyone?

Basics A conic C in the image back-projects to a cone Q_cone in the scene, with: Q_cone = P^T C P. Proof: x^T C x = 0 together with x = PX gives X^T P^T C P X = 0, so Q_cone = P^T C P.
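A quick numeric sanity check of Q_cone = P^T C P, with toy values (P = [I | 0], C the unit circle):

```python
import numpy as np

P = np.hstack([np.eye(3), np.zeros((3, 1))])   # camera at the origin, K = I
C = np.diag([1.0, 1.0, -1.0])                  # image conic x^2 + y^2 = 1
Q_cone = P.T @ C @ P                           # 4x4, rank 3: a cone with vertex at the camera center

X = np.array([1.0, 0.0, 1.0, 1.0])             # scene point whose image lies on the unit circle
x = P @ X
print(x @ C @ x, X @ Q_cone @ X)               # both 0: x on C  <=>  X on the cone
```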

Smooth surfaces, general The contour generator Γ of a smooth surface results in the profile (apparent contour) γ in the image plane. Γ is defined by the smooth surface and the camera center C. Lines tangent to γ back-project to planes tangent to Γ. [Figure: surface with contour generator Γ and normals n, apparent contour γ, camera center C.]

Smooth surfaces, quadrics A general quadric in the scene maps to a conic in the image. Spheres: the contour generator of a sphere is a circle, which images to a conic. General quadrics are 3-space projective transformations of spheres; intersection and tangency are preserved, thus the contour generator remains a plane conic. More on this, anyone?
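To see the quadric-to-conic claim numerically one can use the dual-quadric outline formula C* = P Q* P^T (a standard result from the same chapter, not shown on this slide). A toy sketch with a sphere on the optical axis:

```python
import numpy as np

# Sphere of radius r centred at depth d on the optical axis, seen by P = [I | 0];
# its outline should be the circle x^2 + y^2 = r^2 / (d^2 - r^2) in normalized coordinates.
r, d = 1.0, 5.0
Q = np.array([[1, 0, 0,  0],
              [0, 1, 0,  0],
              [0, 0, 1, -d],
              [0, 0, -d, d**2 - r**2]], float)     # X^2 + Y^2 + (Z - d)^2 = r^2
P = np.hstack([np.eye(3), np.zeros((3, 1))])

C_star = P @ np.linalg.inv(Q) @ P.T                # dual outline conic, C* = P Q* P^T (up to scale)
C = np.linalg.inv(C_star)                          # point conic, up to scale
C = C / C[0, 0]
print(np.sqrt(-C[2, 2]), r / np.sqrt(d**2 - r**2)) # imaged radius, computed both ways
```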

Two cameras with equal centers Images taken by cameras with the same center are related by a homography: x' = P'X = (K'R')(KR)^{-1} PX = (K'R')(KR)^{-1} x = Hx... we already knew this: [Figure: camera center C, scene point X on plane π, image points x and x'.]
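A small numpy sketch of x' = (K'R')(KR)^{-1} x; the calibrations and rotations below are made-up example values:

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

K  = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
K2 = np.array([[900, 0, 320], [0, 900, 240], [0, 0, 1]], float)   # e.g. after zooming
R, R2 = np.eye(3), rot_z(0.1)

H = (K2 @ R2) @ np.linalg.inv(K @ R)               # homography between the two views

# Check on an arbitrary scene point (both cameras centred at the origin):
X = np.array([0.3, -0.2, 2.0, 1.0])
P  = K  @ R  @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K2 @ R2 @ np.hstack([np.eye(3), np.zeros((3, 1))])
x, x2 = P @ X, P2 @ X
print(np.cross(x2, H @ x))                         # ~0: x' = Hx up to scale
```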

Two cameras with equal centers Zooming. K again: K = [f m_x, s, x_0; 0, f m_y, y_0; 0, 0, 1], or for square pixels and zero skew K = [f m I, x̃_0; 0^T, 1]. Given K and K' with f'/f = k: K' K^{-1} = [k I, (1-k) x̃_0; 0^T, 1], i.e. K' = K [k I, 0; 0^T, 1]... Uh, yes, figures. Does this hold for s ≠ 0?
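The question at the end can be checked numerically: if zooming is modelled as K' = K diag(k, k, 1), the identity K' K^{-1} = [kI, (1-k)x̃_0; 0^T, 1] holds even with non-zero skew, since the whole upper-left 2x2 block is scaled by k. A sketch with made-up values:

```python
import numpy as np

f, mx, my, s, x0, y0, k = 800.0, 1.0, 1.1, 2.0, 320.0, 240.0, 1.5
K = np.array([[f * mx, s, x0],
              [0.0, f * my, y0],
              [0.0, 0.0, 1.0]])
K2 = K @ np.diag([k, k, 1.0])          # zooming by a factor k

lhs = K2 @ np.linalg.inv(K)
rhs = np.array([[k, 0, (1 - k) * x0],
                [0, k, (1 - k) * y0],
                [0, 0, 1.0]])
print(np.allclose(lhs, rhs))           # True, also with s != 0 under this zoom model
```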

Two cameras with equal centers Rotating. x = K[I | 0]X = K X̃, so X̃ = K^{-1} x (a point on the ray); x' = K[R | 0]X = K R X̃ = K R K^{-1} x. H = K R K^{-1} is an example of an infinite homography (?)

Two cameras with equal centers Two example applications. Synthetic views:

Two cameras with equal centers Mosaicing:
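Roughly what the mosaicing demo amounts to, as a hedged OpenCV sketch (assumes OpenCV is available; the file names, the 640x480 image size and the 10-degree rotation are made up):

```python
import cv2
import numpy as np

# A view taken after rotating the camera about its center is related to the reference
# view by H = K R K^{-1}, so it can be warped into the reference frame and pasted
# into a common mosaic canvas.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
a = np.deg2rad(10.0)
R = np.array([[np.cos(a), 0.0, np.sin(a)], [0.0, 1.0, 0.0], [-np.sin(a), 0.0, np.cos(a)]])
H = K @ R @ np.linalg.inv(K)                 # maps reference-view pixels to rotated-view pixels

img_rot = cv2.imread("rotated_view.png")     # hypothetical 640x480 second image
img_ref = cv2.imread("reference_view.png")   # hypothetical 640x480 reference image

# warpPerspective expects the matrix that maps source pixels to output pixels,
# so the rotated view is warped with H^{-1} into the (wider) reference canvas.
mosaic = cv2.warpPerspective(img_rot, np.linalg.inv(H), (960, 480))
mosaic[0:480, 0:640] = img_ref               # paste the reference view on top
cv2.imwrite("mosaic.png", mosaic)
```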

Two cameras without equal centers Motion parallax.

K and ω Calibration (i.e. determining K) relates an image point x to the direction d of its ray: d = K^{-1} x. If K is known (i.e. the camera is calibrated) angles between rays can be computed: cos θ = d_1^T d_2 / ( sqrt(d_1^T d_1) sqrt(d_2^T d_2) ) = x_1^T (K^{-T} K^{-1}) x_2 / ( sqrt(x_1^T (K^{-T} K^{-1}) x_1) sqrt(x_2^T (K^{-T} K^{-1}) x_2) ). [Figure: camera center C, image points x_1, x_2, ray directions d_1, d_2, angle θ.]
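A small sketch of the angle formula (K and the two image points are made-up values); the second computation uses K^{-T} K^{-1}, which the next slides call ω:

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
x1 = np.array([400.0, 240.0, 1.0])
x2 = np.array([320.0, 300.0, 1.0])

d1, d2 = np.linalg.solve(K, x1), np.linalg.solve(K, x2)   # ray directions d = K^{-1} x
cos_theta = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))

M = np.linalg.inv(K @ K.T)                                # K^{-T} K^{-1}
cos_theta2 = (x1 @ M @ x2) / np.sqrt((x1 @ M @ x1) * (x2 @ M @ x2))
print(cos_theta, cos_theta2)                              # the same value
```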

K and ω Now finally something important: the image of the absolute conic ω is related to the calibration K. Points on π_∞, say X = (d^T, 0)^T, map to KRd: x = PX = KR[I | t](d^T, 0)^T = KRd. Notice: points on π_∞ ↔ directions. Notice-2: π_∞ is really a plane and thus there is an H = KR that maps it to the image plane. Notice-3: H does not depend on t (don't care) (think of a starry night).

K and ω The absolute conic is Ω_∞ = I on π_∞. Mapping Ω_∞ to the image plane using H = KR gives us ω: ω = H^{-T} I H^{-1} = (KR)^{-T} I (KR)^{-1} = K^{-T} R R^{-1} K^{-1} = (K K^T)^{-1}. Using the Cholesky decomposition, K can be recovered from ω. Angles between rays can now be expressed in ω: cos θ = x_1^T ω x_2 / ( sqrt(x_1^T ω x_1) sqrt(x_2^T ω x_2) ). So, knowing ω results in a calibrated camera!!
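The Cholesky remark as a minimal numpy sketch, with a made-up K (here ω is exact, so the final scale fix is a no-op):

```python
import numpy as np

K_true = np.array([[800.0, 2.0, 320.0],
                   [  0.0, 790.0, 240.0],
                   [  0.0,   0.0,   1.0]])
omega = np.linalg.inv(K_true @ K_true.T)   # omega = (K K^T)^{-1}

L = np.linalg.cholesky(omega)              # omega = L L^T, L lower triangular
K = np.linalg.inv(L).T                     # K K^T = omega^{-1}, with K upper triangular
K = K / K[2, 2]                            # omega is only known up to scale; fix K_33 = 1
print(np.allclose(K, K_true))              # True
```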

Estimating ω Let's define linear constraints on ω so we can estimate it. If x_1 and x_2 are the images of perpendicular rays then: x_1^T ω x_2 = 0. Through the pole-polar relationship we get, for an image point x (the vanishing point of the direction perpendicular to a scene plane) and the image line l (the vanishing line of that plane): l = ωx, i.e. [l]_× ωx = 0. Compute the imaged circular points of a mapped scene plane using an estimated H: H(1, ±i, 0)^T. These lie on ω: h_1^T ω h_2 = 0 and h_1^T ω h_1 = h_2^T ω h_2.

Estimating ω We know how to estimate homographies. Three homographies provide 6 linear constraints (two each, from their imaged circular points). Solve by stacking the constraints and applying the DLT algorithm.

Estimating ω So how do we determine perpendicular points/lines in images? Via vanishing points and vanishing lines. [Figure: equally spaced collinear scene points X_1...X_4, their images converging to the vanishing point v, camera center C.]

Estimating ω Especially in man-made environments there are a lot of parallel and perpendicular lines. Using the fact that we *know* that certain lines are parallel/perpendicular in the scene helps us find vanishing points and thus constraints on ω.

Estimating ω Vanishing points are just like regular image points: cos θ = v_1^T ω v_2 / ( sqrt(v_1^T ω v_1) sqrt(v_2^T ω v_2) ). So vanishing points related to perpendicular lines in the scene define a linear constraint on ω: v_1^T ω v_2 = 0.

Estimating ω Vanishing lines correspond to scene planes: the vanishing line of a plane is the image of its intersection with π_∞. It can be found by joining vanishing points of directions lying in the plane, or by using equally spaced parallel scene lines.

Estimating ω ω can also be constrained by assuming certain internal camera parameters: Zero skew results in ω_12 = ω_21 = 0. This can also be seen as: the x and y image axes are orthogonal. Zero skew and square pixels result in ω_11 = ω_22. This can also be seen as: the diagonal lines x = y and x = -y are orthogonal.
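Pulling the last few slides together, a sketch of the stacking-and-DLT step: synthetic vanishing points of three mutually orthogonal scene directions plus the zero-skew and square-pixel constraints (all values are made up, and the rotation is chosen so the configuration is not degenerate):

```python
import numpy as np

def rx(a): c, s = np.cos(a), np.sin(a); return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
def ry(a): c, s = np.cos(a), np.sin(a); return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
def rz(a): c, s = np.cos(a), np.sin(a); return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def row(x, y):
    # coefficients of x^T omega y in w = (w11, w12, w13, w22, w23, w33)
    return np.array([x[0]*y[0], x[0]*y[1] + x[1]*y[0], x[0]*y[2] + x[2]*y[0],
                     x[1]*y[1], x[1]*y[2] + x[2]*y[1], x[2]*y[2]])

K_true = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = rz(0.2) @ ry(0.4) @ rx(0.6)                      # generic rotation
v = [K_true @ R @ e for e in np.eye(3)]              # vanishing points of the 3 axes

A = [row(v[0], v[1]), row(v[0], v[2]), row(v[1], v[2])]   # orthogonal directions
A.append([0, 1, 0, 0, 0, 0])                              # zero skew:     w12 = 0
A.append([1, 0, 0, -1, 0, 0])                             # square pixels: w11 = w22
w = np.linalg.svd(np.array(A, float))[2][-1]              # null vector via SVD (DLT)

omega = np.array([[w[0], w[1], w[2]], [w[1], w[3], w[4]], [w[2], w[4], w[5]]])
omega_true = np.linalg.inv(K_true @ K_true.T)
print(np.allclose(omega / omega[2, 2], omega_true / omega_true[2, 2]))   # True
```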

Measuring height The computation of K does not have to be explicit for measuring Euclidean entities, see Fig. 8.20 (p. 221). The ratio of the lengths of parallel vertical scene lines can be determined using the vanishing line of the ground plane. In Fig. 8.21 (p. 223) this is used to compute the heights of the two people in the image.
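Not the book's exact construction, but the same principle as a minimal sketch: points on a vertical scene line are related to their image line by a 1D homography, which is fixed by the base point, the vertical vanishing point and one known reference height (all numbers made up):

```python
import numpy as np

# Ground-truth 1D homography lambda = a*Z / (c*Z + 1), used only to simulate image measurements
# (lambda = signed distance along the image line, measured from the base point).
a_true, c_true = 50.0, 0.3
proj = lambda Z: a_true * Z / (c_true * Z + 1.0)

Z_ref, Z_unknown = 1.80, 1.55                    # known reference height, height to recover
lam_ref, lam_t = proj(Z_ref), proj(Z_unknown)    # observed image distances
lam_v = a_true / c_true                          # image of the vertical vanishing point (Z -> infinity)

# Recover the 1D homography from Z = 0, Z_ref, infinity, then invert it for lam_t.
a = lam_ref / (Z_ref * (1.0 - lam_ref / lam_v))
Z_est = lam_t / (a * (1.0 - lam_t / lam_v))
print(Z_est)                                     # ~1.55
```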

Almost done Calibrating conic, anyone? ...Coffee?