Robotics - Single view, Epipolar geometry, Image Features. Simone Ceriani


Robotics - Single view, Epipolar geometry, Image Features Simone Ceriani ceriani@elet.polimi.it Dipartimento di Elettronica e Informazione Politecnico di Milano 12 April 2012

2/67 Outline 1 Pin Hole Model 2 Distortion 3 Camera Calibration 4 Two views geometry 5 Image features 6 Edge, corners 7 Exercise

3/67 Outline 1 Pin Hole Model 2 Distortion 3 Camera Calibration 4 Two views geometry 5 Image features 6 Edge, corners 7 Exercise

4/67 Pin hole model - Recall

The intrinsic camera matrix or calibration matrix:

$K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$

$f_x$, $f_y$: focal length (in pixels)
$f_x/f_y = s_x/s_y = a$: aspect ratio
$s$: skew factor (non-orthogonal pixels); usually 0 in modern cameras
$c_x$, $c_y$: principal point (in pixels); usually half the image size, up to misalignment of the CCD

Projection:
$\begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} K & 0 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ W \end{bmatrix}$, i.e., after dehomogenization, $u = f_x \frac{X}{Z} + c_x$, $v = f_y \frac{Y}{Z} + c_y$
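As a quick numerical check, the projection above can be sketched in a few lines of numpy (the intrinsic values here are illustrative, not from any particular camera):

```python
import numpy as np

# Illustrative intrinsics: fx = fy = 320 px, principal point (160, 120), zero skew.
fx, fy, s, cx, cy = 320.0, 320.0, 0.0, 160.0, 120.0
K = np.array([[fx, s,  cx],
              [0., fy, cy],
              [0., 0., 1.]])

def project(K, P):
    """Pinhole projection of a point P = [X, Y, Z] in camera coordinates."""
    p = K @ P              # [K | 0] applied to [X, Y, Z, W]^T reduces to K [X, Y, Z]^T
    return p[:2] / p[2]    # dehomogenize: u = fx X/Z + cx, v = fy Y/Z + cy

P = np.array([1.0, 0.5, 2.0])
u, v = project(K, P)       # u = 320*1/2 + 160 = 320, v = 320*0.5/2 + 120 = 200
```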

5/67 Points in the world

Consider $p^{(I)} = \begin{bmatrix} K & 0 \end{bmatrix} P^{(O)}$, with $P^{(O)} = T^{(O)}_{OW} P^{(W)}$, where $T^{(O)}_{OW}$ is the extrinsic camera matrix.

5/67 Points in the world

Consider $p^{(I)} = \begin{bmatrix} K & 0 \end{bmatrix} P^{(O)}$, with $P^{(O)} = T^{(O)}_{OW} P^{(W)}$ (extrinsic camera matrix).

One step:
$p^{(I)} = \begin{bmatrix} K & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} P^{(W)}$, so $\pi = \begin{bmatrix} KR & Kt \end{bmatrix}$ is the complete projection matrix.

Note: $R$ is $R^{(O)}_{OW}$ and $t$ is $t^{(O)}_{OW}$, i.e., the position and orientation of $W$ in $O$.

6/67 Note on camera reference system

Camera reference system: z front, y down. World reference system: x front, z up.

Rotation of O w.r.t. W: rotate around y by 90° (z to the front), then around z by -90° (y pointing down):
$R^{(W)}_{WO} = R_y(90°)\, R_z(-90°) = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 \\ -1 & 0 & 0 \\ 0 & -1 & 0 \end{bmatrix}$

$R^{(O)}_{OW} = \left(R^{(W)}_{WO}\right)^T = \begin{bmatrix} 0 & -1 & 0 \\ 0 & 0 & -1 \\ 1 & 0 & 0 \end{bmatrix}$

7/67 Interpretation line

$p^{(I)} = \begin{bmatrix} K & 0 \end{bmatrix} P^{(O)} = \begin{bmatrix} K & 0 \end{bmatrix} [X, Y, Z, W]^T = K [X, Y, Z]^T$: $W$ is cancelled by the zero fourth column. Note that 3D lines are not directly encoded in projective 3-space (remember the duality: points and planes).

Given $p^{(I)} = [u, v]^T$ (a point in the image, in pixels), can we calculate $P^{(O)}$? No: only the interpretation line $l_{P_O}$, the locus of 3D points $P^{(O)}_i$ whose image is $p^{(I)}$:
$d^{(O)} = K^{-1} [u, v, 1]^T$, $\bar d^{(O)} = d^{(O)} / \|d^{(O)}\|$ (unit vector)
$P^{(O)}_i = [0, 0, 0]^T + \lambda \bar d^{(O)}$, $\lambda > 0$: $l_{P_O}$ in parametric form

8/67 Interpretation line & Normalized image plane

Interpretation line direction (assume skew = 0):
$d^{(O)} = K^{-1} [u, v, 1]^T$, with $K^{-1} = \begin{bmatrix} 1/f_x & 0 & -c_x/f_x \\ 0 & 1/f_y & -c_y/f_y \\ 0 & 0 & 1 \end{bmatrix}$
$d^{(O)} = \left[ \frac{u - c_x}{f_x}, \frac{v - c_y}{f_y}, 1 \right]^T$ lies on $\pi_N$. If $u = c_x$, $v = c_y$, $d$ is the principal direction.

Normalized image plane $\pi_N$: at distance 1 from the optical center, independent of the camera intrinsics. Given a cartesian point $P^{(O)} = [X, Y, Z]^T$, its projection on $\pi_N$ is $[X/Z, Y/Z, 1]^T$.
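The back-projection above can be checked directly: $K^{-1}[u, v, 1]^T$ gives the interpretation-line direction as a point on the normalized image plane (intrinsics below are illustrative):

```python
import numpy as np

fx, fy, cx, cy = 320.0, 320.0, 160.0, 120.0   # illustrative values, zero skew
K = np.array([[fx, 0., cx],
              [0., fy, cy],
              [0., 0., 1.]])

u, v = 320.0, 200.0
d = np.linalg.solve(K, np.array([u, v, 1.0]))  # d = K^{-1} [u, v, 1]^T
# closed form with zero skew: [(u - cx)/fx, (v - cy)/fy, 1], a point on pi_N
d_unit = d / np.linalg.norm(d)                 # unit direction of the interpretation line

# the principal point back-projects to the principal direction [0, 0, 1]
d0 = np.linalg.solve(K, np.array([cx, cy, 1.0]))
```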

9/67 Interpretation line in the world - 1

Interpretation line in camera coordinates: $P^{(O)}_i = [0, 0, 0, 1]^T + \lambda \begin{bmatrix} d^{(O)} \\ 0 \end{bmatrix}$.

Interpretation line in world coordinates:
$P^{(W)}_i = \begin{bmatrix} R^{(O)T}_{OW} & -R^{(O)T}_{OW} t^{(O)}_{OW} \\ 0 & 1 \end{bmatrix} \left( \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} + \lambda \begin{bmatrix} d^{(O)} \\ 0 \end{bmatrix} \right) = O^{(W)} + \lambda d^{(W)}$

i.e., the camera center in world coordinates plus the direction rotated into the world reference.

10/67 Interpretation line in the world - 2

Interpretation line in world coordinates:
$d^{(W)} = R^{(O)T}_{OW} d^{(O)} = R^{(O)T}_{OW} K^{-1} [u, v, 1]^T$, $O^{(W)} = -R^{(O)T}_{OW} t^{(O)}_{OW}$

Complete projection matrix: $\pi = \begin{bmatrix} K R^{(O)}_{OW} & K t^{(O)}_{OW} \end{bmatrix} = \begin{bmatrix} M & m \end{bmatrix}$

$M^{-1} = R^{(O)T}_{OW} K^{-1}$: direction in world coordinates given pixel coordinates.
$-M^{-1} m = -R^{(O)T}_{OW} K^{-1} K t^{(O)}_{OW} = -R^{(O)T}_{OW} t^{(O)}_{OW} = t^{(W)}_{WO} = O^{(W)}$: the camera center in world coordinates.
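The identity $O^{(W)} = -M^{-1} m$ can be verified on a synthetic camera ($K$, $R$, $t$ below are arbitrary illustrative values):

```python
import numpy as np

K = np.array([[320., 0., 160.],
              [0., 320., 120.],
              [0., 0., 1.]])
R = np.eye(3)                       # R_OW: world axes aligned with camera axes
t = np.array([1., 2., 3.])          # t_OW

M = K @ R                           # pi = [M | m] = [KR | Kt]
m = K @ t
O_w = -np.linalg.inv(M) @ m         # camera centre in world coordinates
# must equal t_WO = -R^T t
```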

11/67 Principal ray

Interpretation line of the principal point: $d^{(O)} = K^{-1} [c_x, c_y, 1]^T = [0, 0, 1]^T$, the z-axis of the camera reference system.

$d^{(W)} = R^{(O)T}_{OW} [0, 0, 1]^T = M^{-1} [c_x, c_y, 1]^T$: the z-axis of the camera in the world reference system.

$P^{(W)}_i = \lambda M^{-1} [c_x, c_y, 1]^T - M^{-1} m$: the parametric line of the camera z-axis in the world reference system.

12/67 Vanishing points & Origin

Vanishing points: $V_x^{(W)} = [1, 0, 0, 0]^T$, $V_y^{(W)} = [0, 1, 0, 0]^T$, $V_z^{(W)} = [0, 0, 1, 0]^T$.

Projection on the image: $p^{(I)} = \begin{bmatrix} M & m \end{bmatrix} \begin{bmatrix} d \\ 0 \end{bmatrix} = M d$, so
$p_x^{(I)} = \begin{bmatrix} M & m \end{bmatrix} V_x^{(W)} = M^{(1)}$, $p_y^{(I)} = \begin{bmatrix} M & m \end{bmatrix} V_y^{(W)} = M^{(2)}$, $p_z^{(I)} = \begin{bmatrix} M & m \end{bmatrix} V_z^{(W)} = M^{(3)}$

Origin: $O^{(W)} = [0, 0, 0, 1]^T$; its projection on the image is $p_0^{(I)} = \begin{bmatrix} M & m \end{bmatrix} [0, 0, 0, 1]^T = m$.

Note: columns 1, 2, 3 of $\pi$ are the images of the x, y, z vanishing points; column 4 of $\pi$ is the image of $O^{(W)}$.

13/67 Angle of View

Given the image size $[w, h]$ and the focal length $f_x$ (assume $f_x = f_y$), the angle of view is
$\theta = 2\,\mathrm{atan2}(w/2, f_x)$, with $\theta < 180°$.

Examples: 14mm, 20mm, 28mm, 35mm, 50mm lenses.
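The formula in code (pure Python; the image width and focal length are illustrative):

```python
import math

def angle_of_view(w, fx):
    """Horizontal angle of view theta = 2 atan2(w/2, fx), in degrees."""
    return math.degrees(2.0 * math.atan2(w / 2.0, fx))

# a 320-pixel-wide image with fx = 320 px: theta = 2 atan(0.5), about 53 degrees
theta = angle_of_view(320, 320)
```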

14/67 From past exam

Ex. 4 - 20 November 2006, Problem 1. Given
$\pi = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 3 & 1 & 0 & 4 \\ 1 & 2 & 3 & 1 \end{bmatrix}$
where is the camera center in the world reference frame?

Solution: $\pi = \begin{bmatrix} M & m \end{bmatrix}$, $O^{(W)} = -M^{-1} m$.
$M^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ -3 & 1 & 0 \\ 5/3 & -2/3 & 1/3 \end{bmatrix}$, $O^{(W)} = \begin{bmatrix} -1 \\ -1 \\ 2/3 \end{bmatrix}$
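A quick numerical check of the exercise, using $O^{(W)} = -M^{-1} m$ on the given $\pi$:

```python
import numpy as np

pi = np.array([[1., 0., 0., 1.],
               [3., 1., 0., 4.],
               [1., 2., 3., 1.]])
M, m = pi[:, :3], pi[:, 3]
O_w = -np.linalg.inv(M) @ m        # camera centre in the world frame
```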

15/67 Outline 1 Pin Hole Model 2 Distortion 3 Camera Calibration 4 Two views geometry 5 Image features 6 Edge, corners 7 Exercise

16/67 Distortion

Distortion: deviation from rectilinear projection; straight lines in the scene do not remain straight lines in the image. (Figures: original image, corrected image.)

17/67 Distortion

Distortion can be irregular; the most common is radial (radially symmetric).

Radial distortion: barrel distortion (magnification decreases with distance from the optical axis); pincushion distortion (magnification increases with distance from the optical axis).

18/67 Brown distortion model

Consider $P^{(O)} = [X, Y, Z]^T$ in the camera reference system, and its projection $p_N = [x, y, 1]^T = [X/Z, Y/Z, 1]^T$ on the normalized image plane.

Distortion model:
$p_d = \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right) p_N + d_x$
$r^2 = x^2 + y^2$: squared distance from the optical axis $(0, 0)$
$d_x = \begin{bmatrix} 2 p_1 x y + p_2 (r^2 + 2x^2) \\ p_1 (r^2 + 2y^2) + 2 p_2 x y \end{bmatrix}$: tangential distortion compensation

Image coordinates: $p^{(I)} = K p_d$, the pixel coordinates of $P^{(O)}$ considering distortion.

19/67 Brown distortion model

From image points $p^{(I)}$ (pixels): calculate $p_d = K^{-1} p^{(I)} = [x, y, 1]^T$ on the (distorted) normalized image plane; undistort: $p_N = \mathrm{dist}^{-1}(p_d)$; image projection: $p^{(I)} = K p_N$.

Evaluation of $\mathrm{dist}^{-1}(\cdot)$: no analytic solution; iterative solution ($N = 20$ is enough):

1: $p \leftarrow p_d$ (initial guess)
2: for $i = 1$ to $N$ do
3:   $r^2 = x^2 + y^2$, $k_r = 1 + k_1 r^2 + k_2 r^4 + k_3 r^6$, $d_x = \begin{bmatrix} 2 p_1 x y + p_2 (r^2 + 2x^2) \\ p_1 (r^2 + 2y^2) + 2 p_2 x y \end{bmatrix}$
4:   $p \leftarrow (p_d - d_x) / k_r$
5: end for
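A minimal numpy sketch of the model and its iterative inverse (the coefficients below are made up for illustration):

```python
import numpy as np

def distort(p, k1, k2, k3, p1, p2):
    """Brown model: normalized point p = (x, y) -> distorted normalized point."""
    x, y = p
    r2 = x * x + y * y
    kr = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    dx = np.array([2 * p1 * x * y + p2 * (r2 + 2 * x * x),
                   p1 * (r2 + 2 * y * y) + 2 * p2 * x * y])
    return kr * np.array([x, y]) + dx

def undistort(pd, k1, k2, k3, p1, p2, n_iter=20):
    """Iterative inverse (no closed form): start from pd, refine n_iter times."""
    pd = np.array(pd, dtype=float)
    p = pd.copy()
    for _ in range(n_iter):
        x, y = p
        r2 = x * x + y * y
        kr = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
        dx = np.array([2 * p1 * x * y + p2 * (r2 + 2 * x * x),
                       p1 * (r2 + 2 * y * y) + 2 * p2 * x * y])
        p = (pd - dx) / kr
    return p

# round trip with illustrative coefficients
coeffs = dict(k1=-0.3, k2=0.1, k3=0.0, p1=1e-3, p2=-1e-3)
p0 = np.array([0.2, -0.1])
pd = distort(p0, **coeffs)
p_rec = undistort(pd, **coeffs)
```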

20/67 Outline 1 Pin Hole Model 2 Distortion 3 Camera Calibration 4 Two views geometry 5 Image features 6 Edge, corners 7 Exercise

21/67 Camera calibration

Intrinsic calibration: find the parameters of K. Nominal values of the optics are not suitable, and different units of the same camera/optics combination differ. Includes estimation of the distortion coefficients.

Extrinsic calibration: find the parameters of $\pi = \begin{bmatrix} M & m \end{bmatrix}$, i.e., find K and R, t.

22/67 Camera calibration - Approaches

Very large literature! Different approaches: known 3D pattern, or planar pattern. Methods are based on correspondences and need a pattern.

23/67 Camera calibration - Formulation

$M_i$: model points on the pattern
$p_{ij}$: observation of model point $i$ in image $j$
$p = [f_x, f_y, s, k_1, k_2, \ldots]$: intrinsic parameters
$R_j, t_j$: pose of the pattern w.r.t. the camera reference frame $j$, i.e., $R^{(C)}_{CP_j}, t^{(C)}_{CP_j}$
$\hat m(p, R_j, t_j, M_i)$: estimated projection of $M_i$ in image $j$

Estimation:
$\underset{p,\, R_j,\, t_j}{\operatorname{argmin}} \sum_j \sum_i \left\| p_{ij} - \hat m(p, R_j, t_j, M_i) \right\|^2$

Gives both the intrinsic calibration (unique) and the extrinsic one (one per image).

Z. Zhang, A flexible new technique for camera calibration, 2000
Heikkilä, Silvén, A Four-step Camera Calibration Procedure with Implicit Image Correction, 1997

24/67 Camera Calibration Toolbox for Matlab

Collect images; automatic corner identification; find the chessboard's external corners; calibrate.

Camera Calibration Toolbox for Matlab - http://www.vision.caltech.edu/bouguetj/calib_doc/

25/67 Outline 1 Pin Hole Model 2 Distortion 3 Camera Calibration 4 Two views geometry 5 Image features 6 Edge, corners 7 Exercise

26/67 Epipolar geometry introduction

Epipolar geometry: the projective geometry between two views. It is independent of the scene structure and depends only on the cameras' parameters and relative position.

Two views: simultaneous (stereo) or sequential (moving camera). Examples: Bumblebee camera, robot head with two cameras.

27/67 Correspondence

Consider a 3D point $P$ in the scene and two cameras with projection matrices $\pi$ and $\pi'$: $p = \pi P$ is its image in the first camera, $p' = \pi' P$ in the second. $p$ and $p'$, images of the same point, form a correspondence.

Correspondence geometry: given $p$ on the first image, how is $p'$ constrained by $p$?

28/67 Correspondences and epipolar geometry

Suppose (1): $P$ is a 3D point imaged in two views, with $p_L$ and $p_R$ its images. $P$, $p_L$, $p_R$, $O_L$, $O_R$ are coplanar on $\pi$, the epipolar plane.

Suppose (2): $P$ is unknown and $p_L$ is known. Where is $p_R$, i.e., how is $p_R$ constrained? $P_i = O_L + \lambda d_{p_L}$ is the interpretation line of $p_L$; $p_R$ lies on a line, the intersection of $\pi$ with the second image: the epipolar line.

29/67 Epipolar geometry - Definitions

Baseline: the line joining $O_L$ and $O_R$.
Epipoles ($e_L$, $e_R$): intersections of the baseline with the image planes; the projection of each camera centre on the other image; the intersection of all epipolar lines.
Epipolar line: intersection of an epipolar plane with an image plane.
Epipolar plane: a plane containing the baseline (the epipolar planes form a pencil of planes). Given an epipolar line, it is possible to identify a unique epipolar plane.

30/67 Epipolar constraints

Correspondence problem: given $p_L$ in one image, search in the second image along the epipolar line: a 1D search! A point in one image generates a line in the second image.

Correspondence example (figure).

31/67 The fundamental matrix F

Epipolar geometry: given $p_L$ on one image, $p_R$ lies on $l_R$, the epipolar line: $p_R \in l_R \Leftrightarrow p_R^T l_R = 0$.

Thus there is a map $p_L \mapsto l_R$: $l_R = F p_L$, where $F$ is the fundamental matrix, and $p_R \in l_R \Leftrightarrow p_R^T F p_L = 0$.

32/67 The fundamental matrix F - properties

If $p_L$ corresponds to $p_R$ then $p_R^T F p_L = 0$: a necessary condition for correspondence. Conversely, if $p_R^T F p_L = 0$, the interpretation lines (a.k.a. viewing rays) are coplanar.
$F$ is a 3x3 matrix with $\det(F) = 0$, $\operatorname{rank}(F) = 2$; $F$ has 7 dof (minus 1 for the homogeneous scale, minus 1 for the rank deficiency).
$l_R = F p_L$, $l_L = F^T p_R$.
$F e_L = 0$, $F^T e_R = 0$: the epipoles are the right null vectors of $F$ and $F^T$.
Proof: for every $p_L$, $e_R$ lies on $l_R = F p_L$, so $e_R^T F p_L = 0$ for all $p_L$, hence $F^T e_R = 0$.

33/67 F calculus

From calibrated cameras ($\pi_L$ and $\pi_R$ known):
$F = [e_R]_\times \pi_R \pi_L^+$
where $e_R = \pi_R O_L = \pi_R \begin{bmatrix} -M_L^{-1} m_L \\ 1 \end{bmatrix}$, $\pi_L^+ = \pi_L^T \left( \pi_L \pi_L^T \right)^{-1}$ (pseudo-inverse), and $a \times b = [a]_\times b$ with
$[a]_\times = \begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix}$

Calibrated cameras with $K$ and $R, t$: $\pi_L = K_L \begin{bmatrix} I & 0 \end{bmatrix}$ (origin in the left camera), $\pi_R = K_R \begin{bmatrix} R & t \end{bmatrix}$, and
$F = K_R^{-T} [t]_\times R K_L^{-1}$

Special forms arise with pure translations, pure rotations, ...
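The calibrated formula $F = K_R^{-T} [t]_\times R K_L^{-1}$ can be checked by projecting a 3D point into both views and evaluating the epipolar constraint ($K$, $R$, $t$ below are illustrative):

```python
import numpy as np

def skew(a):
    """Cross-product matrix [a]_x, so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0., -a[2], a[1]],
                     [a[2], 0., -a[0]],
                     [-a[1], a[0], 0.]])

K = np.array([[320., 0., 160.],
              [0., 320., 120.],
              [0., 0., 1.]])                 # same intrinsics for both cameras
th = 0.2                                     # small rotation about the y axis
R = np.array([[np.cos(th), 0., np.sin(th)],
              [0., 1., 0.],
              [-np.sin(th), 0., np.cos(th)]])
t = np.array([-0.5, 0., 0.])                 # baseline

F = np.linalg.inv(K).T @ skew(t) @ R @ np.linalg.inv(K)

# any 3D point (in left-camera coordinates) must satisfy p_R^T F p_L = 0
P = np.array([0.3, -0.2, 2.0])
pL = K @ P
pL /= pL[2]
pR = K @ (R @ P + t)
pR /= pR[2]
residual = pR @ F @ pL                       # ~ 0 up to floating point
```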

34/67 F estimation - procedure sketch

From uncalibrated images: get point correspondences (by hand or automatically); compute F from the constraint $p_R^T F p_L = 0$. At least 7 correspondences are needed, but the 8-point algorithm is the simplest. Finally, impose $\operatorname{rank}(F) = 2$.

Details: Multiple View Geometry in Computer Vision, Hartley & Zisserman, chapters 9-12.
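A sketch of the 8-point algorithm mentioned above (without Hartley's coordinate normalization, which a robust implementation would add), checked on noise-free synthetic correspondences:

```python
import numpy as np

def eight_point(pL, pR):
    """Estimate F from >= 8 pixel correspondences via p_R^T F p_L = 0,
    then enforce rank(F) = 2 by zeroing the smallest singular value."""
    A = np.array([[xr*xl, xr*yl, xr, yr*xl, yr*yl, yr, xl, yl, 1.0]
                  for (xl, yl), (xr, yr) in zip(pL, pR)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)          # least-squares null vector of A
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                        # rank-2 constraint
    return U @ np.diag(S) @ Vt

# synthetic correspondences from two known cameras (illustrative values)
rng = np.random.default_rng(0)
K = np.array([[320., 0., 160.], [0., 320., 120.], [0., 0., 1.]])
R, t = np.eye(3), np.array([-0.5, 0.1, 0.0])
Pts = rng.uniform([-1., -1., 2.], [1., 1., 4.], size=(12, 3))
pL = Pts @ K.T
pL = pL[:, :2] / pL[:, 2:]
pR = (Pts @ R.T + t) @ K.T
pR = pR[:, :2] / pR[:, 2:]
F = eight_point(pL, pR)
```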

35/67 Projective reconstruction

F is not unique if estimated from correspondences. Without any additional constraints, correspondences allow at least a projective reconstruction.

36/67 Metric reconstruction

Additional constraints (parallelism, measures of some points, ...) allow an affine/similarity/metric reconstruction.

37/67 Triangulation

Suppose $p_R$ and $p_L$ are corresponding points and $\pi_L$, $\pi_R$ are known. Due to noise, the interpretation lines may not intersect: $p_R^T F p_L \neq 0$.

3D point computation:
$\underset{\hat p_L, \hat p_R}{\operatorname{argmin}}\; d(\hat p_L, p_L)^2 + d(\hat p_R, p_R)^2$ subject to $\hat p_R^T F \hat p_L = 0$
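The slide states the constrained (gold standard) cost; a simpler linear (DLT) triangulation, which minimizes an algebraic error instead, can be sketched as follows (cameras below are illustrative):

```python
import numpy as np

def triangulate(piL, piR, pL, pR):
    """Linear (DLT) triangulation: each view contributes two rows of A
    from the constraint p x (pi P) = 0; solve A X = 0 by SVD."""
    A = np.vstack([pL[0] * piL[2] - piL[0],
                   pL[1] * piL[2] - piL[1],
                   pR[0] * piR[2] - piR[0],
                   pR[1] * piR[2] - piR[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

K = np.array([[320., 0., 160.], [0., 320., 120.], [0., 0., 1.]])
piL = K @ np.hstack([np.eye(3), np.zeros((3, 1))])          # left camera at the origin
piR = K @ np.hstack([np.eye(3), [[-0.5], [0.], [0.]]])      # right camera, translated
P = np.array([0.2, -0.1, 3.0])
pL = piL @ np.r_[P, 1.0]
pL = pL[:2] / pL[2]
pR = piR @ np.r_[P, 1.0]
pR = pR[:2] / pR[2]
P_rec = triangulate(piL, piR, pL, pR)
```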

38/67 Outline 1 Pin Hole Model 2 Distortion 3 Camera Calibration 4 Two views geometry 5 Image features 6 Edge, corners 7 Exercise

39/67 Features in image

What is a feature? There is no common definition; it depends on the problem or application. It is an interesting part of the image.

Types of features:
Edges: boundaries between regions
Corners / interest points: edge intersections, point-like features
Blobs: smooth areas that define regions

40/67 Black & White threshold

Thresholding, on a gray-scale image $I(u, v)$: if $I(u, v) > T$ then $I'(u, v)$ = white, else $I'(u, v)$ = black.

Properties: the simplest method of image segmentation. Critical point: the threshold value $T$, e.g. the mean of $I(u, v)$, the median of $I(u, v)$, or (local) adaptive thresholding.
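The rule above in a few lines (pure Python, on a toy 2x2 image):

```python
def threshold(img, T):
    """I'(u,v) = white (255) if I(u,v) > T, else black (0)."""
    return [[255 if px > T else 0 for px in row] for row in img]

img = [[10, 200],
       [90, 130]]
T = sum(sum(row) for row in img) / 4.0   # mean value as threshold: 107.5
out = threshold(img, T)
```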

41/67 Filtering

Kernel matrix filtering: given an image $I(i, j)$, $i = 1 \ldots h$, $j = 1 \ldots w$, and a kernel $H(k, z)$, $k = 1 \ldots r$, $z = 1 \ldots c$:
$I'(i, j) = \sum_k \sum_z I\left( i - \lfloor r/2 \rfloor + k - 1,\; j - \lfloor c/2 \rfloor + z - 1 \right) H(k, z)$
with special cases on the borders.
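A direct implementation of the sum above (the border of half the kernel size is simply left at zero, one of the possible "special cases on borders"):

```python
def filter2d(img, H):
    """Correlate image I with kernel H; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    r, c = len(H), len(H[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(r // 2, h - r // 2):
        for j in range(c // 2, w - c // 2):
            out[i][j] = sum(img[i - r // 2 + k][j - c // 2 + z] * H[k][z]
                            for k in range(r) for z in range(c))
    return out

# 3x3 average on a 0/9 checkerboard: interior pixels average to 4 or 5
H = [[1.0 / 9.0] * 3 for _ in range(3)]
img = [[9 * ((i + j) % 2) for j in range(5)] for i in range(5)]
out = filter2d(img, H)
```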

42/67 Filter Examples - 1 Identity Translation

43/67 Filter Examples - 2 Average (3 3) Average (5 5)

44/67 Filter Examples - 3 Gaussian - N(0,σ) Gaussian vs Average

45/67 Smoothing

We generally expect pixels to be like their neighbours, and noise to be independent from pixel to pixel. This implies that smoothing suppresses noise, given an appropriate noise model.

46/67 Image Gradient - 1

Horizontal derivatives: $\partial I / \partial x$

47/67 Image Gradient - 2

Vertical derivatives: $\partial I / \partial y$

48/67 Rough edge detector

$\sqrt{\left( \partial I / \partial x \right)^2 + \left( \partial I / \partial y \right)^2}$, then apply a threshold...

49/67 Outline 1 Pin Hole Model 2 Distortion 3 Camera Calibration 4 Two views geometry 5 Image features 6 Edge, corners 7 Exercise

50/67 Canny Edge Detector

Criteria:
Good detection: minimize the probability of false positives and false negatives.
Good localization: edges as close as possible to the true edge.
Single response: only one point for each edge point (thickness = 1).

Procedure:
Smooth with a Gaussian: $S = I * G(\sigma)$.
Compute the derivatives $S_x$, $S_y$ (alternative, in one step: filter with derivatives of the Gaussian).
Compute magnitude and orientation of the gradient: $\|\nabla S\| = \sqrt{S_x^2 + S_y^2}$, $\theta_S = \mathrm{atan2}(S_y, S_x)$.
Non-maxima suppression: search for the local maximum along the gradient direction $\theta_S$.
Hysteresis threshold: weak edges (between the two thresholds) are edges if connected to strong edges (above the high threshold).
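The derivative-and-magnitude steps of the procedure can be sketched with Sobel kernels standing in for the Gaussian-derivative filters (smoothing, non-maxima suppression and hysteresis omitted):

```python
import numpy as np

def sobel_gradients(S):
    """Sobel approximations of S_x and S_y, then gradient magnitude and
    orientation (computed on the valid interior region only)."""
    Kx = np.array([[-1., 0., 1.],
                   [-2., 0., 2.],
                   [-1., 0., 1.]])
    Ky = Kx.T
    h, w = S.shape
    Sx = np.zeros((h - 2, w - 2))
    Sy = np.zeros_like(Sx)
    for i in range(h - 2):
        for j in range(w - 2):
            patch = S[i:i + 3, j:j + 3]
            Sx[i, j] = np.sum(patch * Kx)
            Sy[i, j] = np.sum(patch * Ky)
    mag = np.hypot(Sx, Sy)          # sqrt(Sx^2 + Sy^2)
    theta = np.arctan2(Sy, Sx)      # gradient orientation
    return mag, theta

# vertical step edge: the gradient should point horizontally (theta = 0)
S = np.hstack([np.zeros((5, 3)), np.ones((5, 3))])
mag, theta = sobel_gradients(S)
```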

51/67 Canny Edge Detector - Non Maxima Suppression Non Maxima Suppression Example

52/67 Canny Edge Detector - Hysteresis threshold Hysteresis threshold example

53/67 Lines from edges

How to find lines? Hough transform after Canny edge extraction: a voting procedure that generally finds imperfect instances of a shape class. The classical Hough transform detects lines; it was later extended to other shapes (most commonly circles and ellipses).

Example (figure).

54/67 Corners - Harris and Shi-Tomasi

Edge intersection: at the intersection point the gradient is ill defined; around the intersection point the gradient changes in all directions. It is a good feature to track.

Corner detector: examine the gradient over a window $w$:
$C = \sum_w \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}$
Shi-Tomasi: corner if $\min \operatorname{eig}(C) > T$. Harris: an approximation of the eigenvalues.
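A minimal version of the Shi-Tomasi test on precomputed gradient windows (the gradients here are synthetic: an edge-like window has a rank-1 C and minimum eigenvalue near 0, a corner-like window does not):

```python
import numpy as np

def shi_tomasi_response(Ix, Iy):
    """Build the structure matrix C over the whole window (uniform weights)
    and return its smallest eigenvalue, as in min eig(C) > T."""
    C = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    return np.linalg.eigvalsh(C)[0]   # eigenvalues in ascending order

# an edge window (gradient only along x) vs a corner-like window
Ix_edge, Iy_edge = np.ones((5, 5)), np.zeros((5, 5))
Ix_corner, Iy_corner = np.ones((5, 5)), np.eye(5) * 5.0
edge_score = shi_tomasi_response(Ix_edge, Iy_edge)        # ~ 0: not a corner
corner_score = shi_tomasi_response(Ix_corner, Iy_corner)  # clearly positive
```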

55/67 Template matching - Patch

Filtering with a template: correlate the template (patch) with the image; the response is maximum where the template matches. Alternatives use normalization for illumination compensation, etc.

Good features: on corners, higher repeatability (homogeneous zones and edges are not distinctive).

56/67 Template matching - SIFT

Template matching issues: rotations, scale change. SIFT (Scale Invariant Feature Transform): an alternative descriptor to the patch, which performs orientation and scale normalization. See also SURF (Speeded Up Robust Features).

D. Lowe - Object recognition from local scale-invariant features - 1999

57/67 Outline 1 Pin Hole Model 2 Distortion 3 Camera Calibration 4 Two views geometry 5 Image features 6 Edge, corners 7 Exercise

58/67 Camera matrix - 1

Given
$P = \begin{bmatrix} 122.5671 & -320.0000 & -102.8460 & 587.3835 \\ -113.7667 & 0.0000 & -322.2687 & 350.6050 \\ 0.7660 & 0 & -0.6428 & 4.6711 \end{bmatrix}$
with $f_x = f_y = 320$, $c_x = 160$, $c_y = 120$.

Questions: Where is the camera in the world? Compute the coordinates of the vanishing points of the x, y axes in the image. Where is the origin of the world in the image? Write the parametric 3D line of the principal axis in world coordinates...

59/67 Camera matrix - 2

Where is the camera in the world? $P = \begin{bmatrix} KR & Kt \end{bmatrix}$, with
$K = \begin{bmatrix} 320 & 0 & 160 \\ 0 & 320 & 120 \\ 0 & 0 & 1 \end{bmatrix}$
$R = K^{-1} P(1{:}3, 1{:}3)$, $t = K^{-1} P(1{:}3, 4)$, $T^{(W)}_{WC} = \begin{bmatrix} R^T & -R^T t \\ 0 & 1 \end{bmatrix}$ ($P$ contains the pose of the world w.r.t. the camera).

The camera is at $[-4, -0.5, 2.5]$; the rotation around the x, y, z axes is $[-130°, 0.0°, -90°]$. To be clearer, remove the rotation of the camera reference frame, $T^{(W)}_{WC} R_z(90°) R_y(-90°)$: the rotation around x, y, z becomes $[0°, 40.0°, 0°]$.

60/67 Camera matrix - 3

Vanishing points of the x, y axes in the image:
$v_x = P [1, 0, 0, 0]^T \simeq [160.0, -148.5, 1]^T$
$v_y = P [0, 1, 0, 0]^T \simeq [1, 0, 0]^T$ (improper point)
Remember: they are the 1st and 2nd columns of $P$.

Where is the origin of the world in the image?
$o = P [0, 0, 0, 1]^T \simeq [125.75, 75.05, 1]^T$
Remember: it is the 4th column of $P$.

Principal axis in world coordinates: $P = \begin{bmatrix} M & m \end{bmatrix}$,
$O^{(W)} = -M^{-1} m = [-4, -0.5, 2.5]^T$
$d^{(W)} = M^{-1} [c_x, c_y, 1]^T \simeq [0.766, 0, -0.6428]^T$
$a^{(W)} = O^{(W)} + \lambda d^{(W)}$

61/67 Camera matrix - 5

Questions: Where is the intersection between the principal axis and the floor? Calculate the field of view of the camera (image size is 320x240), i.e., the portion of the floor plane imaged by the camera.

62/67 Camera matrix - 6

Intersection between the principal axis and the floor:
$a^{(W)} = O^{(W)} + \lambda d^{(W)}$, impose $a^{(W)}_{z=0} = [X, Y, 0]^T$:
$\lambda_{z=0} = -O^{(W)}_z / d^{(W)}_z = 3.89$, $a^{(W)}_{z=0} = [-1.02, -0.5, 0]^T$

Field of view (1): back-project the four image corners,
$a^{(W)}_1 = O^{(W)} + \lambda_1 M^{-1} [0, 0, 1]^T$
$a^{(W)}_2 = O^{(W)} + \lambda_2 M^{-1} [320, 0, 1]^T$
$a^{(W)}_3 = O^{(W)} + \lambda_3 M^{-1} [0, 240, 1]^T$
$a^{(W)}_4 = O^{(W)} + \lambda_4 M^{-1} [320, 240, 1]^T$
and calculate each $\lambda_i$ such that $a^{(W)}_i$ has $z = 0$...

63/67 Camera matrix - 7

Field of view (2): the transformation between the plane $z = 0$ and the image plane is a 2D homography. Consider $p_{z=0} = [x, y, 0, 1]^T$ and its projection $P p_{z=0}$. Notice that
$p_{z=0} = \underbrace{\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{W} [x, y, 1]^T$
so $H = PW$ is the homography that maps points $(x, y)$ of the $z = 0$ plane to the image, and $\bar H = H^{-1}$ is the inverse homography that maps image points $(u, v)$ to the $z = 0$ plane.

64/67 Camera matrix - 8

Field of view (2), continued:
$H = \begin{bmatrix} 122.56 & -320.00 & 587.38 \\ -113.76 & 0.00 & 350.60 \\ 0.76 & 0 & 4.67 \end{bmatrix}$
$p^1_{z=0} = \bar H [0, 0, 1]^T$, $p^2_{z=0} = \bar H [320, 0, 1]^T$, $p^3_{z=0} = \bar H [0, 240, 1]^T$, $p^4_{z=0} = \bar H [320, 240, 1]^T$

Figure: the 3D world with the camera reference system (green), the world reference system (blue), the principal axis (dashed blue) and the field of view (FoV) (grey).

65/67 Camera matrix - 9

Questions: There is a flat robot moving on the floor, imaged by the camera. Two distinct coloured points are drawn on the robot: $p_1^{(R)} = [.3, 0]^T$, $p_2^{(R)} = [-.3, 0]^T$. Can you calculate the robot position and orientation?

66/67 Camera matrix - 10

Robot pose: call $\bar p_1$, $\bar p_2$ the points in the image; $p^1_{z=0} = \bar H \bar p_1$, $p^2_{z=0} = \bar H \bar p_2$.
Position: $\frac{1}{2}\left( p^1_{z=0} + p^2_{z=0} \right)$; with $d = p^1_{z=0} - p^2_{z=0}$, orientation: $\mathrm{atan2}(d_y, d_x)$.

Figure: a complete execution, with the robot trajectory depicted in the world (top) and in the image (bottom).
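The two steps can be sketched with a hypothetical floor-to-image homography $H$ (illustrative numbers, not the exam's exact matrix):

```python
import numpy as np

H = np.array([[120., -320., 590.],   # hypothetical homography: z=0 plane -> image
              [-115., 0., 350.],
              [0.75, 0., 4.7]])
H_inv = np.linalg.inv(H)             # inverse homography: image -> z=0 plane

def to_floor(p_img):
    """Map an image point (u, v) to cartesian (x, y) on the floor plane z = 0."""
    q = H_inv @ np.array([p_img[0], p_img[1], 1.0])
    return q[:2] / q[2]

p1_img = np.array([100.0, 90.0])     # hypothetical images of the two points
p2_img = np.array([140.0, 95.0])
q1, q2 = to_floor(p1_img), to_floor(p2_img)
position = 0.5 * (q1 + q2)           # midpoint of the two floor points
d = q1 - q2                          # from p2 (back) towards p1 (front)
orientation = np.arctan2(d[1], d[0]) # robot heading on the floor
```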

67/67 Camera matrix - 11

A complete execution with a robot trajectory depicted in the world (top) and in the image (bottom). A square is drawn on the floor to check the correctness of the calculated x vanishing point (see the previous questions).