C18 Computer Vision. Lecture 1 Introduction: imaging geometry, camera calibration. Victor Adrian Prisacariu.

1 C18 Computer Vision Lecture 1 Introduction: imaging geometry, camera calibration Victor Adrian Prisacariu

2 InfiniTAM Demo

3 Course Content. VP: Intro, basic image features, basic multiview geometry. Introduction: imaging geometry, camera calibration. Salient feature detection: points, edges and SIFTs. Recovering 3D from two images I: epipolar geometry. Recovering 3D from two images II: stereo correspondences, triangulation, ego-motion. Slides at -> Teaching. Lots borrowed from David Murray + AV C18. AV: Neural networks and applications. Small baseline correspondences and optic flow. Large scale image retrieval. Fundamentals of convolutional neural networks. Object detection and recognition.

4 Useful Texts. Multiple View Geometry in Computer Vision, Richard Hartley and Andrew Zisserman. Computer Vision: A Modern Approach, David Forsyth and Jean Ponce, Prentice Hall. Three-Dimensional Computer Vision: A Geometric Viewpoint, Olivier Faugeras.

5 Computer Vision: This time. 1. Introduction: imaging geometry, camera calibration (1. introduction; 2. the perspective camera as a geometric device; 3. perspective using homogeneous coordinates; 4. calibration: the elements of the perspective model). 2. Salient feature detection: points, edges and SIFTs. 3. Recovering 3D from two images I: epipolar geometry. 4. Recovering 3D from two images II: stereo correspondences, triangulation, ego-motion.

6 1. Introduction. The aim in geometric computational vision is to take a number of 2D images and obtain an understanding of the 3D environment: what is in it, and how it evolves over time. What do we have here? Seems very easy...

7 Wrong! It's a very hard big data problem. From the hardware engineering perspective: the raw throughput is unsettlingly large. A colour stereo pair at 30 Hz -> 100s of MB/s; now multiply by a non-trivial processing cost per byte. Image collections are huge. Certainly challenging, but no longer frightening. From the applied maths perspective: scene info is made highly implicit or lost by reprojection. Inverse mappings 2D -> 3D are ill-posed and ill-conditioned, and can only be solved by introducing constraints: about the way the world actually works, or about the way we expect it to work. We now know about these. No longer frightening.

8 Wrong! It's a very hard big data problem. From the Information-Engineering / AI perspective: images have uneven information content, both absolutely and contextually. Computational visual semantics: what does visual stuff mean, exactly? If we are under time pressure, what is the important visual stuff right now? Still a massive challenge if we want genuine autonomy.

9 Natural vision is a hard problem. But we see effortlessly! Yep, spot on, if one neglects: the ~10^10 neurons involved; the aeons of evolution generating hardwired priors P(I); that we sleep with eyes shut, and avert gaze when thinking. Very hard work for our brains: does machine vision have a hope? Perhaps building a general vision system is a flawed concept. There is evidence that the human visual system is a bag of tricks: specialized processing for specialized tasks. Perhaps we should expect no more of computer vision? However, the h.v.s. does give us a convincing magic show: each trick flows seamlessly into the next. Do we have an inkling how that will be achieved?

10 So why bother? What are the advantages? From a sensor standpoint Vision is a passive sensor (unlike sonar, radar, lasers). Wide range of depths, overlaps and complements other sensors. Wide diversity of methods to recover 3D information: so vision has process redundancy. Images provide data redundancy. From a natural communications standpoint The world is awash with information-rich photons. Because we have eyes, vision provides a natural language of communication. If we want robots/man-machine-interfaces to act and interact in our environments, they have to speak that language.

11 Organizing the tricks Although human and computer vision might be bags of tricks, it is useful to place the tricks within larger processing paradigms. For example: a) Data-driven, bottom-up processing. b) Model-driven, top-down, generative processing. c) Dynamic Vision (mixes bottom-up with top-down feedback). d) Active Vision (task oriented). e) Data-driven discriminative approach (machine learning). These are neither all-embracing nor exclusive.

12 (a) Data-driven, bottom-up processing. Image processing produces a map of salient 2D features. Features are input into a range of shape-from-X processes, whose output was the 2.5D sketch. Only in the last stage do we get a fully 3D object-centered description.

13 (b) Model-driven, and (c) Dynamic vision. Model-driven, top-down, generative processing: a model of the scene is assumed known. Supply a pose for the object relative to the camera, and use projection to predict where salient features should be found in image space. Search for the features, and refine the pose by minimizing the observed deviation. Dynamic vision: mixes bottom-up and top-down processing by introducing feedback.

14 (d) Active Vision. Introduces task-oriented sensing-perception-action loops: visual data needs only to be good enough to drive the particular action; there is no need to build and maintain an overarching representation of the surroundings; computational resources are focused where they are needed.

15 (e) Data-driven approach. The aim is to learn a description of the transformation between input and output using exemplars. Geometry is not forgotten, but implicit learned representations are favoured.

16 1.2 The perspective camera as a geometric device

17 This is (a picture of) my cat. The image point x = (x₁, x₂)ᵀ marks the cat's nose.

18 My cat lives in a 3D world. The point X = (X₁, X₂, X₃)ᵀ in world space projects to the point x = (x₁, x₂)ᵀ in image space.

19 Going from X in 3D to x in 2D. How does X = (X₁, X₂, X₃)ᵀ map to x = (x₁, x₂)ᵀ? If the film/sensor is simply exposed to the cat, the output is blurry.

20 Going from X in 3D to x in 2D. Adding a barrier between the cat and the film/sensor to block off most of the rays reduces the blur: now it looks good.

21 Pinhole Camera. All rays pass through the center of projection (a single point, the pinhole). The image forms on the image plane.

22 Pinhole Camera. f is the focal length, o the camera origin (center of projection) and p the principal point; the optical axis passes through o and p. The 3D point X = (X₁, X₂, X₃)ᵀ is imaged into x = (x₁, x₂)ᵀ as x₁ = f X₁ / X₃ and x₂ = f X₂ / X₃.
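
A minimal numerical sketch of the projection above (not from the slides), assuming an illustrative focal length f and 3D point X given in camera coordinates:

    import numpy as np

    f = 0.05                           # focal length in metres (assumed value)
    X = np.array([0.2, 0.1, 2.0])      # 3D point (X1, X2, X3) in camera coordinates (assumed)

    x1 = f * X[0] / X[2]               # x1 = f X1 / X3
    x2 = f * X[1] / X[2]               # x2 = f X2 / X3
    print(x1, x2)                      # 0.005 0.0025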

23 1.2 The perspective camera as a geometric device

24 Homogeneous coordinates. The projection x = f X / X₃ is non-linear. It can be made linear using homogeneous coordinates, which involves representing the image and scene in a higher-dimensional space. Limiting cases, e.g. vanishing points, are handled better. Homogeneous coordinates also allow transformations to be concatenated more easily.

25 3D Euclidean transforms: inhomogeneous coordinates. My cat moves through 3D space. The movement of the tip of the nose can be described using a Euclidean transform: X'₃ₓ₁ = R₃ₓ₃ X₃ₓ₁ + t₃ₓ₁, with rotation R and translation t.

26 3D Euclidean transforms: inhomogeneous coordinates. Euclidean transform: X'₃ₓ₁ = R₃ₓ₃ X₃ₓ₁ + t₃ₓ₁. Concatenation of successive transforms is a mess! X₁ = R₁ X + t₁ and X₂ = R₂ X₁ + t₂ give X₂ = R₂ (R₁ X + t₁) + t₂ = R₂ R₁ X + R₂ t₁ + t₂.

27 3D Euclidean transforms: homogeneous coordinates. We replace the 3D point (X, Y, Z)ᵀ with the four-vector (X, Y, Z, 1)ᵀ. The Euclidean transform becomes X' = E X, with E = [R t; 0ᵀ 1]. Transformations can now be concatenated by matrix multiplication: X₁ = E₁₀ X₀ and X₂ = E₂₁ X₁ give X₂ = E₂₁ E₁₀ X₀.

28 Homogeneous coordinates: definition in R³. X = (X, Y, Z)ᵀ is represented in homogeneous coordinates by any 4-vector (X₁, X₂, X₃, X₄)ᵀ such that X = X₁/X₄, Y = X₂/X₄, and Z = X₃/X₄. So the following homogeneous vectors represent the same point for any λ ≠ 0: (X₁, X₂, X₃, X₄)ᵀ and λ (X₁, X₂, X₃, X₄)ᵀ. E.g. (2, 3, 5, 1)ᵀ is the same as (3, 4.5, 7.5, 1.5)ᵀ, and both represent the same inhomogeneous point (2, 3, 5)ᵀ.
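
A two-line check in Python (purely illustrative) that the example vector above and any non-zero multiple of it dehomogenize to the same 3D point:

    import numpy as np

    Xh = np.array([2.0, 3.0, 5.0, 1.0])            # homogeneous representation of (2, 3, 5)
    assert np.allclose(Xh[:3] / Xh[3], (1.5 * Xh)[:3] / (1.5 * Xh)[3])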

29 Homogeneous coordinates: definition in R². x = (x, y)ᵀ is represented in homogeneous coordinates by any 3-vector (x₁, x₂, x₃)ᵀ such that x = x₁/x₃ and y = x₂/x₃. E.g. (1, 2, 3)ᵀ is the same as (3, 6, 9)ᵀ, and both represent the same inhomogeneous point (0.33, 0.66)ᵀ.

30 Homogeneous notation: rules for use. 1. Convert the inhomogeneous point to a homogeneous vector: (X, Y, Z)ᵀ -> (X, Y, Z, 1)ᵀ. 2. Apply the 4x4 transform. 3. Dehomogenize the resulting vector: (X₁, X₂, X₃, X₄)ᵀ -> (X₁/X₄, X₂/X₄, X₃/X₄)ᵀ.
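
A short sketch of this three-step recipe, using an example Euclidean transform E = [R t; 0ᵀ 1]; all numerical values are assumed purely for illustration:

    import numpy as np

    def to_homogeneous(X):
        """Step 1: append a 1 to the inhomogeneous point."""
        return np.append(X, 1.0)

    def from_homogeneous(Xh):
        """Step 3: divide through by the last coordinate and drop it."""
        return Xh[:-1] / Xh[-1]

    # Step 2: an example 4x4 Euclidean transform (rotation about z plus a translation).
    theta = np.deg2rad(30.0)
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    t = np.array([1.0, 0.0, 2.0])
    E = np.eye(4)
    E[:3, :3] = R
    E[:3, 3] = t

    X = np.array([2.0, 3.0, 5.0])
    X_new = from_homogeneous(E @ to_homogeneous(X))   # transformed inhomogeneous point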

31 Projective transformations. A projective transformation is a linear transformation on homogeneous 4-vectors, represented by a non-singular 4x4 matrix: (X'₁, X'₂, X'₃, X'₄)ᵀ = [p₁₁ p₁₂ p₁₃ p₁₄; p₂₁ p₂₂ p₂₃ p₂₄; p₃₁ p₃₂ p₃₃ p₃₄; p₄₁ p₄₂ p₄₃ p₄₄] (X₁, X₂, X₃, X₄)ᵀ. The effect on the homogeneous points is that the original and transformed points are linked through a projection center. The 4x4 matrix is defined up to scale, and so has 15 degrees of freedom.

32 More 3D-3D and 2D-2D Transforms. 3D-3D: Projective (15 dof): X' = P₄ₓ₄ X. Affine (12 dof): X' = [A₃ₓ₃ t₃ₓ₁; 0ᵀ 1] X. Similarity (7 dof): X' = [sR₃ₓ₃ t₃ₓ₁; 0ᵀ 1] X. Euclidean (6 dof): X' = [R₃ₓ₃ t₃ₓ₁; 0ᵀ 1] X. 2D-2D: Projective (aka homography, 8 dof): x' = H₃ₓ₃ x. Affine (6 dof): x' = [A₂ₓ₂ t₂ₓ₁; 0ᵀ 1] x. Similarity (4 dof): x' = [sR₂ₓ₂ t₂ₓ₁; 0ᵀ 1] x. Euclidean (3 dof): x' = [R₂ₓ₂ t₂ₓ₁; 0ᵀ 1] x.

33 2D-2D Transform Examples. Euclidean (3 DoF): [cos θ, -sin θ, t_a; sin θ, cos θ, t_b; 0, 0, 1]. Similarity (4 DoF): [s cos θ, -s sin θ, t_a; s sin θ, s cos θ, t_b; 0, 0, 1]. Affine (6 DoF): [a₁₁, a₁₂, t_a; a₂₁, a₂₂, t_b; 0, 0, 1]. Projective (8 DoF): [h₁₁, h₁₂, h₁₃; h₂₁, h₂₂, h₂₃; h₃₁, h₃₂, h₃₃].
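
A small sketch constructing the four 2D transform types tabulated above, with arbitrary example parameters, to make the degree-of-freedom counts concrete:

    import numpy as np

    theta, tx, ty, s = np.deg2rad(20.0), 1.0, 2.0, 1.5   # assumed example parameters
    c, sn = np.cos(theta), np.sin(theta)

    T_euclidean  = np.array([[c, -sn, tx], [sn, c, ty], [0, 0, 1]])              # 3 DoF
    T_similarity = np.array([[s*c, -s*sn, tx], [s*sn, s*c, ty], [0, 0, 1]])      # 4 DoF
    T_affine     = np.array([[1.1, 0.2, tx], [-0.3, 0.9, ty], [0, 0, 1]])        # 6 DoF
    T_projective = np.array([[1.1, 0.2, tx], [-0.3, 0.9, ty], [0.01, 0.02, 1]])  # 8 DoF (up to scale)

    # Apply one of them to a homogeneous 2D point and dehomogenize.
    x = np.array([1.0, 1.0, 1.0])
    xp = T_projective @ x
    xp = xp[:2] / xp[2]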

34 Perspective 3D-2D Transforms. Similar to a 3D-3D projective transform, but the transformed point is constrained to the plane z = f: λ (x₁, x₂, f, 1)ᵀ = [p₁₁ p₁₂ p₁₃ p₁₄; p₂₁ p₂₂ p₂₃ p₂₄; f p₃₁ f p₃₂ f p₃₃ f p₃₄; p₃₁ p₃₂ p₃₃ p₃₄] (X₁, X₂, X₃, 1)ᵀ. Because z = f is fixed, the 3rd row is redundant, so λ (x₁, x₂, 1)ᵀ = [p₁₁ p₁₂ p₁₃ p₁₄; p₂₁ p₂₂ p₂₃ p₂₄; p₃₁ p₃₂ p₃₃ p₃₄] (X₁, X₂, X₃, 1)ᵀ = P₃ₓ₄ X. P₃ₓ₄ is the projection matrix and this is a perspective transform.

35 1.3 Perspective using homogeneous coordinates. The pinhole projection of X = (X₁, X₂, X₃)ᵀ to x = (x₁, x₂)ᵀ can be written as λ (x₁, x₂, 1)ᵀ = [f 0 0 0; 0 f 0 0; 0 0 1 0] (X₁, X₂, X₃, 1)ᵀ, i.e. λ x₁ = f X₁, λ x₂ = f X₂, λ = X₃, which recovers x₁ = f X₁ / X₃ and x₂ = f X₂ / X₃.

36 Perspective using homogeneous coordinates. Image point = projection matrix × world point: λ (x₁, x₂, 1)ᵀ = [f 0 0 0; 0 f 0 0; 0 0 1 0] (X₁, X₂, X₃, 1)ᵀ.

37 Perspective using homogeneous coordinates. It is useful to split the overall projection matrix into three parts: 1. a part that depends on the internals of the camera; 2. a vanilla projection matrix; 3. a Euclidean transformation between the world and camera frames. We first assume the scene and world are aligned with the camera coordinates, so that the extrinsic camera matrix is the identity, and get: image point = camera's intrinsic calibration × projection matrix (vanilla) × camera's extrinsic calibration × world point, i.e. λ (x, y, 1)ᵀ = [f 0 0; 0 f 0; 0 0 1] [I | 0] [I 0; 0ᵀ 1] X.

38 Perspective using homogeneous coordinates. Now let's make things more general: insert a rotation R and translation t between world and camera coordinates, and insert some extra terms into the intrinsic calibration matrix: λ (x, y, 1)ᵀ = [f, s f, u₀; 0, γ f, v₀; 0, 0, 1] [I | 0] [r₁₁ r₁₂ r₁₃ t₁; r₂₁ r₂₂ r₂₃ t₂; r₃₁ r₃₂ r₃₃ t₃; 0 0 0 1] X, i.e. image point = camera's intrinsic calibration × projection matrix (vanilla) × camera's extrinsic calibration × world point.
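
A sketch in Python of the split λx = K [I | 0] E X above. All numerical values (f, s, γ, u₀, v₀, R, t and the world point) are assumed purely for illustration:

    import numpy as np

    f, s, gamma = 800.0, 0.0, 1.0          # focal length (pixels), skew, aspect ratio (assumed)
    u0, v0 = 320.0, 240.0                  # principal point (assumed)
    K = np.array([[f, s * f, u0],          # camera's intrinsic calibration
                  [0.0, gamma * f, v0],
                  [0.0, 0.0, 1.0]])

    R = np.eye(3)                          # extrinsics: camera aligned with the world here
    t = np.array([0.0, 0.0, 0.0])
    E = np.block([[R, t.reshape(3, 1)],    # 4x4 extrinsic Euclidean transform [R t; 0 1]
                  [np.zeros((1, 3)), np.ones((1, 1))]])

    P0 = np.hstack([np.eye(3), np.zeros((3, 1))])   # vanilla 3x4 projection [I | 0]
    P = K @ P0 @ E                                  # overall 3x4 projection matrix

    Xw = np.array([0.5, -0.2, 4.0, 1.0])            # homogeneous world point
    xh = P @ Xw                                     # lambda * (x, y, 1)
    x = xh[:2] / xh[2]                              # pixel coordinates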

39 The camera pose (extrinsic parameters). The camera's extrinsic calibration is just the rotation R and translation t that take points from the world frame to the camera frame: X_C = [R t; 0ᵀ 1] X_W.

40 Building R. R captures rotation and can be built from various types of rotation representation (Euler angles, quaternions, etc.). Euler angles capture the rotation using 3 parameters, one angle per axis: X' = R_z X_W = [cos θ_z, -sin θ_z, 0; sin θ_z, cos θ_z, 0; 0, 0, 1] X_W, then X'' = R_y X' = [cos θ_y, 0, sin θ_y; 0, 1, 0; -sin θ_y, 0, cos θ_y] X', then X_C = R_x X'' = [1, 0, 0; 0, cos θ_x, -sin θ_x; 0, sin θ_x, cos θ_x] X'', so R_CW = R_x R_y R_z. Order matters!
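
A sketch of composing the three elementary rotations above (the angle values are arbitrary); the point is that the two orderings give different rotation matrices, i.e. order matters:

    import numpy as np

    def rot_x(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_y(b):
        c, s = np.cos(b), np.sin(b)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def rot_z(g):
        c, s = np.cos(g), np.sin(g)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    tx, ty, tz = np.deg2rad([10.0, 20.0, 30.0])
    R_xyz = rot_x(tx) @ rot_y(ty) @ rot_z(tz)   # rotate about z, then y, then x
    R_zyx = rot_z(tz) @ rot_y(ty) @ rot_x(tx)   # the opposite ordering
    print(np.allclose(R_xyz, R_zyx))            # False: order matters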

41 Building t

42 Inverting the transform. [R_CW t_CW; 0ᵀ 1]⁻¹ = [R_WC t_WC; 0ᵀ 1]. For rotation: R_WC = R_CW⁻¹ = R_CWᵀ. For translation: t_WC ≠ -t_CW; instead t_WC = -R_WC t_CW.
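
A small sketch of this inverse, R_WC = R_CWᵀ and t_WC = -R_WC t_CW, checked against a brute-force 4x4 inverse; the example R_CW and t_CW are arbitrary:

    import numpy as np

    theta = np.deg2rad(40.0)
    R_cw = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0, 0.0, 1.0]])
    t_cw = np.array([0.3, -0.1, 2.0])

    R_wc = R_cw.T                        # inverse of a rotation is its transpose
    t_wc = -R_wc @ t_cw                  # note: not simply -t_cw

    E_cw = np.eye(4); E_cw[:3, :3] = R_cw; E_cw[:3, 3] = t_cw
    E_wc = np.eye(4); E_wc[:3, :3] = R_wc; E_wc[:3, 3] = t_wc
    assert np.allclose(E_wc, np.linalg.inv(E_cw))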

43 The intrinsic calibration parameters. These describe hardware properties of real cameras: the image plane might be skewed; the central axis of the lens might not line up with the optical axis; the light-gathering elements might not be square; lens distortion. K = [f, s f, u₀; 0, γ f, v₀; 0, 0, 1]: f and γf give different scaling on x and y, where γ is the aspect ratio; (u₀, v₀) is the principal point (origin offset); s accounts for skew.

44 Summary of steps from Scene to Image. 1. Move the scene point X_W into camera coordinates by the 4x4 extrinsic Euclidean transformation: X_C = [R t; 0ᵀ 1] X_W. 2. Project into an ideal camera via a vanilla perspective transformation: x̃ = [I | 0] X_C. 3. Map the ideal image into the real image using the intrinsic matrix: x = K x̃.

45 1.4 Camera Calibration. The process that finds K, and so accounts for the internal physical characteristics of the camera. (Usually) done once per camera. There are a variety of methods for self-calibration, auto-calibration or pre-calibration. We will cover pre-calibration, using a specially made, known visual scene.

46 What is camera calibration? Image point = camera's intrinsic calibration × projection matrix (vanilla) × camera's extrinsic calibration × world point: λ x = K [I | 0] [R t; 0ᵀ 1] X = P₃ₓ₄ X, with P₃ₓ₄ = K [R | t]. Camera calibration: recover K.

47 Camera Calibration Math Part 1. 1. Recover the overall projection matrix P₃ₓ₄: assume a target with at least 6 known scene points; build and solve a system of (at least) 12 equations. 2. Construct P₃ₓ₃ = KR from the leftmost 3x3 block of P = K[R | t]. 3. Invert P₃ₓ₃, so P₃ₓ₃⁻¹ = R⁻¹ K⁻¹. 4. Decompose P₃ₓ₃⁻¹ using QR decomposition into R⁻¹ and K⁻¹, and hence R and K. 5. Normalise K (as the scale of P was unknown). 6. Recover t = K⁻¹ (p₁₄, p₂₄, p₃₄)ᵀ. For step 1: each known point (X, Y, Z) with image (x, y) gives λ x = p₁₁ X + p₁₂ Y + p₁₃ Z + p₁₄, λ y = p₂₁ X + p₂₂ Y + p₂₃ Z + p₂₄, λ = p₃₁ X + p₃₂ Y + p₃₃ Z + p₃₄, so x = (p₁₁ X + p₁₂ Y + p₁₃ Z + p₁₄) / (p₃₁ X + p₃₂ Y + p₃₃ Z + p₃₄) and y = (p₂₁ X + p₂₂ Y + p₂₃ Z + p₂₄) / (p₃₁ X + p₃₂ Y + p₃₃ Z + p₃₄). Each point therefore contributes two rows [X Y Z 1 0 0 0 0 -xX -xY -xZ -x; 0 0 0 0 X Y Z 1 -yX -yY -yZ -y] of a system A p = 0, where p contains the unknowns.

48 Camera Calibration Math Part 1 (continued). Steps 1-6 as on the previous slide. For step 2: write P = [p₁₁ p₁₂ p₁₃ p₁₄; p₂₁ p₂₂ p₂₃ p₂₄; p₃₁ p₃₂ p₃₃ p₃₄]; the leftmost 3x3 block gives P₃ₓ₃ = KR and the last column is (p₁₄, p₂₄, p₃₄)ᵀ.

49 Camera Calibration Math Part 1 (continued). Steps 1-6 as before. For steps 3-4: P₃ₓ₃⁻¹ = R⁻¹ K⁻¹, so a QR decomposition of P₃ₓ₃⁻¹ yields the orthogonal factor R⁻¹ and the upper-triangular factor K⁻¹, and hence R and K.

50 Camera Calibration Math Part 1 (continued). Steps 1-6 as before. For step 6: with K known, recover t = K⁻¹ (p₁₄, p₂₄, p₃₄)ᵀ.
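
A sketch of the six-step recipe above, under assumptions: the correspondences are generated synthetically from a known P so the recovery can be checked, and numpy's QR routine is applied to the inverted 3x3 block as in steps 3-4. This is an illustrative implementation, not the course's reference code:

    import numpy as np

    def calibrate_dlt(X_world, x_image):
        """Step 1: recover the 3x4 projection matrix P from >= 6 point correspondences."""
        A = []
        for (X, Y, Z), (x, y) in zip(X_world, x_image):
            A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z, -x])
            A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z, -y])
        _, _, Vt = np.linalg.svd(np.array(A))       # A p = 0, least-squares null vector
        return Vt[-1].reshape(3, 4)

    def decompose(P):
        """Steps 2-6: split P = K [R | t] into intrinsics, rotation and translation."""
        P = P / np.linalg.norm(P[2, :3])            # fix the unknown overall scale of P
        M = P[:, :3]                                # step 2: leftmost 3x3 block, M = K R
        Q, U = np.linalg.qr(np.linalg.inv(M))       # steps 3-4: inv(M) = R^-1 K^-1 = Q U
        R, K = Q.T, np.linalg.inv(U)                # R is orthonormal, K upper triangular
        D = np.diag(np.sign(np.diag(K)))            # step 5: make the diagonal of K positive...
        K, R = K @ D, D @ R
        K = K / K[2, 2]                             # ...and normalise so K[2, 2] = 1
        t = np.linalg.solve(K, P[:, 3])             # step 6: t = K^-1 (p14, p24, p34)^T
        if np.linalg.det(R) < 0:                    # P is only defined up to sign
            R, t = -R, -t
        return K, R, t

    # Synthetic check with assumed ground-truth values.
    K_true = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    R_true, t_true = np.eye(3), np.array([0.1, -0.2, 3.0])
    P_true = K_true @ np.hstack([R_true, t_true[:, None]])

    Xs = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (1, 0, 1)]
    xs = []
    for Xp in Xs:
        xh = P_true @ np.append(np.array(Xp, dtype=float), 1.0)
        xs.append(xh[:2] / xh[2])

    K, R, t = decompose(calibrate_dlt(Xs, xs))      # should recover the true K, R, t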

51 Camera Calibration Example Algorithm

52 Radial Distortion. So far we have figured out the transformations that turn our camera into a notional camera with the world and camera coordinates aligned and an ideal image plane. One often has to correct for other optical distortions and aberrations as well. Radial distortion is the most common (see the Q sheet). Correction for this distortion is applied before carrying out calibration.
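
A sketch of a common low-order polynomial model of radial distortion (an assumed model for illustration; the exact form on the Q sheet may differ). Ideal image coordinates are pushed along the radial direction from the principal point by a factor that grows with radius; correcting an image amounts to inverting this mapping, typically iteratively or via a lookup table:

    import numpy as np

    def radially_distort(x, y, k1=-0.2, k2=0.05, xc=0.0, yc=0.0):
        """Map ideal (undistorted) normalised image coordinates to distorted ones."""
        r2 = (x - xc) ** 2 + (y - yc) ** 2        # squared radius from the principal point
        factor = 1.0 + k1 * r2 + k2 * r2 ** 2     # radial scaling; coefficients are assumed
        return xc + (x - xc) * factor, yc + (y - yc) * factor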

53 Practical Camera Calibration: Matlab Calibration Toolkit.

54 Summary of Lecture 1. In this lecture we have: introduced the aims of computer vision, and some paradigms; explored linear transformations and introduced homogeneous coordinates; defined perspective projection from a scene, and seen that it can be made linear using homogeneous coordinates; discussed how to pre-calibrate a camera using an image of six or more known scene points.
