Computer Vision. Coordinates. Prof. Flávio Cardeal DECOM / CEFET-MG.


1 Computer Vision Coordinates Prof. Flávio Cardeal DECOM / CEFET-MG cardeal@decom.cefetmg.br

2 Abstract This lecture discusses world coordinates and homogeneous coordinates, as well as provides an overview of camera calibration.

3 World Coordinates World coordinates are used as reference coordinates for cameras or objects in the scene. Suppose we have a camera and 3D objects in the scene to be analyzed by computer vision. It may be convenient to assume an X_w Y_w Z_w world coordinate system that is not defined by the particular camera under consideration.

4 World Coordinates The camera coordinate system X_s Y_s Z_s needs then to be described with respect to the chosen world coordinates. (Figure: the camera coordinate system with origin O_s and focal length f, showing the undistorted image point p_u = [x_u y_u]^T and the distorted image point p_d = [x_d y_d]^T, together with the world coordinate system with origin O_w and a scene point P_w = [X_w Y_w Z_w]^T.)

5 World Coordinates The figure below exemplifies a world coordinate system at a particular moment during a camera calibration procedure. Source: R. Klette 5

6 Affine Transform World and camera coordinates are transformed into each other by an affine transform. An affine transform of the 3D space maps straight lines into straight lines and does not change ratios of distances between points lying on a straight line. However, it does not necessarily preserve angles between lines or distances between points. 6

7 Affine Transform Examples of affine transforms include translation, scaling and rotation. Here, an affine transform is represented mathematically by a linear transform, defined by a matrix multiplication, followed by a translation. For example, we may apply a translation defined by the vector t = [t_1 t_2 t_3]^T.

8 Affine Transform And a rotation as defined here:

R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} = R_1(\alpha) \, R_2(\beta) \, R_3(\gamma)

where the individual rotations about the x-, y- and z-axes are

R_1(\alpha) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix} \quad R_2(\beta) = \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix} \quad R_3(\gamma) = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix}

9 Affine Transform R_1(α), R_2(β) and R_3(γ) are the individual rotations about the three coordinate axes, with Eulerian rotation angles α, β and γ, one for each axis. Observation: rotation and translation in the 3D space are uniquely determined by six parameters: α, β, γ, t_1, t_2 and t_3.

10 2D Rotation Matrix (Review) From the figure (a point at distance v from the origin, at angle α, rotated by the angle θ), we have:

x = v cos α,  y = v sin α
x̃ = v cos(α + θ),  ỹ = v sin(α + θ)

However, we know that:

cos(α + θ) = cos α cos θ - sin α sin θ
sin(α + θ) = sin α cos θ + cos α sin θ

11 2D Rotation Matrix (Review) Then, we have:

x̃ = v cos α cos θ - v sin α sin θ = x cos θ - y sin θ
ỹ = v sin α cos θ + v cos α sin θ = y cos θ + x sin θ

By using matrix notation:

\begin{bmatrix} \tilde{x} \\ \tilde{y} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}

where the 2 × 2 matrix is the 2D rotation matrix.
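
As a quick numerical check of the 2D rotation matrix above, a small NumPy sketch (illustrative only, not part of the lecture) rotates a point on the x-axis by 90 degrees:

    import numpy as np

    def rot2d(theta):
        # 2D rotation matrix [[cos, -sin], [sin, cos]]
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    p = np.array([1.0, 0.0])          # point on the x-axis
    print(rot2d(np.pi / 2) @ p)       # approximately [0, 1]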

12 3D Rotation Matrix (Review) In the 3D case, we have, for example, the rotation about the z-axis:

\begin{bmatrix} \tilde{x} \\ \tilde{y} \\ \tilde{z} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix}

This is the 3D rotation matrix with respect to the z-axis.

13 3D Rotation Matrix (Textbook Notation) The rotations about the z-, y- and x-axes, by the angles γ, β and α respectively, are written as:

R_z = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad R_y = \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix} \quad R_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix}
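
A minimal NumPy sketch of the three elementary rotations and their composition R = R_1(α) R_2(β) R_3(γ); the function names are placeholders, not from any particular library:

    import numpy as np

    def R1(a):  # rotation about the x-axis
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def R2(b):  # rotation about the y-axis
        c, s = np.cos(b), np.sin(b)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def R3(g):  # rotation about the z-axis
        c, s = np.cos(g), np.sin(g)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    alpha, beta, gamma = 0.1, 0.2, 0.3         # example Euler angles in radians
    R = R1(alpha) @ R2(beta) @ R3(gamma)       # composed rotation matrix

    # A rotation matrix is orthonormal and has determinant +1:
    print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))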

14 World and Camera Coordinates As previously mentioned, world and camera coordinates are transformed into each other by an affine transform. Consider the affine transform of a 3D point, given as P_w = [X_w Y_w Z_w]^T in world coordinates, into P_s = [X_s Y_s Z_s]^T in camera coordinates.

15 World and Camera Coordinates In this case, we have that:

P_s = R P_w + t

\begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + \begin{bmatrix} t_1 \\ t_2 \\ t_3 \end{bmatrix}

16 World and Camera Coordinates The rotation matrix R and the translation vector t need to be specified by calibration. Note that P_w and P_s denote the same point in the 3D Euclidean space, just with respect to different 3D coordinate systems.

17 World and Camera Coordinates By performing the matrix multiplication and the addition above we obtain:

X_s = r_11 X_w + r_12 Y_w + r_13 Z_w + t_1
Y_s = r_21 X_w + r_22 Y_w + r_23 Z_w + t_2
Z_s = r_31 X_w + r_32 Y_w + r_33 Z_w + t_3
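
The world-to-camera transform P_s = R P_w + t is a one-liner in NumPy; the values of R and t below are made up for illustration (in practice they come from calibration):

    import numpy as np

    R = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])     # example: 90-degree rotation about the z-axis
    t = np.array([0.5, -0.2, 2.0])       # example translation

    P_w = np.array([1.0, 2.0, 3.0])      # point in world coordinates
    P_s = R @ P_w + t                    # the same point in camera coordinates
    print(P_s)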

18 World and Image Coordinates Assume that a point P_w = [X_w Y_w Z_w]^T in the 3D scene is projected into the camera at an image point p = [x y]^T in the xy image coordinate system. Consider also the affine transform between world and camera coordinates as previously defined:

X_s = r_11 X_w + r_12 Y_w + r_13 Z_w + t_1
Y_s = r_21 X_w + r_22 Y_w + r_23 Z_w + t_2
Z_s = r_31 X_w + r_32 Y_w + r_33 Z_w + t_3

19 World and Image Coordinates So, by using

[x y]^T = [x_u + c_x  y_u + c_y]^T = [f X_s/Z_s + c_x  f Y_s/Z_s + c_y]^T

we have:

\begin{bmatrix} x - c_x \\ y - c_y \\ f \end{bmatrix} = \begin{bmatrix} x_u \\ y_u \\ f \end{bmatrix} = f \begin{bmatrix} X_s / Z_s \\ Y_s / Z_s \\ 1 \end{bmatrix} = f \begin{bmatrix} \frac{r_{11} X_w + r_{12} Y_w + r_{13} Z_w + t_1}{r_{31} X_w + r_{32} Y_w + r_{33} Z_w + t_3} \\ \frac{r_{21} X_w + r_{22} Y_w + r_{23} Z_w + t_2}{r_{31} X_w + r_{32} Y_w + r_{33} Z_w + t_3} \\ 1 \end{bmatrix}

20 World and Image Coordinates Computer vision algorithms for reconstructing the 3D structure of a scene or computing the position of objects in space need equations such as:

\begin{bmatrix} x - c_x \\ y - c_y \\ f \end{bmatrix} = \begin{bmatrix} x_u \\ y_u \\ f \end{bmatrix} = f \begin{bmatrix} X_s / Z_s \\ Y_s / Z_s \\ 1 \end{bmatrix} = f \begin{bmatrix} \frac{r_{11} X_w + r_{12} Y_w + r_{13} Z_w + t_1}{r_{31} X_w + r_{32} Y_w + r_{33} Z_w + t_3} \\ \frac{r_{21} X_w + r_{22} Y_w + r_{23} Z_w + t_2}{r_{31} X_w + r_{32} Y_w + r_{33} Z_w + t_3} \\ 1 \end{bmatrix}

Note that this equation links the coordinates of points in 3D space with the coordinates of their corresponding image points.
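
This projection equation translates directly into a short routine. The sketch below assumes the extrinsic parameters R and t and the intrinsic parameters f, c_x and c_y are already known; all names and values are illustrative:

    import numpy as np

    def project(P_w, R, t, f, cx, cy):
        # World -> camera coordinates
        Xs, Ys, Zs = R @ P_w + t
        # Central projection followed by the shift to the principal point
        return np.array([f * Xs / Zs + cx, f * Ys / Zs + cy])

    # Example usage with made-up parameters
    R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
    print(project(np.array([0.2, -0.1, 0.0]), R, t, f=800.0, cx=320.0, cy=240.0))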

21 World and Image Coordinates In those applications it is often assumed that the coordinates of points in the camera coordinate system can be obtained from pixel coordinates, and also that the camera coordinate system can be located with respect to the world coordinate system. This is equivalent to assuming knowledge of some of the camera's characteristics, known in vision as the camera's extrinsic and intrinsic parameters.

22 Homogeneous Coordinates By using homogeneous coordinates, the matrix multiplication and vector addition in the affine transform reduce to one matrix multiplication. But what are homogeneous coordinates? They are the coordinates used in projective geometry, just as Cartesian coordinates are used in Euclidean geometry.

23 Homogeneous Coordinates Formulas involving homogeneous coordinates are often simpler and more symmetric than their Cartesian counterparts. So, let's first introduce homogeneous coordinates in the plane before moving on to the 3D space. Basically, the idea is that instead of using only the coordinates x and y, we add a third coordinate w.

24 Homogeneous Coordinates Assuming that w ≠ 0, [x̃ ỹ w]^T now represents the point [x y]^T = [x̃/w ỹ/w]^T in the usual 2D inhomogeneous coordinates. The scale of w is unimportant, and we will call [x̃ ỹ w]^T the homogeneous coordinates for a 2D point [x y]^T = [x̃/w ỹ/w]^T.

25 Homogeneous Coordinates Obviously, we can decide to use only w = 1 for representing points in the 2D plane. In this case, we have [x̃ ỹ 1]^T. Of course, there is also the option to have w = 0. Homogeneous coordinates [x̃ ỹ 1]^T define existing points [x y]^T = [x̃/1 ỹ/1]^T = [x̃ ỹ]^T, while coordinates [x̃ ỹ 0]^T define points at infinity.

26 Homogeneous Coordinates A point [X Y Z]^T ∈ R^3 is represented by [X̃ Ỹ Z̃ w]^T in homogeneous coordinates, with [X Y Z]^T = [X̃/w Ỹ/w Z̃/w]^T. So, now the affine transform relating the world and camera coordinates can be represented by a single matrix multiplication (a 4 × 4 matrix if P_s is also kept in homogeneous coordinates, or the 3 × 4 matrix used in the following example). Let's see an example.
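
Conversion between inhomogeneous and homogeneous coordinates is mechanical; a small sketch (the function names are illustrative):

    import numpy as np

    def to_homogeneous(p):
        # Append w = 1: [X Y Z] -> [X Y Z 1]
        return np.append(p, 1.0)

    def to_inhomogeneous(ph):
        # Divide by the last coordinate (must be non-zero, i.e. not a point at infinity)
        return ph[:-1] / ph[-1]

    P = np.array([1.0, 2.0, 3.0])
    Ph = to_homogeneous(P)                # [1, 2, 3, 1]
    print(to_inhomogeneous(4.0 * Ph))     # the scale is irrelevant: back to [1, 2, 3]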

27 Example Consider again the previous affine transform:

P_s = R P_w + t

\begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + \begin{bmatrix} t_1 \\ t_2 \\ t_3 \end{bmatrix}

28 Example By representing P_w in homogeneous coordinates, that affine transform can be rewritten as follows:

\begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = \begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix}

Note that the steps of matrix multiplication and vector addition reduce to one matrix multiplication.
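
A quick numerical check that the single 3 × 4 matrix [R t], applied to the homogeneous world point, reproduces R P_w + t (the values of R, t and P_w are made up):

    import numpy as np

    R = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    t = np.array([0.5, -0.2, 2.0])
    P_w = np.array([1.0, 2.0, 3.0])

    M = np.hstack([R, t.reshape(3, 1)])          # the 3 x 4 matrix [R t]
    P_w_h = np.append(P_w, 1.0)                  # P_w in homogeneous coordinates

    print(np.allclose(M @ P_w_h, R @ P_w + t))   # True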

29 Camera Calibration Camera calibration determines the intrinsic (i.e. camera-specific) and extrinsic parameters of a given one- or multi-camera configuration. The extrinsic parameters identify the transform between the unknown camera coordinate system and a known world coordinate system. 29

30 Camera Calibration That said, the extrinsic parameters are those of the rotation matrix R and the translation vector t involved in the previous affine transform P_s = R P_w + t. The rotation R brings the corresponding axes of the camera and world coordinate systems onto each other, and the translation t describes the relative positions of the origins of the camera and world coordinate systems.

31 World Coordinates (Figure: the camera coordinate system X_s Y_s Z_s with origin O_s, focal length f and image point [x_u y_u]^T, related to the world coordinate system X_w Y_w Z_w with origin O_w by the translation t and the rotation R = [r_ij] = R_1(α) R_2(β) R_3(γ), with the angles α, β and γ indicated on the corresponding axes.)

32 Camera Calibration By representing P_w in homogeneous coordinates, that is, by adding a fourth coordinate 1 to P_w, we may rewrite the previous equation as follows:

P_s = \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} P_w \\ 1 \end{bmatrix} = M_e \begin{bmatrix} P_w \\ 1 \end{bmatrix}

where M_e = [R t] is called the matrix of extrinsic parameters.

33 Camera Calibration The intrinsic parameters, in turn, are the set of parameters needed to characterize the optical, geometric, and digital features of the camera: the focal length f; the location of the center of the image plane in pixel coordinates, also called the principal point [c_x c_y]^T; the physical size in the horizontal and vertical directions of the sensor cells [s_x s_y]^T; and, if required, the radial distortion parameters.

34 Camera Calibration Disregarding radial distortion and the physical size of the sensor cells, we can group all the intrinsic parameters in a single 3 × 3 matrix M_i:

M_i = \begin{bmatrix} f & 0 & c_x \\ 0 & f & c_y \\ 0 & 0 & 1 \end{bmatrix}

This matrix is called the matrix of intrinsic parameters.

35 Camera Calibration A camera producer normally specifies some intrinsic parameters (e.g. the physical size of the sensor cells). However, the given data are often not accurate enough for computer vision applications. The matrix of intrinsic parameters M_i performs the transformation between the camera coordinate system and the image coordinate system.

36 Camera Calibration Therefore, by considering the representation of P_w in homogeneous coordinates, we may derive the following equation for a perspective projection:

\tilde{p} = M_i P_s = M_i M_e \begin{bmatrix} P_w \\ 1 \end{bmatrix} = \begin{bmatrix} \tilde{p}_1 \\ \tilde{p}_2 \\ \tilde{p}_3 \end{bmatrix}

37 Camera Calibration What is interesting about the vector p̃ = [p̃_1 p̃_2 p̃_3]^T is that the ratios p̃_1/p̃_3 and p̃_2/p̃_3 are nothing but the pixel coordinates of the image point. Next, we will provide an overview of camera calibration so that you can use well-known calibration software with sufficient background knowledge. However, we will not detail any particular calibration method, which is outside the scope of this course.
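
Putting the two matrices together, a sketch of the full projection p̃ = M_i M_e [P_w 1]^T and of the recovery of the pixel coordinates from the ratios; all parameter values are illustrative:

    import numpy as np

    # Matrix of intrinsic parameters M_i (illustrative values of f, c_x, c_y)
    f, cx, cy = 800.0, 320.0, 240.0
    M_i = np.array([[f, 0, cx],
                    [0, f, cy],
                    [0, 0,  1]])

    # Matrix of extrinsic parameters M_e = [R t] (illustrative R and t)
    R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
    M_e = np.hstack([R, t.reshape(3, 1)])

    P_w_h = np.array([0.2, -0.1, 0.0, 1.0])   # world point in homogeneous coordinates
    p = M_i @ M_e @ P_w_h                     # p = [p1, p2, p3]
    print(p[:2] / p[2])                       # pixel coordinates p1/p3 and p2/p3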

38 Camera Calibration For camera calibration, we use geometric patterns on 2D or 3D surfaces that we are able to measure very accurately. For example, we can use a calibration rig that is either attached to walls or dynamically moving in front of the camera while taking multiple images. 38

39 Camera Calibration A typical calibration rig (a checkerboard pattern). Source: R. Klette 39

40 Camera Calibration The geometric patterns used are recorded and localized in the resulting images. Next, their appearance in the image grid is compared with the available measurements of their geometry in the real world. Calibration may be done by dealing with only one camera (e.g. of a multi-camera system) at a time.

41 Camera Calibration In this case, we may assume that cameras are either static or movable. In the latter case, we only calibrate the internal (intrinsic) parameters. Recording may start after the parameters needed for calibration have been specified and the appropriate calibration rig and software are at hand. Calibration needs to be redone from time to time.

42 Camera Calibration When calibrating a multi-camera system, all cameras need to be time-synchronized, especially if the calibration rig moves during the procedure. Each camera contains its own camera coordinate system, having the origin at its projection center. 42

43 Camera Calibration The calibration rig is commonly used for defining the world coordinates at the moment when taking an image (see figure below). Source: R. Klette 43

44 Camera Calibration We consider the following transforms: (1) a transform from world coordinates [X_w Y_w Z_w]^T to camera coordinates [X_s Y_s Z_s]^T; (2) a central projection of [X_s Y_s Z_s]^T into undistorted image coordinates [x_u y_u]^T; (3) the lens distortion involved, mapping [x_u y_u]^T into the valid (i.e. distorted) coordinates [x_d y_d]^T;

45 Camera Calibration We consider the following transforms (cont.): (4) a shift of the coordinates x_d and y_d by the principal point [x_c y_c]^T, defining the sensor coordinates [x_s y_s]^T; (5) the mapping of sensor coordinates [x_s y_s]^T into image coordinates [x y]^T (i.e. the pixel's address).

46 Lens Distortion The mapping from a 3D scene into 2D image points combines a perspective projection and a deviation from the model of a pinhole camera. This deviation is caused by radial lens distortion. In this case, how can we compute the coordinates of undistorted image points? 46

47 Lens Distortion Given a lens-distorted image point p_d = [x_d y_d]^T, we can obtain the corresponding undistorted image point p_u = [x_u y_u]^T as follows:

x_u = c_x + (x_d - c_x)(1 + κ_1 r_d^2 + κ_2 r_d^4 + e_x)
y_u = c_y + (y_d - c_y)(1 + κ_1 r_d^2 + κ_2 r_d^4 + e_y)

for

r_d = sqrt( (x_d - c_x)^2 + (y_d - c_y)^2 )

48 Lens Distortion The errors e_x and e_y are insignificant and can be assumed to be zero. There is experimental evidence that, with only the two lower-order parameters κ_1 and κ_2, we can correct more than 90% of the radial distortion. After lens distortion has been corrected, the camera may be modeled as a pinhole camera.
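
A sketch of the point-wise undistortion above, with the error terms e_x and e_y set to zero; the principal point and the parameters κ_1 and κ_2 are assumed to be known from calibration (the values below are arbitrary):

    def undistort_point(xd, yd, cx, cy, k1, k2):
        # Squared radial distance of the distorted point from the principal point
        rd2 = (xd - cx) ** 2 + (yd - cy) ** 2
        factor = 1.0 + k1 * rd2 + k2 * rd2 ** 2     # 1 + k1*rd^2 + k2*rd^4
        return cx + (xd - cx) * factor, cy + (yd - cy) * factor

    # Example usage with arbitrary parameters
    print(undistort_point(400.0, 300.0, cx=320.0, cy=240.0, k1=1e-7, k2=1e-13))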

49 Designing a Calibration Method First, we need to define the set of parameters to be calibrated and a corresponding camera model. For example, if the radial distortion parameters κ_1 and κ_2 need to be calibrated, then the camera model needs to include the previous equations:

x_u = c_x + (x_d - c_x)(1 + κ_1 r_d^2 + κ_2 r_d^4 + e_x)
y_u = c_y + (y_d - c_y)(1 + κ_1 r_d^2 + κ_2 r_d^4 + e_y)

50 Designing a Calibration Method If we know the radial distortion parameters and use them for mapping distorted images into undistorted ones, we can use equations such as:

\begin{bmatrix} x - c_x \\ y - c_y \\ f \end{bmatrix} = \begin{bmatrix} x_u \\ y_u \\ f \end{bmatrix} = f \begin{bmatrix} X_s / Z_s \\ Y_s / Z_s \\ 1 \end{bmatrix} = f \begin{bmatrix} \frac{r_{11} X_w + r_{12} Y_w + r_{13} Z_w + t_1}{r_{31} X_w + r_{32} Y_w + r_{33} Z_w + t_3} \\ \frac{r_{21} X_w + r_{22} Y_w + r_{23} Z_w + t_2}{r_{31} X_w + r_{32} Y_w + r_{33} Z_w + t_3} \\ 1 \end{bmatrix}

51 Designing a Calibration Method A point P_w = [X_w Y_w Z_w]^T on the calibration rig or on a calibration mark is known by its physically measured world coordinates. Such a point P_w could be a corner of a square on the calibration rig or a special mark in the 3D scene where the calibration takes place.

52 Designing a Calibration Method For each point P_w, we need to identify the corresponding point p = [x y]^T, which is the projection of P_w in the image plane. Having, for example, 100 different pairs (P_w, p), we would have 100 equations of the form:

\begin{bmatrix} x - c_x \\ y - c_y \\ f \end{bmatrix} = \begin{bmatrix} x_u \\ y_u \\ f \end{bmatrix} = f \begin{bmatrix} X_s / Z_s \\ Y_s / Z_s \\ 1 \end{bmatrix} = f \begin{bmatrix} \frac{r_{11} X_w + r_{12} Y_w + r_{13} Z_w + t_1}{r_{31} X_w + r_{32} Y_w + r_{33} Z_w + t_3} \\ \frac{r_{21} X_w + r_{22} Y_w + r_{23} Z_w + t_2}{r_{31} X_w + r_{32} Y_w + r_{33} Z_w + t_3} \\ 1 \end{bmatrix}

53 Designing a Calibration Method For each equation of the form

\begin{bmatrix} x - c_x \\ y - c_y \\ f \end{bmatrix} = \begin{bmatrix} x_u \\ y_u \\ f \end{bmatrix} = f \begin{bmatrix} X_s / Z_s \\ Y_s / Z_s \\ 1 \end{bmatrix} = f \begin{bmatrix} \frac{r_{11} X_w + r_{12} Y_w + r_{13} Z_w + t_1}{r_{31} X_w + r_{32} Y_w + r_{33} Z_w + t_3} \\ \frac{r_{21} X_w + r_{22} Y_w + r_{23} Z_w + t_2}{r_{31} X_w + r_{32} Y_w + r_{33} Z_w + t_3} \\ 1 \end{bmatrix}

we have the following 9 unknowns: f, c_x, c_y, t_1, t_2, t_3, α, β and γ.

54 Designing a Calibration Method So, considering the 9 unknowns above, at least 5 pairs (P_w, p) should be provided, since each pair defines 2 equations. For the 100 different points mentioned before, we would have an overdetermined system of equations (a system with more equations than unknowns). In this case, we need to apply an optimization procedure to solve it for those few unknowns.
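
The optimization step can be sketched with a general-purpose nonlinear least-squares solver. The outline below uses the simplified 9-parameter model (f, c_x, c_y, α, β, γ, t_1, t_2, t_3) on synthetic, noise-free data, following the composition R_1(α) R_2(β) R_3(γ) used earlier; it is only an illustration of the idea, not a production calibration routine:

    import numpy as np
    from scipy.optimize import least_squares

    def euler_to_R(a, b, g):
        # R = R_1(a) R_2(b) R_3(g), i.e. rotations about the x-, y- and z-axes
        ca, sa, cb, sb, cg, sg = np.cos(a), np.sin(a), np.cos(b), np.sin(b), np.cos(g), np.sin(g)
        Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
        Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
        Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
        return Rx @ Ry @ Rz

    def project_all(params, P_w):
        # params = [f, cx, cy, alpha, beta, gamma, t1, t2, t3]
        f, cx, cy = params[:3]
        R, t = euler_to_R(*params[3:6]), params[6:9]
        P_s = P_w @ R.T + t                         # world -> camera, one row per point
        return np.column_stack([f * P_s[:, 0] / P_s[:, 2] + cx,
                                f * P_s[:, 1] / P_s[:, 2] + cy])

    def residuals(params, P_w, p_obs):
        # Difference between predicted and observed pixel coordinates
        return (project_all(params, P_w) - p_obs).ravel()

    # Synthetic example: project 100 points with known parameters, then recover them
    true = np.array([800.0, 320.0, 240.0, 0.1, -0.05, 0.2, 0.3, -0.1, 5.0])
    P_w = np.random.default_rng(0).uniform(-1.0, 1.0, (100, 3))
    p_obs = project_all(true, P_w)
    fit = least_squares(residuals, x0=np.array([850.0, 300.0, 250.0, 0, 0, 0, 0, 0, 4.5]),
                        args=(P_w, p_obs))
    print(fit.x[:3])                                # should be close to 800, 320, 240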

55 Designing a Calibration Method It is important to emphasize that we could still refine our camera model. For example, we could include in the matrix of intrinsic parameters the horizontal and vertical dimensions of the sensor cells, [s_x s_y]^T. Accordingly, the resulting system of equations will become more complex and have more unknowns.

56 Designing a Calibration Method Thus, summarizing the general procedure: known points P_w in the world coordinate system are related to their corresponding projections p in the image. The equations defining our camera model contain X_w, Y_w, Z_w, x and y as known values, and the intrinsic or extrinsic parameters as unknowns. The resulting system is necessarily nonlinear due to the central projection, or even the radial distortion.

57 Designing a Calibration Method So, the system needs to be solved for the specified unknowns, and over-determined situations provide stability for the numeric solution scheme that is used. We do not discuss such systems of equations or solution schemes any further in this course.

58 Calibration Board A rigid calibration board bearing a black-and-white checkerboard pattern is common. It is recommended that it has at least 7 × 7 squares. The squares need to be large enough that their minimum size, when recorded on the image plane during calibration, is at least 10 × 10 pixels.

59 Calibration Board A rigid and planar board can be achieved by printing the calibration rig onto paper, which is then glued onto a rigid board. This method is relatively cheap and reliable. The grid can be created with any image-creation tool, as long as the squares are all exactly the same size.

60 Corners in the Checkerboard For the checkerboard, calibration marks are the corners of the squares. Those corners can be identified by approximating intersection points of grid lines, thus defining the corners of the squares with subpixel accuracy. For example, assume 10 vertical and 10 horizontal grid lines on a checkerboard. 60

61 Corners in the Checkerboard Then this should result in 20 peaks in the Hough space used for detecting line segments. Each peak defines a detected grid line, and the intersection points of those lines define the corners of the checkerboard in the recorded image. Applying this method requires that lens distortion has been removed from the recorded images prior to applying the Hough-space method.
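
In practice, corner detection and the subsequent parameter estimation are usually delegated to a library. A hedged OpenCV sketch is given below; the image file names, board dimensions and square size are placeholders, and OpenCV fits a slightly richer distortion model than the two-parameter one discussed earlier:

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)        # inner corners per board row and column (placeholder)
    square = 0.025          # square size in metres (placeholder)

    # World coordinates of the corners; Z_w = 0 because the board is planar
    obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_points, img_points, size = [], [], None
    for name in glob.glob("calib_*.png"):           # placeholder file names
        gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(obj)
            img_points.append(corners)
            size = gray.shape[::-1]                 # (width, height)

    # Returns the 3x3 matrix of intrinsic parameters, the distortion coefficients
    # and one rotation/translation (the extrinsic parameters) per image
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
    print(K, dist)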

62 Next Lecture Stereo Vision: Epipolar Geometry; Binocular Vision in Canonical Stereo Geometry. Suggested reading: Section 7.3 of the textbook.
