Intelligent Robotics


1 Intelligent Robotics lectures/2013ws/vorlesung/ir Jianwei Zhang / Eugen Richter University of Hamburg Faculty of Mathematics, Informatics and Natural Sciences Technical Aspects of Multimodal Systems Winter semester 2013/2014 Jianwei Zhang / Eugen Richter 1

2 Outline Intelligent Robotics 1. Sensor fundamentals 2. Rotation and motion 3. Force and pressure 4. Frame transformations 5. Distance measurement 6. Scan data processing 7. Recursive state estimation 8. Vision systems 9. Fuzzy logic Jianwei Zhang / Eugen Richter 2

3 8 Vision systems University of Hamburg Outline Intelligent Robotics 1. Sensor fundamentals 2. Rotation and motion 3. Force and pressure 4. Frame transformations 5. Distance measurement 6. Scan data processing 7. Recursive state estimation 8. Vision systems Transformations Camera calibration Applications 9. Fuzzy logic Jianwei Zhang / Eugen Richter 406

4 8 Vision systems University of Hamburg Vision systems in robotics Intelligent Robotics I Linear camera (e.g. barcode scanner) I Analog CCD camera (black/white) I Analog CCD color camera (1 chip or 3 chips) I High-Dynamic-Range CMOS camera I Digital camera (USB or FireWire) I Camera + structured light (infrared, laser, RGB) I Stereo systems I Omnidirectional vision systems (catadioptric systems) I dioptrics → lenses I catoptrics → mirrors Jianwei Zhang / Eugen Richter 407

5 8 Vision systems University of Hamburg Vision systems and manipulation Intelligent Robotics Jianwei Zhang / Eugen Richter 408

6 8 Vision systems University of Hamburg Vision systems in industrial applications Intelligent Robotics I Object grasping tasks I Objects with predetermined positioning (e.g. production line) I Randomly positioned objects (e.g. bin-picking) I Object handling tasks I Cutting, tying, wrapping, sealing, etc. I Inspection during assembly I Mounting tasks I Welding, screwing, attaching, gluing, etc. Jianwei Zhang / Eugen Richter 409

7 8 Vision systems University of Hamburg Vision systems in cognitive robotics Intelligent Robotics I Perception of objects I Static: recognition, searching, indexing,... I Dynamic: tracking, manipulation,... I Perception of humans I Face recognition I Gaze tracking I Gesture recognition I... Jianwei Zhang / Eugen Richter 410

8 8 Vision systems University of Hamburg Vision systems in cognitive robotics (cont.) Intelligent Robotics Modeling of the world: I Object recognition and localization (e.g. landmarks) I 3D-reconstruction I Mapping of the environment I Position and orientation of the robot I relative I absolute I... in relation to various coordinate systems Jianwei Zhang / Eugen Richter 411

9 8 Vision systems University of Hamburg Vision systems in cognitive robotics (cont.) Intelligent Robotics Movements led by the vision system: I Visual servoing I Coarse and fine positioning I Tracking of movable objects I Swinging, juggling, balancing,... I Collision avoidance I Based on the principle of optical flow I 3D-based distance measurement I Coordination with other robots and/or humans I Intention recognition I Motion estimation Jianwei Zhang / Eugen Richter 412

10 8 Vision systems University of Hamburg Coordinate frames of a manipulation system Intelligent Robotics Jianwei Zhang / Eugen Richter 413

11 8.1 Vision systems - Transformations Intelligent Robotics Robot - Table - Camera Jianwei Zhang / Eugen Richter 414

12 8.1 Vision systems - Transformations Intelligent Robotics Transformations Jianwei Zhang / Eugen Richter 415

13 8.1 Vision systems - Transformations Intelligent Robotics Transformations (cont.) Points inside one coordinate frame can be transferred to another coordinate frame using transformations: Z: Transformation from world coordinates to manipulator base coordinates T_6: Complete kinematic transformation (6-DOF manipulator) from the base of the manipulator to the end of the manipulator E: Transformation from the end of the manipulator to the gripper B: Transformation from world coordinates to object coordinates G: Specification of the grip position and grip orientation in object coordinates Jianwei Zhang / Eugen Richter 416

14 8.1 Vision systems - Transformations Intelligent Robotics Transformations (cont.) If the robot grasps an object, the world coordinates of the grasp point can be determined in two ways, which results in: Z T_6 E = B G Jianwei Zhang / Eugen Richter 417

15 8.1 Vision systems - Transformations Intelligent Robotics Transformations (cont.) If the manipulator needs to be moved to the grasp position, the following equation needs to be solved: T_6 = Z^-1 B G E^-1 In order to determine the position of the object after the completion of the grasp operation, one calculates: B = Z T_6 E G^-1 Jianwei Zhang / Eugen Richter 418
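As a sanity check, the grasp equation and its rearrangements can be sketched with 4x4 homogeneous matrices; all rotation angles and translations below are arbitrary illustration values, not taken from the lecture:

```python
import numpy as np

def hom(R, t):
    """4x4 homogeneous transform from a rotation matrix R and translation t."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

def rot_z(a):
    """Rotation about the z-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Arbitrary example transforms (illustration values only)
Z = hom(rot_z(0.3), [1.0, 0.0, 0.5])    # world -> manipulator base
E = hom(rot_z(-0.1), [0.0, 0.0, 0.2])   # manipulator end -> gripper
B = hom(rot_z(0.7), [2.0, 1.0, 0.0])    # world -> object
G = hom(rot_z(0.2), [0.0, 0.1, 0.05])   # grip pose in object coordinates

inv = np.linalg.inv
T6 = inv(Z) @ B @ G @ inv(E)            # manipulator transform to reach the grasp

# The grasp equation Z T6 E = B G then holds by construction
assert np.allclose(Z @ T6 @ E, B @ G)
```

The object pose after the grasp follows the same pattern: B = Z @ T6 @ E @ inv(G).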

16 8.1 Vision systems - Transformations Intelligent Robotics Transformations (cont.) A camera included in the system yields two additional transformations: C: Transformation of camera coordinates into world coordinates (determination of the transformation off-line through camera calibration) I: Transformation of grasp point coordinates into the camera coordinate system (grasp point is determined using image processing techniques) Jianwei Zhang / Eugen Richter 419

17 8.1 Vision systems - Transformations Intelligent Robotics Transformations (cont.) The transformation P from the grasp point to the world coordinate system is: P = I C The camera-world transformation is determined through the following equation: C = I^-1 P Jianwei Zhang / Eugen Richter 420

18 8.2 Vision systems - Camera calibration Intelligent Robotics Camera calibration Camera calibration in the context of three-dimensional image processing is the determination of the intrinsic and/or extrinsic camera parameters Intrinsic parameters: internal geometrical structure and optical features of the camera Extrinsic parameters: three-dimensional position and orientation of the camera's coordinate system in relation to a world coordinate system Jianwei Zhang / Eugen Richter 421

19 8.2 Vision systems - Camera calibration Intelligent Robotics Camera calibration (cont.) What information does one get? In order to reconstruct 3D-objects from two or more images, it is necessary to know the relation between the coordinate systems of the 2D-image and the 3D-object The relation between 2D and 3D can be described using two transformations: Jianwei Zhang / Eugen Richter 422

20 8.2 Vision systems - Camera calibration Intelligent Robotics Camera calibration (cont.) 1. Perspective projection of a 3D-point onto a 2D-image point I Using the estimation of a 3D-object point and the corresponding error covariance matrix, the 2D-projection can be predicted 2. Backprojection of a 2D-point onto a 3D-ray I If a 2D-image point is given, there is a ray in 3D space on which the corresponding 3D object point lies I If there are two or more views of the 3D-point, its coordinates can be determined using triangulation Jianwei Zhang / Eugen Richter 423

21 8.2 Vision systems - Camera calibration Intelligent Robotics Camera calibration (cont.) I The first transformation is useful in order to reduce the search space during feature comparison or hypothesis verification in scene analysis I The second transformation is helpful for deriving 3D-information based on features in 2D-images I These transformations can be used in various application fields: I Automatic assembly I 3D-metrology I Robot calibration I Tracking I Trajectory analysis I Automatic vehicle guidance Jianwei Zhang / Eugen Richter 424

22 8.2 Vision systems - Camera calibration Intelligent Robotics Calibration techniques I Camera calibration can be done on-line or off-line I Using a calibration object: I Identification of the camera parameters I Direct creation of the coordinate transformation between camera coordinates and world coordinates I Using self-calibration approaches I Using machine learning methods Jianwei Zhang / Eugen Richter 425

23 8.2.1 Vision systems - Camera calibration - Pinhole camera model Intelligent Robotics Model of a camera without distortion Pinhole camera with and without radial lens distortion Jianwei Zhang / Eugen Richter 426

24 8.2.1 Vision systems - Camera calibration - Pinhole camera model Intelligent Robotics Model of a camera without distortion (cont.) I (x_w, y_w, z_w): 3D world coordinate system with the origin O_w I (x, y, z): 3D coordinate system of the camera with the origin O (optical center) I (X, Y): 2D image coordinate system with the origin O_1 I f: focal length of the camera Jianwei Zhang / Eugen Richter 427

25 8.2.1 Vision systems - Camera calibration - Pinhole camera model Intelligent Robotics Transformation from world to camera coordinates I Let P(x_w, y_w, z_w) be a point in the world coordinate system I Its projection onto the image plane can be determined as follows: [x, y, z]^T = R [x_w, y_w, z_w]^T + t with R = [r_1 r_2 r_3; r_4 r_5 r_6; r_7 r_8 r_9] and t = [t_x, t_y, t_z]^T I The parameters R and t are the extrinsic parameters Jianwei Zhang / Eugen Richter 428

26 8.2.1 Vision systems - Camera calibration - Pinhole camera model Intelligent Robotics Projection of camera coordinates onto image coordinates I Point P is projected onto the corresponding (ideal) image coordinate (u, v) I Perspective projection with focal length f: u = f x/z, v = f y/z I The image coordinate (X, Y) is calculated from (u, v) as follows: X = s_u u, Y = s_v v I The scaling factors s_u and s_v are used to convert the image coordinates from meters to pixels I s_u, s_v and f are the intrinsic camera parameters Jianwei Zhang / Eugen Richter 429
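A minimal sketch of this two-stage projection (perspective division, then pixel scaling); the focal length and scale factors are made-up example values:

```python
def project(p_cam, f, s_u, s_v):
    """Perspective projection of a camera-frame point onto pixel-scaled
    image coordinates (pinhole model, no distortion)."""
    x, y, z = p_cam
    u = f * x / z              # ideal image coordinates (meters)
    v = f * y / z
    return s_u * u, s_v * v    # (X, Y) in pixel units

# Example with invented intrinsics: f = 8 mm, 100000 pixels per meter
X, Y = project((0.2, -0.1, 2.0), f=0.008, s_u=100000, s_v=100000)
```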

27 8.2.1 Vision systems - Camera calibration - Pinhole camera model Intelligent Robotics Projection of world coordinates onto image coordinates I Since only two independent intrinsic parameters exist, one defines: f_x ≡ f s_u and f_y ≡ f s_v I These equations yield the distortion-free camera model: X_f = f_x (r_1 x_w + r_2 y_w + r_3 z_w + t_x)/(r_7 x_w + r_8 y_w + r_9 z_w + t_z), Y_f = f_y (r_4 x_w + r_5 y_w + r_6 z_w + t_y)/(r_7 x_w + r_8 y_w + r_9 z_w + t_z) Jianwei Zhang / Eugen Richter 430

28 8.2.1 Vision systems - Camera calibration - Pinhole camera model Intelligent Robotics Pixel coordinates I The coordinates (C_x, C_y) of the image center are subtracted from the image coordinates (X_f, Y_f) determined during perspective projection I Due to the above, one has: X = X_f − C_x, Y = Y_f − C_y I The uncertainty regarding the image center may reach several pixels Jianwei Zhang / Eugen Richter 431

29 8.2.2 Vision systems - Camera calibration - Basic concept of camera calibration Intelligent Robotics Main calibration parameters The pinhole camera model contains the following calibration parameters: I The three independent extrinsic parameters of R I The three independent extrinsic parameters of t I The intrinsic parameters f_x, f_y, C_x and C_y Jianwei Zhang / Eugen Richter 432

30 8.2.2 Vision systems - Camera calibration - Basic concept of camera calibration Intelligent Robotics Calibration points Calibration requires a set of m object points, which 1. have known world coordinates {x_w,i, y_w,i, z_w,i}, i = 1,...,m with sufficiently high precision 2. lie within the camera's field of view These calibration points are detected in the camera image with their respective image coordinates {X_i, Y_i} Jianwei Zhang / Eugen Richter 433

31 8.2.2 Vision systems - Camera calibration - Basic concept of camera calibration Intelligent Robotics Calibration I The main problem during camera calibration is the identification of the unknown parameters of the camera model I The determination of these parameters for the distortion-free camera model yields the position of the camera in world coordinates I The most basic strategy for camera calibration determines the associated coefficients using linear least-squares identification of the perspective transformation matrix Jianwei Zhang / Eugen Richter 434

32 8.2.2 Vision systems - Camera calibration - Basic concept of camera calibration Intelligent Robotics Distortion-free camera model The distortion-free camera model X = f_x (r_1 x_w + r_2 y_w + r_3 z_w + t_x)/(r_7 x_w + r_8 y_w + r_9 z_w + t_z), Y = f_y (r_4 x_w + r_5 y_w + r_6 z_w + t_y)/(r_7 x_w + r_8 y_w + r_9 z_w + t_z) can be rearranged to X = (a_11 x_w + a_12 y_w + a_13 z_w + a_14)/(a_31 x_w + a_32 y_w + a_33 z_w + a_34), Y = (a_21 x_w + a_22 y_w + a_23 z_w + a_24)/(a_31 x_w + a_32 y_w + a_33 z_w + a_34) Jianwei Zhang / Eugen Richter 435

33 8.2.2 Vision systems - Camera calibration - Basic concept of camera calibration Intelligent Robotics Perspective transformation matrix I We can set a_34 = 1, since scaling the coefficients a_11,...,a_34 does not change the values of X and Y I The coefficients a_11,...,a_34 form the so-called perspective transformation matrix I The previous two equations can be summarized in the following identification model: [x_w y_w z_w 1 0 0 0 0 −X x_w −X y_w −X z_w; 0 0 0 0 x_w y_w z_w 1 −Y x_w −Y y_w −Y z_w] [a_11 ... a_33]^T = [X, Y]^T Jianwei Zhang / Eugen Richter 436

34 8.2.2 Vision systems - Camera calibration - Basic concept of camera calibration Intelligent Robotics Least squares approach I The eleven unknown coefficients a_11,...,a_33 are determined using the least squares method I At least six calibration points are necessary I Each pair of data points {(x_w,i, y_w,i, z_w,i), (X_i, Y_i)} yields two algebraic equations in the wanted coefficients I It can be shown that the calibration points must not be coplanar I If they are coplanar, the first matrix in the identification model is singular, since the columns 3 and 4 as well as 7 and 8 are linearly dependent Jianwei Zhang / Eugen Richter 437
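The least squares identification can be sketched numerically, assuming NumPy is available; the ground-truth matrix A_true and the random calibration points are invented illustration values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth perspective transformation matrix (illustrative values),
# normalized so that a34 = 1
A_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.1, 0.2, 1.0, 5.0]])
A_true /= A_true[2, 3]

# Eight non-coplanar calibration points in world coordinates
P = rng.uniform(1.0, 3.0, size=(8, 3))
h = (A_true @ np.c_[P, np.ones(8)].T).T
X, Y = h[:, 0] / h[:, 2], h[:, 1] / h[:, 2]

# Identification model: two equations per point for the 11 unknowns a11..a33
rows, rhs = [], []
for (xw, yw, zw), Xi, Yi in zip(P, X, Y):
    rows.append([xw, yw, zw, 1, 0, 0, 0, 0, -Xi * xw, -Xi * yw, -Xi * zw])
    rows.append([0, 0, 0, 0, xw, yw, zw, 1, -Yi * xw, -Yi * yw, -Yi * zw])
    rhs += [Xi, Yi]
a, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)

# Reassemble the 3x4 matrix with a34 = 1; it matches the ground truth
A_est = np.append(a, 1.0).reshape(3, 4)
```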

35 8.2.2 Vision systems - Camera calibration - Basic concept of camera calibration Intelligent Robotics Problems I The presented solution is not globally optimal, since lens distortion has not been considered yet I It is not possible to determine the rotation matrix R and the translation vector t explicitly I This means that the presented calibration does not allow the use of a camera which is mounted to a moving robot arm I The creation of a precise 3D calibration setup is more complex than that of a 2D calibration object Jianwei Zhang / Eugen Richter 438

36 8.2.3 Vision systems - Camera calibration - Stereo vision Intelligent Robotics Stereo vision I Nevertheless, the previously presented calibration method allows a fast, although imprecise measurement of points with a stereo camera setup I For this purpose, two cameras A and B are calibrated and yield the calibration vectors a_A and a_B I Then, the coordinates {x_w, y_w, z_w} of each point which is captured by both cameras can be calculated I Each unknown point has the corresponding image coordinates {X_A, Y_A} and {X_B, Y_B} Jianwei Zhang / Eugen Richter 439

37 8.2.3 Vision systems - Camera calibration - Stereo vision Intelligent Robotics Stereo vision (cont.) Using the equation [a_11 − a_31 X, a_12 − a_32 X, a_13 − a_33 X; a_21 − a_31 Y, a_22 − a_32 Y, a_23 − a_33 Y] [x_w, y_w, z_w]^T = [X − a_14, Y − a_24]^T for each camera, an over-determined equation system is formed, which allows the determination of the 3D-coordinates of a point from the image coordinates Jianwei Zhang / Eugen Richter 440
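A sketch of this stereo triangulation, assuming NumPy is available; project_pt and triangulate are hypothetical helper names, and the camera matrices in the test usage are invented:

```python
import numpy as np

def project_pt(a, p):
    """Image coordinates of 3D point p under a 3x4 matrix a with a34 = 1."""
    h = a @ np.append(p, 1.0)
    return h[0] / h[2], h[1] / h[2]

def triangulate(aA, aB, XA, YA, XB, YB):
    """Least-squares 3D point from two calibrated views (matrices aA, aB).
    Each view contributes the two equations shown on the slide."""
    M, b = [], []
    for a, X, Y in ((aA, XA, YA), (aB, XB, YB)):
        M.append([a[0, 0] - a[2, 0] * X, a[0, 1] - a[2, 1] * X,
                  a[0, 2] - a[2, 2] * X])
        M.append([a[1, 0] - a[2, 0] * Y, a[1, 1] - a[2, 1] * Y,
                  a[1, 2] - a[2, 2] * Y])
        b += [X - a[0, 3], Y - a[1, 3]]
    p, *_ = np.linalg.lstsq(np.array(M), np.array(b), rcond=None)
    return p
```

With exact (noise-free) image coordinates, the four equations in three unknowns recover the 3D point exactly; with real measurements the least squares solution averages the error.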

38 8.2.4 Vision systems - Camera calibration - Camera model with lens distortion Intelligent Robotics Camera model with lens distortion I Real cameras and lenses produce a variety of imaging errors and do not satisfy the constraints of the pinhole camera model I The main error sources are: 1. Spatial resolution is quite low, since the resolution of the cameras is still low as well (e.g. 320x200 or 800x600 for typical DV cameras) 2. Most (cheap) lenses are asymmetrical and generate distortions 3. Assembly of the camera in a precise way is not possible (center of the CCD chip does not lie on the optical axis; chip is not parallel to the lens, etc.) 4. Timing errors between camera hardware and grabber hardware Jianwei Zhang / Eugen Richter 441

39 8.2.4 Vision systems - Camera calibration - Camera model with lens distortion Intelligent Robotics Distortion I Distortion by the lens system results in a changed position of the image pixels on the image plane I The pinhole camera model is no longer sufficient I It is replaced by the following model: u' = u + D_u(u, v), v' = v + D_v(u, v) where u and v are the non-observable, distortion-free image coordinates, and u' and v' the corresponding distorted coordinates Jianwei Zhang / Eugen Richter 442

40 8.2.4 Vision systems - Camera calibration - Camera model with lens distortion Intelligent Robotics Distortion (cont.) Jianwei Zhang / Eugen Richter 443

41 8.2.4 Vision systems - Camera calibration - Camera model with lens distortion Intelligent Robotics Distortion (cont.) Jianwei Zhang / Eugen Richter 444

42 8.2.4 Vision systems - Camera calibration - Camera model with lens distortion Intelligent Robotics Types of distortion I There are two types of distortion: I radial I tangential I Radial distortion causes an offset of the ideal position inwards (barrel distortion) or outwards (pincushion distortion) I Cause: imperfect radial curvature of the lens Jianwei Zhang / Eugen Richter 445

43 8.2.4 Vision systems - Camera calibration - Camera model with lens distortion Intelligent Robotics Radial distortion Straight lines → no distortion Jianwei Zhang / Eugen Richter 446

44 8.2.4 Vision systems - Camera calibration - Camera model with lens distortion Intelligent Robotics Tangential distortion Straight lines → no distortion Jianwei Zhang / Eugen Richter 447

45 8.2.4 Vision systems - Camera calibration - Camera model with lens distortion Intelligent Robotics Modeling of the lens distortion I According to Weng et al. (1992), three kinds of distortion are distinguished: 1. Radial distortion 2. Decentering distortion 3. Thin prism distortion I Decentering distortion and thin prism distortion have both radial and tangential components I In the case of decentering distortion, the optical centers of the lenses are not collinear Jianwei Zhang / Eugen Richter 448

46 8.2.4 Vision systems - Camera calibration - Camera model with lens distortion Intelligent Robotics Model: Radial distortion Radial distortion: D_ur = k u (u^2 + v^2) + O[(u, v)^5], D_vr = k v (u^2 + v^2) + O[(u, v)^5] Jianwei Zhang / Eugen Richter 449

47 8.2.4 Vision systems - Camera calibration - Camera model with lens distortion Intelligent Robotics Simplified model Since radial lens distortion is the dominating effect, the following equation system can be used as a simplified camera model: Simplified camera model with distortion: u' = u(1 + k_0 r'^2), v' = v(1 + k_0 r'^2) with r'^2 = u^2 + v^2 Jianwei Zhang / Eugen Richter 450
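The forward model is trivial to apply; inverting it has no closed form, but a fixed-point iteration converges when k_0 r'^2 << 1 (the coefficient and coordinates below are invented example values):

```python
def distort(u, v, k0):
    """Ideal (u, v) -> distorted (u', v') under the simplified radial model
    u' = u (1 + k0 r'^2), v' = v (1 + k0 r'^2)."""
    s = 1 + k0 * (u * u + v * v)
    return u * s, v * s

def undistort(ud, vd, k0, iters=30):
    """Invert the simplified model by fixed-point iteration
    (valid for small k0 * r'^2)."""
    u, v = ud, vd
    for _ in range(iters):
        s = 1 + k0 * (u * u + v * v)   # re-estimate the distortion factor
        u, v = ud / s, vd / s
    return u, v
```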

48 8.2.4 Vision systems - Camera calibration - Camera model with lens distortion Intelligent Robotics Radial distortion coefficient Since u and v are unknown, they are replaced by the measurable image coordinates X and Y, and one has r'^2 = (X/s_u)^2 + (Y/s_v)^2 Defining k ≡ k_0/s_v^2, the radial distortion coefficient, one has μ ≡ f_y/f_x = s_v/s_u and r^2 ≡ μ^2 X^2 + Y^2 Jianwei Zhang / Eugen Richter 451

49 8.2.4 Vision systems - Camera calibration - Camera model with lens distortion Intelligent Robotics Model for small radial distortions With the previously mentioned modifications, one gets the following camera model for small radial distortions: X(1 + k r^2) = f_x (r_1 x_w + r_2 y_w + r_3 z_w + t_x)/(r_7 x_w + r_8 y_w + r_9 z_w + t_z), Y(1 + k r^2) = f_y (r_4 x_w + r_5 y_w + r_6 z_w + t_y)/(r_7 x_w + r_8 y_w + r_9 z_w + t_z) Jianwei Zhang / Eugen Richter 452

50 8.2.4 Vision systems - Camera calibration - Camera model with lens distortion Intelligent Robotics Variation A useful trick for the least squares method is the usage of the following variation of the previous model: X/(1 + k r^2) = f_x (r_1 x_w + r_2 y_w + r_3 z_w + t_x)/(r_7 x_w + r_8 y_w + r_9 z_w + t_z), Y/(1 + k r^2) = f_y (r_4 x_w + r_5 y_w + r_6 z_w + t_y)/(r_7 x_w + r_8 y_w + r_9 z_w + t_z) which applies under the assumption that k r^2 << 1 Jianwei Zhang / Eugen Richter 453

51 8.2.4 Vision systems - Camera calibration - Camera model with lens distortion Intelligent Robotics Radial alignment constraint If radial distortion is the only distortion that occurs, one gets the radial alignment constraint (RAC): X/Y = μ^-1 (r_1 x_w + r_2 y_w + t_x)/(r_4 x_w + r_5 y_w + t_y) respectively, with X_d = X/f_x and Y_d = Y/f_y: X_d : Y_d = x : y Jianwei Zhang / Eugen Richter 454

52 8.2.5 Vision systems - Camera calibration - Camera calibration according to Tsai Intelligent Robotics Tsai's RAC-based camera calibration I Assumption: C_x, C_y and μ are known I The extrinsic parameters R and t and the intrinsic parameters f_x, f_y and k are to be determined I For calibration, a number of coplanar calibration points is used I Calibration consists of two steps: 1. Determination of the rotation matrix R and the components t_x and t_y of the translation vector 2. Estimation of the other parameters based on the results of the first step Jianwei Zhang / Eugen Richter 455

53 8.2.5 Vision systems - Camera calibration - Camera calibration according to Tsai Intelligent Robotics Camera calibration according to Tsai: Step 1 1. Calculation of the image coordinates (X_i, Y_i) Let N be the number of image points; then, for i = 1, 2,...,N one has X_i = X_f,i − C_x, Y_i = Y_f,i − C_y where X_f,i and Y_f,i are the pixel values in the computer Jianwei Zhang / Eugen Richter 456

54 8.2.5 Vision systems - Camera calibration - Camera calibration according to Tsai Intelligent Robotics Camera calibration according to Tsai: Step 1 (cont.) 2. Determination of the intermediate parameters {v_1, v_2, v_3, v_4, v_5} I Since the RAC is independent of k and f, R, t_x and t_y can be calculated first I For that we define {v_1, v_2, v_3, v_4, v_5} ≡ {r_1/t_y, r_2/t_y, t_x/t_y, r_4/t_y, r_5/t_y} Jianwei Zhang / Eugen Richter 457

55 8.2.5 Vision systems - Camera calibration - Camera calibration according to Tsai Intelligent Robotics Camera calibration according to Tsai: Step 1 (cont.) I If one divides both sides of the RAC equation by t_y for the i-th calibration point and rearranges the resulting expression, one has [x_w,i Y_i, y_w,i Y_i, Y_i, −μ X_i x_w,i, −μ X_i y_w,i] [v_1, v_2, v_3, v_4, v_5]^T = μ X_i where x_w,i and y_w,i are the x- and y-coordinates of the i-th calibration point Jianwei Zhang / Eugen Richter 458
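This linear system can be checked numerically: with invented extrinsics and a planar target, solving the rearranged RAC equations recovers {v_1,...,v_5}. A sketch assuming NumPy; all numeric values are illustration values only:

```python
import numpy as np

# Synthetic ground truth (invented illustration numbers)
th = 0.4
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.3, 0.5, 4.0])               # t_y = 0.5
fx, fy = 800.0, 820.0
mu = fy / fx

rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(8, 2))   # coplanar points, z_w = 0

# One RAC equation per calibration point, unknowns v_1..v_5
M, b = [], []
for xw, yw in pts:
    x, y, z = R @ np.array([xw, yw, 0.0]) + t   # camera coordinates
    X, Y = fx * x / z, fy * y / z               # distortion-free image coords
    M.append([xw * Y, yw * Y, Y, -mu * X * xw, -mu * X * yw])
    b.append(mu * X)
v, *_ = np.linalg.lstsq(np.array(M), np.array(b), rcond=None)

# v recovers {r_1, r_2, t_x, r_4, r_5} divided by t_y
expected = np.array([R[0, 0], R[0, 1], t[0], R[1, 0], R[1, 1]]) / t[1]
assert np.allclose(v, expected)
```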

56 8.2.5 Vision systems - Camera calibration - Camera calibration according to Tsai Intelligent Robotics Camera calibration according to Tsai: Step 1 (cont.) I The minimum number of necessary non-collinear calibration points is N = 5 I In practice, an appropriate choice would be N > 5 I Note: If t_y = 0, the above equation can also be formulated as a function of t_x I If t_x = t_y = 0 is obtained, the chosen camera setup needs to be changed appropriately Jianwei Zhang / Eugen Richter 459

57 8.2.5 Vision systems - Camera calibration - Camera calibration according to Tsai Intelligent Robotics Camera calibration according to Tsai: Step 1 (cont.) 3. Calculation of R, t_x and t_y I Define C ≡ [v_1 v_2; v_4 v_5] I If no row or column of C equals zero, one has: t_y^2 = (S_r − (S_r^2 − 4(v_1 v_5 − v_4 v_2)^2)^1/2) / (2(v_1 v_5 − v_4 v_2)^2) with S_r ≡ v_1^2 + v_2^2 + v_4^2 + v_5^2 I Otherwise one has: t_y^2 = (v_i^2 + v_j^2)^-1, where v_i and v_j are the elements of C which are non-zero Jianwei Zhang / Eugen Richter 460

58 8.2.5 Vision systems - Camera calibration - Camera calibration according to Tsai Intelligent Robotics Camera calibration according to Tsai: Step 1 (cont.) I Physically, the algebraic signs of x and X as well as y and Y should be equal I This property is used to determine the algebraic sign of t_y I Assuming t_y > 0, the following components can be calculated: r_1 = v_1 t_y, r_2 = v_2 t_y, r_4 = v_4 t_y, r_5 = v_5 t_y, t_x = v_3 t_y Jianwei Zhang / Eugen Richter 461

59 8.2.5 Vision systems - Camera calibration - Camera calibration according to Tsai Intelligent Robotics Camera calibration according to Tsai: Step 1 (cont.) I Using an arbitrary calibration point, the following coordinates can be determined: x = r_1 x_w + r_2 y_w + t_x, y = r_4 x_w + r_5 y_w + t_y I If sign(x) = sign(X) and sign(y) = sign(Y) apply, then the assumption sign(t_y) = 1 is true and we keep r_1, r_2, r_4, r_5 and t_x I Otherwise, we set sign(t_y) = −1 and change the algebraic signs of r_1, r_2, r_4, r_5 and t_x accordingly Jianwei Zhang / Eugen Richter 462

60 8.2.5 Vision systems - Camera calibration - Camera calibration according to Tsai Intelligent Robotics Camera calibration according to Tsai: Step 1 (cont.) I There are two possible solutions for the rotation matrix R, if a 2×2 submatrix is known I These solutions are due to f_x having a positive and a negative sign I R can be calculated as follows: r_3 = ±(1 − r_1^2 − r_2^2)^1/2, r_6 = ∓sign(r_1 r_4 + r_2 r_5)(1 − r_4^2 − r_5^2)^1/2, [r_7 r_8 r_9]^T = [r_1 r_2 r_3]^T × [r_4 r_5 r_6]^T (cross product) I One of the two solutions leads to a positive f_x in Step 2 of the calibration method Jianwei Zhang / Eugen Richter 463
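A sketch of this completion step (complete_rotation is a hypothetical helper name; the parameter s selects one of the two sign solutions, and the test rotation is an arbitrary example):

```python
import numpy as np

def complete_rotation(r1, r2, r4, r5, s=1):
    """Complete a rotation matrix from its upper-left 2x2 block.
    s = +1 or -1 selects one of the two possible solutions for r3/r6;
    the third row follows as the cross product of the first two rows."""
    row1 = np.array([r1, r2, s * np.sqrt(1 - r1**2 - r2**2)])
    row2 = np.array([r4, r5,
                     -s * np.sign(r1 * r4 + r2 * r5)
                        * np.sqrt(1 - r4**2 - r5**2)])
    return np.vstack([row1, row2, np.cross(row1, row2)])
```

Feeding in the upper-left block of a genuine rotation matrix reproduces it exactly when s matches the sign of r_3.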

61 8.2.5 Vision systems - Camera calibration - Camera calibration according to Tsai Intelligent Robotics Camera calibration according to Tsai: Step 1 (cont.) Note: I The resulting matrix R might not be orthonormal I Thus, orthonormalisation steps, which are not explained in more detail here, are additionally needed Jianwei Zhang / Eugen Richter 464

62 8.2.5 Vision systems - Camera calibration - Camera calibration according to Tsai Intelligent Robotics Camera calibration according to Tsai: Step 2 Determination of the parameters t_z, k, f_x and f_y I If R, t_x and t_y are known, the remaining parameters for the i-th calibration point can be determined using the following equation: [−X_i, x_i, −x_i r_i^2] [t_z, f_x, k f_x]^T = X_i w_i with x_i ≡ r_1 x_w,i + r_2 y_w,i + t_x and w_i ≡ r_7 x_w,i + r_8 y_w,i Jianwei Zhang / Eugen Richter 465

63 8.2.5 Vision systems - Camera calibration - Camera calibration according to Tsai Intelligent Robotics Camera calibration according to Tsai: Step 2 (cont.) I Whenever more than three calibration points are used, an over-determined equation system is the result I The solution using the least-squares procedure yields the parameters k, t_z and f_x I Using f_x, the other parameters can be calculated: f_y = f_x μ, k = (k f_x)/f_x Jianwei Zhang / Eugen Richter 466

64 8.2.5 Vision systems - Camera calibration - Camera calibration according to Tsai Intelligent Robotics 3D calibration setup Typical 3D calibration setup Jianwei Zhang / Eugen Richter 467

65 8.2.6 Vision systems - Camera calibration - Fast RAC-based calibration Intelligent Robotics Fast RAC-based calibration I If only calibration points on the x- and y-axis of the world coordinate system are used in the first step of the Tsai algorithm, the RAC equation is simplified I Typically, the middle row and middle column of a calibration plate define the x_w- and y_w-axis I In the Tsai algorithm, the linear least-squares procedure is applied to five variables in step one, and to three in step two I Using the above simplification, the linear least-squares procedure needs to be applied three times for two variables each I Since there is a closed-form solution for this, the calculation time needed for calibration is reduced significantly Jianwei Zhang / Eugen Richter 468

66 8.2.6 Vision systems - Camera calibration - Fast RAC-based calibration Intelligent Robotics Fast RAC-based calibration (cont.) I Requirement for the fast version of the Tsai algorithm is that μ, C_x and C_y are known in advance I As with Tsai's calibration, there are two necessary steps: 1. Usage of calibration points on the x_w- and y_w-axis and a simplified RAC equation, to determine R, t_x and t_y 2. Determination of the other parameters using all visible calibration points Jianwei Zhang / Eugen Richter 469

67 8.2.6 Vision systems - Camera calibration - Fast RAC-based calibration Intelligent Robotics Fast RAC-based calibration (cont.) Calibration points for the first phase of the fast RAC-based calibration Jianwei Zhang / Eugen Richter 470

68 8.2.6 Vision systems - Camera calibration - Fast RAC-based calibration Intelligent Robotics Fast RAC-based calibration (cont.) Typical calibration plate Jianwei Zhang / Eugen Richter 471

69 8.2.6 Vision systems - Camera calibration - Fast RAC-based calibration Intelligent Robotics Fast RAC-based calibration (cont.) Using a calibrated camera, the image can be rectified Jianwei Zhang / Eugen Richter 472

70 8.3.1 Vision systems - Applications - Determination of a pointing direction Intelligent Robotics Determination of a pointing direction: Motivation Motivation: I The recognition of hand gestures can be used in the field of human-computer interaction I Applications in the field of virtual reality, multimedia, robot instruction or teleoperation Solutions: I Sensors on the hand (e.g. data glove) I Stereo vision (calibrated/uncalibrated) Jianwei Zhang / Eugen Richter 473

71 8.3.1 Vision systems - Applications - Determination of a pointing direction Intelligent Robotics Determination of a pointing direction: Stereo vision Basic stereo setup with parallel optical axes Jianwei Zhang / Eugen Richter 474

72 8.3.1 Vision systems - Applications - Determination of a pointing direction Intelligent Robotics Determination of a pointing direction: Epipolar lines The point corresponding to a point from Image 1 can be found on the corresponding epipolar line in Image 2 Jianwei Zhang / Eugen Richter 475

73 8.3.1 Vision systems - Applications - Determination of a pointing direction Intelligent Robotics Determination of a pointing direction: Epipolar lines In the case of parallel optical axes, the epipolar lines are horizontal lines Jianwei Zhang / Eugen Richter 476

74 8.3.1 Vision systems - Applications - Determination of a pointing direction Intelligent Robotics Uncalibrated stereo vision I Cipolla et al. (1994) present an uncalibrated stereo system for the recognition of pointing gestures I Assumption: pinhole camera model with a view on a plane I The relation between the plane coordinate system (X, Y) and the image coordinate system (u, v) is: [su, sv, s]^T = T [X, Y, 1]^T where T is a homogeneous 3×3 matrix with t_33 = 1 Jianwei Zhang / Eugen Richter 477

75 8.3.1 Vision systems - Applications - Determination of a pointing direction Intelligent Robotics Uncalibrated stereo vision (cont.) I In order to determine T, at least four points need to be observed I One defines the corners of the working plane as (0, 0), (0, 1), (1, 0) and (1, 1) I For both cameras, the transformations T and T' are determined Jianwei Zhang / Eugen Richter 478
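The estimation of T from the four observed corners can be sketched as a small linear system, assuming NumPy; plane_homography and all numeric values are invented for illustration:

```python
import numpy as np

def plane_homography(plane_pts, image_pts):
    """Estimate the 3x3 matrix T (with t33 = 1) that maps plane coordinates
    (X, Y) to image coordinates (u, v), from >= 4 point correspondences."""
    M, b = [], []
    for (X, Y), (u, v) in zip(plane_pts, image_pts):
        # u (t31 X + t32 Y + 1) = t11 X + t12 Y + t13, and likewise for v
        M.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y])
        M.append([0, 0, 0, X, Y, 1, -v * X, -v * Y])
        b += [u, v]
    h, *_ = np.linalg.lstsq(np.array(M), np.array(b), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)
```

With exactly the four working-plane corners, the system has eight equations for the eight unknowns t_11,...,t_32 and a unique solution.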

76 8.3.1 Vision systems - Applications - Determination of a pointing direction Intelligent Robotics Determination of the pointing spot Notation: l_w: longitudinal axis of the pointer in the world l_i: projection of l_w onto the image plane l_gp: projection of l_w onto plane G Procedure: I Using the image of the second camera, one gets a projection l'_gp; its intersection point with l_gp is the pointing spot I l_i is the image of l_gp, in other words l_i = T l_gp I As a consequence, l_gp = T^-1 l_i and l'_gp = T'^-1 l'_i Jianwei Zhang / Eugen Richter 479

77 8.3.1 Vision systems - Applications - Determination of a pointing direction Intelligent Robotics Determination of the pointing spot (cont.) Jianwei Zhang / Eugen Richter 480

78 8.3.1 Vision systems - Applications - Determination of a pointing direction Intelligent Robotics Determination of the pointing spot (cont.) Jianwei Zhang / Eugen Richter 481

79 8.3.2 Vision systems - Applications - Hand camera calibration Intelligent Robotics Hand camera calibration Camera-, gripper- and world-coordinate-system Jianwei Zhang / Eugen Richter 482

80 8.3.2 Vision systems - Applications - Hand camera calibration Intelligent Robotics Hand camera calibration (cont.) Task: Determination of the fixed spatial relation between the camera coordinate system (C) and the gripper coordinate system (G), represented by the homogeneous transformation C H G Idea: Direct determination of C H G through model-based localisation of visible gripper features Jianwei Zhang / Eugen Richter 483

81 8.3.2 Vision systems - Applications - Hand camera calibration

Hand camera calibration (cont.)

Solution:
- Positioning of the gripper on a planar calibration object with several measuring points
- Result: gripper and world coordinate systems coincide in the plane
- Plane coincidence allows decomposition of the problem: C H G = C H W · W H G

82 8.3.2 Vision systems - Applications - Hand camera calibration

Hand camera calibration (cont.)

Approach:
1. Determination of intrinsic and extrinsic camera parameters using the calibration object ⇒ C H W
2. Determination of the parameters of a 2D transformation W H G using visible gripper features
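The decomposition C H G = C H W · W H G is just a product of 4×4 homogeneous transforms. The sketch below chains two such transforms; the rotation and translation values stand in for calibration results and are illustrative only.

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

# Hypothetical result of step 1: camera pose relative to the world
c_H_w = hom(np.eye(3), [0.0, 0.0, 0.5])

# Hypothetical result of step 2: planar (z-axis) rotation plus in-plane offset
theta = np.deg2rad(30)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
w_H_g = hom(Rz, [0.1, 0.2, 0.0])

# The fixed camera-gripper relation as the product of the two calibrations
c_H_g = c_H_w @ w_H_g
```

Since the camera pose here is a pure translation, the resulting gripper position in camera coordinates is simply the in-plane offset shifted by 0.5 along z.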

83 8.3.2 Vision systems - Applications - Hand camera calibration

Hand camera calibration (cont.)

Advantages of "self-visibility":
- "Self-visibility" allows calibration of the configuration without test movements of the manipulator, as opposed to classical procedures
- Two dot-shaped gripper features are sufficient for the determination of C H G; solution in closed form
- Online surveillance of the relative position between gripper and object
- Higher accuracy of offline calibration due to exclusion of kinematic errors
- Higher level of robustness due to the possibility of online calibration

84 8.3.3 Vision systems - Applications - Visually controlled grasping

Visually controlled grasping

Task: Two-dimensional fine positioning of a robot hand or gripper relative to the object that is to be grasped

Procedure:
1. Offline specification of the target position (e.g. object features from stereo image processing)
2. Online transformation of the current difference relative to the target position (e.g. with a hand camera)
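The online step can be sketched as a proportional update driven by the feature error seen in the hand camera. This is a deliberately simplified illustration of the idea, not the exact control law used on the slides; the gain value is arbitrary.

```python
import numpy as np

def servo_step(current_feats, target_feats, gain=0.5):
    """One iteration of 2D fine positioning: return a pose correction
    proportional to the feature error measured by the hand camera."""
    error = target_feats - current_feats   # difference to the target position
    return gain * error                    # correction applied to the gripper

# Hypothetical feature positions (pixels): current vs. offline-specified target
delta = servo_step(np.array([0.0, 0.0]), np.array([2.0, 4.0]))
```

Iterating this update shrinks the error geometrically (by the factor 1 − gain per step), which is why fine positioning converges to the grasp pose.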

85 8.3.3 Vision systems - Applications - Visually controlled grasping

Visually controlled grasping (cont.) [figure]

86 8.3.3 Vision systems - Applications - Visually controlled grasping

Visually controlled grasping (cont.) [figure]


More information

Announcements. Stereo

Announcements. Stereo Announcements Stereo Homework 1 is due today, 11:59 PM Homework 2 will be assigned on Thursday Reading: Chapter 7: Stereopsis CSE 252A Lecture 8 Binocular Stereopsis: Mars Given two images of a scene where

More information

A Novel Stereo Camera System by a Biprism

A Novel Stereo Camera System by a Biprism 528 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 16, NO. 5, OCTOBER 2000 A Novel Stereo Camera System by a Biprism DooHyun Lee and InSo Kweon, Member, IEEE Abstract In this paper, we propose a novel

More information

Pin Hole Cameras & Warp Functions

Pin Hole Cameras & Warp Functions Pin Hole Cameras & Warp Functions Instructor - Simon Lucey 16-423 - Designing Computer Vision Apps Today Pinhole Camera. Homogenous Coordinates. Planar Warp Functions. Example of SLAM for AR Taken from:

More information

Rectification and Distortion Correction

Rectification and Distortion Correction Rectification and Distortion Correction Hagen Spies March 12, 2003 Computer Vision Laboratory Department of Electrical Engineering Linköping University, Sweden Contents Distortion Correction Rectification

More information

Single View Geometry. Camera model & Orientation + Position estimation. What am I?

Single View Geometry. Camera model & Orientation + Position estimation. What am I? Single View Geometry Camera model & Orientation + Position estimation What am I? Vanishing point Mapping from 3D to 2D Point & Line Goal: Point Homogeneous coordinates represent coordinates in 2 dimensions

More information

Understanding Variability

Understanding Variability Understanding Variability Why so different? Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic aberration, radial distortion

More information

Multiple View Geometry

Multiple View Geometry Multiple View Geometry Martin Quinn with a lot of slides stolen from Steve Seitz and Jianbo Shi 15-463: Computational Photography Alexei Efros, CMU, Fall 2007 Our Goal The Plenoptic Function P(θ,φ,λ,t,V

More information

CSE 252B: Computer Vision II

CSE 252B: Computer Vision II CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribe: Sameer Agarwal LECTURE 1 Image Formation 1.1. The geometry of image formation We begin by considering the process of image formation when a

More information

Calibration of a Different Field-of-view Stereo Camera System using an Embedded Checkerboard Pattern

Calibration of a Different Field-of-view Stereo Camera System using an Embedded Checkerboard Pattern Calibration of a Different Field-of-view Stereo Camera System using an Embedded Checkerboard Pattern Pathum Rathnayaka, Seung-Hae Baek and Soon-Yong Park School of Computer Science and Engineering, Kyungpook

More information