Hartley - Zisserman reading club. Part I: Hartley and Zisserman, Appendix 6: Iterative estimation methods. Part II: Zhengyou Zhang, A Flexible New Technique for Camera Calibration. Presented by Daniel Fontijne


1 Hartley - Zisserman reading club. Part I: Hartley and Zisserman, Appendix 6: Iterative estimation methods. Part II: Zhengyou Zhang, A Flexible New Technique for Camera Calibration. Presented by Daniel Fontijne. p.1/35

2 HZ Appendix 6: Iterative estimation methods Topics: Basic methods: Newton, Gauss-Newton, gradient descent. Levenberg-Marquardt. Sparse Levenberg-Marquardt. Applications to homography, fundamental matrix and bundle adjustment. Sparse methods for equation solving. Robust cost functions. Parameterization. Lecture notes which I found useful (methods for non-linear least squares problems): p.2/35

3-6 Iterative estimation methods Problem: how do we find the minimum of a non-linear function? Examples of HZ problems: homography estimation, fundamental matrix estimation, multiple-image bundle adjustment, camera calibration (the Zhang paper). Examples of my recent problems: optimization of skeleton geometry given marker data, optimization of skeleton pose given marker data. Central approach of Appendix 6: Levenberg-Marquardt. Questions: Pronunciation? Why LM? p.3/35

7-11 Newton iteration Goal: solve X = f(P) for P, where X is the measurement vector, P is the parameter vector, and f is some non-linear function. In other words: minimize the error ε = X - f(P). We assume f is locally linear around each estimate P_i, so f(P_i + Δ_i) = f(P_i) + J_i Δ_i, where the matrix J_i is the Jacobian ∂f/∂P evaluated at P_i. So we want to find the update Δ_i that minimizes ||ε_i - J_i Δ_i||. Find Δ_i either from the normal equations J_i^T J_i Δ_i = J_i^T ε_i or using the pseudo-inverse, Δ_i = J_i^+ ε_i. Iterate until convergence... p.4/35
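A minimal numpy sketch of this iteration, assuming hypothetical user-supplied callables f(p) (the model) and jac(p) (its Jacobian), neither of which appears in the slides; it simply solves the normal equations at every step:

```python
import numpy as np

def gauss_newton(f, jac, X, p0, n_iter=20, tol=1e-10):
    """Minimize ||X - f(p)|| by repeated linearization (Newton/Gauss-Newton style).

    f   : callable, parameter vector p -> predicted measurement vector
    jac : callable, p -> Jacobian J = df/dp
    X   : observed measurement vector
    p0  : initial parameter estimate
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        eps = X - f(p)                              # current error vector
        J = jac(p)
        # Normal equations: J^T J delta = J^T eps
        delta = np.linalg.solve(J.T @ J, J.T @ eps)
        # (equivalently: delta = np.linalg.pinv(J) @ eps)
        p = p + delta
        if np.linalg.norm(delta) < tol:             # stop once the update is tiny
            break
    return p
```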

12-16 Gauss-Newton method Suppose we want to minimize a cost function g(P) = (1/2) ||ε(P)||^2 = (1/2) ε(P)^T ε(P). We may expand it in a Taylor series up to second order: g(P + Δ) = g + g_P Δ + Δ^T g_PP Δ / 2, where the subscript P denotes differentiation. Differentiating with respect to Δ and setting the result to zero gives g_PP Δ = -g_P. Using this equation we could compute Δ if we knew g_PP and g_P. Gradient vector: g_P = ε_P^T ε = -J^T ε (since ε_P = -J for ε = X - f(P)). Intuition? Hessian: g_PP = ε_P^T ε_P + ε_PP^T ε ≈ J^T J. Assume f is linear again, so the second-derivative term drops... Putting it all together we get J^T J Δ = J^T ε. So we arrive at the normal equations again. (So what was the point?) p.5/35

17-19 Gradient descent Gradient descent (or steepest descent) searches in the direction of most rapid decrease, i.e. along -g_P = -ε_P^T ε. So we take steps λΔ = -g_P, where λ controls the step length and is found through line search. A problem is zig-zagging, which can cause slow convergence. p.6/35

20-22 Levenberg-Marquardt Levenberg-Marquardt is a blend of Gauss-Newton and gradient descent. Update equation: (J^T J + λI) Δ = J^T ε. Algorithm: Initially set λ to a small value (typically 10^-3). Try the update equation. If the cost improves: accept the step and divide λ by 10, i.e. shift towards Gauss-Newton. Else: reject the step and multiply λ by 10, i.e. shift towards gradient descent. The idea is (?): take big gradient-descent-like steps far away from the minimum, and take Gauss-Newton steps near the (hopefully quadratic) minimum. p.7/35
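A sketch of that λ schedule under the same assumptions as before (hypothetical f and jac callables); real implementations add stopping criteria and safeguards that are omitted here:

```python
import numpy as np

def levenberg_marquardt(f, jac, X, p0, lam=1e-3, n_iter=50):
    """Blend Gauss-Newton and gradient descent via the damping term lam * I."""
    p = np.asarray(p0, dtype=float)
    cost = lambda q: np.sum((X - f(q)) ** 2)
    for _ in range(n_iter):
        eps = X - f(p)
        J = jac(p)
        # Augmented normal equations: (J^T J + lam*I) delta = J^T eps
        delta = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ eps)
        if cost(p + delta) < cost(p):
            p = p + delta        # improvement: accept, move towards Gauss-Newton
            lam /= 10.0
        else:
            lam *= 10.0          # no improvement: reject, move towards gradient descent
    return p
```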

23-25 Sparse Levenberg-Marquardt 1/2 In many estimation problems the Jacobian is sparse. One should exploit this to lower the time complexity (sometimes even from O(n^3) to O(n)). In the example, the parameters are partitioned into two blocks, P = (a^T, b^T)^T. The Jacobian then has the form J = [A B], with A = [∂X̂/∂a] and B = [∂X̂/∂b]. Using A and B, the normal equations (J^T J) Δ = J^T ε take on the form [A^T A, A^T B; B^T A, B^T B] (δ_a; δ_b) = (A^T ε; B^T ε). p.8/35

26 Sparse Levenberg-Marquardt 2/2 If the normal equations are written in block form as (what's with the ∗?) [U, W; W^T, V] (δ_a; δ_b) = (ε_A; ε_B), we can rewrite this to [U - W V^-1 W^T, 0; W^T, V] (δ_a; δ_b) = (ε_A - W V^-1 ε_B; ε_B) by multiplying on the left by [I, -W V^-1; 0, I]. Now first solve the top half for δ_a, then the lower half for δ_b using back-substitution. p.9/35
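A small sketch of that elimination step, assuming the blocks U, V, W and the right-hand sides ε_A, ε_B have already been assembled from A and B as above (these inputs are assumptions, not code from the slides):

```python
import numpy as np

def solve_partitioned(U, V, W, eps_a, eps_b):
    """Solve [U W; W^T V] [da; db] = [eps_a; eps_b] via the Schur complement of V.

    The point of the partitioning: only V (block-diagonal in bundle adjustment,
    hence cheap to invert) and the small reduced system for da are solved,
    instead of the full normal equations.
    """
    V_inv = np.linalg.inv(V)
    S = U - W @ V_inv @ W.T                               # Schur complement
    da = np.linalg.solve(S, eps_a - W @ V_inv @ eps_b)    # top half
    db = V_inv @ (eps_b - W.T @ da)                       # lower half, back-substitution
    return da, db
```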

27 Robust cost functions 1/5 p.10/35

28-34 Robust cost functions 2/5 Squared error is not usable unless outliers are filtered out. Alternatives: Blake-Zisserman: outliers are given a constant cost; disadvantages: not a PDF, not convex. Corrupted Gaussian: a blend of two Gaussians, one for inliers and one for outliers; disadvantage: not convex. Cauchy: (?); disadvantage: not convex. L1: absolute error (not squared); disadvantages: not differentiable at 0, and the minimum is not at a single point when summed. Huber cost function: like L1, but rounded near zero; disadvantage: discontinuous derivatives from 2nd order and up. Pseudo-Huber: like Huber, but with continuous derivatives of all orders. p.11/35

35 Robust cost functions 3/5 Figure A.6.5 p.12/35

36 Robust cost functions 4/5 Figure A.6.6 p.13/35

37-39 Robust cost functions 5/5 Summary: the squared-error cost function is very susceptible to outliers. The non-convex functions (such as the Blake-Zisserman, corrupted Gaussian and Cauchy costs) may be good, but they have local minima, so do not use them unless you are already close to the true minimum. Best: Huber and Pseudo-Huber. p.14/35

40-42 Fooling LM implementations Most implementations of Levenberg-Marquardt use the squared-error cost function. What if you want a different cost function C instead? Replace each residual δ_i with a weighted version δ'_i = w_i δ_i such that ||δ'_i||^2 = w_i^2 ||δ_i||^2 = C(||δ_i||). Thus w_i = sqrt(C(||δ_i||)) / ||δ_i||. (Confusion: δ_i is a vector here; why not a scalar?) p.15/35
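A sketch of that weighting trick for the Huber cost, assuming the residual blocks δ_i are rows of an array (the threshold k and the function names are my own choices); the reweighted residuals can then be handed to any squared-error LM implementation:

```python
import numpy as np

def huber_weights(residuals, k=1.345):
    """Per-residual weights w_i with ||w_i * d_i||^2 = C_huber(||d_i||).

    residuals : (n, d) array, one residual vector d_i per row
    k         : threshold between the quadratic and the linear regime
    """
    norms = np.linalg.norm(residuals, axis=1)
    norms = np.maximum(norms, 1e-12)                  # avoid division by zero
    # Huber cost: |d|^2 for |d| <= k, and 2k|d| - k^2 beyond the threshold
    cost = np.where(norms <= k, norms ** 2, 2.0 * k * norms - k ** 2)
    return np.sqrt(cost) / norms                      # w_i = sqrt(C(||d_i||)) / ||d_i||

def reweight(residuals, k=1.345):
    """Scale residuals so that their squared error reproduces the Huber cost."""
    w = huber_weights(residuals, k)
    return residuals * w[:, None]
```

Note that for inlier-sized residuals the weight is exactly 1, so the behaviour near the minimum is unchanged.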

43-47 Parameterization for Levenberg-Marquardt A good parameterization for use with LM is singularity free (at least in the area visited during the optimization). This means: continuous, differentiable, one-to-one. So latitude-longitude is not suitable to parameterize the sphere. And Euler angles are not suitable to parameterize rotations. Gauge freedom? Variance? p.16/35

48-54 Parameterization of 3-D rotations 3-D rotation matrix: 9 elements, only 3 degrees of freedom. Angle-axis (3-vector) representation t: 3 elements, 3 d.o.f. Observations (these are mostly just general observations about log(rotation)): Identity rotation: t = 0. Inverse rotation: -t. Small rotation: the rotation matrix is approximately I + [t]_x. For small rotations: R(t_1) R(t_2) ≈ R(t_1 + t_2). All rotations can be represented by a t with ||t|| ≤ π. When ||t|| = 2nπ (n a positive integer) you get the identity rotation again (a singularity). Normalization: stay away from ||t|| = 2π. p.17/35
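A sketch of converting the angle-axis 3-vector t to a rotation matrix with the Rodrigues formula (function names are mine); the small-angle branch also shows the I + [t]_x behaviour noted above:

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix [t]_x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def angle_axis_to_matrix(t):
    """Rodrigues: R = I + sin(a)/a [t]_x + (1 - cos(a))/a^2 [t]_x^2, with a = ||t||."""
    t = np.asarray(t, dtype=float)
    a = np.linalg.norm(t)
    K = skew(t)
    if a < 1e-8:
        return np.eye(3) + K                      # small rotation: R ~ I + [t]_x
    return np.eye(3) + (np.sin(a) / a) * K + ((1.0 - np.cos(a)) / a ** 2) * (K @ K)
```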

55 Parameterization of homogeneous vectors Let v be an n-vector (already stripped of the extra homogeneous coordinate?). Then parameterize the corresponding homogeneous (n+1)-vector as (sinc(||v||/2) v^T, cos(||v||/2))^T. p.18/35

56-58 Parameterization of the n-sphere How to parameterize unit vectors x? Compute a Householder matrix (reflection) such that H_v(x) x = (0, ..., 0, 1)^T. Then parameterize locally around (0, ..., 0, 1)^T with either (i) f(y) = ŷ/||ŷ|| where ŷ = (y^T, 1)^T (?), or (ii) f(y) = (sinc(||y||/2) y^T, cos(||y||/2))^T (?). Both have Jacobian ∂f/∂y = [I 0]^T at y = 0. So the constrained Jacobian can be computed as J = ∂C/∂y = (∂C/∂x)(∂x/∂y) = (∂C/∂x) H_v(x) [I 0]^T. p.19/35

59-60 Zhang Paper Zhengyou Zhang, A Flexible New Technique for Camera Calibration (1998). As implemented in: the Camera Calibration Toolbox for Matlab, and Intel OpenCV (C++). p.20/35

61-62 Internal Camera Calibration 1/4 The primary use of the Zhang algorithm is internal camera calibration. It computes: the focal center (principal point) c_x and c_y, the focal lengths f_x and f_y, and the skew s (optional). In short, the camera intrinsic matrix A = [f_x, s, c_x; 0, f_y, c_y; 0, 0, 1]. p.21/35

63-64 Internal Camera Calibration 2/4 The Zhang algorithm also computes lens distortion parameters [k_1, k_2, k_3, k_4]. The original paper uses a purely radial model: x_d = x + x (k_1 (x^2 + y^2) + k_2 (x^2 + y^2)^2), y_d = y + y (k_1 (x^2 + y^2) + k_2 (x^2 + y^2)^2), where x and y are normalized image coordinates and x_d and y_d are the distorted coordinates. But the implementations use a more complex model with additional tangential terms: x_d = x + x (k_1 (x^2 + y^2) + k_2 (x^2 + y^2)^2) + x_td, y_d = y + y (k_1 (x^2 + y^2) + k_2 (x^2 + y^2)^2) + y_td, where x_td = 2 k_3 x y + k_4 (3 x^2 + y^2) and y_td = 2 k_4 x y + k_3 (x^2 + 3 y^2). p.22/35
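A sketch of the implementation-style model above, applied to normalized image coordinates (x, y); parameter names follow the slide, with k_3 and k_4 as the tangential terms, and the function name is my own:

```python
import numpy as np

def distort(x, y, k1, k2, k3=0.0, k4=0.0):
    """Apply radial (k1, k2) and tangential (k3, k4) distortion to normalized coords."""
    r2 = x * x + y * y                            # squared radius
    radial = k1 * r2 + k2 * r2 * r2
    x_td = 2.0 * k3 * x * y + k4 * (3.0 * x * x + y * y)
    y_td = 2.0 * k4 * x * y + k3 * (x * x + 3.0 * y * y)
    xd = x + x * radial + x_td
    yd = y + y * radial + y_td
    return xd, yd
```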

65 Internal Camera Calibration 3/4 Example of internal camera calibration parameters. Camera: PixeLINK A741, 2/3 inch CMOS sensor, 1280x1024. Lens: Cosmicar 8.5 mm fixed focal length. f_x = ... pixels = 8.528 mm, f_y = ... pixels = 8.529 mm, c_x = ..., c_y = ..., k_1 = ..., k_2 = ..., k_3 = ..., k_4 = ... p.23/35

66 Internal Camera Calibration 4/4 Show lens distortion in DASiS video viewer... p.24/35

67 External Camera Calibration The Zhang algorithm may also be used for external camera calibration. Camera rotation and translation are computed as a side product of the internal calibration. If two cameras see the same calibration pattern at the same time, their relative position and orientation may be computed. p.25/35

68-73 Overall approach Measure the projected positions of points in a plane (e.g. a checkerboard). Do so for at least two different camera orientations. Set up equations in order to estimate the camera intrinsics. Given the camera intrinsics, estimate the extrinsics. Estimate the radial distortion. Use Levenberg-Marquardt to refine the initial estimates. p.26/35

74-76 Basic Equations The plane ("checkerboard") is at Z = 0. Homogeneous 2-D image point: m. Homogeneous 3-D world point: M = [X Y 0 1]^T. Projection: s m = A [R t] M = A [r_1 r_2 r_3 t] [X Y 0 1]^T = A [r_1 r_2 t] [X Y 1]^T, where the intrinsic matrix is A = [α, γ, u_0; 0, β, v_0; 0, 0, 1]. p.27/35

77-80 Homography, constraints A homography H can be estimated between the known points on the calibration plane and the measured image points: H = [h_1 h_2 h_3] = λ A [r_1 r_2 t]. We demand: C1: r_1^T r_2 = 0 (r_1, r_2 orthogonal), C2: r_1^T r_1 = r_2^T r_2 (r_1, r_2 have the same length). We know: h_1 = λ A r_1, so r_1 = λ^-1 A^-1 h_1, and h_2 = λ A r_2, so r_2 = λ^-1 A^-1 h_2. So the constraints become: C1: h_1^T A^-T A^-1 h_2 = 0, C2: h_1^T A^-T A^-1 h_1 = h_2^T A^-T A^-1 h_2. p.28/35

81 Closed-form solution using constraints 1/4 Using the constraints, we can first find A, followed by R and t. Let B = A^-T A^-1 = [B_11, B_12, B_13; B_12, B_22, B_23; B_13, B_23, B_33], with B_11 = 1/α^2, B_12 = -γ/(α^2 β), B_13 = (v_0 γ - u_0 β)/(α^2 β), B_22 = γ^2/(α^2 β^2) + 1/β^2, B_23 = -γ(v_0 γ - u_0 β)/(α^2 β^2) - v_0/β^2, B_33 = (v_0 γ - u_0 β)^2/(α^2 β^2) + v_0^2/β^2 + 1. This allows us to solve for α, β, etc. p.29/35

82 Closed-form solution using constraints 2/4 If we reshuffle the six unique elements of B into a vector b = [B_11, B_12, B_22, B_13, B_23, B_33]^T, we can rewrite both constraints as h_i^T B h_j = v_ij^T b, where v_ij = [h_i1 h_j1, h_i1 h_j2 + h_i2 h_j1, h_i2 h_j2, h_i3 h_j1 + h_i1 h_j3, h_i3 h_j2 + h_i2 h_j3, h_i3 h_j3]^T, ultimately resulting, per homography, in [v_12^T; (v_11 - v_22)^T] b = 0. p.30/35

83 Closed-form solution using constraints 3/4 Next, stack all the equations from the n measurements (estimated homographies) of the plane ("checkerboard"): V b = 0, where V is a 2n x 6 matrix. Solve as usual using the SVD. p.31/35
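A sketch of assembling V from the estimated homographies and taking the right singular vector with the smallest singular value; each H is assumed to be a 3x3 homography with columns h_1, h_2, h_3, and the helper names are my own:

```python
import numpy as np

def v_ij(H, i, j):
    """Row vector v_ij built from columns h_i and h_j of H (1-based i, j)."""
    hi, hj = H[:, i - 1], H[:, j - 1]
    return np.array([hi[0] * hj[0],
                     hi[0] * hj[1] + hi[1] * hj[0],
                     hi[1] * hj[1],
                     hi[2] * hj[0] + hi[0] * hj[2],
                     hi[2] * hj[1] + hi[1] * hj[2],
                     hi[2] * hj[2]])

def solve_b(homographies):
    """Stack two constraints per homography into V (2n x 6) and solve V b = 0."""
    rows = []
    for H in homographies:
        rows.append(v_ij(H, 1, 2))                    # C1: h1^T B h2 = 0
        rows.append(v_ij(H, 1, 1) - v_ij(H, 2, 2))    # C2: h1^T B h1 = h2^T B h2
    V = np.vstack(rows)
    _, _, Vt = np.linalg.svd(V)
    return Vt[-1]    # b = [B11, B12, B22, B13, B23, B33], up to scale
```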

84-86 Closed-form solution using constraints 4/4 Once A is known, we can obtain r_1, r_2 and t: r_1 = λ^-1 A^-1 h_1, r_2 = λ^-1 A^-1 h_2, t = λ^-1 A^-1 h_3, with λ = ||A^-1 h_1|| so that ||r_1|| = 1. Now Zhang says: set r_3 = r_1 × r_2, and use the SVD to make the matrix R orthogonal, i.e. R = U V^T. I say: make r_1 and r_2 orthogonal in a least-squares sense, then compute r_3 = r_1 × r_2. That is simpler and boils down to the same thing. p.32/35
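A sketch of the extrinsics extraction for one view, given an intrinsic matrix A and a homography H; it follows the SVD orthogonalization route described on the slide, and the variable names are my own:

```python
import numpy as np

def extrinsics_from_homography(A, H):
    """Recover R, t for one view from H = lambda * A [r1 r2 t]."""
    A_inv = np.linalg.inv(A)
    s = 1.0 / np.linalg.norm(A_inv @ H[:, 0])    # 1/lambda, so that ||r1|| = 1
    r1 = s * A_inv @ H[:, 0]
    r2 = s * A_inv @ H[:, 1]
    t = s * A_inv @ H[:, 2]
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])
    # Project R onto the nearest rotation matrix (in Frobenius norm) via the SVD
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt
    return R, t
```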

87-88 Radial distortion Using the camera intrinsics and extrinsics, the undistorted (ideal) coordinates of the points (corners on the checkerboard) can be predicted. These are used to solve for k_1 and k_2: [(u - u_0)(x^2 + y^2), (u - u_0)(x^2 + y^2)^2; (v - v_0)(x^2 + y^2), (v - v_0)(x^2 + y^2)^2] [k_1; k_2] = [ŭ - u; v̆ - v], where (u, v) are the ideal pixel coordinates and (ŭ, v̆) the observed (distorted) ones. These equations are stacked (D [k_1 k_2]^T = d) and solved in the least-squares sense: [k_1 k_2]^T = (D^T D)^-1 D^T d. Then both algorithms (internal + external, radial) are iterated until convergence. p.33/35

89 Maximum likelihood estimation Optimize: use Levenberg-Marquardt to find the minimum of sum_{i=1..n} sum_{j=1..m} ||m_ij - m̂(A, k_1, k_2, R_i, t_i, M_j)||^2 (n images, m points per image), where m̂ projects the model point M_j into image i. All done... p.34/35
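A compact sketch of this final refinement using scipy's Levenberg-Marquardt-style solver; the parameter packing, the project helper and the use of rotation vectors are my own choices for illustration, not the paper's:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, model_pts, n_views):
    """Project the planar model points into every view with the current parameters."""
    fx, fy, cx, cy, gamma, k1, k2 = params[:7]
    A = np.array([[fx, gamma, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
    out = []
    for i in range(n_views):
        rvec = params[7 + 6 * i: 10 + 6 * i]              # angle-axis rotation of view i
        tvec = params[10 + 6 * i: 13 + 6 * i]
        R = Rotation.from_rotvec(rvec).as_matrix()
        cam = model_pts @ R.T + tvec                      # model points (X, Y, 0) in camera frame
        x, y = cam[:, 0] / cam[:, 2], cam[:, 1] / cam[:, 2]
        r2 = x * x + y * y
        d = 1.0 + k1 * r2 + k2 * r2 * r2                  # radial distortion only, as in the paper
        uvw = (A @ np.stack([x * d, y * d, np.ones_like(x)])).T
        out.append(uvw[:, :2])
    return np.concatenate(out)

def refine(params0, model_pts, observed, n_views):
    """observed: (n_views * n_points, 2) detected corners, stacked per view."""
    resid = lambda p: (project(p, model_pts, n_views) - observed).ravel()
    return least_squares(resid, params0, method='lm').x
```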

90-95 Notable experimental results Using three different images, the results are already pretty good, and they keep getting better with more images. A 45-degree angle between the image plane and the checkerboard seems to give the best result. Loss of precision in corner detection was not taken into account (simulated data). Systematic non-planarity of the checkerboard has more effect than random noise (duh). Cylindrical non-planarity is worse than spherical non-planarity (cylindrical is more common in practice?). Even with systematic non-planarity, the results are still usable. An error in the computed sensor center seems not to have much effect on 3-D reconstruction. p.35/35
