Lecture 11: MRFs (continued), cameras and lenses.
1 6.869 Advances in Computer Vision. Bill Freeman and Antonio Torralba, Spring 2011. Lecture 11: MRFs (continued), cameras and lenses. (Remember the correction on Gibbs sampling.)
2 Motion application: from image patches to scene patches.
3 What behavior should we see in a motion algorithm? The aperture problem; resolution through propagation of information; figure/ground discrimination.
4-7 The aperture problem
8 motion program demo 6
9 Inference: motion estimation results (maxima of scene probability distributions displayed). Iterations 0 and 1: the initial guesses only show motion at edges.
10 Motion estimation results (maxima of scene probability distributions displayed). Iterations 2 and 3: figure/ground still unresolved here.
11 Motion estimation results (maxima of scene probability distributions displayed). Iterations 4 and 5: the final result compares well with the vector-quantized true (uniform) velocities.
12 Vision applications of MRFs: stereo, motion estimation, labelling shading and reflectance, segmentation, and many others.
13-15 Random fields for segmentation. I = image pixels (observed); h = foreground/background labels (hidden), one label per pixel; θ = parameters. Posterior ∝ likelihood × prior: $P(h \mid I, \theta) \propto P(I \mid h, \theta)\,P(h \mid \theta)$. 1. Generative approach: model the joint, giving a Markov random field (MRF). 2. Discriminative approach: model the posterior directly, giving a conditional random field (CRF).
16-21 Generative Markov random field. h (labels) ∈ {foreground, background}; I (pixels) on the image plane. Pairwise potential (MRF prior): ψ_ij(h_i, h_j | θ_ij); unary potential (likelihood): φ_i(I | h_i, θ_i). The joint factors into likelihood × MRF prior; note the prior has no dependency on I.
22-25 Conditional random field: the discriminative approach (Lafferty, McCallum and Pereira 2001), with unary and pairwise terms over labels h_i, h_j and pixels I on the image plane. The dependency on I allows pairwise terms that make use of the image: for example, neighboring labels should be similar only if the pixel colors are similar (a contrast term), e.g. Kumar and Hebert 2003.
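A contrast-sensitive pairwise weight of this kind can be sketched in a few lines. This is an illustrative form in the spirit of the contrast term above, not the exact potential from the cited papers; the exponential shape, the `beta` parameter, and the function name are assumptions.

```python
import numpy as np

def contrast_weight(color_i, color_j, beta=2.0):
    """Penalty for assigning different labels to neighboring pixels i, j.
    Large when colors are similar (label changes are discouraged),
    small across strong image edges (label changes are cheap)."""
    d2 = np.sum((np.asarray(color_i, float) - np.asarray(color_j, float)) ** 2)
    return np.exp(-beta * d2)

# Similar colors -> strong smoothness; very different colors -> weak.
w_same = contrast_weight([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
w_edge = contrast_weight([0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
```

With this weight, the pairwise term encourages neighboring labels to agree only where the image itself shows no strong color edge.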
26-27 OBJCUT (Kumar, Torr & Zisserman 2005). Unary terms: color likelihood and distance from Ω; pairwise terms: label smoothness and contrast. Ω (the shape parameter) is a shape prior on the labels from a Layered Pictorial Structure (LPS) model. Segmentation by: (1) match the LPS model to the image (get a number of samples, each with a different pose); (2) marginalize over the samples using a single graph cut [Boykov & Jolly, 2001]. Figure from Kumar et al., CVPR 2005.
28 OBJCUT shape prior Ω: Layered Pictorial Structures (LPS), a generative model composing parts with a spatial layout (pairwise configuration), arranged in layers. Parts in layer 2 can occlude parts in layer 1. Kumar et al., 2004, 2005.
29 OBJCUT results: using the LPS model for a cow, segmentation succeeds even in the absence of a clear boundary between object and background.
30 Outline of MRF section. Inference in MRFs: Gibbs sampling, simulated annealing; iterated conditional modes (ICM); belief propagation (application example: super-resolution); graph cuts; variational methods. Learning MRF parameters: iterative proportional fitting (IPF).
31-32 True joint probability; observed marginal distributions.
33 Initial guess at joint probability 19
34 IPF update equation, for the maximum likelihood estimate of the joint probability: scale the previous iteration's estimate of the joint probability by the ratio of the true to the predicted marginals. This gives gradient ascent in the likelihood of the joint probability, given the observations of the marginals. See Michael Jordan's book on graphical models.
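The update above can be sketched numerically. This is an illustrative implementation for a two-variable joint stored as a table; the function and variable names are my own:

```python
import numpy as np

def ipf(p_x, p_y, n_iters=50):
    """Fit a joint P(x, y) whose marginals match p_x and p_y,
    starting from the uniform (maximum-entropy) initial guess.
    Each sweep rescales the estimate by true/model marginals."""
    joint = np.ones((len(p_x), len(p_y)))
    joint /= joint.sum()
    for _ in range(n_iters):
        # Rows: ratio of true to predicted marginal over x.
        joint *= (p_x / joint.sum(axis=1))[:, None]
        # Columns: ratio of true to predicted marginal over y.
        joint *= (p_y / joint.sum(axis=0))[None, :]
    return joint

p_x = np.array([0.2, 0.8])
p_y = np.array([0.5, 0.3, 0.2])
j = ipf(p_x, p_y)
# The fitted joint reproduces both observed marginals.
```

Starting from the uniform table, the converged answer is the maximum-entropy joint consistent with the observed marginals, matching the slides' "final maximum entropy estimate".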
35-36 Convergence to the correct marginals by the IPF algorithm.
37 IPF results for this example, comparing joint probabilities: the true joint probability, the initial guess, and the final maximum entropy estimate.
38 Application to MRF parameter estimation. One can show that at the ML estimate of the clique potentials, φ_c(x_c), the empirical marginals equal the model marginals. Because the model marginals are proportional to the clique potentials, this gives the IPF update rule for φ_c(x_c), which scales each clique potential so the model marginals equal the observed marginals: $\phi_c(x_c) \leftarrow \phi_c(x_c)\,\frac{\tilde p(x_c)}{p(x_c)}$. This performs coordinate ascent in the likelihood of the MRF parameters, given the observed data. Reference: unpublished notes by Michael Jordan and by Roweis.
39 Learning MRF parameters from labeled data. Iterative proportional fitting lets you make a maximum likelihood estimate of a joint distribution from observations of various marginal distributions. Applied to learning MRF clique potentials: (1) measure the pairwise marginal statistics (histogram state co-occurrences in labeled training data); (2) guess the clique potentials (use the measured marginals to start); (3) do inference to calculate the model's marginals for every node pair; (4) scale each clique potential, for each state pair, by the ratio of the empirical to the model marginal.
40 Image formation 26
41-48 The structure of ambient light
49 The plenoptic function. Adelson & Bergen, 1991.
50 Measuring the plenoptic function. Why is there no picture appearing on the paper?
51 Light rays from many different parts of the scene strike the same point on the paper. Forsyth & Ponce
52 Measuring the plenoptic function: the camera obscura, the pinhole camera.
53 Light rays from many different parts of the scene strike the same point on the paper. The pinhole camera only allows rays from one point in the scene to strike each point of the paper. Forsyth & Ponce
54 pinhole camera demos 34
55-56 Problem Set 7
57-58 Effect of pinhole size. Wandell, Foundations of Vision, Sinauer, 1995.
59 Playing with pinholes
60 Two pinholes
61-63 Anaglyph pinhole camera
64 Synthesis of new views Anaglyph
65 Problem set 7: build the device; take some pictures and put them in the report; take anaglyph images; calibrate the camera; recover depth for some points in the image.
66 Cameras, lenses, and calibration: camera models, projections, calibration, lenses.
67-72 Perspective projection (camera and world frames, focal length f, axes y and z). In Cartesian coordinates we have, by similar triangles, that $(x, y, z) \mapsto (f x/z,\; f y/z,\; -f)$. Ignoring the constant third coordinate: $(x, y, z) \mapsto (f x/z,\; f y/z)$.
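The projection equation can be checked in a few lines. A minimal sketch, with illustrative values; the function name is my own:

```python
import numpy as np

def project(points, f=1.0):
    """Pinhole perspective projection of Nx3 camera-frame points:
    (x, y, z) -> (f*x/z, f*y/z), dropping the constant third coordinate."""
    points = np.asarray(points, dtype=float)
    return f * points[:, :2] / points[:, 2:3]

pts = np.array([[1.0, 2.0, 4.0],
                [2.0, 4.0, 8.0]])   # same ray through the pinhole, twice as far
uv = project(pts, f=2.0)
# Both points land on the same image location: depth is lost.
```

Doubling both the offset and the depth leaves the image point unchanged, which is exactly why a single view cannot recover absolute depth.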
73 It is common to draw the film plane in front of the focal point; moving the film plane merely scales the image. Forsyth & Ponce.
74 Geometric properties of projection: points go to points; lines go to lines; planes go to the whole image or to half-planes; polygons go to polygons. Degenerate cases: a line through the focal point projects to a point; a plane through the focal point projects to a line.
75-79 A line in 3-space, $(x, y, z)(t) = (x_0 + at,\; y_0 + bt,\; z_0 + ct)$, has perspective projection $x'(t) = \frac{f x}{z} = \frac{f (x_0 + at)}{z_0 + ct}$, $y'(t) = \frac{f y}{z} = \frac{f (y_0 + bt)}{z_0 + ct}$. In the limit as $t \to \pm\infty$ we have (for $c \neq 0$): $x'(t) \to \frac{fa}{c}$, $y'(t) \to \frac{fb}{c}$. This tells us that any set of parallel lines (same a, b, c parameters) projects to the same point, called the vanishing point.
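The limit above can be verified numerically. A small sketch with illustrative starting points and direction:

```python
import numpy as np

# Parallel 3-D lines (x0 + a t, y0 + b t, z0 + c t) sharing the same
# direction (a, b, c), with c != 0, all project toward the image point
# (f a / c, f b / c) as t grows: the vanishing point.

def project(p, f=1.0):
    return f * np.asarray(p[:2]) / p[2]

f = 1.0
direction = np.array([2.0, 1.0, 4.0])                # (a, b, c)
starts = [np.array([0.0, 0.0, 5.0]),
          np.array([3.0, -1.0, 9.0])]                # two parallel lines
t = 1e8                                              # walk far along each line
images = [project(p0 + t * direction, f) for p0 in starts]
vp = f * direction[:2] / direction[2]                # (f a / c, f b / c)
# Both projections approach the shared vanishing point vp.
```

The two lines start at different points but, pushed far enough along the common direction, their images converge to the same pixel.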
80-87 Vanishing point
88 graphics/two_point_perspective.html
89 Vanishing points. Each set of parallel lines (= a direction) meets at a different point: the vanishing point for that direction. Sets of parallel lines on the same plane lead to collinear vanishing points; the line through them is called the horizon for that plane.
90-91 What if you photograph a brick wall head-on?
92-94 A brick-wall line in 3-space has constant depth $z_0$, so its perspective projection is $x'(t) = \frac{f (x_0 + at)}{z_0}$, $y'(t) = \frac{f y_0}{z_0}$. All bricks have the same $z_0$, and those in the same row have the same $y_0$. Thus a brick wall, photographed head-on, gets rendered as a set of parallel lines in the image plane.
95 Other projection models: Orthographic projection
96 Other projection models: weak perspective. Issue: there are perspective effects, but not over the scale of individual objects. Collect points into a group at about the same depth, then divide each point by the depth of its group. Advantage: easy. Disadvantage: only approximate.
97 Three camera projections from a 3-d point to a 2-d image position. (1) Perspective: $(x, y, z) \mapsto (f x/z,\; f y/z)$. (2) Weak perspective: $(x, y, z) \mapsto (f x/z_0,\; f y/z_0)$ for a group depth $z_0$. (3) Orthographic: $(x, y, z) \mapsto (x, y)$.
98-100 Homogeneous coordinates. Is perspective projection a linear transformation? No: the division by z is nonlinear. Trick: add one more coordinate. Homogeneous image coordinates: $(x, y) \mapsto (x, y, 1)$; homogeneous scene coordinates: $(x, y, z) \mapsto (x, y, z, 1)$. Converting back from homogeneous coordinates: $(x, y, w) \mapsto (x/w,\; y/w)$. Slide by Steve Seitz.
101-105 Perspective projection is a matrix multiply using homogeneous coordinates: $\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1/f & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} x \\ y \\ z/f \end{pmatrix} \Rightarrow \left( \frac{f x}{z},\; \frac{f y}{z} \right)$. This is known as perspective projection; the matrix is the projection matrix. How does scaling the projection matrix change the transformation? Scaling by f gives $\begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} fx \\ fy \\ z \end{pmatrix} \Rightarrow \left( \frac{f x}{z},\; \frac{f y}{z} \right)$: the same transformation. Slide by Steve Seitz.
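Both facts, projection as a matrix multiply and invariance to scaling the matrix, can be checked directly. A minimal sketch with illustrative values:

```python
import numpy as np

def dehomogenize(ph):
    """Convert back from homogeneous coordinates: divide by the last entry."""
    return ph[:-1] / ph[-1]

f = 2.0
P1 = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 1 / f, 0]], dtype=float)   # projection matrix
P2 = f * P1                                       # scaled projection matrix

X = np.array([1.0, 2.0, 4.0, 1.0])                # homogeneous scene point
uv1 = dehomogenize(P1 @ X)
uv2 = dehomogenize(P2 @ X)
# uv1 == uv2 == (f*x/z, f*y/z): homogeneous coordinates are defined up to scale.
```

Because the final dehomogenization divides out any common factor, the projection matrix is only meaningful up to scale.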
106-109 Orthographic projection: a special case of perspective projection in which the distance from the center of projection (COP) to the projection plane (PP) is infinite; also called parallel projection. What's the projection matrix? $\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$. Slide by Steve Seitz.
110 Camera calibration: use the camera to tell you things about the world. The relationship between coordinates in the world and coordinates in the image is geometric camera calibration; see Szeliski, sections 5.2 and 5.3, for references. (The relationship between intensities in the world and intensities in the image is photometric image formation; see Szeliski, section 2.2.)
111 One reason to calibrate a camera: measuring height, using the vanishing line (horizon) and vertical vanishing point (figure with reference height R and unknown height H, top and bottom points t and b).
112 Another reason to calibrate a camera: stereo. (Figure: a world point and its image points, the depth of the point, the focal length, the baseline, and the left and right optical centers.) Slide credit: Kristen.
113-117 Translation and rotation: a point p with coordinates $^{A}p$ in frame A, described in the coordinates of frame B, is $^{B}p = {}^{B}_{A}R\,{}^{A}p + {}^{B}_{A}t$. Let's write this as a single matrix equation in homogeneous coordinates: $\begin{pmatrix} {}^{B}p_x \\ {}^{B}p_y \\ {}^{B}p_z \\ 1 \end{pmatrix} = \begin{pmatrix} {}^{B}_{A}R & {}^{B}_{A}t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} {}^{A}p_x \\ {}^{A}p_y \\ {}^{A}p_z \\ 1 \end{pmatrix}$.
118-119 Translation and rotation, written in each set of coordinates. Non-homogeneous coordinates: $^{B}p = {}^{B}_{A}R\,{}^{A}p + {}^{B}_{A}t$. Homogeneous coordinates: $^{B}p = {}^{B}_{A}C\,{}^{A}p$, where $^{B}_{A}C = \begin{pmatrix} {}^{B}_{A}R & {}^{B}_{A}t \\ 0 & 1 \end{pmatrix}$.
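The equivalence of the two forms can be sketched numerically, with an illustrative rotation and translation:

```python
import numpy as np

# Express a point given in frame A in the coordinates of frame B:
# p_B = R @ p_A + t, or equivalently one 4x4 homogeneous matrix C = [R t; 0 1].

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])                     # rotate 90 degrees about z
t = np.array([1.0, 0.0, 0.0])

C = np.eye(4)
C[:3, :3] = R
C[:3, 3] = t                                  # C = [R t; 0 1]

p_A = np.array([1.0, 0.0, 0.0])
p_B = R @ p_A + t                              # non-homogeneous form
p_B_h = (C @ np.append(p_A, 1.0))[:3]          # homogeneous form
# Both forms give the same point in frame B.
```

Packing R and t into one matrix is what lets rigid motions, and later the intrinsics, chain together as plain matrix products.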
120 Intrinsic parameters: from idealized world coordinates to pixel values, starting from perspective projection. Forsyth & Ponce.
121 Intrinsic parameters But pixels are in some arbitrary spatial units
122 Intrinsic parameters Maybe pixels are not square
123 Intrinsic parameters We don t know the origin of our camera pixel coordinates
124-126 Intrinsic parameters: there may be skew between the camera pixel axes
127-132 Intrinsic parameters, homogeneous coordinates. Using homogeneous coordinates, we can write the map from camera-based coordinates to pixels as $\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} \simeq \begin{pmatrix} \alpha & -\alpha\cot\theta & u_0 & 0 \\ 0 & \beta/\sin\theta & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}$, or $p \simeq \begin{pmatrix} K & 0 \end{pmatrix} {}^{C}p$, with p in pixels on the left and $^{C}p$ in camera-based coordinates on the right.
133-135 Extrinsic parameters: translation and rotation of the camera frame. Non-homogeneous coordinates: $^{C}p = {}^{C}_{W}R\,{}^{W}p + {}^{C}_{W}t$. Homogeneous coordinates: $^{C}p = \begin{pmatrix} {}^{C}_{W}R & {}^{C}_{W}t \\ 0 & 1 \end{pmatrix} {}^{W}p$.
136-143 Combining extrinsic and intrinsic calibration parameters, in homogeneous coordinates (Forsyth & Ponce). Intrinsic (pixels from camera coordinates): $p \simeq \begin{pmatrix} K & 0 \end{pmatrix} {}^{C}p$. Extrinsic (camera coordinates from world coordinates): $^{C}p = \begin{pmatrix} {}^{C}_{W}R & {}^{C}_{W}t \\ 0 & 1 \end{pmatrix} {}^{W}p$. Combining the two: $p \simeq K \begin{pmatrix} {}^{C}_{W}R & {}^{C}_{W}t \end{pmatrix} {}^{W}p = M\,{}^{W}p$.
144 Other ways to write the same equation, $p = M\,{}^{W}p$, with p in pixel coordinates and $^{W}p$ in world coordinates. Writing M in terms of its rows $m_1^T, m_2^T, m_3^T$: $\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} \simeq \begin{pmatrix} m_1^T \\ m_2^T \\ m_3^T \end{pmatrix} \begin{pmatrix} {}^{W}p_x \\ {}^{W}p_y \\ {}^{W}p_z \\ 1 \end{pmatrix}$. Conversion back from homogeneous coordinates leads to: $u = \frac{m_1^T P}{m_3^T P}$, $v = \frac{m_2^T P}{m_3^T P}$.
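The full pipeline p = M Wp with M = K [R | t] can be sketched end to end; all numeric values here (intrinsics, pose, test point) are illustrative assumptions:

```python
import numpy as np

K = np.array([[800.0, 0, 320],
              [0, 800.0, 240],
              [0, 0, 1]])                      # intrinsics (square pixels)
R = np.eye(3)                                  # camera axes aligned with world
t = np.array([0.0, 0.0, 2.0])                  # world origin 2 units ahead

M = K @ np.hstack([R, t[:, None]])             # 3x4 projection matrix

P_w = np.array([0.5, 0.25, 2.0, 1.0])          # homogeneous world point
uvw = M @ P_w
u, v = uvw[:2] / uvw[2]                        # u = m1.P / m3.P, v = m2.P / m3.P
# Camera-frame depth is z = 4, so u = 800*0.5/4 + 320, v = 800*0.25/4 + 240.
```

The final division by the third row's dot product is exactly the conversion back from homogeneous coordinates given above.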
145 Calibration target: find the position (u_i, v_i), in pixels, of each calibration-object feature point.
146-148 Camera calibration. From before, we had these equations relating image positions (u, v) to points at 3-d positions P (in homogeneous coordinates): $u = \frac{m_1^T P}{m_3^T P}$, $v = \frac{m_2^T P}{m_3^T P}$. So for each feature point i we have: $(m_1 - u_i\, m_3)^T P_i = 0$ and $(m_2 - v_i\, m_3)^T P_i = 0$.
149-151 Stack all these measurements for points $i = 1 \ldots n$ into a big matrix Q, so that in vector form $Q\,m = 0$, where m stacks the rows $m_1, m_2, m_3$ of M. Showing all the elements, each point contributes the two rows $(P_i^T,\; 0^T,\; -u_i P_i^T)$ and $(0^T,\; P_i^T,\; -v_i P_i^T)$.
152 Camera calibration: $Q\,m = 0$. We want to solve for the unit vector m (the stacked one) that minimizes $\lVert Q m \rVert^2$. The minimum eigenvector of the matrix $Q^T Q$ gives us that (see Forsyth & Ponce, 3.1), because it is the unit vector x that minimizes $x^T Q^T Q\,x$.
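The least-squares step can be sketched with synthetic data. Here the SVD of Q is used, whose last right singular vector equals the minimum eigenvector of Q^T Q; the synthetic camera matrix and calibration points are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
M_true = np.array([[800.0, 0.0, 320.0, 0.0],
                   [0.0, 800.0, 240.0, 0.0],
                   [0.0, 0.0, 1.0, 2.0]])      # K [I | t] with t = (0, 0, 2)

# Synthetic calibration points (world, homogeneous) and their pixel positions.
P = np.hstack([rng.uniform(-1, 1, (12, 2)),
               rng.uniform(1, 2, (12, 1)),
               np.ones((12, 1))])
proj = (M_true @ P.T).T
uv = proj[:, :2] / proj[:, 2:3]

# Each correspondence contributes two rows of Q.
rows = []
for (u, v), Pi in zip(uv, P):
    rows.append(np.concatenate([Pi, np.zeros(4), -u * Pi]))
    rows.append(np.concatenate([np.zeros(4), Pi, -v * Pi]))
Q = np.array(rows)

m = np.linalg.svd(Q)[2][-1]        # unit vector minimizing ||Q m||
M_est = m.reshape(3, 4)
# M_est matches M_true up to an overall scale (and sign).
```

Because m is defined only up to scale, the recovered matrix must be compared to the true one after normalization.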
153 Camera calibration: once you have the M matrix, you can recover the intrinsic and extrinsic parameters.
154 Why do we need lenses?
155 The reason for lenses: light gathering ability Hidden assumption in using a lens: the BRDF of the surfaces you look at will be well-behaved
156 Animal Eyes Animal Eyes. Land & Nilsson. Oxford Univ. Press
157 Derivation of Snell's law. The wavelength of light scales inversely with the refractive index: $\lambda_i = \frac{c}{\omega\, n_i}$. Matching wavefronts along an interface segment of length L requires $\lambda_1 = L\sin(\alpha_1)$ and $\lambda_2 = L\sin(\alpha_2)$, so $n_1 \sin(\alpha_1) = \frac{c}{\omega L} = n_2 \sin(\alpha_2)$: plane waves must bend according to $n_1 \sin(\alpha_1) = n_2 \sin(\alpha_2)$.
158 Refraction: Snell's law, $n_1 \sin(\alpha_1) = n_2 \sin(\alpha_2)$. For small angles, $n_1 \alpha_1 \approx n_2 \alpha_2$.
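The small-angle (paraxial) approximation can be checked numerically, with illustrative refractive indices and angle:

```python
import math

def refract_angle(n1, n2, a1):
    """Exact refraction angle (radians) from Snell's law
    n1 * sin(a1) = n2 * sin(a2)."""
    return math.asin(n1 * math.sin(a1) / n2)

n1, n2 = 1.0, 1.5             # e.g. air into glass
a1 = math.radians(5.0)        # small incidence angle
a2 = refract_angle(n1, n2, a1)
a2_paraxial = n1 * a1 / n2    # first-order approximation n1*a1 = n2*a2
# For small angles the exact and paraxial answers agree closely.
```

This linearization is what makes first-order (paraxial) optics tractable in the lens derivations that follow.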
159 Spherical lens
160 Forsyth and Ponce
161 First-order optics (figure: focal length f, aperture half-width D/2).
162-164 Paraxial refraction equation: relates the distance from the sphere and the sphere radius to the bending angles of a light ray, for a lens with one spherical surface.
165 Deriving the lensmaker's formula
166 The thin lens, first-order optics: the lensmaker's equation. Forsyth & Ponce.
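The focusing property can be checked numerically. A minimal sketch using the common sign convention in which object and image distances are both positive, 1/z' + 1/z = 1/f, with illustrative values:

```python
def image_distance(z_object, f):
    """Distance behind a thin lens at which a point at z_object focuses,
    from 1/z' + 1/z = 1/f (both distances positive)."""
    return 1.0 / (1.0 / f - 1.0 / z_object)

f = 50.0                                  # focal length, e.g. mm
z_img = image_distance(z_object=200.0, f=f)
# 1/z' = 1/50 - 1/200 = 3/200, so the image forms at z' = 200/3.
```

As the object recedes (z → ∞), z' → f, recovering the pinhole-like case where distant scenes focus at the focal plane.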
167-168 What camera projection model applies for a thin lens? The perspective projection of a pinhole camera. But note that many more of the rays leaving a point P arrive at its image P'.
169 Lens demonstration. Verify: the focusing property; the lensmaker's equation.
170 More accurate models of real lenses: finite lens thickness; higher-order approximations; chromatic aberration; vignetting.
171 Thick lens Forsyth&Ponce
172 Third-order optics (figure: focal length f, aperture half-width D/2).
173 Paraxial refraction equation, 3rd-order optics. Forsyth & Ponce.
174 Spherical aberration (from 3rd-order optics): transverse spherical aberration and longitudinal spherical aberration. Forsyth & Ponce.
175 Other 3rd-order effects: coma, astigmatism, field curvature, distortion (no distortion vs. pincushion distortion vs. barrel distortion). Forsyth & Ponce.
176 Lens systems: lens systems can be designed to correct for the aberrations described by 3rd-order optics. Forsyth & Ponce.
177 Vignetting Forsyth&Ponce
178 Chromatic aberration (desirable for prisms, bad for lenses)
179 Other (possibly annoying) phenomena. Chromatic aberration: light at different wavelengths follows different paths, so some wavelengths are defocused (machines: coat the lens; humans: live with it). Scattering at the lens surface: some light entering the lens system is reflected off each surface it encounters (Fresnel's law gives details); machines: coat the lens and interior; humans: live with it (various scattering phenomena are visible in the human eye).
180 Summary. We want to make images: the pinhole camera models the geometry of perspective projection, and lenses make it work in practice. Models for lenses: thin lens, spherical surfaces, first-order optics; thick lens, higher-order optics, vignetting.
Belief propagation and MRF s
Belief propagation and MRF s Bill Freeman 6.869 March 7, 2011 1 1 Undirected graphical models A set of nodes joined by undirected edges. The graph makes conditional independencies explicit: If two nodes
More informationImage formation. Thanks to Peter Corke and Chuck Dyer for the use of some slides
Image formation Thanks to Peter Corke and Chuck Dyer for the use of some slides Image Formation Vision infers world properties form images. How do images depend on these properties? Two key elements Geometry
More informationImage Formation I Chapter 2 (R. Szelisky)
Image Formation I Chapter 2 (R. Selisky) Guido Gerig CS 632 Spring 22 cknowledgements: Slides used from Prof. Trevor Darrell, (http://www.eecs.berkeley.edu/~trevor/cs28.html) Some slides modified from
More informationImage Formation I Chapter 1 (Forsyth&Ponce) Cameras
Image Formation I Chapter 1 (Forsyth&Ponce) Cameras Guido Gerig CS 632 Spring 215 cknowledgements: Slides used from Prof. Trevor Darrell, (http://www.eecs.berkeley.edu/~trevor/cs28.html) Some slides modified
More informationImage Formation I Chapter 1 (Forsyth&Ponce) Cameras
Image Formation I Chapter 1 (Forsyth&Ponce) Cameras Guido Gerig CS 632 Spring 213 cknowledgements: Slides used from Prof. Trevor Darrell, (http://www.eecs.berkeley.edu/~trevor/cs28.html) Some slides modified
More informationCOSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor
COSC579: Scene Geometry Jeremy Bolton, PhD Assistant Teaching Professor Overview Linear Algebra Review Homogeneous vs non-homogeneous representations Projections and Transformations Scene Geometry The
More informationCameras and Radiometry. Last lecture in a nutshell. Conversion Euclidean -> Homogenous -> Euclidean. Affine Camera Model. Simplified Camera Models
Cameras and Radiometry Last lecture in a nutshell CSE 252A Lecture 5 Conversion Euclidean -> Homogenous -> Euclidean In 2-D Euclidean -> Homogenous: (x, y) -> k (x,y,1) Homogenous -> Euclidean: (x, y,
More informationHow to achieve this goal? (1) Cameras
How to achieve this goal? (1) Cameras History, progression and comparisons of different Cameras and optics. Geometry, Linear Algebra Images Image from Chris Jaynes, U. Kentucky Discrete vs. Continuous
More informationPinhole Camera Model 10/05/17. Computational Photography Derek Hoiem, University of Illinois
Pinhole Camera Model /5/7 Computational Photography Derek Hoiem, University of Illinois Next classes: Single-view Geometry How tall is this woman? How high is the camera? What is the camera rotation? What
More informationVision Review: Image Formation. Course web page:
Vision Review: Image Formation Course web page: www.cis.udel.edu/~cer/arv September 10, 2002 Announcements Lecture on Thursday will be about Matlab; next Tuesday will be Image Processing The dates some
More informationProjective Geometry and Camera Models
/2/ Projective Geometry and Camera Models Computer Vision CS 543 / ECE 549 University of Illinois Derek Hoiem Note about HW Out before next Tues Prob: covered today, Tues Prob2: covered next Thurs Prob3:
More informationComputer Vision Projective Geometry and Calibration. Pinhole cameras
Computer Vision Projective Geometry and Calibration Professor Hager http://www.cs.jhu.edu/~hager Jason Corso http://www.cs.jhu.edu/~jcorso. Pinhole cameras Abstract camera model - box with a small hole
More information6.819 / 6.869: Advances in Computer Vision Antonio Torralba and Bill Freeman. Lecture 11 Geometry, Camera Calibration, and Stereo.
6.819 / 6.869: Advances in Computer Vision Antonio Torralba and Bill Freeman Lecture 11 Geometry, Camera Calibration, and Stereo. 2d from 3d; 3d from multiple 2d measurements? 2d 3d? Perspective projection
More informationLecture 8: Camera Models
Lecture 8: Camera Models Dr. Juan Carlos Niebles Stanford AI Lab Professor Fei- Fei Li Stanford Vision Lab 1 14- Oct- 15 What we will learn today? Pinhole cameras Cameras & lenses The geometry of pinhole
More informationComputer Vision cmput 428/615
Computer Vision cmput 428/615 Basic 2D and 3D geometry and Camera models Martin Jagersand The equation of projection Intuitively: How do we develop a consistent mathematical framework for projection calculations?
More informationProjective Geometry and Camera Models
Projective Geometry and Camera Models Computer Vision CS 43 Brown James Hays Slides from Derek Hoiem, Alexei Efros, Steve Seitz, and David Forsyth Administrative Stuff My Office hours, CIT 375 Monday and
More informationBelief propagation and MRF s
Belief propagation and MRF s Bill Freeman 6.869 March 7, 2011 1 1 Outline of MRF section Inference in MRF s. Gibbs sampling, simulated annealing Iterated conditional modes (ICM) Belief propagation Application
More informationCS6670: Computer Vision
CS6670: Computer Vision Noah Snavely Lecture 5: Projection Reading: Szeliski 2.1 Projection Reading: Szeliski 2.1 Projection Müller Lyer Illusion http://www.michaelbach.de/ot/sze_muelue/index.html Modeling
More informationPart Images Formed by Flat Mirrors. This Chapter. Phys. 281B Geometric Optics. Chapter 2 : Image Formation. Chapter 2: Image Formation
Phys. 281B Geometric Optics This Chapter 3 Physics Department Yarmouk University 21163 Irbid Jordan 1- Images Formed by Flat Mirrors 2- Images Formed by Spherical Mirrors 3- Images Formed by Refraction
More informationPerspective projection. A. Mantegna, Martyrdom of St. Christopher, c. 1450
Perspective projection A. Mantegna, Martyrdom of St. Christopher, c. 1450 Overview of next two lectures The pinhole projection model Qualitative properties Perspective projection matrix Cameras with lenses
More informationComputer Vision CS 776 Fall 2018
Computer Vision CS 776 Fall 2018 Cameras & Photogrammetry 1 Prof. Alex Berg (Slide credits to many folks on individual slides) Cameras & Photogrammetry 1 Albrecht Dürer early 1500s Brunelleschi, early
More informationGeometric camera models and calibration
Geometric camera models and calibration http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 13 Course announcements Homework 3 is out. - Due October
More informationComputer Vision and Applications. Prof. Trevor. Darrell. Class overview Administrivia & Policies Lecture 1
6.89 Computer Vision and pplications Class overview dministrivia & olicies Lecture rof. Trevor. Darrell erspective projection (review) Rigid motions (review) Camera Calibration Readings: Forsythe & once,.,
More informationFinal Exam. Today s Review of Optics Polarization Reflection and transmission Linear and circular polarization Stokes parameters/jones calculus
Physics 42200 Waves & Oscillations Lecture 40 Review Spring 206 Semester Matthew Jones Final Exam Date:Tuesday, May 3 th Time:7:00 to 9:00 pm Room: Phys 2 You can bring one double-sided pages of notes/formulas.
More informationAnnouncements. Image Formation: Outline. Homogenous coordinates. Image Formation and Cameras (cont.)
Announcements Image Formation and Cameras (cont.) HW1 due, InitialProject topics due today CSE 190 Lecture 6 Image Formation: Outline Factors in producing images Projection Perspective Vanishing points
More informationWaves & Oscillations
Physics 42200 Waves & Oscillations Lecture 40 Review Spring 2016 Semester Matthew Jones Final Exam Date:Tuesday, May 3 th Time:7:00 to 9:00 pm Room: Phys 112 You can bring one double-sided pages of notes/formulas.
More informationUnderstanding Variability
Understanding Variability Why so different? Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic aberration, radial distortion
More information3D Vision Real Objects, Real Cameras. Chapter 11 (parts of), 12 (parts of) Computerized Image Analysis MN2 Anders Brun,
3D Vision Real Objects, Real Cameras Chapter 11 (parts of), 12 (parts of) Computerized Image Analysis MN2 Anders Brun, anders@cb.uu.se 3D Vision! Philisophy! Image formation " The pinhole camera " Projective
More informationCHAPTER 3. Single-view Geometry. 1. Consequences of Projection
CHAPTER 3 Single-view Geometry When we open an eye or take a photograph, we see only a flattened, two-dimensional projection of the physical underlying scene. The consequences are numerous and startling.
More informationCamera Calibration. COS 429 Princeton University
Camera Calibration COS 429 Princeton University Point Correspondences What can you figure out from point correspondences? Noah Snavely Point Correspondences X 1 X 4 X 3 X 2 X 5 X 6 X 7 p 1,1 p 1,2 p 1,3
Capturing light. Source: A. Efros
Capturing light Source: A. Efros Review Pinhole projection models What are vanishing points and vanishing lines? What is orthographic projection? How can we approximate orthographic projection? Lenses
Rigid Body Motion and Image Formation. Jana Kosecka, CS 482
Rigid Body Motion and Image Formation Jana Kosecka, CS 482 A free vector is defined by a pair of points; coordinates of the vector: 3D Rotation of Points, Euler angles, Rotation Matrices in 3D (3 by 3)
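The 3-by-3 rotation matrices this snippet refers to are easiest to see for a rotation about a single axis; a hedged Python sketch (function names are my own, not from the course):

```python
import math

def rot_z(theta):
    """3x3 rotation matrix for a rotation by theta (radians) about the Z axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s, c, 0.0],
            [0.0, 0.0, 1.0]]

def apply(R, v):
    """Rotate a free vector v by the matrix R (row-times-column products)."""
    return tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))

# Rotating the x axis by 90 degrees about Z gives the y axis.
v = apply(rot_z(math.pi / 2), (1.0, 0.0, 0.0))
```

Euler-angle rotations compose three such single-axis matrices, one per axis.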
CMPSCI 670: Computer Vision! Image formation. University of Massachusetts, Amherst September 8, 2014 Instructor: Subhransu Maji
CMPSCI 670: Computer Vision! Image formation University of Massachusetts, Amherst September 8, 2014 Instructor: Subhransu Maji MATLAB setup and tutorial Does everyone have access to MATLAB yet? EdLab accounts
Image Formation. Antonino Furnari
Image Formation Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 18/03/2014 Outline Introduction; Geometric Primitives
ECE-161C Cameras. Nuno Vasconcelos ECE Department, UCSD
ECE-161C Cameras Nuno Vasconcelos ECE Department, UCSD Image formation all image understanding starts with understanding of image formation: projection of a scene from 3D world into image on 2D plane 2
Introduction to Computer Vision. Introduction CMPSCI 591A/691A CMPSCI 570/670. Image Formation
Introduction CMPSCI 591A/691A CMPSCI 570/670 Image Formation Lecture Outline Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic
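The thin lens model's "fundamental equation" named in this outline is 1/f = 1/d_o + 1/d_i, relating focal length, object distance, and image distance. A short illustrative sketch (variable names are assumptions of mine):

```python
def image_distance(f, d_o):
    """Solve the thin-lens equation 1/f = 1/d_o + 1/d_i for the image distance d_i."""
    if d_o == f:
        raise ValueError("object at the focal plane images at infinity")
    return 1.0 / (1.0 / f - 1.0 / d_o)

# An object at twice the focal length images at twice the focal length,
# with magnification m = -d_i/d_o = -1 (inverted, same size).
d_i = image_distance(f=50.0, d_o=100.0)   # 100.0 (same units as f and d_o)
```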
DD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication
DD2423 Image Analysis and Computer Vision IMAGE FORMATION Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 8, 2013 1 Image formation Goal:
Visual Recognition: Image Formation
Visual Recognition: Image Formation Raquel Urtasun TTI Chicago Jan 5, 2012 Raquel Urtasun (TTI-C) Visual Recognition Jan 5, 2012 1 / 61 Today s lecture... Fundamentals of image formation You should know
Waves & Oscillations
Physics 42200 Waves & Oscillations Lecture 41 Review Spring 2013 Semester Matthew Jones Final Exam Date: Tuesday, April 30th Time: 1:00 to 3:00 pm Room: Phys 112 You can bring two double-sided pages of
Modeling Light. On Simulating the Visual Experience
Modeling Light 15-463: Rendering and Image Processing Alexei Efros On Simulating the Visual Experience Just feed the eyes the right data No one will know the difference! Philosophy: Ancient question: Does
Agenda. Perspective projection. Rotations. Camera models
Image formation Agenda Perspective projection Rotations Camera models Light as a wave + particle Light as a wave (ignore for now) Refraction Diffraction Image formation Digital Image Film Human eye Pixel
Today's lecture. Image Alignment and Stitching. Readings. Motion models
Today's lecture Image Alignment and Stitching Computer Vision CSE576, Spring 2005 Richard Szeliski Image alignment and stitching motion models cylindrical and spherical warping point-based alignment global
Cameras and Stereo CSE 455. Linda Shapiro
Cameras and Stereo CSE 455 Linda Shapiro 1 Müller-Lyer Illusion http://www.michaelbach.de/ot/sze_muelue/index.html What do you know about perspective projection? Vertical lines? Other lines? 2 Image formation
Ray Optics I. Last time, finished EM theory Looked at complex boundary problems TIR: Snell's law complex Metal mirrors: index complex
Phys 531 Lecture 8 20 September 2005 Ray Optics I Last time, finished EM theory Looked at complex boundary problems TIR: Snell s law complex Metal mirrors: index complex Today shift gears, start applying
Camera Calibration. Schedule. Jesus J Caban. Note: You have until next Monday to let me know.! Today:! Camera calibration
Camera Calibration Jesus J Caban Schedule! Today:! Camera calibration! Wednesday:! Lecture: Motion & Optical Flow! Monday:! Lecture: Medical Imaging! Final presentations:! Nov 29 th : W. Griffin! Dec 1
CS4670: Computer Vision
CS4670: Computer Vision Noah Snavely Lecture 13: Projection, Part 2 Perspective study of a vase by Paolo Uccello Szeliski 2.1.3-2.1.6 Reading Announcements Project 2a due Friday, 8:59pm Project 2b out Friday
Camera Model and Calibration
Camera Model and Calibration Lecture-10 Camera Calibration Determine extrinsic and intrinsic parameters of camera Extrinsic 3D location and orientation of camera Intrinsic Focal length The size of the
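The intrinsic parameters this snippet lists (focal length, principal point, pixel size) are conventionally packed into a 3x3 matrix K. A hedged sketch of how K maps camera-frame points to pixels, assuming zero skew and square pixels (names are illustrative, not from the lecture):

```python
def intrinsics(f, cx, cy):
    """Intrinsic matrix K: focal length f in pixels, principal point (cx, cy)."""
    return [[f, 0.0, cx],
            [0.0, f, cy],
            [0.0, 0.0, 1.0]]

def project(K, point_cam):
    """Map a 3D point in camera coordinates to pixel coordinates via K."""
    X, Y, Z = point_cam
    u = (K[0][0] * X + K[0][2] * Z) / Z
    v = (K[1][1] * Y + K[1][2] * Z) / Z
    return (u, v)

# A point on the optical axis projects to the principal point.
px = project(intrinsics(500.0, 320.0, 240.0), (0.0, 0.0, 1.0))   # (320.0, 240.0)
```

The extrinsic parameters (rotation and translation) would be applied to the 3D point first, before K.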
Announcements. The equation of projection. Image Formation and Cameras
Announcements Image Formation and Cameras Introduction to Computer Vision CSE 152 Lecture 4 Read Trucco & Verri: pp. 5-4 HW will be on web site tomorrow or Saturday. Irfanview: http://www.irfanview.com/ is
Light: Geometric Optics
Light: Geometric Optics The Ray Model of Light Light very often travels in straight lines. We represent light using rays, which are straight lines emanating from an object. This is an idealization, but
EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13th of March 2006
School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13th of March 2006, 14.00-19.00 Grade table 0-25 U 26-35 3 36-45
More Mosaic Madness. CS194: Image Manipulation & Computational Photography. Steve Seitz and Rick Szeliski. Jeffrey Martin (jeffrey-martin.com)
More Mosaic Madness Jeffrey Martin (jeffrey-martin.com) CS194: Image Manipulation & Computational Photography with a lot of slides stolen from Alexei Efros, UC Berkeley, Fall 2018 Steve Seitz and Rick
Chapter 26 Geometrical Optics
Chapter 26 Geometrical Optics The Reflection of Light: Mirrors: Mirrors produce images because the light that strikes them is reflected, rather than absorbed. Reflected light does much more than produce
Dense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Today: dense 3D reconstruction The matching problem
Light: Geometric Optics (Chapter 23)
Light: Geometric Optics (Chapter 23) Units of Chapter 23 The Ray Model of Light Reflection; Image Formed by a Plane Mirror Formation of Images by Spherical Mirrors Index of Refraction Refraction: Snell's Law
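Snell's law from this chapter outline, n1 sin(θ1) = n2 sin(θ2), fits in a few lines; an illustrative sketch (the function name is my own):

```python
import math

def refract_angle(n1, n2, theta1):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2); angles in radians.
    Returns None when no refracted ray exists (total internal reflection)."""
    s = n1 * math.sin(theta1) / n2
    if abs(s) > 1.0:
        return None
    return math.asin(s)

# Light entering glass (n = 1.5) from air at 30 degrees bends toward
# the normal, to about 19.5 degrees.
theta2 = refract_angle(1.0, 1.5, math.radians(30.0))
```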
Announcements. Camera Calibration. Thin Lens: Image of Point. Limits for pinhole cameras.
Announcements Introduction to Computer Vision CSE 152 Lecture 5 Assignment 1 has been posted. See links on web page for reading Irfanview: http://www.irfanview.com/ is a good windows utility for manipulating
COS429: COMPUTER VISION CAMERAS AND PROJECTIONS (2 lectures)
COS429: COMPUTER VISION CAMERAS AND PROJECTIONS (2 lectures) Pinhole cameras Camera with lenses Sensing Analytical Euclidean geometry The intrinsic parameters of a camera The extrinsic parameters of a camera
Single-view 3D Reconstruction
Single-view 3D Reconstruction 10/12/17 Computational Photography Derek Hoiem, University of Illinois Some slides from Alyosha Efros, Steve Seitz Notes about Project 4 (Image-based Lighting) You can work
Camera Model and Calibration. Lecture-12
Camera Model and Calibration Lecture-12 Camera Calibration Determine extrinsic and intrinsic parameters of camera Extrinsic 3D location and orientation of camera Intrinsic Focal length The size of the
MAN-522: COMPUTER VISION SET-2 Projections and Camera Calibration
MAN-522: COMPUTER VISION SET-2 Projections and Camera Calibration Image formation How are objects in the world captured in an image? Physical parameters of image formation Geometric Type of projection Camera
Introduction to Computer Vision
Introduction to Computer Vision Michael J. Black Nov 2009 Perspective projection and affine motion Goals Today Perspective projection 3D motion Wed Projects Friday Regularization and robust statistics
Computer Vision Projective Geometry and Calibration. Pinhole cameras
Computer Vision Projective Geometry and Calibration Professor Hager http://www.cs.jhu.edu/~hager Jason Corso http://www.cs.jhu.edu/~jcorso. Pinhole cameras Abstract camera model - box with a small hole
Camera Projection Models
Camera Projection Models We will introduce different camera projection models that relate the location of an image point to the coordinates of the corresponding 3D points. The projection models include:
Structure from motion
Structure from motion Structure from motion Given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates?? R 1,t 1 R 2,t 2 R 3,t 3 Camera 1 Camera
Dispersion (23.5)
Neil Alberding (SFU Physics) Physics 121: Optics, Electricity & Magnetism Spring 2010 1 / 17 Dispersion (23.5) The speed of light in a material depends on its wavelength White light is a mixture of wavelengths
Chapter 26 Geometrical Optics
Chapter 26 Geometrical Optics 26.1 The Reflection of Light 26.2 Forming Images With a Plane Mirror 26.3 Spherical Mirrors 26.4 Ray Tracing and the Mirror Equation 26.5 The Refraction of Light 26.6 Ray
CSE 252B: Computer Vision II
CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribe: Sameer Agarwal LECTURE 1 Image Formation 1.1. The geometry of image formation We begin by considering the process of image formation when a
Camera Geometry II. COS 429 Princeton University
Camera Geometry II COS 429 Princeton University Outline Projective geometry Vanishing points Application: camera calibration Application: single-view metrology Epipolar geometry Application: stereo correspondence
3D Geometry and Camera Calibration
3D Geometry and Camera Calibration 3D Coordinate Systems Right-handed vs. left-handed x x y z z y 2D Coordinate Systems 3D Geometry Basics y axis up vs. y axis down Origin at center vs. corner Will often
Stereo CSE 576. Ali Farhadi
Stereo CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Why do we perceive depth? What do humans use as depth cues? Motion Convergence When watching an object close to us, our eyes
Final Exam Study Guide
Final Exam Study Guide Exam Window: 28th April, 12:00am EST to 30th April, 11:59pm EST Description As indicated in class the goal of the exam is to encourage you to review the material from the course.
Camera model and multiple view geometry
Chapter Camera model and multiple view geometry Before discussing how 3D information can be obtained from images it is important to know how images are formed. First the camera model is introduced and then
Chapter 32 Light: Reflection and Refraction. Copyright 2009 Pearson Education, Inc.
Chapter 32 Light: Reflection and Refraction Units of Chapter 32 The Ray Model of Light Reflection; Image Formation by a Plane Mirror Formation of Images by Spherical Mirrors Index of Refraction Refraction:
All forms of EM waves travel at the speed of light in a vacuum = 3.00 x 10^8 m/s. This speed is constant in air as well
Pre AP Physics Light & Optics Chapters 14-16 Light is an electromagnetic wave Electromagnetic waves: Oscillating electric and magnetic fields that are perpendicular to the direction the wave moves Difference
Computer Vision Lecture 17
Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics 13.01.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Announcements Seminar in the summer semester
Problem Set 4 Part 1 CMSC 427 Distributed: Thursday, November 1, 2007 Due: Tuesday, November 20, 2007
Problem Set 4 Part 1 CMSC 427 Distributed: Thursday, November 1, 2007 Due: Tuesday, November 20, 2007 Programming For this assignment you will write a simple ray tracer. It will be written in C++ without
Algebra Based Physics
Algebra Based Physics Geometric Optics 2015-12-01 www.njctl.org Table of Contents (click on the topic to go to that section): Reflection, Spherical Mirror, Refraction and
Physics 123 Optics Review
Physics 123 Optics Review I. Definitions & Facts concave converging convex diverging real image virtual image real object virtual object upright inverted dispersion nearsighted, farsighted near point,
CS6670: Computer Vision
CS6670: Computer Vision Noah Snavely Lecture 7: Image Alignment and Panoramas What s inside your fridge? http://www.cs.washington.edu/education/courses/cse590ss/01wi/ Projection matrix intrinsics projection
BIL 719 Computer Vision Apr 16, 2014
BIL 719 - Computer Vision Apr 16, 2014 Binocular Stereo (cont d.), Structure from Motion Aykut Erdem Dept. of Computer Engineering Hacettepe University Slide credit: S. Lazebnik Basic stereo matching algorithm
Single View Geometry. Camera model & Orientation + Position estimation. What am I?
Single View Geometry Camera model & Orientation + Position estimation What am I? Vanishing point Mapping from 3D to 2D Point & Line Goal: Point Homogeneous coordinates represent coordinates in 2 dimensions
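The homogeneous coordinates mentioned in this snippet have a convenient property: the cross product of two homogeneous points gives the line through them, and the cross product of two lines gives their intersection, which is how vanishing points are often computed. An illustrative sketch (not from the lecture):

```python
def cross(a, b):
    """Cross product of 3-vectors. In homogeneous 2D coordinates this
    yields both the line through two points and the meet of two lines."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Line through (0,0) and (1,1), and line through (0,1) and (1,0):
l1 = cross((0.0, 0.0, 1.0), (1.0, 1.0, 1.0))
l2 = cross((0.0, 1.0, 1.0), (1.0, 0.0, 1.0))
x = cross(l1, l2)                      # homogeneous intersection point
p = (x[0] / x[2], x[1] / x[2])         # dehomogenize: (0.5, 0.5)
```

Parallel image lines meet at a point whose third homogeneous coordinate is zero, i.e. a point at infinity, unless perspective maps them to a finite vanishing point.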
Dense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction
Representing the World
Table of Contents Representing the World...1 Sensory Transducers...1 The Lateral Geniculate Nucleus (LGN)... 2 Areas V1 to V5 the Visual Cortex... 2 Computer Vision... 3 Intensity Images... 3 Image Focusing...
Photometric Stereo. Lighting and Photometric Stereo. Computer Vision I. Last lecture in a nutshell BRDF. CSE252A Lecture 7
Lighting and Photometric Stereo Photometric Stereo HW will be on web later today CSE252A Lecture 7 Radiometry of thin lenses
Chapter 7: Geometrical Optics. The branch of physics which studies the properties of light using the ray model of light.
Chapter 7: Geometrical Optics The branch of physics which studies the properties of light using the ray model of light. Overview Geometrical Optics Spherical Mirror Refraction Thin Lens (f, u, v, r and f = r/2)
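The symbols f, u, v, r in this overview belong to the spherical-mirror relations f = r/2 and 1/f = 1/u + 1/v; a hedged illustration (my own function name, not the chapter's code):

```python
def mirror_image_distance(r, u):
    """Spherical mirror: focal length f = r/2; solve 1/f = 1/u + 1/v for v."""
    f = r / 2.0
    if u == f:
        raise ValueError("object at the focal point images at infinity")
    return 1.0 / (1.0 / f - 1.0 / u)

# Concave mirror, r = 20 (f = 10), object at u = 30:
# image at v = 15, magnification m = -v/u = -0.5 (inverted, half size).
v = mirror_image_distance(r=20.0, u=30.0)
```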
Optics II. Reflection and Mirrors
Optics II Reflection and Mirrors Geometric Optics Using a Ray Approximation Light travels in a straight-line path in a homogeneous medium until it encounters a boundary between two different media The
At the interface between two materials, where light can be reflected or refracted. Within a material, where the light can be scattered or absorbed.
At the interface between two materials, where light can be reflected or refracted. Within a material, where the light can be scattered or absorbed. The eye sees by focusing a diverging bundle of rays from
Phys102 Lecture 21/22 Light: Reflection and Refraction
Phys102 Lecture 21/22 Light: Reflection and Refraction Key Points The Ray Model of Light Reflection and Mirrors Refraction, Snell's Law Total Internal Reflection References 23-1,2,3,4,5,6. The Ray Model
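Total internal reflection, listed as a key point above, sets in beyond the critical angle sin(θc) = n2/n1 when light travels from the denser medium toward the rarer one; a brief illustrative sketch:

```python
import math

def critical_angle(n1, n2):
    """Critical angle (radians) for total internal reflection,
    travelling from index n1 into index n2, requiring n1 > n2."""
    if n1 <= n2:
        raise ValueError("TIR needs travel from denser to rarer medium")
    return math.asin(n2 / n1)

# Glass (n = 1.5) to air: sin(theta_c) = 1/1.5, about 41.8 degrees.
theta_c = math.degrees(critical_angle(1.5, 1.0))
```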
Chapter 23. Geometrical Optics (lecture 1: mirrors) Dr. Armen Kocharian
Chapter 23 Geometrical Optics (lecture 1: mirrors) Dr. Armen Kocharian Reflection and Refraction at a Plane Surface The light radiates from a point object in all directions The light reflected from a plane
Reflectance & Lighting
Reflectance & Lighting Computer Vision I CSE 252A Lecture 6 Last lecture in a nutshell Need for lenses (blur from pinhole) Thin lens equation Distortion and aberrations Vignetting CSE 252A, Winter 2007 Computer
Outline. ETN-FPI Training School on Plenoptic Sensing
Outline Introduction Part I: Basics of Mathematical Optimization Linear Least Squares Nonlinear Optimization Part II: Basics of Computer Vision Camera Model Multi-Camera Model Multi-Camera Calibration
Today. Stereo (two view) reconstruction. Multiview geometry. Computational Photography
Computational Photography Matthias Zwicker University of Bern Fall 2009 Today From 2D to 3D using multiple views Introduction Geometry of two views Stereo matching Other applications Multiview geometry
Ray-Tracing. Misha Kazhdan
Ray-Tracing Misha Kazhdan Ray-Tracing In graphics, we often represent the surface of a 3D shape by a set of triangles. Goal: Ray-Tracing Take a collection of triangles representing a 3D scene and render
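The core operation of the ray tracer described here is a ray-primitive intersection test; a sphere makes the simplest worked example (triangles follow the same pattern with a different solve). A hedged sketch with my own function names:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Smallest positive t with |origin + t*direction - center| = radius,
    or None on a miss. Assumes direction is a unit vector."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0     # nearer of the two roots
    return t if t > 0.0 else None

# A ray from the origin along +z hits a unit sphere centered at (0,0,3) at t = 2.
t = ray_sphere((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 3.0), 1.0)
```

Rendering then amounts to firing one such ray per pixel and shading the nearest hit.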
Computer Vision Project-1
University of Utah, School of Computing Computer Vision Project-1 Singla, Sumedha sumedha.singla@utah.edu (00877456) February, 2015 Theoretical Problems 1. Pinhole Camera (a) A straight line in the world space
Refraction at a single curved spherical surface
Refraction at a single curved spherical surface This is the beginning of a sequence of classes which will introduce simple and complex lens systems We will start with some terminology which will become
Agenda. Rotations. Camera models. Camera calibration. Homographies
Agenda Rotations Camera models Camera calibration Homographies 3D Rotations: [X' Y' Z']ᵀ = R [X Y Z]ᵀ, where R is a 3x3 matrix; think of this as a change of basis where the rows rᵢ = R(i,:) are orthonormal basis vectors of the rotated coordinate frame
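Homographies, last on this agenda, act on 2D points through homogeneous coordinates: multiply by a 3x3 matrix and divide by the third component. An illustrative sketch (not from the lecture):

```python
def apply_homography(H, p):
    """Apply a 3x3 homography to a 2D point via homogeneous coordinates."""
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return (u, v)

# A pure translation by (2, 3) written as a homography:
H = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 3.0],
     [0.0, 0.0, 1.0]]
q = apply_homography(H, (1.0, 1.0))   # (3.0, 4.0)
```

A non-trivial bottom row is what makes general homographies perspective warps rather than affine maps.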
Stereo imaging ideal geometry
Stereo imaging, ideal geometry: a scene point (X,Y,Z) projects through two lenses of focal length f to image points (x_L, y_L) and (x_R, y_R). Optical axes are parallel Optical axes separated by baseline, b. Line connecting lens centers is perpendicular to the optical axis, and
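In this ideal parallel-axis geometry, depth follows directly from disparity: Z = f·b / (x_L − x_R). A minimal sketch (units and names are my own assumptions):

```python
def stereo_depth(f, baseline, x_left, x_right):
    """Ideal parallel-axis stereo: disparity d = x_left - x_right,
    depth Z = f * baseline / d. f in pixels, baseline in meters."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f * baseline / d

# f = 500 px, baseline = 0.1 m, disparity = 10 px  ->  Z = 5 m.
Z = stereo_depth(f=500.0, baseline=0.1, x_left=120.0, x_right=110.0)
```

Note the inverse relationship: nearby points have large disparities, so depth resolution degrades with distance.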
3-D Euclidean Space - Vectors
3-D Euclidean Space - Vectors Rigid Body Motion and Image Formation A free vector is defined by a pair of points: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Coordinates of the vector: 3D Rotation
More information