Epipolar geometry contd.

Estimating F: the 8-point algorithm. The fundamental matrix $F$ is defined by $x'^\top F x = 0$ for any pair of matches $x$ and $x'$ in the two images. Let $x = (u, v, 1)^\top$ and $x' = (u', v', 1)^\top$, and write

$$F = \begin{pmatrix} f_{11} & f_{12} & f_{13} \\ f_{21} & f_{22} & f_{23} \\ f_{31} & f_{32} & f_{33} \end{pmatrix}$$

Each match then gives one linear equation:

$$u u' f_{11} + v u' f_{12} + u' f_{13} + u v' f_{21} + v v' f_{22} + v' f_{23} + u f_{31} + v f_{32} + f_{33} = 0$$

8-point algorithm. Stacking the equations from $n$ matches gives a homogeneous linear system $Af = 0$:

$$\begin{pmatrix}
u_1 u'_1 & v_1 u'_1 & u'_1 & u_1 v'_1 & v_1 v'_1 & v'_1 & u_1 & v_1 & 1 \\
\vdots & & & & & & & & \vdots \\
u_n u'_n & v_n u'_n & u'_n & u_n v'_n & v_n v'_n & v'_n & u_n & v_n & 1
\end{pmatrix}
\begin{pmatrix} f_{11} \\ f_{12} \\ f_{13} \\ f_{21} \\ f_{22} \\ f_{23} \\ f_{31} \\ f_{32} \\ f_{33} \end{pmatrix} = 0$$

In reality, instead of solving $Af = 0$ exactly, we seek the $f$ that minimizes $\|Af\|$: the least eigenvector of $A^\top A$.
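As a concrete sketch, the stacking-and-SVD step might look like this in Python with NumPy (the function name and the (N, 2) point layout are choices of this example, not from the slides):

```python
import numpy as np

def estimate_F_linear(x1, x2):
    """Direct linear estimate of F from N >= 8 matches.

    x1, x2: (N, 2) arrays of pixel coordinates; row i of x2 is the
    match of row i of x1, with the convention x2_h^T F x1_h = 0.
    """
    u, v = x1[:, 0], x1[:, 1]
    up, vp = x2[:, 0], x2[:, 1]
    # One row of A per match, columns ordered as f11, f12, ..., f33.
    A = np.column_stack([u*up, v*up, up, u*vp, v*vp, vp,
                         u, v, np.ones_like(u)])
    # Unit-norm minimizer of ||Af||: right singular vector of A for the
    # smallest singular value (the least eigenvector of A^T A).
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)
```

The returned $F$ is determined only up to scale, since any nonzero multiple of $f$ satisfies the same homogeneous system.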

8-point algorithm. Problem? $F$ should have rank 2. To enforce that $F$ is of rank 2, $F$ is replaced by the $F'$ that minimizes $\|F - F'\|$ subject to the rank constraint. This is achieved by SVD: let $F = U \Sigma V^\top$, where

$$\Sigma = \begin{pmatrix} s_1 & 0 & 0 \\ 0 & s_2 & 0 \\ 0 & 0 & s_3 \end{pmatrix}, \qquad \text{and let } \Sigma' = \begin{pmatrix} s_1 & 0 & 0 \\ 0 & s_2 & 0 \\ 0 & 0 & 0 \end{pmatrix};$$

then $F' = U \Sigma' V^\top$ is the solution.
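The rank-2 projection is a few lines with NumPy; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def enforce_rank2(F):
    """Nearest rank-2 matrix to F in Frobenius norm, via SVD."""
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0                      # zero out the smallest singular value
    return U @ np.diag(s) @ Vt
```

By the Eckart-Young theorem, zeroing the smallest singular value gives the nearest rank-2 matrix in the Frobenius norm, and the distance $\|F - F'\|$ is exactly that singular value.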

Recovering camera parameters from F / E. Can we recover $R$ and $t$ between the cameras from $F$? We have $F = K_2^{-\top} [t]_\times R K_1^{-1}$. No: $K_1$ and $K_2$ are in principle arbitrary matrices. What if we knew $K_1$ and $K_2$ to be the identity? Then $E = [t]_\times R$.

Recovering camera parameters from E. $E = [t]_\times R$, and $t^\top E = t^\top [t]_\times R = 0$, so $E^\top t = 0$: $t$ is a solution to $E^\top x = 0$. We can't distinguish between $t$ and $ct$ for a constant scalar $c$. How do we recover $R$?

Recovering camera parameters from E. We know $E$ and $t$; consider the SVDs of $E$ and $[t]_\times$: $[t]_\times = U \Sigma V^\top$ and $E = U' \Sigma' V'^\top$. Since $E = [t]_\times R = U \Sigma V^\top R = U \Sigma (V^\top R)$ and $V^\top R$ is orthogonal, this is itself an SVD of $E$; identifying the factors gives $V'^\top = V^\top R$, and hence $R = V V'^\top$.
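Matching the two SVDs as on this slide is delicate in practice, because $E$ has two equal singular values and its SVD is therefore not unique; the standard recipe (due to Hartley and Zisserman) instead forms the rotation candidates $U W V^\top$ and $U W^\top V^\top$ directly from the SVD of $E$. A sketch of that approach (names are illustrative); the correct one of the four $(R, t)$ candidates is the one that places reconstructed points in front of both cameras:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """Return the four candidate (R, t) factorizations of E = [t]_x R."""
    U, _, Vt = np.linalg.svd(E)
    W = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    if np.linalg.det(R1) < 0:       # keep proper rotations (det = +1)
        R1, R2 = -R1, -R2
    t = U[:, 2]                     # unit null vector of E^T (previous slide)
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

Note that $t$ is recovered only up to sign and scale, which is why both $\pm t$ appear among the candidates.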


8-point algorithm. Pros: it is linear, easy to implement, and fast. Cons: susceptible to noise, and degenerate if the points all lie on the same plane. Normalized 8-point algorithm (Hartley): position the origin at the centroid of the image points, then rescale the coordinates so that the mean distance from the origin is $\sqrt{2}$.
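Hartley's normalization step can be sketched as follows (names illustrative). If $\hat{F}$ is then estimated from the normalized points $\hat{x} = T_1 x$ and $\hat{x}' = T_2 x'$, the matrix in the original coordinates is recovered as $F = T_2^\top \hat{F} T_1$:

```python
import numpy as np

def normalize_points(x):
    """Translate the centroid to the origin and scale so that the mean
    distance from the origin is sqrt(2). Returns the normalized (N, 2)
    points and the 3x3 transform T acting on homogeneous coordinates."""
    c = x.mean(axis=0)
    d = np.mean(np.linalg.norm(x - c, axis=1))
    s = np.sqrt(2.0) / d
    T = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    return (x - c) * s, T
```

The point of the normalization is conditioning: with raw pixel coordinates, the columns of $A$ range over wildly different magnitudes (products of coordinates vs. ones), which makes the least-eigenvector solution very sensitive to noise.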

Other approaches to obtaining 3D structure

Active stereo with structured light. Project structured light patterns onto the object; this simplifies the correspondence problem and allows us to use only one camera (plus a projector). L. Zhang, B. Curless, and S. M. Seitz. Rapid Shape Acquisition Using Color Structured Light and Multi-pass Dynamic Programming. 3DPVT 2002

Active stereo with structured light L. Zhang, B. Curless, and S. M. Seitz. Rapid Shape Acquisition Using Color Structured Light and Multi-pass Dynamic Programming. 3DPVT 2002

Microsoft Kinect

Light and geometry

Till now: 3D structure from multiple cameras. Problems: requires calibrated cameras, and requires correspondence. Other cues to 3D structure? Key idea: use shading to understand shape.

What does 3D structure mean? We have been talking about the depth of a pixel: each pixel $(x, y)$ maps through the pinhole to a 3D point $(X, Y, Z)$.

What does 3D structure mean? But we can also look at the orientation of the surface at each pixel: the surface normal $(N_X, N_Y, N_Z)$. Not enough by itself to reveal absolute locations, but it gives enough of a clue to object shape.

Shading is a cue to surface orientation

Modeling Image Formation Now we need to reason about: How light interacts with the scene How a pixel value is related to light energy in the world Track a ray of light all the way from light source to the sensor

How does light interact with the scene? Light is a bunch of photons Photons are energy packets Light starts from the light source, is reflected / absorbed by surfaces and lands on the camera Two key quantities: Irradiance Radiance

Radiance How do we measure the strength of a beam of light? Idea: put a sensor and see how much energy it gets If the sensor is slanted it gets less energy A larger sensor captures more energy

Radiance How do we measure the strength of a beam of light? Radiance: power in a particular direction per unit area when surface is orthogonal to direction If the sensor is slanted it gets less energy A larger sensor captures more energy

Radiance. Pixels measure radiance: this pixel measures radiance along this ray.

Where do the rays come from? Rays from the light source reflect off a surface and reach camera Reflection: Surface absorbs light energy and radiates it back

Irradiance. Radiance measures the energy of a light beam, but what is the energy received by a surface? It depends on the area of the surface and its orientation.

Irradiance. Power received by a surface patch of area $A$ from a beam of radiance $L$ arriving at angle $\theta$: $P = L A \cos\theta$.

Irradiance. Power received by a surface patch of unit area from a beam of radiance $L$ arriving at angle $\theta$: $L \cos\theta$. This is called irradiance: irradiance = radiance of the ray $\times \cos\theta$.

Light rays interacting with a surface. Light of radiance $L_i$ comes from the light source at an incoming direction $\theta_i$. The surface sends out a ray of radiance $L_r$ in the outgoing direction $\theta_r$. How does $L_r$ relate to $L_i$? $N$ is the surface normal; $L$ is the direction of the light, making angle $\theta_i$ with the normal; $V$ is the viewing direction, making angle $\theta_r$ with the normal.

Light rays interacting with a surface. Output radiance along $V$: $L_r = \rho(\theta_i, \theta_r)\, L_i \cos\theta_i$, where $L_i \cos\theta_i$ is the incoming irradiance along $L$ and $\rho(\theta_i, \theta_r)$ is the bidirectional reflectance distribution function (BRDF).

Light rays interacting with a surface. $L_r = \rho(\theta_i, \theta_r)\, L_i \cos\theta_i$. Special case 1: perfect mirror, $\rho(\theta_i, \theta_r) = 0$ unless $\theta_i = \theta_r$. Special case 2: matte surface, $\rho(\theta_i, \theta_r) = \rho_0$ (constant).

Special case 1: perfect mirror. $\rho(\theta_i, \theta_r) = 0$ unless $\theta_i = \theta_r$. Also called specular surfaces: they reflect light in a single, particular direction.

Special case 2: matte surface. $\rho(\theta_i, \theta_r) = \rho_0$. Also called Lambertian surfaces: the reflected light is independent of the viewing direction.

Lambertian surfaces. For a Lambertian surface: $L_r = \rho\, L_i \cos\theta_i = \rho\, L_i\, (L \cdot N)$. $\rho$ is called the albedo. Think of it as paint: a high albedo means a white surface, a low albedo a black surface. It varies from point to point.

Lambertian surfaces. Assume the light is directional: all rays from the light source are parallel. This is equivalent to a light source infinitely far away. All pixels get light from the same direction $L$ and of the same intensity $L_i$.

Lambertian surfaces. Intrinsic image decomposition: $I(x, y) = \rho(x, y)\, L_i\, (L \cdot N(x, y))$, where $\rho(x, y)$ is the reflectance image and $L_i\, (L \cdot N(x, y))$ is the shading image.
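The decomposition translates directly into code; a minimal sketch with assumed array shapes (an H x W albedo map, H x W x 3 unit normals, and a unit light direction, all choices of this example):

```python
import numpy as np

def lambertian_image(albedo, normals, light, L_i=1.0):
    """I(x, y) = albedo(x, y) * L_i * max(0, L . N(x, y)).

    albedo:  (H, W) array, the reflectance image rho(x, y)
    normals: (H, W, 3) array of unit surface normals N(x, y)
    light:   (3,) unit vector L pointing toward the light source
    """
    shading = L_i * np.clip(normals @ light, 0.0, None)  # shading image
    return albedo * shading                              # reflectance * shading
```

The clamp to zero reflects that points facing away from the light receive no direct illumination; the slide's formula applies where $\cos\theta_i \ge 0$.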

Lambertian surfaces

Lambertian surfaces. The image is the product of a reflectance image and a shading image: $I = R \cdot S(Z, L)$, where $R$ is the Lambertian reflectance (albedo) and the shading $S$ is a function of the shape/depth $Z$ (near to far) and the illumination $L$.

Reconstructing Lambertian surfaces. $I(x, y) = \rho(x, y)\, L_i\, (L \cdot N(x, y))$. The equation is a constraint on the albedo and normals. Can we solve for the albedo and normals?

Solution 1: Shape from Shading. $I(x, y) = \rho(x, y)\, L_i\, (L \cdot N(x, y))$. Assume $L_i$ is 1, assume $L$ is known, assume some normals are known, and assume the surface is smooth (normals change slowly). In practice, SFS doesn't work very well: the assumptions are too restrictive, and there is too much ambiguity in nontrivial scenes.