General Principles of 3D Image Analysis

Extraction of 3D information from an image (sequence) is important for
- vision in general (= scene reconstruction)
- many tasks (e.g. robot grasping and navigation, traffic analysis)
- but not all tasks (e.g. image retrieval, quality control, monitoring)

[Figure: interpretation hierarchy from raw images via image elements and scene elements to objects and high-level interpretations]

Recovery of 3D information is possible
- by multiple cameras (e.g. binocular stereo)
- by a monocular image sequence with motion + weak assumptions
- by a single image + strong assumptions or prior knowledge about the scene

Single Image 3D Analysis

Humans exploit various cues for a tentative (heuristic) depth analysis:
- size of known objects
- texture gradient
- occlusion
- colour intensities
- angle of observation
- continuity assumption
- generality assumption

Generality Assumption

Assume that
- viewpoint
- illumination
- physical surface properties
are general, i.e. do not produce coincidental structures in the image.

Example: Do not interpret this figure as a 3D wireframe cube, because this view is not general.

[Figure: accidental view of a wireframe cube vs. a general view]

The generality assumption is the basis for several specialized interpretation methods, e.g.
- shape from texture
- shape from shading
- ... "shape from X"

Texture Gradient

Assume that texture does not mimic projective effects. Interpret the texture gradient as a 3D projection effect (Witkin 81).

Shape from Texture

Assume
- homogeneous texture on the 3D surface and
- 3D surface continuity.
Reconstruct 3D shape from perspective texture variations (Barrow and Tenenbaum 81).

Depth Cues from Colour Saturation

Humans interpret regions with less saturated colours as farther away.

[Figure: hazy distant hills vs. nearby terrain, photographed near Graz]

Surface Shape from Contour

Assume "non-special" illumination and surface properties. Among the possible 3D reconstructions of a 2D image contour, choose the 3D surface shape which
- maximizes the probability of the observed contours and
- minimizes the probability of additional contours.

[Figure: a 2D image contour and possible 3D reconstructions a, b, c]

3D Line Shape from 2D Projections

Assume that lines connected in 2D are also connected in 3D. Reconstruct the 3D line shape by minimizing spatial curvature and torsion: 2D collinear lines are also 3D collinear.

3D Shape from Multiple Lines

Assume that similar line shapes result from similar surface shapes: parallel lines lie locally on a cylinder (Stevens 81).

3D Junction Interpretation

Rules for junctions of curved lines (Binford 81), e.g. "a is not behind b", "a, b and c meet".

[Figure: junction rules for curved lines]

Rules for blocksworld junctions (Waltz 86): a catalogue of admissible edge labels (+, -, 0) at each junction type, for a "general" and a "special" ensemble.

[Figure: blocksworld junction catalogue with edge labels]

3D Line Orientation from Vanishing Points

From the laws of perspective projection: the projections of 3D parallel straight lines intersect in a single point, the vanishing point.

Assume that more than 2 straight lines do not intersect in a single point by coincidence. Hence, if more than 2 straight lines intersect in a single point, assume that they are parallel in 3D.

Obtaining 3D Shape from Shading Information

Under certain conditions, a 3D surface model may be reconstructed from the greyvalue variations of a monocular image.

From "Shape from Shading", B.K.P. Horn and M.J. Brooks (eds.), MIT Press 1989

Principle of Shape from Shading

See "Shape from Shading" (B.K.P. Horn, M.J. Brooks, eds.), MIT Press 1989.

Physical surface properties, surface orientation, illumination and viewing direction determine the greyvalue of a surface patch in a sensor signal. For a single object surface viewed in one image, greyvalue changes are mainly caused by surface orientation changes.

The reconstruction of arbitrary surface shapes is not possible, because different surface orientations may give rise to identical greyvalues. Surface shapes may be uniquely reconstructed from shading information if the possible surface shapes are constrained by smoothness assumptions.

Principle of the incremental procedure for surface shape reconstruction:
- a: patch with known orientation
- b, c: neighbouring patches with similar orientations
- b': a radically different orientation, which may not be a neighbour of a

Photometric Surface Properties

[Figure: illumination and viewing directions relative to the surface normal, with polar (zenith) angles θ_i, θ_v and azimuth angles φ_i, φ_v]

In general, the ability of a surface to reflect light is given by the Bi-directional Reflectance Distribution Function (BRDF) r:

    r(θ_i, φ_i; θ_v, φ_v) = δL(θ_v, φ_v) / δE(θ_i, φ_i)

where δE is the irradiance of the light source received by the surface patch and δL is the radiance of the surface patch towards the viewer.

For many materials the reflectance properties are rotation invariant; in this case the BRDF depends only on θ_i, θ_v and φ, where φ = φ_i - φ_v.
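These angle conventions are easy to get wrong in code. As a minimal sketch (my own illustration, not part of the lecture), the following Python function recovers θ_i, θ_v and φ = φ_i - φ_v from unit light and view vectors, assuming they are expressed in a local frame whose z-axis is the surface normal:

```python
# Minimal sketch (illustration, not lecture code): BRDF angles from unit vectors.
import numpy as np

def brdf_angles(light, view):
    """light, view: unit 3-vectors pointing away from the surface patch;
    the local surface normal is assumed to be (0, 0, 1)."""
    theta_i = np.arccos(np.clip(light[2], -1.0, 1.0))  # polar (zenith) angle of the source
    theta_v = np.arccos(np.clip(view[2], -1.0, 1.0))   # polar angle of the viewer
    phi_i = np.arctan2(light[1], light[0])             # azimuth angles in the tangent plane
    phi_v = np.arctan2(view[1], view[0])
    return theta_i, theta_v, phi_i - phi_v             # rotation-invariant BRDFs use only phi_i - phi_v

light = np.array([0.3, 0.0, 1.0]); light /= np.linalg.norm(light)
view  = np.array([0.0, 0.4, 1.0]); view  /= np.linalg.norm(view)
print(brdf_angles(light, view))
```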

Irradiance of the Imaging Device

[Figure: a surface patch producing radiance L is imaged by a lens with aperture d and focal length f onto a sensor patch receiving irradiance E, at angle α off the optical axis]

Irradiance = light energy falling on a unit patch of the imaging sensor; the sensor signal is proportional to the irradiance:

    E = L · (π/4) · (d/f)² · cos⁴α

The sensor signal depends on the off-axis angle α of the surface element ("vignetting"): off-center pixels in wide-angle images are darker.

Lambertian Surfaces

A Lambertian surface is an ideally matte surface which looks equally bright from all viewing directions under uniform or collimated illumination. Its brightness is proportional to the cosine of the illumination angle:
- the surface receives energy per unit area ~ cos θ_i,
- the surface reflects energy ~ cos θ_v due to its matte reflectance properties,
- a sensor element receives energy from a surface area ~ 1/cos θ_v.

The two cos θ_v factors cancel out, hence

    r_Lambert(θ_i, θ_v, φ) = ρ(λ)/π

where the "albedo" ρ(λ) = (∫_Ω L cos θ dΩ) / E_i is the proportion of the incident energy reflected back into the half space Ω above the surface.
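A small numeric sketch of these two formulas (my own illustration; all parameter values are made-up examples): the radiance of a Lambertian patch under a collimated source, followed by the lens equation with cos⁴ vignetting:

```python
# Sketch under the slide's assumptions: Lambertian patch, collimated source.
import numpy as np

def lambertian_sensor_irradiance(rho, E_i, theta_i, d, f, alpha):
    # Radiance of the patch: BRDF (rho/pi) times received irradiance E_i * cos(theta_i).
    L = (rho / np.pi) * E_i * max(np.cos(theta_i), 0.0)  # no light from behind the surface
    # Irradiance on the sensor, including cos^4 vignetting at off-axis angle alpha.
    return L * (np.pi / 4.0) * (d / f) ** 2 * np.cos(alpha) ** 4

# Example values: albedo 0.8, source irradiance 1000 W/m^2 at 30° incidence,
# aperture d = 8 mm, focal length f = 16 mm, pixel 10° off the optical axis.
print(lambertian_sensor_irradiance(0.8, 1000.0, np.radians(30), 0.008, 0.016, np.radians(10)))
```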

Surface Gradients

For the 3D reconstruction of surfaces it is useful to represent reflectance properties as a function of surface orientation. For a surface z(x, y):

    p = δz/δx    x-component of the surface gradient; (1, 0, p) is the surface tangent vector in x direction
    q = δz/δy    y-component of the surface gradient; (0, 1, q) is the tangent vector in y direction

(-p, -q, 1) is a vector in the direction of the surface normal. If the z-axis is chosen to coincide with the viewer direction, we have

    n = (-p, -q, 1) / √(1 + p² + q²)    (unit surface normal)
    cos θ_v = 1 / √(1 + p² + q²)
    cos θ_i = (1 + p_i·p + q_i·q) / (√(1 + p² + q²) · √(1 + p_i² + q_i²))
    cos φ = 1 / √(1 + p_i² + q_i²)

The dependency of the BRDF on θ_i, θ_v and φ may thus be expressed in terms of p and q (with p_i and q_i for the light source direction).

Simplified Image Irradiance Equation

Assume that
- the object has uniform reflecting properties,
- the light sources are distant, so that the irradiation is approximately constant and equally oriented,
- the viewer is distant, so that the received radiance does not depend on the distance but only on the orientation towards the surface.

With these simplifications the sensor greyvalues depend only on the surface gradient components p and q:

    E(x, y) = R(p(x, y), q(x, y)) = R(δz/δx, δz/δy)    ("simplified image irradiance equation")

R(p, q) is the reflectance function for the particular illumination geometry; E(x, y) is the sensor greyvalue measured at (x, y). Based on this equation and a smoothness constraint, shape-from-shading methods recover surface orientations.
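For the Lambertian case, the reflectance function can be written down directly in gradient space. A sketch (my own, not lecture code; grid and source direction are example values) that evaluates R(p, q) = ρ cos θ_i with the formula above, e.g. to draw iso-brightness contours:

```python
# Sketch: Lambertian reflectance function R(p, q) in gradient space.
import numpy as np

def reflectance_map(p, q, p_i, q_i, rho=1.0):
    """R(p, q) for a Lambertian surface and a distant source with gradient (p_i, q_i)."""
    cos_theta_i = (1.0 + p_i * p + q_i * q) / (
        np.sqrt(1.0 + p**2 + q**2) * np.sqrt(1.0 + p_i**2 + q_i**2))
    return rho * np.maximum(cos_theta_i, 0.0)  # clip self-shadowed orientations

# Sample R on a gradient-space grid (source direction as on the next slide).
p, q = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
print(reflectance_map(p, q, p_i=0.7, q_i=0.3))
```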

Reflectance Maps

R(p, q) may be plotted as a reflectance map with iso-brightness contours.

[Figure: reflectance map for a Lambertian surface illuminated from p_i = 0.7, q_i = 0.3, and reflectance map for a matte surface with a specular component, both plotted over (p, q)]

Characteristic Strip Method

Given a surface point (x, y, z) with known height z, orientation p and q, and second derivatives r = z_xx, s = z_xy = z_yx, t = z_yy, the height z+δz and orientation p+δp, q+δq at a neighbouring point (x+δx, y+δy) can be calculated from the image irradiance equation E(x, y) = R(p, q).

Infinitesimal change of height:
    δz = p·δx + q·δy
Changes of p and q for a step (δx, δy):
    δp = r·δx + s·δy
    δq = s·δx + t·δy
Differentiation of the image irradiance equation w.r.t. x and y gives:
    E_x = r·R_p + s·R_q
    E_y = s·R_p + t·R_q
Choose the step δξ in the gradient direction of the reflectance map (a "characteristic strip"):
    δx = R_p·δξ
    δy = R_q·δξ
For this direction the image irradiance equation can be replaced by
    δx/δξ = R_p
    δy/δξ = R_q
    δz/δξ = p·R_p + q·R_q
    δp/δξ = E_x
    δq/δξ = E_y

Boundary conditions and initial points may be given by
- occluding contours, where the surface normal is perpendicular to the viewing direction,
- singular points, where the surface normal points towards the light source.
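A minimal Euler-integration sketch of this system (my own illustration; Rp, Rq, Ex, Ey are assumed to be given as callables, e.g. from an analytic reflectance map and interpolated image gradients):

```python
# Sketch: grow one characteristic strip by Euler integration of the five ODEs.
def characteristic_strip(x, y, z, p, q, Rp, Rq, Ex, Ey, dxi=0.01, steps=100):
    """Start from a point with known height z and orientation (p, q)."""
    strip = [(x, y, z)]
    for _ in range(steps):
        rp, rq = Rp(p, q), Rq(p, q)   # gradient of the reflectance map
        ex, ey = Ex(x, y), Ey(x, y)   # image gradients at the current point
        x += rp * dxi                 # dx/dxi = Rp
        y += rq * dxi                 # dy/dxi = Rq
        z += (p * rp + q * rq) * dxi  # dz/dxi = p Rp + q Rq
        p += ex * dxi                 # dp/dxi = Ex
        q += ey * dxi                 # dq/dxi = Ey
        strip.append((x, y, z))
    return strip
```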

Recovery of the Shape of a Nose

Pictures from B.K.P. Horn, "Robot Vision", MIT Press, 1986, p. 255:
- nose with crudely quantized greyvalues
- superimposed characteristic curves
- superimposed elevations at the characteristic curves
The nose had been powdered to provide a Lambertian reflectance map.

Shape from Shading by Global Optimization

Given a monocular image and a known image irradiance equation, surface orientations are only ambiguously constrained. Disambiguation may be achieved by optimizing a global smoothness criterion. Minimize

    D = Σ_(x,y) [E(x, y) - R(p, q)]² + λ·[(∇²p)² + (∇²q)²]

where the first term measures the violation of the reflectance constraint, the second term the violation of the smoothness constraint, and λ is a Lagrange multiplier.

There exist standard techniques for solving this minimization problem iteratively. In general, the solution may not be unique. Due to several uncertain assumptions (illumination, reflectance function, smoothness of the surface), solutions may not be reliable.
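As one illustration of such an iterative technique, a plain gradient-descent sketch of the discretized criterion (my own; the classical update schemes, e.g. Ikeuchi-Horn, differ in details such as using local averages of p and q):

```python
# Sketch: gradient descent on sum (E - R)^2 + lam * ((lap p)^2 + (lap q)^2).
import numpy as np

def laplacian(a):
    """Discrete 5-point Laplacian with periodic boundaries (for simplicity)."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

def shape_from_shading(E, R, Rp, Rq, lam=0.1, rate=0.1, iters=500):
    """E: image array; R, Rp, Rq: reflectance function and its partials (callables)."""
    p = np.zeros_like(E)
    q = np.zeros_like(E)
    for _ in range(iters):
        err = E - R(p, q)  # violation of the reflectance constraint per pixel
        # Descent direction: err * R_p minus the biharmonic smoothness gradient.
        p += rate * (err * Rp(p, q) - lam * laplacian(laplacian(p)))
        q += rate * (err * Rq(p, q) - lam * laplacian(laplacian(q)))
    return p, q
```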

Principle of Photometric Stereo

In photometric stereo, several images with different known light source orientations are used to uniquely recover the 3D orientation of a surface with known reflectance.

The reflectance maps R_1(p, q), R_2(p, q), R_3(p, q) specify the possible surface orientations of each pixel in terms of iso-brightness contours ("isophotes"). The intersection of the isophotes corresponding to the 3 brightness values measured for a pixel (x, y) uniquely determines the surface orientation (p(x, y), q(x, y)).

From "Shape from Shading", B.K.P. Horn and M.J. Brooks (eds.), MIT Press 1989

Analytical Solution for Photometric Stereo

For a Lambertian surface:

    E(x, y) = R(p, q) = ρ cos θ_i = ρ·iᵀn

with i = light source direction, n = surface normal, ρ = constant albedo.

If K images are taken with K different light sources i_k, k = 1 ... K, there are K brightness measurements E_k for each image position (x, y):

    E_k(x, y) = ρ·i_kᵀn

In matrix notation: E(x, y) = ρ·L·n, where L stacks the light directions i_1ᵀ ... i_Kᵀ as rows. For K = 3, L may be inverted, hence

    n(x, y) = L⁻¹E(x, y) / |L⁻¹E(x, y)|

In general, the pseudo-inverse must be computed:

    n(x, y) = (LᵀL)⁻¹LᵀE(x, y) / |(LᵀL)⁻¹LᵀE(x, y)|
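A direct sketch of this least-squares solution (my own illustration): np.linalg.lstsq applies the pseudo-inverse per pixel, and the albedo is recovered as the length of the vector ρ·n:

```python
# Sketch: Lambertian photometric stereo from K >= 3 images with known lights.
import numpy as np

def photometric_stereo(images, lights):
    """images: K x H x W stack of greyvalue images; lights: K x 3 unit light directions."""
    K, H, W = images.shape
    E = images.reshape(K, -1)                       # K x (H*W) brightness matrix
    # Solve L (rho * n) = E for every pixel at once (pseudo-inverse solution).
    G, *_ = np.linalg.lstsq(lights, E, rcond=None)  # 3 x (H*W), G = rho * n
    rho = np.linalg.norm(G, axis=0)                 # albedo = length of G
    n = G / np.maximum(rho, 1e-12)                  # unit surface normals
    return n.reshape(3, H, W), rho.reshape(H, W)

# Synthetic check with a known normal and albedo 0.8:
L = np.array([[0, 0, 1.0], [1, 0, 1.0], [0, 1, 1.0]])
L /= np.linalg.norm(L, axis=1, keepdims=True)
n_true = np.array([0.2, -0.1, 1.0]); n_true /= np.linalg.norm(n_true)
imgs = (0.8 * L @ n_true).reshape(3, 1, 1)          # E_k = rho * i_k^T n
n, rho = photometric_stereo(imgs, L)
print(n[:, 0, 0], rho[0, 0])                        # ~ n_true, ~0.8
```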