Estimation of Multiple Illuminants from a Single Image of Arbitrary Known Geometry*


Estimation of Multiple Illuminants from a Single Image of Arbitrary Known Geometry*
Yang Wang, Dimitris Samaras
Computer Science Department, SUNY Stony Brook
*Support for this research was provided by DOE grant MO-068.

Overview

Assumptions:
- Lambertian object of arbitrary known geometry
- Directional light sources, with Lambertian shading I = a (L . n) for albedo a, light vector L, and surface normal n

Advantages:
- Single image
- No need for particular calibration objects
- Robust to noise
- Use of global information, more suitable for Lambertian surfaces

Related Work

- Critical points & occluding boundaries: Yang et al., CVPR 91 [20]; Zhang et al., CVPR 00 [21]. Sensitive to noise; needs a calibration object.
- Convolution: Basri et al., ICCV 01 [1]; Ramamoorthi et al., SIGGRAPH 01 [15]. Cannot compute exact positions of directional light sources on Lambertian surfaces.
- Specular sphere: Debevec, SIGGRAPH 98 [3]. Interacts with the environment; needs a calibration object.

Basic Definitions

Critical point: a point in the image is called a critical point if the surface normal at the corresponding point on the surface of the object is perpendicular to one of the light sources.
Critical boundary: all critical points corresponding to one real light group into a cut-off curve called a critical boundary. On a sphere, a critical boundary is a great circle (circle of maximum circumference).
[Figure: example with 2 light sources]
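The perpendicularity condition in the critical-point definition can be sketched numerically. This is an illustrative toy (the `critical_mask` helper, its tolerance, and the sample normals are made up for this sketch, not the authors' detector):

```python
import numpy as np

def critical_mask(normals, light_dir, tol=1e-2):
    """A pixel is a critical point for a light when its surface normal is
    perpendicular to the light direction, i.e. N . L = 0 (within a
    tolerance, since discrete images are noisy)."""
    return np.abs(normals @ light_dir) < tol

light = np.array([0.0, 0.0, 1.0])          # unit light direction
normals = np.array([[0.0, 0.0, 1.0],       # faces the light: not critical
                    [1.0, 0.0, 0.0],       # perpendicular: critical
                    [0.0, 1.0, 0.0]])      # perpendicular: critical
print(critical_mask(normals, light))       # [False  True  True]
```

In practice the method does not know the light direction in advance; critical points are found in the image and the boundaries they form are what reveal the lights.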

Real Light Source Detection: Virtual Light Patches

Critical boundaries segment the whole sphere image into several regions. Each segmented region corresponds to one virtual light L that minimizes Σ_i (I_i - L . N_i)². Each such region is called a virtual light patch. Intuitively, the difference between two virtual lights is caused by a real light source, e.g., v3 - v1 = (L1 + L2) - L1 = L2.
[Figures: input image, segmented image]

Our Algorithm
1. Detect critical points.
2. Find initial critical boundaries by a Hough transform.
3. Adjust critical boundaries: move every critical boundary by a small step; a reduction in the least-squares error indicates a better solution. Update boundaries using a greedy algorithm to minimize the total error.
4. Merge spurious critical boundaries: if two critical boundaries are closer than a threshold angle (e.g., 5 degrees), they are replaced by their average.
5. Remove spurious critical boundaries: test every critical boundary and remove it if the least-squares error does not increase. Test boundaries in increasing order of Hough transform votes (i.e., test the less trustworthy boundaries first).
6. Calculate the real light along each boundary by subtracting the neighboring virtual lights.
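The per-region least-squares fit and the final subtraction step can be sketched as a toy reconstruction under the stated Lambertian model. The specific light vectors, region masks, and synthetic normals below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def fit_virtual_light(I, N):
    """Solve min_L sum_i (I_i - L . N_i)^2 for the 3-vector L.
    I: (n,) intensities in one region; N: (n, 3) unit normals."""
    L, *_ = np.linalg.lstsq(N, I, rcond=None)
    return L

rng = np.random.default_rng(0)
L1 = np.array([0.0, 0.0, 2.0])   # light direction scaled by intensity
L2 = np.array([1.0, 0.0, 0.0])

# Random unit normals standing in for a sphere of known geometry.
N = rng.normal(size=(5000, 3))
N /= np.linalg.norm(N, axis=1, keepdims=True)

# Lambertian image: each light contributes max(0, L . n).
I = np.clip(N @ L1, 0, None) + np.clip(N @ L2, 0, None)

both = (N @ L1 > 0) & (N @ L2 > 0)     # patch lit by both lights
only1 = (N @ L1 > 0) & (N @ L2 <= 0)   # patch lit by L1 alone

v_both = fit_virtual_light(I[both], N[both])     # recovers L1 + L2
v_only1 = fit_virtual_light(I[only1], N[only1])  # recovers L1

L2_est = v_both - v_only1   # step 6: real light = difference of virtual lights
```

Within each patch the shading is strictly linear in the normal (no light is clipped there), so the least-squares solution recovers the patch's virtual light exactly in this noiseless toy; real images add noise, which is why the boundary adjustment and merging steps exist.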

Synthetic Sphere, 15 Light Sources
[Figures: original image, rerendered image, virtual light patches]
Average pixel intensity error (0-255 range): 0.42 gray levels.

Objects of Arbitrary Geometry
Normals are mapped to a sphere. This leaves a high number of missing data points on the sphere (shown in green).
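The sphere mapping can be sketched as a binning of known per-pixel normals into a spherical image. The (θ, φ) discretization below is an assumption for illustration; bins that receive no pixel are exactly the missing data points the slide mentions:

```python
import numpy as np

def map_to_sphere(intensities, normals, n_theta=90, n_phi=180):
    """Average pixel intensities into bins of a 'normal sphere' image.
    Bins with no contributing pixel come out NaN (missing data)."""
    theta = np.arccos(np.clip(normals[:, 2], -1, 1))            # polar angle
    phi = np.mod(np.arctan2(normals[:, 1], normals[:, 0]), 2 * np.pi)
    ti = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    pj = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
    sums = np.zeros((n_theta, n_phi))
    counts = np.zeros((n_theta, n_phi))
    np.add.at(sums, (ti, pj), intensities)
    np.add.at(counts, (ti, pj), 1)
    with np.errstate(invalid="ignore"):
        sphere = sums / counts   # NaN wherever counts == 0
    return sphere, counts

# Toy check: a single pixel whose normal points along +z lands in the
# theta = 0 row; every other bin stays missing.
sphere, counts = map_to_sphere(np.array([7.0]),
                               np.array([[0.0, 0.0, 1.0]]))
```

An object of arbitrary geometry covers only part of the normal sphere, so most bins stay NaN; this sparsity is why the Hough transform on the next slide needs global information.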

Hough Transform
Uses global information to find boundaries.
Problems: noise causes fake boundaries; sparse data causes missing boundaries.
Solution: evaluate the least-squares error using information from every available pixel inside a region (virtual light patch).
[Figures: Lambertian ball, Lambertian vase, virtual light patches]
Lambertian ball: one spurious critical boundary is removed.
Lambertian vase: one spurious critical boundary is removed; two critical boundaries are merged.
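Since a critical boundary on the normal sphere is a great circle, each candidate boundary can be parameterized by its axis u (the light direction), with boundary points satisfying p . u = 0. A minimal Hough-accumulator sketch follows; the (θ, φ) grid and tolerance are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

def hough_great_circles(points, n_theta=60, n_phi=120, tol=0.05):
    """Vote for great-circle axes: each point votes for every axis u it is
    (nearly) perpendicular to. Peaks in the accumulator are candidate
    critical boundaries."""
    thetas = np.linspace(0, np.pi, n_theta)
    phis = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    acc = np.zeros((n_theta, n_phi), dtype=int)
    for i, th in enumerate(thetas):
        for j, ph in enumerate(phis):
            u = np.array([np.sin(th) * np.cos(ph),
                          np.sin(th) * np.sin(ph),
                          np.cos(th)])
            acc[i, j] = np.count_nonzero(np.abs(points @ u) < tol)
    return acc, thetas, phis

# Toy input: critical points lying on the great circle perpendicular to z.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
acc, thetas, phis = hough_great_circles(pts)
i, j = np.unravel_index(np.argmax(acc), acc.shape)
# The winning axis is (nearly) the z axis, i.e. cos(theta) close to +/-1.
```

With noisy or sparse critical points the accumulator peaks blur or vanish, which is why the slides refine the Hough output with the region-wise least-squares error.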

Synthetic Vase, 15 Light Sources
[Figures: original image, rerendered image, virtual light patches]
Average pixel intensity error (0-255 range): 0.74 gray levels.

Real Image of a Rubber Ball, 5 Light Sources
[Figures: original image, rerendered image, error image]
Average pixel intensity error (0-255 range): 3.39 gray levels.

Real Image of a Rubber Ball, 5 Light Sources
[Figure: initial and final virtual light patches]

Real Image of a Rubber Duck, 4 Light Sources
[Figures: original image, rerendered image, error image]
Average pixel intensity error (0-255 range): 6.55 gray levels.

Real Image of a Rubber Duck, 4 Light Sources
[Figures: 3D shape (noise in acquisition), sphere mapping]

Real Image of a Rubber Duck, 4 Light Sources
[Figures: initial patches, final patches]

Future Work
- Study the properties of arbitrary surfaces, so that the intermediate sphere mapping can be avoided.
- Speed up the least-squares method.
- Extend the method to non-Lambertian diffuse reflectance for rough surfaces.
- Explore combinations of our method with shadow-based light estimation methods and with specularity detection methods.

Future Work (Preview): Augmented Reality Application, 3 Light Sources
[Figures: original image, rerendering with one light switched off, superimposing an object]