Estimating the surface normal of artwork using a DLP projector


KOICHI TAKASE 1 AND ROY S. BERNS 2
1 TOPPAN Printing co., ltd. 2 Munsell Color Science Laboratory, Rochester Institute of Technology

Summary: The surface normals of a flat surface and a work of art were measured using a DLP projector and a traditional collimated tungsten source. The focused DLP projector gave reduced performance, a result of spatial non-uniformity caused by its low resolution. By defocusing the projector, performance equivalent to the traditional source was achieved.

Introduction

Realistic image rendering requires knowledge of a work of art's BRDF, surface normal, and topography. Illuminating the object from a number of angles using a conventional light source and collimating optics enables the estimation of BRDF and surface normal. Topography is often determined using either a laser scanner or structured light (Batlle et al. 1998, Curless 1999). Thus, two different imaging systems are required for complete characterization. Alternatively, it is possible to use a computer-controlled projector and sequentially display uniform and structured light, reducing system complexity and cost. However, there are resolution tradeoffs because of the inherently low resolution of projectors. Furthermore, the design differences among LCD, LCOS, and DLP projectors result in differing performance. The most appropriate technology for this approach is DLP, as demonstrated by Narasimhan et al. (2008). Experiments were performed to evaluate the accuracy of estimating the surface normal using a DLP projector compared with traditional lighting.

Measurement and Evaluation

Three illumination conditions were used, described in Table 1. A voltage-regulated Oriel 60000 QTH tungsten source with collimating optics was used as the traditional source. The DLP projector was an Optoma EzPro 755, with a resolution of 1024x768 pixels. The camera was a Canon EOS 5D with a resolution of 4368x2912 pixels. The light source and camera angles were fixed at 40°.
Targets were rotated in a single plane using the MCSL three-axis goniometer (2008). The experimental configuration is shown in Figure 1.

Table 1. Illumination conditions.
#  Device                 Focus
1  Tungsten light source  -
2  DLP projector          In focus
3  DLP projector          Out of focus

Fig 1. Experimental setup: projector or tungsten light source and camera at 40°, with targets mounted on the three-axis goniometer (working distances of 210 cm and 160 cm).

Two samples were used in the experiments: a Gray 5.5 Color-aid matte paper affixed to a firm support and a small, unvarnished oil painting. Images were captured at 12 different rotations and the surface normal at each pixel was estimated using photometric stereo methods; see (Reinhard et al. 1998) for a review. Using photometric stereo, the surface normal n at each position was estimated as

n = pseudoinverse(S) l / ρ,   (1)
ρ = || pseudoinverse(S) l ||,   (2)

where the function pseudoinverse calculates the pseudoinverse matrix of its input, S is an n-by-3 matrix composed of n illumination direction vectors, l is an n-by-1 vector of digital counts for each illumination direction, and ρ is the albedo. In photometric stereo methods, the light source irradiance is assumed to be uniform. Therefore, flat fielding was performed using images of an Avian fluorilon diffuser. The rotated target images were manually aligned, and the surface normals were estimated. A portion of the raw captured images of the matte gray paper for each condition is shown in Figure 2. Irradiance non-uniformity caused by the low resolution of the focused projector is clearly evident. Because of this non-uniformity, the estimation accuracy in condition #2 was lower than in conditions #1 and #3, as shown in Figure 3. The boxes and error bars represent the mean and standard deviation of the estimation error for each condition, respectively. Although flat fielding reduced the non-uniformity and removed the radiance reduction caused by dust on the camera, it could not remove the non-uniformity completely, as shown in Figure 4. The estimation accuracy under conditions #1 and #3 was the same because the illumination from the out-of-focus projector was more similar to that from the traditional light source.
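As an illustration, Eqs. (1) and (2) amount to a per-pixel least-squares solve. The sketch below assumes grayscale frames that have already been flat fielded and linearized, and uses numpy for the pseudoinverse; the function and variable names are our own, not from the paper.

```python
import numpy as np

def estimate_normals(images, light_dirs):
    """Per-pixel photometric stereo (Eqs. 1-2): images is a list of n
    grayscale frames, light_dirs an (n, 3) array of unit illumination
    direction vectors (the rows of S)."""
    S = np.asarray(light_dirs, dtype=float)        # n x 3 illumination matrix
    L = np.stack([im.ravel() for im in images])    # n x P digital counts l, one column per pixel
    b = np.linalg.pinv(S) @ L                      # 3 x P; b = rho * n = pseudoinverse(S) l
    rho = np.linalg.norm(b, axis=0)                # albedo per pixel (Eq. 2)
    n = b / np.maximum(rho, 1e-12)                 # unit surface normal (Eq. 1)
    h, w = images[0].shape
    return n.T.reshape(h, w, 3), rho.reshape(h, w)
```

With 12 rotations, as in the experiments, S is 12-by-3 and the pseudoinverse gives the least-squares normal and albedo at each pixel.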
Example renderings using the estimated surface normals, followed by histogram equalization to accentuate differences, are shown in Figure 5. The paper was illuminated from 40° by a collimated light source. A bump could be seen in the rendered image for condition #2 despite the flat surface.

The influence of the number of captured images on the estimation was also evaluated, as shown in Figure 6. Accuracy improved as the number of images increased, since more images reduce the influence of irradiance non-uniformity for condition #2 and of camera noise for every condition.

The surface normals of a work of art were also estimated under each condition. For the artwork, HDR images were captured using multiple exposure times (Debevec and Malik 1997) because a wide dynamic range was necessary to capture the range of reflectance levels found over the surface of the object with a minimum of noise. The images rendered using the estimated surface normals are shown in Figure 7. A bump caused by the low resolution of the projector appears on the flat portion of the painting in the rendered image for condition #2. A portion of the canvas bump disappeared or blurred in all conditions. This error was caused by image resampling during image rotation, alignment error, shadows, and interreflections. The resampling and alignment problems can be solved by moving the light source instead of the objects.

Fig 2. A portion of the raw captured images of the Color-aid matte paper (tungsten light source, in-focus projector, out-of-focus projector).

Fig 3. Average error (degrees) and one standard deviation of surface normal estimation for the Color-aid matte paper based on 12 rotation angles.
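The multiple-exposure HDR capture used for the artwork can be sketched as a weighted average of per-exposure radiance estimates, a simplified version of the Debevec and Malik (1997) idea. The sketch assumes the camera response has already been linearized; the function name and weighting thresholds are illustrative, not from the paper.

```python
import numpy as np

def merge_hdr(frames, times, low=0.05, high=0.95):
    """Merge linearized frames (values in [0, 1]) captured at different
    exposure times into one relative radiance map. Each frame contributes
    img / t as a radiance estimate; pixels near the noise floor or the
    saturation level receive negligible weight."""
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for img, t in zip(frames, times):
        w = np.where((img > low) & (img < high), 1.0, 1e-6)  # trust well-exposed pixels
        num += w * (img / t)
        den += w
    return num / den
```

Long exposures then supply the dark, low-reflectance regions of the painting and short exposures the highlights, giving low-noise digital counts for the photometric stereo solve.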

Fig 4. Effect of flat fielding for the DLP projector (captured image vs. flat-fielded image).

Fig 5. A portion of the images rendered using the estimated surface normals, followed by histogram equalization (tungsten light source, in-focus projector, out-of-focus projector).

Fig 6. Estimation error (degrees) as a function of the number of illumination angles, for the tungsten light source and the in-focus and out-of-focus projector.

Fig 7. Captured image and images rendered using the estimated surface normals of an unvarnished oil painting on canvas board, at two locations (tungsten light source, in-focus projector, out-of-focus projector).

Conclusions

In this research, the accuracy of estimating surface normals using a DLP projector was evaluated. The surface normals of a flat surface and a work of art were measured under three illumination conditions and estimated with a photometric stereo method. It was found that a DLP projector can be used for the measurement of surface normals when it is out of focus. The accuracy of the artwork estimation was low in every illumination condition, which was caused by shadows and interreflections. The interreflections can be eliminated with the projector using the method of Nayar et al. (2006). By estimating surface normals from interreflection-free images while excluding shadowed regions, the accuracy can be improved; this is the subject of our future work.

References

Batlle, J., Mouaddib, E., and Salvi, J.: Recent Progress in Coded Structured Light as a Technique to Solve the Correspondence Problem: A Survey. Pattern Recognition, 31, 7, 963-982 (1998).
Curless, B.: Overview of Active Vision Techniques. Proc. of SIGGRAPH 99 Course on 3D Photography (1999).
Debevec, P., and Malik, J.: Recovering High Dynamic Range Radiance Maps from Photographs. Proc. ACM SIGGRAPH, 369-378 (1997).
Narasimhan, S., Koppal, S., and Yamazaki, S.: Temporal Dithering of Illumination for Fast Active Vision. Proc. European Conference on Computer Vision, 830-844 (2008).
Nayar, S., Krishnan, G., Grossberg, M., and Raskar, R.: Fast Separation of Direct and Global Components of a Scene using High Frequency Illumination. Proc. ACM SIGGRAPH, 935-944 (2006).
Reinhard, K., Kozera, R., and Schlüns, K.: Shape from Shading and Photometric Stereo Methods. Communication and Information Technology Research Technical Report, 20 (1998).
MCSL Three-Axis Goniometer. http://www.art-si.org/gonio/goniomovie.htm (2008).

Acknowledgements

This research was supported by TOPPAN Printing co., ltd. and the Andrew W. Mellon Foundation.

Contact information

Koichi Takase
E-mail: kztpci@cis.rit.edu
Phone: 585-475-4443

Roy S. Berns
E-mail: berns@cis.rit.edu
Phone: 585-475-2230

Munsell Color Science Laboratory, Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, 54 Lomb Memorial Drive, Rochester, NY 14623