Image Based Lighting with Near Light Sources


Shiho Furuya, Takayuki Itoh
Graduate School of Humanities and Sciences, Ochanomizu University
E-mail: {shiho, itot}@itolab.is.ocha.ac.jp

Abstract

Recent research on realistic computer graphics (CG) has focused on the synthesis of photographs and virtual objects. Image Based Lighting (IBL) is one technique for generating naturally illuminated images from photographs. This paper reports a new experiment in rendering virtual objects into real scenes that contain near light sources, which the original IBL does not consider. Our experiment shows that we can generate more realistic images by faithfully reconstructing the near light source environment.

1. Introduction

There is a long history of research on realistic computer graphics (CG), and an equally long history of geometric modeling of artificial objects, such as automobiles and buildings, using computer-aided design (CAD) systems. Creating a highly realistic CG image generally demands a high level of skill and a great deal of time if every object in the scene is designed manually. Recent CG research has therefore focused on synthesizing photographs with manually designed artificial objects, as a practical way to generate more realistic images. In this paper we call the scene in the photograph the real scene, and the artificial objects to be synthesized into the photograph the virtual objects.

When synthesizing the two, we must balance two kinds of light sources: the virtual light sources illuminating the virtual objects, and the real light sources illuminating the real scene in the photograph. Designing light sources already requires considerable skill and time, and the realism of a CG image depends strongly on that design even when no photograph is involved. Balancing the two kinds of light sources is an even harder problem.

Image Based Lighting (IBL) is a technique that achieves natural illumination by using photographs as light sources [1]. IBL uses an omni-directional photograph, taken at the position where the virtual objects are placed, as the light source for global illumination. Designers do not need to set up virtual light sources precisely, so even non-expert designers can create realistic, naturally illuminated scenes by applying IBL.

However, existing IBL techniques usually assume that all light sources are infinitely far from the virtual objects. This is a real limitation, because illumination in the real world is not restricted to distant light sources. We often need light sources located near the virtual objects to generate realistic synthesized images, but the original IBL does not support this situation. We consider this problem important in several use cases; for example, suppose IBL is used to produce pamphlets or promotional films in the manufacturing industry. IBL is especially useful for synthesizing an unreleased product, designed with a CAD system, into a real scene. In such scenes we can easily expect light sources near the position where the virtual objects should be placed, and we believe it is important to reconstruct those near light sources from the photograph in order to illuminate the virtual objects realistically.

This paper reports a new experiment in rendering virtual objects into real scenes with near light sources, which the original IBL does not consider. The technique presented in this paper reconstructs the near light sources that appear in the real scene close to the virtual objects. It then calculates illumination using the reconstructed near light sources in addition to the omni-directional photograph, which serves as the distant light sources (for example, a fluorescent light on the ceiling of the room).

2. Related works

The technique presented in this paper is an extension of the IBL method proposed by Debevec [1]. IBL requires the following three items to generate a synthesized image. Virtual objects are three-dimensional (3D) geometric models designed with CG or CAD systems. A local image is a photograph taken from an arbitrary viewpoint; IBL generates an image by synthesizing the local image and the virtual objects. A distant image is an omni-directional photograph of the scene taken at the position where the virtual objects are placed.

Figure 1 illustrates the procedure of IBL. IBL assumes a floor, with the virtual objects placed around the origin of a three-dimensional coordinate system, and walls that surround the virtual objects from far away, onto which the distant image is mapped. It treats the pixels of the distant image as light sources that illuminate the virtual objects, and finally synthesizes the local image and the virtual objects according to the light distribution computed by this process.

Figure 1: Procedure of IBL. 1) Mapping the distant image onto the wall. 2) Illuminating by the relative brightness of pixels on the wall. 3) Synthesizing the local image and the virtual objects.

IBL simplifies lighting design for global illumination, because the only input we need to prepare is an omni-directional photograph. However, the original IBL [1] limits the light irradiating the virtual objects to the light sources in the distant image, so we must assume there are no near light sources. Conversely, when light sources appear in the local image close to the virtual objects, the computed global illumination may be inconsistent with the lighting visible in the local image. One could take the distant image so that it contains the near light sources, but we do not consider this a fundamental solution, because part of the inconsistency comes from the different incidence of illumination from distant and near light sources. Unless we set up independent near light sources in addition to the distant image, we cannot account for the difference in irradiation effects between distant and near light sources.

Sato et al. presented an IBL-like technique [2] that determines the light environment by shooting omni-directional photographs of the real world with a fish-eye lens. The light environment measured from the omni-directional photograph is approximated as three-dimensional triangular elements whose vertices are placed mainly at the high-intensity portions. The technique then calculates the illumination and shading of the virtual objects using this light environment, and finally synthesizes them into a photograph. The technique is applicable to both indoor and outdoor scenes; however, it does not incorporate near light sources.

Okabe et al. presented an image analysis technique for scenes under near light sources, and an image composition technique as an application of the analysis [3]. However, their technique requires several images taken under various lighting conditions.
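The original IBL procedure described above treats every pixel of the distant image as a light source infinitely far away. As a concrete illustration, the following minimal C++ sketch converts an environment map into a list of directional lights; the equirectangular image layout, the struct names, and the coarsening step are our own illustrative assumptions, not details given in the paper.

```cpp
#include <cmath>
#include <vector>

// One RGB pixel of a floating-point (HDR) environment map.
struct Pixel { float r, g, b; };

// A distant (directional) light: direction toward the light, plus RGB intensity.
struct DistantLight { float dir[3]; float r, g, b; };

// Turn an equirectangular environment map into a list of directional lights.
// 'step' coarsens the sampling so the renderer handles fewer lights.
std::vector<DistantLight> environmentToLights(const std::vector<Pixel>& env,
                                              int width, int height, int step)
{
    const float kPi = 3.14159265f;
    std::vector<DistantLight> lights;
    for (int y = 0; y < height; y += step) {
        for (int x = 0; x < width; x += step) {
            const Pixel& p = env[y * width + x];
            // Latitude-longitude mapping: x -> azimuth phi, y -> polar angle theta.
            float phi   = 2.0f * kPi * (x + 0.5f) / width;
            float theta = kPi * (y + 0.5f) / height;
            DistantLight l;
            l.dir[0] = std::sin(theta) * std::cos(phi);
            l.dir[1] = std::cos(theta);
            l.dir[2] = std::sin(theta) * std::sin(phi);
            // Weight proportional to the solid angle of the pixel block
            // (sin(theta) term), so rows near the poles do not dominate.
            float w = std::sin(theta) * step * step;
            l.r = p.r * w;  l.g = p.g * w;  l.b = p.b * w;
            lights.push_back(l);
        }
    }
    return lights;
}
```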
3. Rendering virtual objects into real scenes

3.1 Overview

Like the original IBL, the technique presented in this paper requires the three items explained in Section 2: virtual objects, a local image, and a distant image. It realistically synthesizes the virtual objects into the real scene by using these three items and carrying out the three steps of the IBL procedure shown in Figure 1. As in Debevec's technique [1], our technique requires the local and distant images in HDR format.

Figure 2 shows the processing flow of our technique. Step 1 constructs the light source environment from the illumination distribution obtained from the distant image and the near light sources obtained from the local image. Step 2 takes the result of Step 1 as the initial condition of light energy and calculates the stable state of light energy with the radiosity method. Step 3 finally generates the synthetic image from the computed light energy.

Figure 2: Processing flow of the presented technique. Step 1: Constructing light sources. Step 2: Calculating light energy. Step 3: Image composition.

Figures 3 and 4 show the local and distant images used in our experiments. We assume that the local image captures the near light sources. As the distant image, we use a photograph of the scenery reflected in a chromium (specular-surface) ball, as described in [1]. We assume High Dynamic Range (HDR) format for both the distant and local images. HDR format assigns more than 8 bits to each RGB channel of a pixel, and can therefore represent the distribution of illumination precisely. Figure 5 shows an example of the virtual objects used in our experiments; we assume the 3D geometry of the virtual objects is given as a triangular mesh.

Figure 3: An example of the local image.
Figure 4: An example of the distant image.
Figure 5: An example of a virtual object (mug) modeled as a triangular mesh.
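Section 4 notes that these HDR inputs were created by unifying the brightness distribution of three or more photographs taken at different exposures. The sketch below shows one common way such a merge can be done, assuming already-linearized input images; the hat-shaped weighting and all names are our own assumptions, and recovering the camera response curve would normally precede this step (omitted here).

```cpp
#include <cmath>
#include <vector>

struct Pixel { float r, g, b; };

// Merge several differently exposed, linearized images into one HDR image.
// 'exposures' holds the relative exposure time of each input image.
// Assumes all images have the same size and channel values in [0, 1].
std::vector<Pixel> mergeExposures(const std::vector<std::vector<Pixel>>& images,
                                  const std::vector<float>& exposures)
{
    // Hat-shaped weight: trust mid-range values, distrust near-black/near-white.
    auto weight = [](float v) { return 1.0f - std::fabs(2.0f * v - 1.0f); };

    std::vector<Pixel> hdr(images[0].size(), Pixel{0, 0, 0});
    std::vector<float> wsum(images[0].size(), 0.0f);

    for (size_t i = 0; i < images.size(); ++i) {
        for (size_t p = 0; p < images[i].size(); ++p) {
            const Pixel& px = images[i][p];
            float w = weight(px.r) + weight(px.g) + weight(px.b);
            // Divide by exposure to bring each sample to a common radiance scale.
            hdr[p].r += w * px.r / exposures[i];
            hdr[p].g += w * px.g / exposures[i];
            hdr[p].b += w * px.b / exposures[i];
            wsum[p] += w;
        }
    }
    for (size_t p = 0; p < hdr.size(); ++p) {
        if (wsum[p] > 0.0f) {
            hdr[p].r /= wsum[p];  hdr[p].g /= wsum[p];  hdr[p].b /= wsum[p];
        }
    }
    return hdr;
}
```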

3.2 Step 1: Constructing light sources

This section explains the step that constructs the light sources. Figure 6 illustrates how we construct the light source environment.

Figure 6: Constructing the light source environment with two HDR images (distant image mapped on a hemisphere, local image, virtual object, and bottom).

The technique supposes that the virtual objects are placed around the center of a circular bottom, in a space surrounded from far away by a hemispherical wall. Moreover, seen from an arbitrary viewpoint, the local image is placed behind the virtual objects. The technique constructs the distant light sources by mapping the distant image onto the hemispherical wall, modeled as a triangular mesh that encloses all the virtual objects and the local image, as shown in Figure 7. At this point the technique calculates the energy E1 of every triangle of the hemisphere by averaging the RGB values of the pixels of the distant image that fall inside each triangle. Here E1 denotes the initial light energy needed in Step 2. The technique sets E1 of the triangular elements of the floor to 0.

Figure 7: Mapping the distant image onto a hemisphere.

The technique supposes that users design the geometry of the near light sources as triangular meshes, in the same way as the hemispherical wall. We assume the geometry consists of light parts and non-light parts (e.g., a plastic stand). Currently we manually design rough 3D geometry of the near light sources with CG software, as shown in Figure 8. Naturally, it would be preferable to reconstruct their 3D geometry automatically from the local images. Many techniques exist for reconstructing 3D geometry from two or more images in the field of stereo vision, and there are also techniques for manual reconstruction, such as mapping primitive geometry to objects in a photograph [4] in the field of Image Based Rendering. Applying such techniques is one of our future works.

Figure 8: Simple 3D geometry of a near light source designed with CG software.

After designing the near light sources as triangular meshes, the technique assigns values proportional to the RGB values of the lights in the local image to E1 of the triangular elements corresponding to the light parts, and assigns E1 = 0 to the other triangular elements corresponding to the non-light parts. To appropriately balance the optical energy between the distant and near light sources, we must also assign weights to the light sources, because their actual light energy is not always proportional to the RGB values in the distant and local images. Currently we find a preferable weight value by trial and error; we think a technique that computes the preferable weight automatically should be explored.
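A minimal sketch of this Step 1 initialization follows. Only the overall rule comes from the paper: average the distant-image pixels inside each wall triangle, set the light parts of near sources proportional to the local image scaled by a user-chosen weight, and leave everything else at zero. The mesh and image types, and the precomputed pixel-to-triangle assignment, are illustrative assumptions.

```cpp
#include <vector>

struct Pixel { float r, g, b; };

struct Triangle {
    // Indices of the image pixels covered by this triangle after projection
    // (precomputed elsewhere; a real system would rasterize the triangle).
    std::vector<int> pixelIds;
    bool  isLightPart = false;   // only meaningful for near-light meshes
    float e1[3] = {0, 0, 0};     // initial energy E1, one value per channel
};

// Average the distant-image pixels inside each hemisphere triangle.
void initHemisphere(std::vector<Triangle>& wall, const std::vector<Pixel>& distant)
{
    for (Triangle& t : wall) {
        if (t.pixelIds.empty()) continue;
        float r = 0, g = 0, b = 0;
        for (int id : t.pixelIds) {
            r += distant[id].r;  g += distant[id].g;  b += distant[id].b;
        }
        float n = float(t.pixelIds.size());
        t.e1[0] = r / n;  t.e1[1] = g / n;  t.e1[2] = b / n;
    }
}

// Assign near-light energy proportional to the local-image pixels, scaled by
// a user-chosen weight that balances near against distant sources.
void initNearLight(std::vector<Triangle>& nearMesh,
                   const std::vector<Pixel>& local, float weight)
{
    for (Triangle& t : nearMesh) {
        if (!t.isLightPart || t.pixelIds.empty()) continue;  // non-light parts stay 0
        float r = 0, g = 0, b = 0;
        for (int id : t.pixelIds) {
            r += local[id].r;  g += local[id].g;  b += local[id].b;
        }
        float n = float(t.pixelIds.size());
        t.e1[0] = weight * r / n;
        t.e1[1] = weight * g / n;
        t.e1[2] = weight * b / n;
    }
}
```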

3.3 Step 2: Calculating light energy

After assigning the initial energy E1 to the triangles of the hemisphere and the near light sources as described in Section 3.2, we calculate the stable energy E2 of each triangle. Our current implementation applies the radiosity method for this calculation. The technique calculates E2 twice: once for the scene with the virtual objects, and once for the scene without them. Figures 9 and 10 show the resulting distributions of light energy computed by the radiosity method. Our current implementation provides an OpenGL-based viewer for the radiosity results, which supports interactive viewing for manual calibration between the local image and the radiosity results. If users can shoot local images with multiple marked positions, this manual calibration could be automated.

Figure 9: Distribution of light energy when a virtual object is in the scene.
Figure 10: Distribution of light intensity when a virtual object is not in the scene.
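The paper does not detail its radiosity solver, so the following is a generic gathering (Jacobi-style) iteration under strong simplifying assumptions: a precomputed form-factor matrix, uniform diffuse reflectance per patch, and the E1 values from Step 1 used as emission.

```cpp
#include <vector>

// One diffuse patch: emission (E1 from Step 1), reflectance, and radiosity.
struct Patch {
    float emission[3];     // initial energy E1 (zero for non-emitting patches)
    float reflectance[3];  // diffuse reflectance per RGB channel
    float radiosity[3];    // converges toward the stable energy E2
};

// Classic gathering radiosity: B_i = E_i + rho_i * sum_j F_ij * B_j.
// 'F' is a dense form-factor matrix, assumed precomputed from the geometry.
void solveRadiosity(std::vector<Patch>& patches,
                    const std::vector<std::vector<float>>& F, int iterations)
{
    const size_t n = patches.size();
    for (size_t i = 0; i < n; ++i)
        for (int c = 0; c < 3; ++c)
            patches[i].radiosity[c] = patches[i].emission[c];

    std::vector<float> next(3 * n);
    for (int it = 0; it < iterations; ++it) {
        for (size_t i = 0; i < n; ++i) {
            for (int c = 0; c < 3; ++c) {
                float gathered = 0.0f;
                for (size_t j = 0; j < n; ++j)
                    gathered += F[i][j] * patches[j].radiosity[c];
                next[3 * i + c] = patches[i].emission[c]
                                + patches[i].reflectance[c] * gathered;
            }
        }
        for (size_t i = 0; i < n; ++i)
            for (int c = 0; c < 3; ++c)
                patches[i].radiosity[c] = next[3 * i + c];
    }
}
```

Running such a solver once with the virtual-object patches included and once without would yield the two E2 distributions that Step 3 compares.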

3.4 Step 3: Image composition

This section explains how the synthetic image is generated. In this step, new RGB values are computed from the RGB values of the local image and the energy E2 of each triangle computed by the procedure in Section 3.3. The step assumes that the virtual objects are located in front of the real objects in the local image, and that the virtual objects are not partially occluded by the real objects.

As in the original IBL [1], our technique determines, for each pixel of the synthesized image, whether the local image or the virtual objects should be rendered. Our current implementation generates a mask image, as shown in Figure 11, where black pixels denote pixels not occluded by the virtual objects. At the same time, it captures the RGB values of each pixel of the two radiosity images shown in Figures 9 and 10 using the function glReadPixels().

Figure 11: Mask image for the per-pixel decision in image composition.

For pixels that are black in the mask image, our implementation calculates the RGB values from the local image by multiplying the RGB values of the local image by the ratio of energy after and before placing the virtual objects:

(R, G, B) = (R_I · R', G_I · G', B_I · B')

where (R, G, B) are the RGB values of the synthetic image, (R_I, G_I, B_I) are the RGB values of the original (local) image, and (R', G', B') is the per-channel ratio of the energy computed with the virtual objects to the energy computed without them. This calculation reflects the optical effects of the virtual objects, such as light reflected from the virtual objects onto the floor, or shadows they cast on the floor.

For pixels that are white in the mask image, our implementation applies well-known shading models; currently it considers only diffuse reflection. As future work, we plan to treat not only diffuse but also specular reflection, and to make the virtual objects blend with reality by applying established reflectance models such as the BRDF (Bidirectional Reflectance Distribution Function).
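A per-pixel sketch of this composition step is shown below; the image containers and the division guard are our assumptions, while the ratio rule and the mask convention follow the description above.

```cpp
#include <vector>

struct Pixel { float r, g, b; };

// Composite the final image. 'maskIsBlack[p]' is true where the virtual
// objects do NOT cover the pixel. 'withObj' and 'withoutObj' are the two
// radiosity renderings (Figures 9 and 10) read back via glReadPixels.
// 'shaded' holds the separately shaded virtual-object pixels.
std::vector<Pixel> composite(const std::vector<Pixel>& local,
                             const std::vector<Pixel>& withObj,
                             const std::vector<Pixel>& withoutObj,
                             const std::vector<Pixel>& shaded,
                             const std::vector<bool>& maskIsBlack)
{
    auto ratio = [](float a, float b) {
        const float eps = 1e-6f;            // guard against division by zero
        return b > eps ? a / b : 1.0f;
    };

    std::vector<Pixel> out(local.size());
    for (size_t p = 0; p < local.size(); ++p) {
        if (maskIsBlack[p]) {
            // (R,G,B) = (R_I*R', G_I*G', B_I*B'): scale the local image by the
            // energy ratio after/before placing the virtual objects, which
            // carries shadows and interreflections onto the real floor.
            out[p].r = local[p].r * ratio(withObj[p].r, withoutObj[p].r);
            out[p].g = local[p].g * ratio(withObj[p].g, withoutObj[p].g);
            out[p].b = local[p].b * ratio(withObj[p].b, withoutObj[p].b);
        } else {
            out[p] = shaded[p];  // virtual-object pixel from the shading pass
        }
    }
    return out;
}
```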

4. Experiments

This section describes our experiments. We implemented and executed all steps on Windows XP with GNU gcc 3.4. To prepare the distant and local images, we created HDR images by unifying the brightness distribution of three or more photographs taken at different exposures. We designed the 3D geometric models of the virtual objects and a near light source with 3D Studio Max, and saved them as triangular mesh data in VRML format. After saving the image generated by Step 3 in HDR format, we converted it to JPEG with commercial software, applying tone mapping.

Figure 12: Example of an image with consideration of the near light source.
Figure 13: Example of an image without consideration of the near light source.

Figure 12 shows an example of an image generated by our technique. We can observe the shadow of the mug cast by the illumination of the stand light; Figure 12 thus shows that considering the near light source is effective in IBL. We also generated a synthetic image without taking the near light source into consideration, using the existing method, as shown in Figure 13. While generating both images in Figures 12 and 13, fluorescent lights on the ceiling of the room illuminated the mug as distant light sources. The processing flows for Figures 12 and 13 were identical, except that the E1 values of the near light sources were set to 0 when generating Figure 13. Moreover, we actually placed the mug near the stand light and shot a photograph, as shown in Figure 14.

Figure 14: Photograph with the mug actually placed in the scene.
Figure 15: Comparison of synthetic portions. (left) From Figure 12. (middle) From Figure 14. (right) From Figure 13.

Figure 15 shows zoomed portions of the synthesized virtual object and the real object. By considering the near light source, our technique reproduced the shadow near the virtual object very similarly to the real shadow observed in Figure 14. This confirms that constructing the light source environment of the scene faithfully is essential for generating a natural synthetic image.

5. Conclusion and future works

In this paper, we presented how to render virtual objects into real scenes with near light sources using IBL. Our future works include the following points:

- Semi-automatic generation of the 3D geometry of near light sources, to ease the burden of user input.
- Re-examination of the consistency of brightness when the distant image and the local image are shot and edited under different conditions.

The purpose of this report was to generate a realistic synthetic still image; however, we think the concept presented in this paper can also be applied to PRT (Precomputed Radiance Transfer) [5] for real-time realistic image synthesis.

References

[1] P. Debevec, Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-Based Graphics with Global Illumination and High Dynamic Range Photography, SIGGRAPH 1998, pp. 189-198, 1998.
[2] I. Sato, Y. Sato, K. Ikeuchi, Seamless Integration of Computer Generated Objects into a Real Scene Based on a Real Illumination Distribution, Journal of the Institute of Electronics, Information, and Communication Engineers, J81-D-II, No. 5, pp. 861-871, 1998.
[3] T. Okabe, Y. Sato, Effects of Image Segmentation for Approximating Object Appearance under Near Lighting, MIRU 2005, pp. 80-87, 2005.
[4] P. Debevec, C. Taylor, J. Malik, Modeling and Rendering Architecture from Photographs, ACM SIGGRAPH 96, pp. 11-20, 1996.
[5] P. Sloan, J. Kautz, J. Snyder, Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments, SIGGRAPH 2002, pp. 527-536, 2002.