Color Alignment in Texture Mapping of Images under Point Light Source and General Lighting Condition

Hiroki Unten
Graduate School of Information Science and Technology, The University of Tokyo
unten@cvl.iis.u-tokyo.ac.jp

Katsushi Ikeuchi
Graduate School of Interfaculty Initiative in Information Studies, The University of Tokyo
ki@cvl.iis.u-tokyo.ac.jp

Abstract

Recently, creating digital contents through measurements of the real world has become an important technique in the fields of virtual reality, digital preservation of cultural heritage, and so forth. One of the effective methods for creating such digital contents is texture mapping, which maps color images onto a 3D geometric model. This method has two problems: geometric and photometric. The geometric problem lies mostly in how to estimate the camera position and rotation relative to the 3D position of the target object. The photometric problem deals with color consistency between input textures. Most studies on texture mapping focus on the geometric problem, while only a few works deal with the photometric problem. In this paper we propose two novel methods for color alignment in texture mapping. One method is based on the fact that the chromaticity of an image is independent of its geometric data. The other method utilizes basis images of point light sources on an illumination sphere. In both methods, the pseudo-albedo is estimated, and color consistency between the input images is accomplished.

1 Introduction

Virtual reality systems are expected to expand into applications such as virtual malls, virtual museums, entertainment, and so forth. There are numerous studies on virtual reality systems, particularly on effective methods to create virtual reality models. Currently, most VR models are developed manually, and it takes much cost and time to create them. Automation of modeling is thus strongly desired, and methods to create digital contents through measurements of the real world have been developed. One of the most
important steps in model creation is texture mapping. Texture mapping [1, 2, 3] is a method to map color images onto a 3D shape model; the method provides a realistic appearance to the model. An efficient method for texture mapping must address two major problems:

- Geometric problem
- Photometric problem

The geometric problem concerns the estimation of the camera position and rotation relative to the 3D position of the target objects. Most studies of texture mapping have focused on this problem. Neugebauer and Klein [4] utilize the outline of the object and the intensities of images to align textures with 3D geometric models. Kurazume et al. [5, 6] proposed an automatic method to align textures with 3D geometric models. Their method detects edges in the reflectance image, which is available from measurements by laser range sensors, and determines camera parameters by minimizing the errors between reflectance edges and texture edges.

The photometric problem has to do with eliminating color discontinuities between textures. Image intensity in textures is affected by the illumination conditions and by the photometric and geometric properties of an object. The appearance of textures measured under different conditions is quite different; that difference leads to color discontinuities when those textures are mapped onto a 3D geometric model. In order to eliminate the discontinuities, color alignment between textures is necessary. Neugebauer and Klein [4] calculated a weighted average between multiple textures based on sampling density, smooth transition, and elimination of outliers. Lensch et al. [7] utilized alpha blending to obtain continuity between textures. These average-based methods have difficulty in aligning color images taken under different illumination conditions. Another way to align the colors of textures is to recover the illumination distributions. Marschner and Greenberg [8] recovered the illumination distribution and re-lighted the input textures. They treated an input image as a linear combination of
basis images under basis lights on an illumination sphere. However, this method is not applicable to color alignment in texture mapping, since it requires the reflectance of the object.
0-7695-2158-4/04 $20.00 (C) 2004 IEEE
Beauchesne and Roy [9] proposed a method for aligning texture colors by re-lighting input textures under a common lighting condition. This method requires accurate normals at each point and ignores self-shadow effects.

In this paper, we propose novel methods for color alignment in texture mapping. We assume that the surface of the object is Lambertian. We allow the illumination condition (the number of light sources and their directions) to change between image acquisitions. First, we consider a method based on chromaticity segmentation. Then, we derive a method that utilizes basis images of point light sources on an illumination sphere. The latter method is applicable to images containing self-shadows, because shadows and shadings are calculated by rendering. In both methods, the pseudo-albedo is estimated, and thus color consistency between the input images is accomplished. In the context of object modeling, measuring albedo is important: from a 3D geometric model and albedo, we can generate images under arbitrary illumination conditions by rendering.

The remainder of the paper is organized as follows. Section 2 and Section 3 present color alignment based on chromaticity consistency (method I) and color alignment based on the illumination sphere (method II), respectively. In Section 4, we demonstrate the efficiency of our methods by describing our experiments. Finally, we conclude the paper in Section 5.

2 Method I: Color Alignment based on Chromaticity Consistency

The image intensity of a Lambertian object under a point light source can be written as

I_x(c) = L(c) S_x(c) cos θ_x,   (1)

where c = {r, g, b}, x is a pixel index, I_x(c) is the image intensity at x, L(c) is the illumination color, S_x(c) is the surface color (albedo), and θ_x is the angle between the normal direction at x and the illumination direction. We refer to L(c)S_x(c) as the pseudo-albedo. Figure 1 shows an overview of pseudo-albedo estimation. We estimate the illumination direction (block (b) in Figure 1) by
utilizing the normal direction (block (a) in Figure 1) of the object calculated from the 3D geometric model.

The image chromaticity (block (c) in Figure 1) is defined as

I_{x,c} = I_x(c) / Σ_c I_x(c).   (2)

Considering equations (1) and (2), I_{x,c} depends only on the albedo and the illumination color. We assume that regions with the same chromaticity have the same albedo, and we refer to a region which has one albedo as A.

Figure 1: Overview of pseudo-albedo estimation

Let the normal direction at x and the illumination direction be n(x) and L, respectively. Because S_x(c) is a constant (S_const(c)) in A, the following equation is derived:

I_x(c) = L(c) S_const(c) cos θ_x = k n(x)·L,   (3)

where k is a scalar constant. By substituting I_x(c) and n(x) in the region A into equation (3), we can estimate L by the least-squares method and acquire cos θ_x (block (d) in Figure 1) from L. If I_x(c) and cos θ_x are known, the pseudo-albedo of each pixel can be estimated independently as L(c)S_x(c) = I_x(c)/cos θ_x from equation (1). However, if the 3D model is not perfectly measured and its surface orientations are unreliable, the pseudo-albedo values obtained at pixels with unreliable surface orientations are also unreliable. Therefore, in our method, we do not estimate the pseudo-albedo of each pixel independently; instead, we estimate the pseudo-albedo of each region with the same chromaticity, utilizing the fact that image chromaticity is independent of the geometry of the model.

We introduce the normalized illumination color l_c = L(c)/L and the normalized albedo S_{x,c} = S_x(c)/S̄_x, where L = Σ_c L(c) and S̄_x = Σ_c S_x(c) are the total illumination color and the total albedo, respectively. By substituting these into (1) and (2), we get

L(c) S_x(c) = I_{x,c} · L S̄_x Σ_c l_c S_{x,c} = I_{x,c} T_x,   (4)

where T_x = L S̄_x Σ_c l_c S_{x,c}. We make the following assumption on the target object: the total albedo is the same in a region where the normalized albedo is the same; in other words, the total albedo does not directly depend on the position. Then S̄_x is represented as a function of I_{x,c}
because the illumination color and direction are constant within an image. From the discussion above,
Figure 2: Overview of chromaticity-T map estimation

T_x is represented as a function of I_{x,c}, and is referred to as the chromaticity-T map T(I_r, I_g, I_b) (block (e) in Figure 1). Note that the chromaticity-T map does not directly depend on x, but rather on I_{x,c}. We make the chromaticity-T map by quantizing the chromaticity space and estimating the T value at each point. Note that T_x = Σ_c I_x(c)/cos θ_x from equations (1) and (4). Figure 2 shows the outline of the process for generating the chromaticity-T map. We calculate T_x = Σ_c I_x(c)/cos θ_x for each pixel (block (a) in Figure 2), and vote the value to I_{x,c} on the chromaticity space (block (b) in Figure 2). Then, we calculate a histogram of Σ_c I_x(c)/cos θ_x at each point on the chromaticity space from all the pixels in the image. The chromaticity-T map (block (d) in Figure 2) is defined as the median of the histogram (block (c) in Figure 2). From the chromaticity-T map, we can calculate the pseudo-albedo by equation (4). Even though our assumption on the target object is not universally applicable, it yields a sufficient chromaticity-T map for most objects.

Once the pseudo-albedo is estimated for input images from different viewpoints, we can obtain a textured 3D geometric model with color consistency by mapping the pseudo-albedo onto the 3D geometric model. If the illumination colors are known, e.g., from a white calibration board, we can estimate the albedo from the pseudo-albedo.

3 Method II: Color Alignment based on the Illumination Sphere

The previous method can determine albedo distributions very easily and is quite handy. On the other hand, it has difficulty in handling curved surfaces and extended light sources. For example, it is difficult to apply the method to images under multiple light sources. In this section, we describe how we extend the method to handle those cases by using the concept of the illumination sphere.

First, we introduce the illumination sphere, an infinitely distant sphere of light sources in the scene. In method II, we restrict illumination to distant
light sources.

Figure 3: Overview of method II

Each direction on the illumination sphere represents the direction of a light source, and the intensity of the illumination sphere represents the intensity of that light source. We approximate the illumination sphere by a series of distant point light sources on a sphere. Let A^m be a series of rendered images of the 3D geometric model under the point light sources. In the rendering process, we use a constant Lambertian parameter for the model. We refer to this series of images as basis images (blocks (a1) and (a2) in Figure 3). Let I_1(x), I_2(x) be images acquired by an image sensor (input images); the following equation is then derived:

I_n(x) = S(x)(a_n^1 A_n^1(x) + … + a_n^m A_n^m(x) + … + a_n^M A_n^M(x)) = S(x) L_n(x),   (5)

where m = 1, 2, …, M is the index of a basis image, n = 1, 2 is the index of an input image, x is a pixel index, and S(x) is the albedo at x. L_n(x) contains the shade and shadow information of the input image, and we refer to L_n(x) as the irradiance image. If we obtain an irradiance image, we can estimate the albedo from (5). Because the irradiance image represents the shading and shadows, we can estimate the albedo even at positions in self-shadow.

Second, we define k(x) = I_1(x)/I_2(x) (block (b) in Figure 3), and from equation (5) the following equation is derived:

k(x) = (a_1^1 A_1^1(x) + … + a_1^m A_1^m(x) + … + a_1^M A_1^M(x)) / (a_2^1 A_2^1(x) + … + a_2^m A_2^m(x) + … + a_2^M A_2^M(x)).   (6)

The following equation is derived (block (c) in Figure 3) from equation (6):

U a = 0,   (7)
where

U = [ A_1^1(1) … A_1^M(1)  −k(1)A_2^1(1) … −k(1)A_2^M(1) ]
    [    ⋮                                            ⋮   ]
    [ A_1^1(n) … A_1^M(n)  −k(n)A_2^1(n) … −k(n)A_2^M(n) ]
    [    ⋮                                            ⋮   ]
    [ A_1^1(N) … A_1^M(N)  −k(N)A_2^1(N) … −k(N)A_2^M(N) ],   (8)

a = ( a_1^1 … a_1^M  a_2^1 … a_2^M )^T.   (9)

Because U is known from the input images and the basis images, we can determine a_n^m up to a scale ambiguity. Then we obtain L_1(x) and L_2(x) (blocks (d1) and (d2) in Figure 3) and, considering equation (5), we can estimate the albedo for each input image (blocks (e1) and (e2) in Figure 3).

Because an image of a convex Lambertian surface does not contain high-frequency components, it is difficult to reconstruct an illumination distribution from the image [8]. However, the important point is that, for our purpose, there is no need to estimate the actual illumination distribution; only an illumination distribution that accounts for the shade and shadow of the input image is needed.

Up to now, the discussion in this section has treated gray-scale images. When considering color images, we have to estimate a_n^m for each color channel. In this case, the scale ambiguity of a_n^m affects the albedo in each color channel, because a_n^m for each color channel is estimated independently. Therefore, in the case of color input images, what we obtain is the pseudo-albedo, as in method I.

4 Experiments

4.1 Method I: Color Alignment based on Chromaticity

Some of the input images and a 3D geometric model are shown in Figure 4. Textures were measured by a Sony DXC-900, and the 3D geometric model by a Minolta VIVID 900. Although the VIVID 900 measures textures, we decided to use the DXC-900 for texture measurements because of its high-quality images. First, we estimated camera parameters from the image and the 3D geometric model of a calibration box. Then, we mapped the input images onto the 3D geometric model. We took into consideration the focal length, principal point, and skew as intrinsic parameters, and the rotation and translation as extrinsic parameters. Normal directions of the 3D geometric model were estimated for
each point. By utilizing the normal directions, the illumination direction was estimated for each input image. Figure 5 shows the relationship between cos θ and the image intensity of an input image; it confirms equation (3), in which cos θ is proportional to the image intensity.

Figure 4: Some of the input images and a 3D geometric model
Figure 5: The relationship between cos θ and intensity

Then, for each texture, a chromaticity-T map was estimated. One of the maps is shown in Figure 6; different points on the chromaticity space have different T values. By utilizing this chromaticity-T map, the pseudo-albedo was estimated from each input image. Figure 7 shows the pseudo-albedo estimated from two input images. Figure 8 shows histograms of the differences between the pseudo-albedos in overlapping regions. Results from 12 input images covering almost all of the region of the object are shown in Figure 9. For points which correspond to more than one texture, we adopted the median of the intensities at those points. There is no color discontinuity in the final results, and we see no errors on the geometric edges.

Figure 6: Chromaticity-T map
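To make the steps of method I concrete, the following sketch (our illustration, not the authors' code) implements the pipeline in NumPy: least-squares estimation of the illumination direction from equation (3), construction of the chromaticity-T map by median voting as in Figure 2, and pseudo-albedo recovery via equation (4). The function names, the bin count, and the use of only the first two chromaticity coordinates as map indices are our assumptions.

```python
import numpy as np

def estimate_illumination(I_gray, normals):
    # Least-squares fit of I = k * (n . L) within a region of uniform
    # chromaticity (equation (3)); returns the unit illumination direction.
    L, *_ = np.linalg.lstsq(normals, I_gray, rcond=None)
    return L / np.linalg.norm(L)

def chromaticity_T_map(I_rgb, cos_theta, bins=32):
    # Vote T_x = sum_c I_x(c) / cos(theta_x) into a quantized chromaticity
    # space and take the per-cell median (Figure 2).  I_rgb: (N, 3) pixels.
    total = I_rgb.sum(axis=1)
    chrom = I_rgb / total[:, None]                  # equation (2)
    T = total / cos_theta
    # only (I_r, I_g) index the map, since chromaticities sum to one
    idx = np.minimum((chrom[:, :2] * bins).astype(int), bins - 1)
    T_map = np.full((bins, bins), np.nan)
    for i in range(bins):
        for j in range(bins):
            sel = (idx[:, 0] == i) & (idx[:, 1] == j)
            if sel.any():
                T_map[i, j] = np.median(T[sel])
    return chrom, idx, T_map

def pseudo_albedo(chrom, idx, T_map):
    # Equation (4): L(c) S_x(c) = I_{x,c} * T_x, with T looked up per
    # chromaticity cell rather than per pixel, so that pixels with
    # unreliable surface orientations do not corrupt the result.
    return chrom * T_map[idx[:, 0], idx[:, 1]][:, None]
```

On real data one would additionally mask pixels with unreliable cos θ before voting; each remaining pixel then receives the robust T value of its chromaticity cell.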
Figure 7: Estimated pseudo-albedo: (a) from image 1, (b) from image 2
Figure 8: Histograms of the difference between the pseudo-albedo estimated from image 1 and that from image 2: (a) red, (b) green, (c) blue

4.2 Method II: Color Alignment based on the Illumination Sphere

First, we demonstrate the results of the color alignment method based on the illumination sphere when applied to synthetic images. We assume Lambertian reflectance on the object. In this experiment, the viewpoints of the two input images are the same and only the illumination conditions differ. Figures 10(a) and (d) show the input images. From each input image, the irradiance images were estimated (Figures 10(b) and (e)) and, consequently, pseudo-albedo images were calculated for the illuminated regions (Figures 10(c) and (f)). We confirmed that this method is effective where self-shadow exists.

Second, we show the results applied to real images. The input images and a 3D geometric model were measured in the same way as in Section 4.1. In this experiment, two input images, measured from different viewpoints and under different illumination conditions, were used. Images of the pseudo-albedo estimated from each input image are mapped onto the 3D geometric model (Figure 11). For points which corresponded to more than one texture, we adopted the texture whose angle between the viewing direction and the normal direction at the point was the smallest. There were no seam lines between the two images (Figure 11(b)). The absolute difference between the pseudo-albedo estimated from image 1 and that from image 2 is shown in Figure 12; black and white points represent differences of less than 10 and of 10 or more, respectively. Histograms of the difference between the pseudo-albedo estimated from image 1 and that from image 2 are also shown in Figure 13.

Figure 9: Merged pseudo-albedo

5 Conclusion

We have presented two novel methods for color alignment in texture mapping. Method I is based on chromaticity consistency on a 3D geometric model
and is effective even when the 3D geometric model is not perfectly acquired. Method II utilizes basis images of point light sources on an illumination sphere and is applicable to images containing self-shadows, because shadows and shadings are calculated by rendering. In both methods, the pseudo-albedo is estimated; thus, color consistency between the input images is accomplished. We demonstrated the effectiveness of the proposed methods through experiments on synthetic and real images.

Acknowledgments

This project is funded by Core Research for Evolutional Science and Technology (CREST) of the Japan Science and Technology Agency (JST). Supatana Auethavekiat provided many insightful comments on this paper.
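For concreteness, the method II estimation of Section 3 can be sketched numerically as follows. This is a minimal gray-scale illustration of equations (5)-(9) in NumPy, written by us under assumed names: the coefficient vector a is recovered as the right null vector of U via SVD, up to the global scale ambiguity noted in Section 3.

```python
import numpy as np

def estimate_coefficients(I1, I2, A1, A2):
    # A_n: (N, M) basis images rendered under M distant point lights;
    # I_n: (N,) input images.  Build U from k(x) = I1(x)/I2(x)
    # (equations (6)-(8)) and take its right null vector via SVD.
    k = I1 / I2
    U = np.hstack([A1, -k[:, None] * A2])
    _, _, Vt = np.linalg.svd(U, full_matrices=False)
    a = Vt[-1]                       # singular vector of smallest value
    M = A1.shape[1]
    if (A1 @ a[:M]).sum() < 0:       # fix the sign of the scale ambiguity
        a = -a
    return a[:M], a[M:]

def irradiance_and_albedo(I, A, a):
    # Equation (5): L_n(x) = sum_m a_n^m A_n^m(x), then S(x) = I_n(x)/L_n(x).
    L = A @ a
    return L, I / L
```

Because a is determined only up to scale, the recovered albedo is a pseudo-albedo; for color images the estimation would be repeated per channel, with an independent scale in each.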
Figure 10: Synthetic test. Input images are image 1 (a) and image 2 (d). The estimated irradiance images are (b) and (e). The estimated pseudo-albedo images are (c) and (f). Note that the top row is for image 1 and the bottom row is for image 2.
Figure 11: Estimated pseudo-albedos: (a) image 1, (b) both, (c) image 2
Figure 12: Absolute difference between the pseudo-albedo estimated from image 1 and that from image 2
Figure 13: Histograms of the difference between the pseudo-albedo estimated from image 1 and that from image 2: (a) red, (b) green, (c) blue

References

[1] E. Praun, A. Finkelstein, and H. Hoppe, "Lapped textures," in SIGGRAPH 2000, 2000, pp. 465-470.
[2] P. V. Sander, J. Snyder, S. J. Gortler, and H. Hoppe, "Texture mapping progressive meshes," in SIGGRAPH 2001, 2001, pp. 355-360.
[3] B. Levy, "Constrained texture mapping for polygonal meshes," in SIGGRAPH 2001, 2001, pp. 417-424.
[4] P. J. Neugebauer and K. Klein, "Texturing 3D models of real world objects from multiple unregistered photographic views," in EUROGRAPHICS '99, 1999.
[5] R. Kurazume, M. D. Wheeler, and K. Ikeuchi, "Mapping textures on 3D geometric model using reflectance image," in Data Fusion Workshop in IEEE Int. Conf. on Robotics and Automation, 2001.
[6] R. Kurazume, K. Nishino, Z. Zhang, and K. Ikeuchi, "Simultaneous 2D images and 3D geometric model registration for texture mapping utilizing reflectance attribute," in Fifth Asian Conference on Computer Vision (ACCV), 2002, vol. 1, pp. 99-106.
[7] H. Lensch, W. Heidrich, and H.-P. Seidel, "Automated texture registration and stitching for real world models," in Pacific Graphics '00, 2000, pp. 317-326.
[8] S. R. Marschner and D. P. Greenberg, "Inverse lighting for photography," in IS&T/SID Fifth Color Imaging Conference, Nov. 1997, pp. 262-265.
[9] E. Beauchesne and S. Roy, "Automatic relighting of overlapping textures of a 3D model," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2003, pp. 166-173.