Object Shape and Reflectance Modeling from Color Image Sequence


Object Shape and Reflectance Modeling from Color Image Sequence

Yoichi Sato

CMU-RI-TR

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the field of Robotics

The Robotics Institute
Carnegie Mellon University
Pittsburgh, Pennsylvania

January

Yoichi Sato

This work was sponsored in part by the Advanced Research Projects Agency under the Department of the Army, Army Research Office under grant number DAAH04-94-G-0006, and partially by NSF under Contract IRI. Views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing official policies or endorsements, either expressed or implied, of the United States Government.


Abstract

This thesis describes the automatic reconstruction of 3D object models from observation of real objects. As a result of the significant advancement of graphics hardware and image rendering algorithms, 3D computer graphics capability has become available even on low-end computers. However, 3D object models are often created manually by users. That input process is normally time-consuming and can be a bottleneck for realistic image synthesis. Therefore, techniques to obtain object models automatically by observing real objects could have great significance in practical applications.

For generating realistic images of a 3D object, two kinds of information are necessary: the object's shape and its reflectance properties, such as color and specularity. A number of techniques have been developed for modeling object shapes by observing real objects. However, attempts to model the reflectance properties of real objects have been rather limited. In most cases, modeled reflectance properties are either too simple or too complicated to be used for synthesizing realistic images of the object. One of the main reasons why modeling of reflectance properties has been less successful than modeling of object shapes is that diffusely and specularly reflected light, i.e., the diffuse and specular reflection components, are treated together, which makes estimation of reflectance properties unreliable. To eliminate this problem, the two reflection components should be separated prior to estimation of reflectance properties. For this purpose, we developed a new method called goniochromatic space analysis (GSA), which separates the two fundamental reflection components from a color image sequence.

Based on GSA, we studied two approaches for generating 3D models from observation of real objects. For objects with smooth surfaces, we developed a new method which examines a sequence of color images taken under a moving light source. The diffuse and specular reflection components are first separated from the color image sequence; then, object surface shapes and reflectance parameters are simultaneously estimated based on the separation results. For creating object models with more complex shapes and reflectance properties, we proposed another method which uses a sequence of range and color images. In this method, GSA is further extended to handle a color image sequence taken by changing the object posture. To extend GSA to a wider range of applications, we also developed a method for shape and reflectance recovery from a sequence of color images taken under solar illumination. The method was designed to handle various problems particular to images taken under solar illumination, e.g., more complex illumination and the shape ambiguity caused by the sun's coplanar motion.

This thesis presents new approaches for modeling object surface reflectance properties, as well as shapes, by observing real objects in both indoor and outdoor environments. The methods are based on a novel method called goniochromatic space analysis for separating the two fundamental reflection components from a color image sequence.


Acknowledgments

I would like to express my deepest gratitude to my wife, Imari Sato, and to my parents, Yoshitaka Sato and Kazuko Sato, who have always been supportive throughout my years at Carnegie Mellon University.

I would also like to express my gratitude to Katsushi Ikeuchi for being my adviser and mentor. From him, I have learned how to conduct research in the field of computer vision. I have greatly benefited from his support and enthusiasm over the past five years. I am also grateful to my thesis committee members Martial Hebert, Steve Shafer, and Shree Nayar for their careful reading of this thesis and for providing valuable feedback regarding my work. For taking the time to proofread this thesis, I am very grateful to Marie Elm. She has always been kind enough to spare her time for correcting my writing and improving my writing skills.

I was fortunate to have many great people to work with in the VASC group at CMU. In particular, I would like to thank the members of our Task Oriented Vision Lab group for their insights and ideas, which are embedded in my work: Prem Janardhan, Sing Bing Kang, George Paul, Harry Shum, Fred Solomon, and Mark Wheeler; special thanks go to Fred Solomon, who patiently taught me numerous hands-on skills necessary for conducting experiments. I have also benefited from the help of visiting scientists in our group, including Santiago Conant-Pablos, Kazunori Higuchi, Yunde Jiar, Masato Kawade, Hiroshi Kimura, Tetsuo Kiuchi, Jun Miura, Kotaro Ohba, Ken Shakunaga, Yutaka Takeuchi, and Taku Yamazaki. We all had many fun barbecue parties at Katsu's place during my stay in Pittsburgh. I will very much miss those parties and Katsu's excellent homemade wine.

Finally, I would once again like to thank my family for their love, support, and encouragement, especially my wife, Imari. Since Imari and I married, my life has always been quite wonderful; she has made the hard times seem as nothing, and the good times an absolute delight.


Table of Contents

Chapter 1 Introduction and Overview
    1.1 Goniochromatic Space Analysis of Reflection
    1.2 Object Modeling from Color Image Sequence
    1.3 Object Modeling from Range and Color Image Sequences
    1.4 Reflectance Analysis under Solar Illumination
    1.5 Thesis Outline

Chapter 2 Goniochromatic Space Analysis of Reflection
    2.1 Background
    2.2 The RGB Color Space
    2.3 The I-θ (Intensity - Illuminating/Viewing Angle) Space
    2.4 The Goniochromatic Space

Chapter 3 Object Modeling from Color Image Sequence
    3.1 Reflection Model
        3.1.1 The Lambertian Model
        3.1.2 The Torrance-Sparrow Reflection Model
        Image Formation Model
    3.2 Decomposition of Reflection Components
    3.3 Estimation of the Specular Reflection Color
        Previously Developed Methods
        Lee's Method
        Tominaga and Wandell's Method
        Klinker, Shafer, and Kanade's Method
        Our Method for Estimating an Illuminant Color
    3.4 Estimation of the Diffuse Reflection Color
    3.5 Experimental Results
        Experimental Setup
        Estimation of Surface Normal and Reflectance Parameters
        Shiny Dielectric Object
        Matte Dielectric Object
        Metal Object
        Shape Recovery
        Reflection Component Separation with Non-uniform Reflectance
    3.6 Summary

Chapter 4 Object Modeling from Range and Color Images: Object Models Without Texture
    Background
    Image Acquisition System
    Shape Reconstruction from Multiple Range Images
        Our Method for Merging Multiple Range Images
        Measurement
        Shape Recovery
    Mapping Color Images onto Recovered Object Shape
    Reflectance Parameter Estimation
        Reflection Model
        Reflection Component Separation
        Reflectance Parameter Estimation for Segmented Regions
    Synthesized Images with Realistic Reflection
    Summary

Chapter 5 Object Modeling from Range and Color Images: Object Models With Texture
    Dense Surface Normal Estimation
    Diffuse Reflection Parameter Estimation
    Specular Reflection Parameter Estimation
    Experimental Results
    Summary

Chapter 6 Reflectance Analysis under Solar Illumination
    Background
    Reflection Model Under Solar Illumination
    Removal of the Reflection Component from the Skylight
    Removal of the Specular Component from the Sunlight
    Obtaining Surface Normals
        Two Sets of Surface Normals
        Unique Surface Normal Solution
    Experimental Results: Laboratory Setup
    Experimental Result: Outdoor Scene (Water Tower)
    Summary

Chapter 7 Conclusions
    Summary
        Object Modeling from Color Image Sequence
        Object Modeling from Range and Color Images
        Reflectance Analysis under Solar Illumination
    Thesis Contributions
    Directions for Future Research
        More Complex Reflectance Model
        Planning of Image Sampling
        Reflectance Analysis for Shape from Motion
        More Realistic Illumination Model for Outdoor Scene Analysis

Color Figures
Bibliography


List of Figures

Figure 1 Object model generation
Figure 2 Object model for computer graphics
Figure 3 (a) a gonioreflectometer and (b) a typical measurement of BRDF
Figure 4 Goniochromatic space
Figure 5 Reflection component separation
Figure 6 Synthesized image of an object without texture
Figure 7 Synthesized images of an object with texture
Figure 8 Image taken under solar illumination
Figure 9 A sphere and its color histogram as T shape in the RGB color space
Figure 10 Viewer-centered coordinate system
Figure 11 The I-θ space
Figure 12 The goniochromatic space (synthesized data)
Figure 13 Polar plot of the three reflection components
Figure 14 Reflection model used in our analysis
Figure 15 Internal scattering and surface reflection
Figure 16 Solid angles of a light source and illuminated surface
Figure 17 Geometry for the Torrance-Sparrow reflection model [85]
Figure 18 Measurement at one pixel (synthesized data)
Figure 19 Diffuse and specular reflection planes (synthesized data)
Figure 20 x-y chromaticity diagram showing the ideal loci of chromaticities corresponding to colors from five surfaces of different colors
Figure 21 Estimation of illuminant color as an intersection of color signal planes
Figure 22 T-shape color histogram and two color vectors
Figure 23 Estimation of the color vector
Figure 24 Geometry matrix (synthesized data)
Figure 25 Geometry of the experimental setup
Figure 26 Geometry of the extended light source
Figure 27 Green shiny plastic cylinder
Figure 28 Measured intensities in the goniochromatic space
Figure 29 Decomposed two reflection components
Figure 30 Loci of two reflection components in the goniochromatic space
Figure 31 Diffuse and specular reflection planes
Figure 32 Result of fitting
Figure 33 Green matte plastic cylinder
Figure 34 Measured intensities in the goniochromatic space
Figure 35 Two decomposed reflection components
Figure 36 Result of fitting
Figure 37 Aluminum triangular prism
Figure 38 Loci of the intensity in the goniochromatic space
Figure 39 Two decomposed reflection components
Figure 40 Purple plastic cylinder
Figure 41 Needle map
Figure 42 Recovered object shape
Figure 43 Estimation of illuminant color in the x-y chromaticity diagram
Figure 44 Multicolored object
Figure 45 Diffuse reflection image
Figure 46 Specular reflection image
Figure 47 Image acquisition system
Figure 48 Input range data
Figure 49 Input color images
Figure 50 Recovered object shape
Figure 51 View mapping result
Figure 52 Intensity change with strong specularity
Figure 53 Intensity change with little specularity
Figure 54 Geometry for simplified Torrance-Sparrow model
Figure 55 Separated reflection components with strong specularity
Figure 56 Separated reflection component with little specularity
Figure 57 Diffuse image and specular image: example
Figure 58 Diffuse image and specular image: example
Figure 59 Segmentation result (grey levels represent regions)
Figure 60 Synthesized image
Figure 61 Synthesized image
Figure 62 Synthesized image
Figure 63 Object modeling with reflectance parameter mapping
Figure 64 Surface normal estimation from input 3D points
Figure 65 Diffuse saturation shown in the RGB color space
Figure 66 Input range data
Figure 67 Input color images
Figure 68 Recovered object shape
Figure 69 Simplified shape model
Figure 70 Estimated surface normals and polygonal normals
Figure 71 Color image mapping result
Figure 72 Estimated diffuse reflection parameters
Figure 73 Selected vertices for specular parameter estimation
Figure 74 Interpolated and
Figure 75 Synthesized object images
Figure 76 Comparison of input color images and synthesized images
Figure 77 Comparison of the spectra of sunlight and skylight [48]
Figure 78 Change of color of sun with altitude [48]
Figure 79 Three reflection components from solar illumination
Figure 80 Sun direction, viewing direction and surface normal in 3D case
Figure 81 Diffuse reflection component image (frame 8)
Figure 82 Two sets of surface normals
Figure 83 The boundary region obtained from two surface normal sets
Figure 84 The boundary after medial axis transformation
Figure 85 Segmented regions (gray levels represent regions)
Figure 86 Right surface normal set
Figure 87 Recovered object shape
Figure 88 Observed color image sequence of a water tank
Figure 89 Extracted region of interest
Figure 90 Water tank image without sky reflection component
Figure 91 Water tank image after highlight removal
Figure 92 Surface normals
Figure 93 Recovered shape of the water tank

Chapter 1
Introduction and Overview

As a result of the significant advancement of graphics hardware and image rendering algorithms, 3D computer graphics capability has become available even on low-end computers. At the same time, the rapid spread of internet technology has caused a significant increase in the demand for 3D computer graphics. For instance, a new format for 3D computer graphics on the internet, called VRML, is becoming an industrial standard, and the number of applications using the format is quickly increasing. Therefore, it is important to create suitable 3D object models for synthesizing realistic computer graphics images.

An object model for computer graphics applications should contain two kinds of information: the shape and the reflectance properties of the object. Surface reflectance properties are particularly important for synthesizing realistic computer graphics images, since the appearance of objects greatly depends on their reflectance properties, i.e., how incident light is reflected on the object surfaces. For instance, a polished metal sphere will look completely different after it is coated with diffuse white paint, even though the shape remains exactly the same.

Unfortunately, 3D object models are often created manually by users. That input process is normally time-consuming and can be a bottleneck for realistic image synthesis. Alternatively, CAD models of 3D objects may be available. Even in this case, however, reflectance properties are usually not part of the CAD models, and therefore need to be determined. Thus, techniques to obtain object model data automatically by observing real objects could have great significance in practical applications. This is the

main motivation of this thesis work. In this thesis, we describe a novel method for automatically creating 3D object models with shape and reflectance properties by observing real objects.

Previously developed techniques for modeling object shapes by observing real objects use various approaches, including range image merging, shape-from-motion, shape-from-shading, shape-from-focus, and photometric stereo. In addition, there exist sensing devices, such as range finders, which can measure 3D object shapes directly. In fact, many 3D scanning sensors are commercially available today, including triangulation-based laser range sensors, time-of-flight laser range sensors, and light-pattern-projection range sensors. The drawback of these sensors is that they are not designed to measure object reflectance properties.

Figure 1 Object model generation

Figure 2 Object model for computer graphics

Attempts to model the reflectance properties of real objects have been rather limited. In most cases, modeled reflectance properties are too simple to be used for synthesizing realistic images of the object. If only the observed color texture or diffuse texture of a real object surface is used (e.g., texture mapping), shading effects such as highlights cannot be reproduced correctly in synthesized images. For instance, if highlights on the object surface are observed in the original color images, the highlights are treated as diffuse textures on the object surface and, therefore, remain on the object surface permanently regardless of illuminating and viewing conditions. This should be avoided for realistic image synthesis because highlights on object surfaces are known to play an important role in conveying a great deal of information about surface finish and material type.

For modeling the surface reflectance properties of real objects, there are two approaches. The first approach is to measure the distribution of reflected light intensively, i.e., the bidirectional reflectance distribution function (BRDF), and to record the distribution as the reflectance properties. The second approach is to estimate the parameters of some parametric reflection model function, based on relatively coarse measurements of reflected light.

A BRDF is measured by using a device called a gonioreflectometer. The usual design for such a device incorporates a single photometer that moves in relation to a light source, all under the control of a computer. Because a BRDF is, in general, a function of four angles, two incident and two reflected, such a device must have four degrees of mechanical freedom to measure the complete function. This requires substantial complexity in the apparatus design as well as long periods of time to measure a single surface. Also, real object surfaces very often have non-uniform reflectance; therefore, a single measurement of

BRDF per object is not enough. Moreover, the accuracy of measured BRDFs is often questionable even when they are carefully measured [88]. For these reasons, BRDFs have rarely been used for synthesizing computer graphics images in the past.

Figure 3 (a) a gonioreflectometer and (b) a typical measurement of BRDF (the BRDF is redrawn from [75])

Alternatively, we can use a parametric reflectance model to reduce the complexity involved in using BRDFs for synthesizing images. When we have a relatively coarse measurement of the reflected light distribution, the measurement has to be interpolated somehow so that the real distribution of reflected light can be inferred. Here, the best approach is to assume some underlying reflection model, e.g., the Torrance-Sparrow reflection model, as a starting point. By estimating the parameters of the reflection model, we can interpolate the measured distribution of reflected light. Depending on the object material type and the sampling of reflected light, an appropriate reflection model should be selected from currently available reflection models, which were developed either empirically or analytically. For example, reflection models commonly used in computer vision and computer graphics include the Lambert model, the Phong model [59], the Blinn-Phong model [8], the Torrance-Sparrow model [85], the Cook-Torrance model [11], the Beckmann-Spizzichino model [5], the He model [20], the Strauss model [80], and the Ward model [87].

The estimation of the parameters of a reflection model function has been investigated by other researchers. In some cases, reflectance parameters are estimated only from multiple intensity images. In other cases, both range and intensity images are used to obtain object surface shapes and reflectance parameters. However, all of the previously proposed methods

for reflectance parameter estimation are limited in one way or another. For instance, some methods can handle only objects with uniform reflectance, and in other methods, the estimation of reflectance parameters becomes very sensitive to image noise. To our knowledge, no method is currently being used for estimating reflectance properties of real objects in real applications.

One of the main reasons why modeling of reflectance properties has been less successful than modeling of object shapes is that diffusely reflected light and specularly reflected light, i.e., the diffuse and specular reflection components, are examined simultaneously, and therefore, estimation of reflectance properties becomes unreliable. For instance, estimation of the diffuse reflection parameters may be affected by specularly reflected light observed in the input images. Also, estimation of the specular reflection component's parameters often becomes unreliable when specularly reflected light is not observed strongly in the input images.

In this thesis, we tackle the problem of object modeling by using a new approach to analyze a sequence of color images. The new approach allows us to estimate the shape and reflectance parameters of real objects in a robust manner even when both the diffuse and specular reflection components are observed.

1.1 Goniochromatic Space Analysis of Reflection

We propose a new framework for analyzing object shape and surface properties from a sequence of color images. This framework plays an important role in the reflectance analysis described in this thesis. We observe how the color of an object surface varies on change in angular illuminating-viewing conditions using a four-dimensional RGB plus illuminating/viewing angle space. We call this space the goniochromatic space, based on the Standard Terminology of Appearance¹ from the American Society for Testing and Materials [78].

1. goniochromatism: change in any or all attributes of color of a specimen on change in angular illuminating-viewing conditions but without change in light source or observer.

The goniochromatic space is closely related to the two spaces previously used for analyzing color or gray-scale images: the Red-Green-Blue (RGB) color space and the I-θ (image intensity - illumination/viewing direction) space. Typically, the RGB color space is used for analyzing color information from a single color image. One of the epoch-making works using the RGB color space was done by Shafer [72][73]. He demonstrated that, illuminated by a single light source, a cluster of uniformly colored dielectric objects in the RGB color space forms a parallelogram defined by two color vectors, namely the diffuse

reflection vector and the specular reflection vector [72]. Subsequently, Klinker et al. [39] demonstrated that the cluster actually forms a T-shape in the color space instead of a parallelogram; they separated the diffuse and specular reflection components by geometrically clustering a scatter plot of the image in the RGB color space. However, their method requires that objects be uniformly colored and that surface shapes not be planar. For example, if an object has a multicolored or highly textured surface, its cluster in the RGB color space becomes cluttered, and therefore, separation of the two reflection components becomes impossible. If the object's surface is planar, the cluster collapses to a point in the RGB color space, and again, the separation becomes impossible. As a result, the method can be applied to only a limited class of objects.

On the other hand, the I-θ space has been used for analyzing gray-scale image sequences. This space represents how the pixel intensity changes as the illumination or viewing geometry changes. Using this space, Nayar et al. analyzed a gray-scale image sequence given by a moving light source [49]. Their method can separate the diffuse and specular reflection components from an observed intensity change in the I-θ space, and can estimate the shapes and reflectance parameters of objects with hybrid surfaces.² The main advantage of their method is that all necessary information is obtained from a single point on the object surface, and therefore, the method can be applied locally. This is advantageous compared to the algorithm developed by Klinker et al. [39], which examines a color cluster formed globally from an entire image. However, the Nayar et al. method is still limited in the sense that only a small group of real objects have hybrid surfaces, and the method requires an imaging apparatus of specific dimensions, e.g., the light diffuser's diameter and the distance from the light source to the light diffuser.

2. In the paper [49] by Nayar et al., a hybrid surface is defined as one which exhibits the diffuse lobe reflection component and the specular spike reflection component. (These two reflection components and the specular lobe reflection component are described in more detail in Chapter 3.)

Figure 4 Goniochromatic space

Unlike the RGB color space and the I-θ space, GSA does not require strong assumptions such as uniform reflectance, non-planar surfaces, hybrid surfaces, or an imaging apparatus of specific dimensions. By using GSA, we can separate the two reflection components locally from a color image sequence and obtain the shape and reflectance parameters of objects.

1.2 Object Modeling from Color Image Sequence

Based on GSA, we have developed a new method for estimating object shapes and reflectance parameters from a sequence of color images taken under a moving point light source. First, the diffuse and specular reflection components are separated from the color image sequence. The separation process does not assume any specific reflection model. Then, using the separated reflection components, object shapes and the parameters of a specific reflection model are estimated. Like the I-θ space analysis, our method requires only local information. In other words, we can recover the object shape and reflectance parameters based on the color change at each point on the object surface; the method does not depend on the observed color at other portions of the object surface. In addition, our method is not restricted to a specific reflection model, i.e., a hybrid surface, or to a specific imaging apparatus. Thus, our method can be applied to a wide range of objects.
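To make the separation idea concrete, the following sketch shows one simple way the per-frame diffuse and specular contributions could be recovered once the two reflection color vectors are known. It is a minimal illustration, not the algorithm developed in this thesis: the function name, the assumption that the diffuse color vector k_d and the specular (illuminant) color vector k_s are already given, and the per-frame least-squares fit are all illustrative choices.

```python
import numpy as np

def separate_components(colors, k_d, k_s):
    """Decompose observed RGB colors at one pixel over an image sequence into
    diffuse and specular parts, given known diffuse and specular color vectors.
    colors: (n_frames, 3) array of RGB measurements; k_d, k_s: shape (3,) vectors."""
    A = np.stack([k_d, k_s], axis=1)            # 3x2 matrix whose columns are the color vectors
    scales, _, _, _ = np.linalg.lstsq(A, colors.T, rcond=None)
    g_d, g_s = np.clip(scales, 0.0, None)       # reflection strengths cannot be negative
    diffuse = np.outer(g_d, k_d)                # per-frame diffuse component
    specular = np.outer(g_s, k_s)               # per-frame specular component
    return diffuse, specular
```

In the thesis method, the two color vectors themselves are estimated from the image sequence before any such decomposition is performed (see Chapter 3).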

Figure 5 Reflection component separation

Currently, our method can handle only the case where object surface normals lie in a 2D plane. This is a rather severe limitation. However, the limitation comes from the fact that only coplanar motion of the light source is used; it is not an inherent limitation of our method. For instance, if only two light source locations are used for photometric stereo, two candidate surface normals are obtained at each surface point. That ambiguity can be resolved by adding one more light source location which is not coplanar with the other two. The same could be done for our method, but it has not been tested in the research conducted for this thesis.

The main limitation of this method is that an object shape cannot be recovered accurately if the object has a surface with high curvature. This is generally true for all methods which use only intensity images, e.g., shape-from-shading and photometric stereo. Another limitation of the method is that surface shape and reflectance parameters can be recovered only for part of the surface: we cannot see the back of an object unless we rotate the object or change the viewing position.

1.3 Object Modeling from Range and Color Image Sequences

To overcome the limitations noted in the previous section, we investigated another method. The goal was to create object models with complete shapes even if the objects have surfaces with high curvature. To attain this goal, we developed a method for creating complete object models from sequences of range and color images taken by changing the object posture.

One advantage of using range images for shape recovery is that object shapes can be

obtained as triangular mesh models which represent the 3D shape of the object directly. In contrast, the method described in Section 1.2 produces only surface normals, which are 2.5D information about the object surface. Hence, the object surface can be obtained as a triangular mesh model only after some sort of integration procedure is applied to the surface normals, and this integration process generally does not work well for object surfaces with high curvature. Moreover, a single range image cannot capture an entire object shape; it measures only the partial shape of the object seen from the range sensor. Therefore, we need to observe the object from various viewpoints to cover the object surface entirely, and then merge those multiple range images to create a complete shape model of the object. In this thesis, two different algorithms are used for integrating multiple range images: a surface-based method and a volume-based method.

When we apply GSA to a sequence of range and color images, we have to face the correspondence problem between color image frames. As already mentioned, GSA examines a sequence of observed colors as the illuminating/viewing geometry changes. Therefore, we need to know where each point on the object surface appears in each input color image. This correspondence problem was not an issue in the case of a color image sequence taken with a moving light source: there, the object location and the viewing point were fixed, so a point on the object surface appeared at the same pixel coordinates throughout the color image sequence. Fortunately, we can solve the correspondence problem by using the reconstructed triangular mesh model of the object shape. Having determined object locations and camera parameters from calibration, we project each color image frame back onto the reconstructed object surface (see the sketch at the end of this section). By projecting all of the color image frames, we can determine the observed color change at each point on the object surface as the object is rotated. Then, we apply GSA to the observed color change to separate the diffuse and specular reflection components.

After the two reflection components are separated, we estimate the reflectance parameters of each reflection component. Here, we consider two different classes of objects. The first class comprises objects which are painted in multiple colors and do not have detailed surface texture; in this case, the object surface can be segmented into multiple regions of uniform diffuse color. The second class comprises objects with highly textured surfaces; in this case, object surfaces cannot be clearly segmented. In this thesis, we investigated two different approaches for these two classes of objects.

For the first class of objects, without detailed surface texture, we developed a method to estimate reflectance parameters based on region segmentation of the object surface. Each segmented region on the object surface is assigned the same reflectance parameters for the specular reflection component, by assuming that each region with uniform diffuse color has

more or less the same specular reflectance. Then, each triangle of the triangular mesh object model is assigned its diffuse parameters and specular parameters.

Figure 6 Synthesized image of an object without texture

For the second class of objects, with highly textured surfaces, region segmentation cannot be performed reliably. Therefore, we developed another method using a slightly different approach. Instead of assigning one set of reflectance parameters to each triangle of the triangular mesh object model, each triangle is assigned a texture of reflectance parameters and surface normals. This method is similar to the conventional texture mapping technique; unlike that technique, however, it can be used to synthesize color images with realistic shading effects such as highlights. Finally, highly realistic object images are synthesized using the created object models with shape and reflectance properties.

Figure 7 Synthesized images of an object with texture
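The back-projection step referenced earlier in this section can be pictured with a small sketch. It assumes an ideal pinhole camera with known intrinsics K and a known object/camera pose (R, t) per frame; the function names are illustrative, and the visibility test that a real implementation needs before sampling a color is omitted.

```python
import numpy as np

def project_vertex(X, K, R, t):
    """Project a 3D surface point X (world coordinates) into pixel coordinates
    for one frame, using a pinhole camera with intrinsics K and pose (R, t)."""
    x_cam = R @ X + t            # world -> camera coordinates
    u = K @ x_cam                # perspective projection
    return u[:2] / u[2]          # pixel coordinates (u, v)

def colors_at_vertex(X, frames, poses, K):
    """Collect the color observed at one mesh vertex across a color image sequence."""
    samples = []
    for image, (R, t) in zip(frames, poses):
        u, v = project_vertex(X, K, R, t)
        samples.append(image[int(round(v)), int(round(u))])   # nearest-pixel lookup
    return np.array(samples)
```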

1.4 Reflectance Analysis under Solar Illumination

Most algorithms for analyzing object shape and reflectance properties, including our methods described above, have been applied to images taken in a laboratory. Images synthesized or taken in a laboratory are well controlled and are less complex than those taken outside under sunlight. In an outdoor environment, there are multiple light sources with different colors and spatial distributions, namely the sunlight and the skylight. The sunlight can be regarded as a point light source whose movement is restricted to the ecliptic, while the skylight acts as a blue extended light source. These multiple light sources create more than two reflection components from the object surface, unlike the case of one known light source in a laboratory setup. Also, due to the sun's restricted movement, the problem of surface normal recovery becomes underconstrained under sunlight. For instance, if the photometric stereo method is applied to two intensity images taken outside at different times, two surface normals which are symmetric with respect to the ecliptic are obtained at each surface point. These two surface normals cannot be distinguished locally because both directions give exactly the same brightness at the surface point.

In this thesis, we address the issues involved in analyzing real outdoor intensity images taken under solar illumination: the multiple reflection components, including highlights, and the ambiguous solution for surface normals. For these difficulties, we propose a solution and then demonstrate its feasibility using test images taken both in a laboratory setup and outdoors under the sun.

Figure 8 Image taken under solar illumination
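The surface normal ambiguity described above is easy to reproduce numerically. The toy example below is a sketch under simplifying assumptions (a purely Lambertian surface and sun directions confined to the x-z plane, standing in for the ecliptic); it shows that a normal and its mirror image across that plane produce identical shading for every such sun direction, so the two cannot be told apart locally.

```python
import numpy as np

def lambertian(n, l, albedo=1.0):
    """Lambertian shading for unit normal n and unit light direction l."""
    return albedo * max(np.dot(n, l), 0.0)

# Light directions restricted to a single plane (y = 0), like the sun on the ecliptic.
sun_dirs = [np.array([np.sin(a), 0.0, np.cos(a)]) for a in np.linspace(-1.0, 1.0, 5)]

n1 = np.array([0.2, 0.5, 0.84])
n1 /= np.linalg.norm(n1)
n2 = n1 * np.array([1.0, -1.0, 1.0])        # mirror image across the y = 0 plane

for l in sun_dirs:
    assert np.isclose(lambertian(n1, l), lambertian(n2, l))
print("Both candidate normals explain every observation equally well.")
```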

1.5 Thesis Outline

This thesis presents new approaches for modeling object surface reflectance properties, as well as shapes, by observing real objects in both indoor and outdoor environments. The methods are based on a novel algorithm called goniochromatic space analysis for separating the diffuse and specular reflection components from a color image sequence.

This thesis is organized as follows. In Chapter 2, we introduce the goniochromatic space and explain the similarities and differences between the goniochromatic space and the two other spaces commonly used for reflectance analysis: the RGB color space and the I-θ space. In Chapter 3, we discuss our method for modeling object shapes and reflectance parameters from a color image sequence. In Chapter 4 and Chapter 5, we describe two different methods for modeling object shapes and reflectance parameters from a sequence of range and color images. In Chapter 6, we describe our method for analyzing the shape and reflectance properties of an object by using a color image sequence taken under solar illumination. Finally, in Chapter 7, we summarize the work presented in this thesis, give conclusions, and outline directions for future research.

Chapter 2
Goniochromatic Space Analysis of Reflection

2.1 Background

Color spaces, especially the RGB color space, have been widely used by the computer vision community to analyze color images. One of the first applications of color space analysis was image segmentation by partitioning a color histogram into Gaussian clusters (Haralick and Kelly [18]). A histogram is created from the color values at all image pixels; it tells, for each point in the RGB color space, how many pixels exhibit that color. Typically, the colors tend to form clusters in the histogram, one for each textureless object in the image. By manual or automatic analysis of the histogram, the shape of each cluster is determined. Then each pixel in the color image is assigned to the cluster that is closest to the pixel color in the RGB color space. Following the work by Haralick and Kelly, a number of image segmentation techniques have been developed [1], [9], [10], [61]. Most of the early work in color computer vision used color information as a random variable for image segmentation. Later, many researchers tried using knowledge about how color is created to analyze a color image and compute important properties of objects in the image.

Shafer [73] carefully examined the physical properties of reflection when light strikes an inhomogeneous surface, which includes materials such as plastics, paints, ceramics, and

paper. An inhomogeneous surface consists of a medium with particles of colorant suspended in it. When light hits such a surface, there is a change in the index of refraction at the interface. This reflection occurs in the perfect specular direction, where the angle of incidence equals the angle of reflection, and forms the specular reflection component, i.e., the highlights seen on shiny materials. The light that penetrates through the interface is scattered and selectively absorbed by the colorant, and is then re-emitted into the air to become the diffuse reflection component. Based on this observation, Shafer proposed the first realistic color reflection model used in computer vision: the dichromatic reflection model. That model states that the reflectance of an object may be divided into two components: the interface or specular reflection component, and the body or diffuse reflection component. In addition, Shafer demonstrated that, illuminated by a single light source, a cluster of uniformly colored dielectric objects in the color space forms a parallelogram defined by two color vectors, namely the specular reflection vector and the diffuse reflection vector.

The dichromatic reflection model proposed by Shafer has inspired a large amount of important related work. Klinker, Shafer and Kanade [39][40] demonstrated that, instead of a parallelogram, the cluster actually forms a T-shape in the color space, and they separated the diffuse reflection component and the specular reflection component by geometrically clustering a scatter plot of the image in the RGB color space. They used the separated diffuse reflection component for segmentation of a color image without suffering from the disturbances of highlights in the image. Their method is based on the assumption that the surface normals in an image are widely distributed in all directions; this assumption guarantees that both the diffuse reflection vector and the specular reflection vector will be visible. Therefore, their algorithm cannot handle cases where only a few planar surface patches exist in the image.

Bajcsy and Lee [2] proposed using the hue-saturation-intensity (HSI) color space instead of the RGB color space for analyzing a color image. They studied clusters in the HSI color space formed by scene events such as shading, highlights, shadows, and interreflection. Based on this analysis, their algorithm uses a hue histogram technique to segment individual surfaces and then applies local thresholding to identify highlights and interreflection. This technique was the first to identify interreflection successfully from a single color image. The algorithm is shown to be effective on color images of glossy objects.

Novak and Shafer [56] presented an algorithm for analyzing color histograms. The algorithm yields estimates of surface roughness, the phase angle between the camera and the light source, and the illumination intensity. In their paper, they showed that these properties cannot be computed analytically, and they developed a method for estimating these properties

based on interpolation between histograms that come from images of known scene properties. The method was tested using both simulated and real images, and successfully estimated those properties from a single color image.

Lee and Bajcsy [44] presented an interesting algorithm for the detection of specularities from Lambertian reflections using multiple color images taken from different viewing directions. The algorithm is based on the observation that the reflected light intensity of the diffuse reflection component at an object surface does not change with viewing direction, whereas the reflected light intensity of the specular reflection component, or of a mixture of the diffuse and specular reflection components, does change. This algorithm differs from the other algorithms described above in that multiple color images taken from different viewing directions are used to compare the color histograms of those images. In this respect, it is the algorithm most closely related to the color analysis framework proposed in this thesis. However, their method still suffers from the fact that it analyzes the color histogram of a color image: when input color images contain many objects with non-uniform reflectance, the color histograms become too cluttered to be used for detecting the specular reflection component. Also, Lee and Bajcsy's method cannot compute the reflectance parameters of objects, and it is not clear how the method could be extended for reflectance parameter estimation.

All of the algorithms for color image analysis described in this section examine histograms formed either in the RGB color space or in some other color space. This means that these methods depend on global information extracted from color images; they require color histograms which are not too cluttered and can be segmented clearly. If color images contain many objects with non-uniform reflectance, the color histograms become impossible to segment, and the algorithms will fail. Another limitation of these algorithms is that there is little or no consideration of the illuminating/viewing geometry. In other words, these algorithms, with the exception of the one by Lee and Bajcsy, do not examine how the observed color changes as the illuminating/viewing geometry changes. This makes it very difficult to extract any information about object surface reflectance properties. (Strictly speaking, this is not true of the work by Novak and Shafer [56], where surface roughness is estimated from a color histogram; however, their algorithm does not work well for cluttered color images.)

On the other hand, other techniques have been developed for analyzing gray-scale images. These techniques include shape-from-shading and photometric stereo. The shape-from-shading technique introduced by Horn [32] recovers object shapes from a single intensity image. In this method, surface orientations are calculated starting from a chosen point whose orientation is known a priori, by using the characteristic strip expansion method. Ikeuchi and Horn [33] developed a shape-from-shading technique which uses occluding

boundaries of an object to iteratively calculate surface orientation. In general, shape-from-shading techniques require rather strong assumptions about object surface shape and reflectance properties, e.g., a smoothness constraint and uniform reflectance. The limitation comes from the fact that only one intensity image is used, making shape-from-shading a fundamentally under-constrained problem.

Photometric stereo was introduced by Woodham [95] as a technique for recovering surface orientation from multiple gray-scale images taken with different light source locations. Surface normals are determined from the combination of constraints provided by reflectance maps with respect to different incident directions of a point light source. Unlike shape-from-shading techniques, Woodham's technique does not rely on assumptions such as the surface smoothness constraint. However, the technique is still based on the assumption of a Lambertian surface; hence, it can be applied only to object surfaces with the diffuse reflection component alone.

While specularities have usually been considered a source of error in surface normal estimation by photometric stereo methods, some researchers have proposed the opposite idea of using the specular reflection component as a primary source of information for shape recovery. Ikeuchi was the first to develop a photometric stereo technique that can handle purely specular reflecting surfaces [34]. Nayar, Ikeuchi and Kanade [49] developed a photometric stereo technique for recovering the shape of objects with surfaces exhibiting both the diffuse and specular reflection components, i.e., hybrid surfaces. These reflection components can vary in relative strength, from purely Lambertian to purely specular. The technique determines 2D surface orientation and the relative albedo strengths of the diffuse and specular reflection components. The key is to use extended rather than point light sources so that a non-zero specular component is detected from more than just one light source. In fact, the extent of the light sources and their spacing are chosen so that, for a hybrid surface, a non-zero specular component results from two consecutive light sources, with the rest of the observed reflections coming only from the diffuse reflection component. Later, this technique was extended to compute 3D surface orientations by Sato, Nayar, and Ikeuchi [63].

Lu and Little developed a photometric stereo technique to estimate a reflectance function from a sequence of gray-scale images taken by rotating a smooth object, and the object shape was successfully recovered using the estimated reflectance function [47]. Since the reflectance function is measured directly from the input image sequence, the method does not assume a particular reflection model such as the Lambertian model commonly used in computer vision. However, their algorithm can be applied only to object surfaces with

uniform reflectance properties, and it cannot be easily extended to overcome this limitation.

These photometric stereo techniques determine surface normals and reflectance parameters by examining how the reflected light intensity at a surface point changes as the light source direction varies. This intensity change can be represented in the I-θ (image intensity - illumination direction) space. The main difference between the I-θ space and the RGB color space is that the former can represent intensity change caused by a change in the illumination/viewing geometry, while the latter cannot. This ability is a significant advantage when we want to measure various properties of object surfaces such as surface normals and reflectance properties. Also, the I-θ space uses the intensity change observed at each surface point, so the necessary information can be obtained locally, while the RGB color space uses a color histogram formed globally from an entire color image. However, the I-θ space fails to represent information that is important for reflectance analysis, namely color. Therefore, it is desirable to have a new framework which can represent both the observed color information and the change caused by different illumination/viewing geometry.

In this thesis, we propose a new framework for analyzing object shape and surface properties from a sequence of color images. We observe how the color of the image varies on change in angular illuminating-viewing conditions using a four-dimensional RGB plus illuminating/viewing angle space. We call this space the goniochromatic space, based on the Standard Terminology of Appearance⁴ from the American Society for Testing and Materials [78].

4. goniochromatism: change in any or all attributes of color of a specimen on change in angular illuminating-viewing conditions but without change in light source or observer.

This chapter first briefly describes the conventional RGB color space and the I-θ space in Section 2.2 and Section 2.3. Then, in Section 2.4, we introduce the goniochromatic space in comparison to those two spaces.

2.2 The RGB Color Space

A pixel intensity I is determined by the spectral distribution of the incident light to the camera, h(λ), and the camera response at each wavelength, s(λ), i.e.,

I = ∫ s(λ) h(λ) dλ    (EQ1)

A color camera has color filters attached in front of its sensor device. Each color filter has a transmittance function τ(λ) which determines the fraction of light transmitted at each wavelength λ. The pixel intensities I_R, I_G, and I_B from the red, green, and blue channels of the color camera are then given by the following integrations:

I_R = ∫ τ_R(λ) s(λ) h(λ) dλ
I_G = ∫ τ_G(λ) s(λ) h(λ) dλ    (EQ2)
I_B = ∫ τ_B(λ) s(λ) h(λ) dλ

where τ_R(λ), τ_G(λ), and τ_B(λ) are the transmittance functions of the red, green, and blue filters, respectively. The three intensities I_R, I_G, and I_B form a 3x1 color vector C which represents the color of a pixel in the RGB color space:

C = [I_R, I_G, I_B]^T = [∫ τ_R(λ) s(λ) h(λ) dλ, ∫ τ_G(λ) s(λ) h(λ) dλ, ∫ τ_B(λ) s(λ) h(λ) dλ]^T    (EQ3)

Klinker, Shafer, and Kanade [39][40] demonstrated that the histogram of a dielectric object's colors in the RGB color space forms a T-shape (Figure 9). They extracted the two components of the T-shape in order to separate the specular reflection component and the diffuse reflection component.
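For illustration, EQ2 and EQ3 can be evaluated numerically when the spectra and filter transmittances are available as sampled curves on a common wavelength grid; the sampled-curve representation and the function name below are assumptions made for this sketch.

```python
import numpy as np

def color_vector(wavelengths, h, s, tau_r, tau_g, tau_b):
    """Evaluate the RGB color vector C of EQ3 by numerical integration.
    wavelengths: sample grid; h: incident spectrum h(lambda); s: sensor response
    s(lambda); tau_r, tau_g, tau_b: filter transmittances, sampled on the same grid."""
    integrand = s * h
    return np.array([np.trapz(tau_r * integrand, wavelengths),   # I_R
                     np.trapz(tau_g * integrand, wavelengths),   # I_G
                     np.trapz(tau_b * integrand, wavelengths)])  # I_B
```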

Figure 9 A sphere and its color histogram as a T-shape in the RGB color space (synthesized data)

A significant limitation of the method is that it works only when the surface normals in the image are well distributed in all directions. Suppose that the image contains only one planar object illuminated by a light source located far away from the object. Then, all pixels on the object are mapped to a single point in the color space because the observed color is constant over the object surface: the T-shape converges to a single point which represents the color of the object. As a result, we cannot separate the reflection components. This indicates that the method cannot be applied locally.

2.3 The I-θ (Intensity - Illuminating/Viewing Angle) Space

Nayar, Ikeuchi, and Kanade [49] analyzed an image sequence given by a moving light source in the I-θ space. They considered how the pixel intensity changes as the light source direction θ varies (Figure 10). The pixel intensity from a monochrome camera is written as a function of θ:

I(θ) = g(θ) ∫ s(λ) h(λ) dλ    (EQ4)

where g(θ) represents the intensity change with respect to the light source direction θ. Note that the spectral distribution of the incident light to the camera, h(λ), is in general dependent on geometric relations such as the viewing direction and the illumination direction. However, as an approximation, we assume that the function h(λ) is independent of these factors. The vector p(θ, I(θ)) shows how the pixel intensity changes with respect to the light source direction in the I-θ space (Figure 11). As opposed to analysis in the RGB color space, the I-θ space analysis is applied locally: all necessary information is extracted from the intensity change at each individual pixel. Nayar et al. [49] used the I-θ space to separate the surface reflection component and the diffuse reflection component, using a priori knowledge of the geometry of the photometric sampler.

Figure 10 Viewer-centered coordinate system (the camera direction, light source direction, and surface normal are coplanar)

Figure 11 The I-θ space

2.4 The Goniochromatic Space

Without resorting to relatively strong assumptions, neither the RGB color space nor the I-θ space can be used to separate the two reflection components using local pixel information. To overcome this weakness, we propose a new four-dimensional space, which we call

the goniochromatic space. This four-dimensional space is spanned by the R, G, B, and θ axes. The term goniochromatic space implies an augmentation of the RGB color space with an additional dimension that represents the varying illumination/viewing geometry. This dimension represents the geometric relationship between the viewing direction, the illumination direction, and the surface normal. In the method described more fully in the next chapter, we keep the viewing direction and the surface normal orientation fixed, and vary the illumination direction, taking a new image at each new illumination direction. (The same information could be obtained if we kept the viewing direction and illumination direction fixed and varied the surface normal orientation; this case is described in Chapter 4 and Chapter 5.)

The goniochromatic space can be thought of as a union of the RGB color space and the I-θ space. By omitting the θ axis, the goniochromatic space becomes equivalent to the RGB color space; by omitting two color axes, it becomes the I-θ space. Each point in the space is represented by the light source direction θ and the color vector C(θ), which is a function of θ:

p(θ, C(θ))    (EQ5)

C(θ) = [I_R(θ), I_G(θ), I_B(θ)]^T = [g(θ) ∫ τ_R(λ) s(λ) h(λ) dλ, g(θ) ∫ τ_G(λ) s(λ) h(λ) dλ, g(θ) ∫ τ_B(λ) s(λ) h(λ) dλ]^T    (EQ6)

The goniochromatic space represents how the observed color of a pixel, C(θ), changes as the direction of the light source θ changes (Figure 12). Note that, in Figure 12, the dimension of the goniochromatic space is reduced from four to three for clarity; one axis of the RGB color space is ignored in the diagram.

Figure 12 The goniochromatic space (synthesized data)
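As a small illustration of the data this space holds, the sketch below gathers the points p(θ, C(θ)) for a single pixel from a color image sequence taken at known light source directions. The array layout and function name are assumptions made for the example.

```python
import numpy as np

def goniochromatic_samples(image_stack, thetas, row, col):
    """Collect goniochromatic-space points p(theta, C(theta)) for one pixel.
    image_stack: list of RGB images, one per light source direction in thetas.
    Returns an array whose rows are (theta, R, G, B)."""
    samples = [np.concatenate(([theta], image[row, col]))
               for theta, image in zip(thetas, image_stack)]
    return np.array(samples)
```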

Chapter 3
Object Modeling from Color Image Sequence

In the previous chapter, the goniochromatic space was introduced as a tool for analyzing a sequence of color images. In this chapter, we introduce a method for estimating object surface shapes and reflectance properties from a sequence of color images taken by changing the illuminating direction. The method consists of two steps. First, using the GSA introduced in the previous chapter, the diffuse and specular reflection components are separated from the color image sequence. Then, surface normals and reflectance parameters are estimated based on the separation results. The method was successfully applied to real images of objects made of different materials.

The objects that we consider in this chapter are made of dielectric or metal materials. Also, the method can be applied only to objects whose surface normals lie in the 2D plane defined by the light source direction and the viewing direction. Note that this is not a limitation inherent to the proposed method; rather, it is due to the limited coplanar motion of the light source, as we will see later in this chapter.

This chapter is organized as follows. First, the parametric reflectance model used in our analysis is described in Section 3.1. Then, the decomposition of the diffuse and specular reflection components from a color image sequence is explained in Section 3.2. The decomposition method requires the specular reflection color and the diffuse reflection color; methods for estimating these colors are explained in Section 3.3 and Section 3.4, respectively. The results of experiments conducted using objects of different materials are presented in Section 3.5. A summary of this chapter is given in Section 3.6.

40 24 Chapter Reflection Model A mechanism of reflection is described in terms of three reflection components, namely the diffuse lobe, the specular lobe, and the specular spike [50]. Reflected light energy from object surface is a combination of these three components. The diffuse lobe component may be explained as internal scattering. When an incident light ray penetrates the object surface, it is reflected and refracted repeatedly at a boundary between small particles and medium of the object. The scattered light ray eventually reaches the object surface, and is refracted into the air in various directions. This phenomenon results in the diffuse lobe component. The Lambertian model is based on the assumption that those directions are evenly distributed in all directions. On the other hand, the specular spike and lobe are explained as light reflected at an interface between the air and the surface medium. The specular lobe component spreads around the specular direction, while the specular spike component is zero in all directions except for a very narrow range around the specular direction. The relative strengths of the two components depends on the microscopic roughness of the surface. specular spike specular lobe Unlike the diffuse lobe and the specular lobe components, the specular spike compocamera light source diffuse lobe reflecting surface Figure 13 Polar plot of the three reflection components (redrawn from [50])

Unlike the diffuse lobe and the specular lobe components, the specular spike component is not commonly observed in many actual applications. The component can be observed only from mirror-like smooth surfaces, where the reflected light rays of the specular spike component are concentrated in the specular direction. That makes it hard to observe the specular spike component from viewing directions at coarse sampling angles. Therefore, in many computer vision and computer graphics applications, the reflection mechanism is modeled as a linear combination of two reflection components: the diffuse lobe component and the specular lobe component. These two reflection components are normally called the diffuse reflection component and the specular reflection component. The reflection model was formally introduced by Shafer [73] as the dichromatic reflection model. Based on the dichromatic reflection model, the reflection model used in our analysis is represented as a linear combination of the diffuse reflection component and the specular reflection component.

[Figure 14: Reflection model used in our analysis: specular lobe and diffuse lobe, with camera, light source, and reflecting surface.]

The Torrance-Sparrow model is relatively simple and has been shown to conform with experimental data [85]. In our analysis, we use the Torrance-Sparrow model [85] for representing the diffuse reflection component and the specular reflection component. As we will see in Section 3.1.2, this model describes reflection of incident light rays on rough surfaces, i.e., the specular lobe component, and captures important phenomena such as the off-specular effect and spectral change within highlights.

3.1.1 The Lambertian Model

The Torrance-Sparrow model uses the Lambertian model for representing the diffuse reflection component. The Lambertian model has been used extensively for many computer vision techniques such as shape-from-shading and photometric stereo. The Lambertian model was the first model proposed to approximate the diffuse reflection component. The mechanism of the diffuse reflection is explained as internal scattering. When an incident light ray penetrates the object surface, it is reflected and refracted repeatedly at the boundaries between small particles and the medium of the object (Figure 15). The scattered light ray eventually reaches the object surface and is refracted into the air in various directions. This phenomenon results in body reflection. The Lambertian model is based on the assumption that the directions of the refracted light rays are evenly distributed in all directions.

[Figure 15: Internal scattering and surface reflection: incident light, specular component, diffuse component, and pigments.]

[Figure 16: Solid angles of a light source and illuminated surface ($da_i$, $d\omega_i$, $r$, $\theta_i$, $n$, $d\omega_s$, $da_s$).]

For a Lambertian surface, the radiance of the surface is proportional to the irradiance onto the surface. Let $d\Phi_i$ be the incident flux onto $da_s$ (Figure 16). Then,

$d\Phi_i = L_i \, d\omega_s \, da_i$ (EQ7)

where $L_i$ is the source radiance $[\mathrm{W}/(\mathrm{m}^2\,\mathrm{sr})]$. Also,

$da_i = d\omega_i \, r^2$ (EQ8)

$d\omega_s = \frac{da_s \cos\theta_i}{r^2}$ (EQ9)

Substituting (EQ8) and (EQ9) into (EQ7),

$d\Phi_i = L_i \, d\omega_i \, da_s \cos\theta_i$ (EQ10)

Therefore, the irradiance of the surface is

$P_s = \frac{d\Phi_i}{da_s} = L_i \, d\omega_i \cos\theta_i$ (EQ11)

As stated above, since the radiance of a Lambertian surface is proportional to the irradiance, the radiance of the surface is

$L_r = k_D(\lambda)\, L_i \, d\omega_i \cos\theta_i$ (EQ12)

where $k_D(\lambda)$ represents the ratio of the radiance to the irradiance. It is known that this model tends to model the diffuse lobe component poorly as surface roughness increases. Other models, for instance [57] and [94], may be used to describe the diffuse lobe component more accurately. However, in our analysis, these more sophisticated diffuse reflection models were not used because they are considerably more complex and therefore expensive to use.
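The Lambertian term (EQ12) is simple enough to state directly in code. The sketch below is only an illustration of the formula; the function name and the clamping of back-facing geometry to zero are assumptions added for this example, not part of the thesis implementation.

```python
import numpy as np

def lambertian_radiance(k_d, L_i, d_omega_i, theta_i_deg):
    """Surface radiance of a Lambertian surface, following (EQ12):
    L_r = k_D * L_i * d_omega_i * cos(theta_i).
    Negative cosines (light behind the surface) are clamped to zero."""
    cos_ti = np.cos(np.deg2rad(theta_i_deg))
    return k_d * L_i * d_omega_i * np.maximum(cos_ti, 0.0)

# Example: radiance falls off with the incidence angle.
print(lambertian_radiance(0.8, 100.0, 0.01, np.array([0.0, 30.0, 60.0, 85.0])))
```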

3.1.2 The Torrance-Sparrow Reflection Model

The Torrance-Sparrow model describes single reflection of incident light rays by rough surfaces. This model is reported to be valid when the wavelength of light is much smaller than the roughness of the surface [85], a condition which is true for most objects. The surface is modeled as a collection of planar micro-facets which are perfectly smooth and reflect light rays as perfect specular reflectors. The geometry for the Torrance-Sparrow model is shown in Figure 17. The surface area $da_s$ is located at the center of the coordinate system. An incoming light beam lies in the X-Z plane and is incident to the surface at an angle $\theta_i$. The radiance and solid angle of the light source are represented as $L_i$ and $d\omega_i$, respectively.

In the Torrance-Sparrow model, the micro-facet slopes are assumed to be normally distributed. Additionally, the distribution is assumed to be symmetric around the mean surface normal $n$. The distribution is represented by a one-dimensional normal distribution

$\rho_\alpha(\alpha) = c \exp\left(-\frac{\alpha^2}{2\sigma_\alpha^2}\right)$ (EQ13)

where $c$ is a constant, and the facet slope $\alpha$ has mean value $\alpha = 0$ and standard deviation $\sigma_\alpha$.

In the geometry shown in Figure 17, only planar micro-facets having normal vectors within the solid angle $d\omega$ can reflect incoming light flux specularly. The number of facets per unit area of the surface that are oriented within the solid angle $d\omega$ is equal to $\rho_\alpha(\alpha)\, d\omega$. Hence, considering the area of each facet $a_f$ and the area of the illuminated surface $da_s$, the incoming flux on the set of reflecting facets is determined as

$d^2\Phi_i = L_i \, d\omega_i \, (a_f\, \rho_\alpha(\alpha)\, d\omega\, da_s) \cos\theta_i$ (EQ14)

The Torrance-Sparrow reflection model considers two terms to determine what portion of the incoming flux is reflected as outgoing flux. One term is the Fresnel reflection coefficient $F(\theta_i, \eta, \lambda)$, where $\eta$ is the refractive index of the material and $\lambda$ is the wavelength of the incoming light. The other term, called the geometric attenuation factor, is represented as $G(\theta_i, \theta_r, \phi_r)$. This factor accounts for the fact that, at large incidence angles, light incoming to a facet may be shadowed by adjacent surface irregularities, and outgoing light along the viewing direction that grazes the surface may be masked or interrupted in its passage to the viewer. Considering those two factors, the flux $d^2\Phi_r$ reflected into the solid angle $d\omega_r$ is determined as

$d^2\Phi_r = F(\theta_i, \eta, \lambda)\, G(\theta_i, \theta_r, \phi_r)\, d^2\Phi_i$. (EQ15)

The radiance of reflected light is defined as

$dL_r = \frac{d^2\Phi_r}{d\omega_r \, da_s \cos\theta_r}$. (EQ16)

Substituting (EQ14) and (EQ15) into (EQ16), we obtain

$dL_r = \frac{F(\theta_i, \eta, \lambda)\, G(\theta_i, \theta_r, \phi_r)\, L_i \, d\omega_i \, (a_f\, \rho_\alpha(\alpha)\, d\omega\, da_s) \cos\theta_i}{d\omega_r \, da_s \cos\theta_r}$. (EQ17)

Since only facets with normals that lie within the solid angle $d\omega$ can reflect light into the solid angle $d\omega_r$, the two solid angles are related as

$d\omega = \frac{d\omega_r}{4\cos\theta_i}$. (EQ18)

Substituting (EQ13) and (EQ18) into (EQ17), the surface radiance of the surface $da_s$ given by the specular reflection component is represented as

$dL_r = \frac{c\, a_f\, F(\theta_i, \eta)\, G(\theta_i, \theta_r, \phi_r)}{4\cos\theta_r}\, L_i \, d\omega_i \exp\left(-\frac{\alpha^2}{2\sigma_\alpha^2}\right)$. (EQ19)

As stated above, the Fresnel coefficient $F(\theta_i, \eta)$ and the geometrical attenuation factor $G(\theta_i, \theta_r, \phi_r)$ depend on the illumination and viewing geometry. To simplify the Torrance-Sparrow model used in our analysis, we have made two assumptions with respect to the Fresnel reflectance coefficient and the geometrical attenuation factor. For both metals and non-metals, the Fresnel reflectance coefficient is nearly constant until the local angle of incidence $\theta_i$ approaches 90°. Also, for most dielectric and metal objects, the coefficient is uniform over the visible wavelengths. Therefore, we assume that the Fresnel reflectance coefficient $F$ is constant with respect to $\theta_i$ and $\theta_r$. Additionally, it is observed that the geometrical attenuation factor $G$ equals unity for angles of incidence not near the grazing angle. Based on this observation, we also assume that the geometrical attenuation factor $G$ is equal to 1. Finally, the surface radiance of the specular reflection component in our experiments is represented as:

$dL_r = \frac{c\, a_f\, F}{4\cos\theta_r}\, L_i \, d\omega_i \exp\left(-\frac{\alpha^2}{2\sigma_\alpha^2}\right)$. (EQ20)

This reflection model for the specular lobe component is combined with the Lambertian model (EQ12) to produce

$dL_r = \left( k_D(\lambda) \cos\theta_i + \frac{k_S}{\cos\theta_r} \exp\left(-\frac{\alpha^2}{2\sigma_\alpha^2}\right) \right) L_i \, d\omega_i$ (EQ21)

where $k_D(\lambda)$ represents the ratio of the radiance to the irradiance of the diffuse reflection, and $k_S = \frac{c\, a_f\, F}{4}$. That expression is integrated in the case of a collimated light source to produce

$L_r = \int_{\omega_i} dL_r = k_D(\lambda)\, s(\lambda) \cos\theta_i + \frac{k_S\, s(\lambda)}{\cos\theta_r} \exp\left(-\frac{\alpha^2}{2\sigma_\alpha^2}\right)$ (EQ22)

where $s(\lambda)$ is the surface irradiance on a plane perpendicular to the light source direction.

[Figure 17: Geometry for the Torrance-Sparrow reflection model [85]: incident beam at $\theta_i$, surface normal $n$, facet normal at angle $\alpha$, reflected beam at $(\theta_r, \phi_r)$, solid angles $d\omega_i$, $d\omega$, $d\omega_r$, and surface area $da_s$.]

3.1.3 Image Formation Model

If the object distance is much larger than the focal length and the diameter of the entrance pupil of the imaging system, it can be shown that the image irradiance $E_p$ is proportional to the scene radiance $L_r$. The image irradiance is given as

$E_p = L_r\, \frac{\pi}{4} \left(\frac{d}{f}\right)^2 \cos^4\gamma$ (EQ23)

where $d$ is the diameter of the lens, $f$ is the focal length of the lens, and $\gamma$ is the angle between the optical axis and the line of sight [29]. In our experiments, changes of those three parameters $d$, $f$, and $\gamma$ are assumed to be relatively small. Therefore, (EQ23) can be simply given as

$E_p = g L_r$ (EQ24)

where $g = \frac{\pi}{4} \left(\frac{d}{f}\right)^2 \cos^4\gamma$. Combining (EQ22) and (EQ24), we have

$E_p = g\, k_D(\lambda)\, s(\lambda) \cos\theta_i + \frac{g\, k_S\, s(\lambda)}{\cos\theta_r} \exp\left(-\frac{\alpha^2}{2\sigma_\alpha^2}\right)$. (EQ25)

Now let $\tau_m(\lambda)$, $m = R, G, B$, be the spectral responsivity of the color camera in the red, green, and blue bands. Then, the output from the color camera in each band can be expressed as

$I_m = \int_\lambda \tau_m(\lambda)\, E_p(\lambda)\, d\lambda$. (EQ26)

This equation can be simplified as:

$I_m = K_{D,m} \cos\theta_i + \frac{K_{S,m}}{\cos\theta_r} \exp\left(-\frac{\alpha^2}{2\sigma_\alpha^2}\right)$ (EQ27)

where

$K_{D,m} = g \int_\lambda \tau_m(\lambda)\, k_D(\lambda)\, s(\lambda)\, d\lambda, \qquad K_{S,m} = g\, k_S \int_\lambda \tau_m(\lambda)\, s(\lambda)\, d\lambda$. (EQ28)

This simplified Torrance-Sparrow model is used as the reflection model in our analysis. In our analysis, only reflection bounced once from the light source is considered. Therefore, the reflection model is valid only for convex objects, and it cannot represent reflection which bounces more than once (i.e., interreflection) on concave object surfaces. We, however, empirically found that interreflection did not significantly affect our analysis.
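A direct evaluation of the simplified model (EQ27) for one color band might look like the following sketch. The parameter names, the handling of angles in degrees, and the example values are assumptions made for illustration; they are not taken from the thesis implementation.

```python
import numpy as np

def reflection_model(theta_i_deg, theta_r_deg, alpha_deg, K_D, K_S, sigma_alpha_deg):
    """Evaluate (EQ27) for one color band m:
        I_m = K_D * cos(theta_i) + K_S / cos(theta_r) * exp(-alpha^2 / (2 sigma_alpha^2))
    Angles are given in degrees and converted to radians; K_D, K_S, and
    sigma_alpha are illustrative, assumed parameter values."""
    theta_i = np.deg2rad(theta_i_deg)
    theta_r = np.deg2rad(theta_r_deg)
    alpha = np.deg2rad(alpha_deg)
    sigma_alpha = np.deg2rad(sigma_alpha_deg)
    diffuse = K_D * np.cos(theta_i)
    specular = (K_S / np.cos(theta_r)) * np.exp(-alpha**2 / (2.0 * sigma_alpha**2))
    return diffuse + specular

# Example: intensity at a pixel whose normal is 20 degrees from the viewing direction.
print(reflection_model(theta_i_deg=30.0, theta_r_deg=20.0, alpha_deg=5.0,
                       K_D=120.0, K_S=80.0, sigma_alpha_deg=3.0))
```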

3.2 Decomposition of Reflection Components

In this section, we introduce a new algorithm for separating the diffuse and specular reflection components. Using red, green, and blue filters, the coefficients $K_D$ and $K_S$ in (EQ27) become two linearly independent vectors, $K_D$ and $K_S$, unless the colors of the two reflection components are accidentally the same:

$K_D = \begin{bmatrix} K_{D,R} \\ K_{D,G} \\ K_{D,B} \end{bmatrix} = \begin{bmatrix} g \int_\lambda \tau_R(\lambda)\, k_D(\lambda)\, s(\lambda)\, d\lambda \\ g \int_\lambda \tau_G(\lambda)\, k_D(\lambda)\, s(\lambda)\, d\lambda \\ g \int_\lambda \tau_B(\lambda)\, k_D(\lambda)\, s(\lambda)\, d\lambda \end{bmatrix}$ (EQ29)

$K_S = \begin{bmatrix} K_{S,R} \\ K_{S,G} \\ K_{S,B} \end{bmatrix} = \begin{bmatrix} g\, k_S \int_\lambda \tau_R(\lambda)\, s(\lambda)\, d\lambda \\ g\, k_S \int_\lambda \tau_G(\lambda)\, s(\lambda)\, d\lambda \\ g\, k_S \int_\lambda \tau_B(\lambda)\, s(\lambda)\, d\lambda \end{bmatrix}$ (EQ30)

These two vectors represent the colors of the diffuse and specular reflection components in the dichromatic reflection model [73].

First, the pixel intensities in the R, G, and B channels with $m$ different light source directions are measured at one pixel. It is important to note that all intensities are measured at the same pixel. A typical example of the intensity values is shown in Figure 18.

[Figure 18: Measurement at one pixel (synthesized data); red, green, and blue intensities plotted against Theta [deg].]

The three sequences of intensity values are stored in the columns of an $m \times 3$ matrix $M$, which we call the measurement matrix. Considering the reflectance model (EQ27) and the two color vectors in (EQ29) and (EQ30), the intensity values in the R, G, and B channels can be represented as:

$M = \begin{bmatrix} M_R & M_G & M_B \end{bmatrix} = \begin{bmatrix} \cos\theta_{i1} & \exp\left(-\frac{\alpha_1^2}{2\sigma_\alpha^2}\right)/\cos\theta_r \\ \cos\theta_{i2} & \exp\left(-\frac{\alpha_2^2}{2\sigma_\alpha^2}\right)/\cos\theta_r \\ \vdots & \vdots \\ \cos\theta_{im} & \exp\left(-\frac{\alpha_m^2}{2\sigma_\alpha^2}\right)/\cos\theta_r \end{bmatrix} \begin{bmatrix} K_D^T \\ K_S^T \end{bmatrix} = \begin{bmatrix} G_D & G_S \end{bmatrix} \begin{bmatrix} K_D^T \\ K_S^T \end{bmatrix} = G K$ (EQ31)

where the two vectors $G_D$ and $G_S$ represent the intensity values of the diffuse and specular reflection components with respect to the light source direction $\theta_i$. Vector $K_D$ represents the diffuse reflection color vector, and vector $K_S$ represents the specular reflection color vector. We call the two matrices $G$ and $K$ the geometry matrix and the color matrix, respectively.

The color vectors and the $\theta_i$ axis span two planes in the goniochromatic space. We call the plane spanned by the color vector $K_D$ and the $\theta_i$ axis the diffuse reflection plane, and the plane spanned by the color vector $K_S$ and the $\theta_i$ axis the specular reflection plane.

[Figure 19: Diffuse and specular reflection planes (synthesized data), showing $K_D$, $K_S$, and the loci of the diffuse and specular reflection components.]

In the case of a conductive material, such as metal, the diffuse reflection component is zero, and (EQ31) becomes

$M = \begin{bmatrix} M_R & M_G & M_B \end{bmatrix} = \begin{bmatrix} \exp\left(-\frac{\alpha_1^2}{2\sigma_\alpha^2}\right)/\cos\theta_r \\ \exp\left(-\frac{\alpha_2^2}{2\sigma_\alpha^2}\right)/\cos\theta_r \\ \vdots \\ \exp\left(-\frac{\alpha_m^2}{2\sigma_\alpha^2}\right)/\cos\theta_r \end{bmatrix} \begin{bmatrix} K_{S,R} & K_{S,G} & K_{S,B} \end{bmatrix} = G_S K_S^T$ (EQ32)

because there exists only the specular reflection component.

Suppose we have an estimate of the color matrix $K$. Then, the two reflection components represented by the geometry matrix $G$ are obtained by projecting the observed reflection stored in $M$ onto the two color vectors $K_D$ and $K_S$:

$G = M K^+$ (EQ33)

where $K^+$ is the $3 \times 2$ pseudoinverse matrix of the color matrix $K$.

This derivation is based on the assumption that the color matrix $K$ is known. In Section 3.3 and Section 3.4, we describe how to estimate the specular reflection color vector and the diffuse reflection color vector from the input color image sequence.

3.3 Estimation of the Specular Reflection Color

It can be seen from (EQ30) that the specular reflection color vector is the same as the light source color vector. Thus, we can estimate the illumination color and use it as the specular reflection color. Several algorithms have been developed by other researchers for estimating illuminant color from a single color image. In the following sections, we first review these estimation algorithms, and then explain our method for estimating the specular color vector from a sequence of color images, which was modified from the previously developed methods.

3.3.1 Previously Developed Methods

The following three sections describe previously developed methods for estimating illuminant color from a single color image.

3.3.1.1 Lee's Method

According to the dichromatic reflection model [73], the color of reflection from a dielectric object is a linear combination of the diffuse reflection component and the specular reflection component. The color of the specular reflection component is equal to the illuminant color. Lee [41] proposed, based on this observation, that the illuminant color can be estimated from shading on multiple objects with different body colors. In the x-y chromaticity diagram, the observed color of a dielectric object lies on a segment whose endpoints represent the colors of the diffuse and specular reflection components.

By representing the color of each object as a segment in the chromaticity diagram, the illuminant color can then be determined from the intersection of the multiple segments attributed to multiple objects of different body colors (Figure 20). Unfortunately, this method does not work if each object in the color image has non-uniform reflectance. For instance, if an object has a textured surface, then the points formed from the object surface scatter arbitrarily in the chromaticity diagram and therefore do not form a line segment. This is a rather severe limitation, since few objects we see have uniform reflectance without surface texture.

[Figure 20: x-y chromaticity diagram showing the ideal loci of chromaticities corresponding to colors from five surfaces of different colors (redrawn from [41]).]

3.3.1.2 Tominaga and Wandell's Method

Tominaga and Wandell [84] indicated that the spectral power distributions of all possible observed colors of a dielectric object with a highlight exist on the plane spanned by the spectral power distributions of the diffuse reflection component and the specular reflection component. They called this plane the color signal plane. Each object color forms its own color signal plane. The spectral power distribution of the specular reflection component, which is the same as the spectral power distribution of the illuminant, can be obtained by taking the intersection of the color signal planes.

The singular value decomposition technique was used to determine the intersection of the color signal planes. Fundamentally, their method is equivalent to Lee's method. Therefore, Tominaga and Wandell's method has the same limitation that we described for Lee's method in Section 3.3.1.1: the estimation can be applied only to a limited class of objects of uniform reflectance without surface texture.

[Figure 21: Estimation of illuminant color as an intersection of color signal planes $Q_1$, $Q_2$, $Q_3$ (in the case of tristimulus vectors).]

3.3.1.3 Klinker, Shafer, and Kanade's Method

As described in Chapter 2, a technique for separating the specular reflection component from the diffuse reflection component in one color image was developed by Klinker, Shafer, and Kanade [39]. The algorithm is based on the dichromatic reflection model and the prediction that the color pixels corresponding to a single dielectric material will form a T-shaped cluster in the RGB color space. The directions of the two sub-clusters of the T-shaped cluster are estimated geometrically in the RGB color space. Those two sub-cluster directions correspond to the diffuse color vector and the specular color vector. Subsequently, those two color vectors are used to separate the two reflection components from the T-shaped cluster.

Once again, this estimation method is bound by the same limitation as the two methods described above. In addition, because the specular color vector is estimated geometrically in the RGB color space, this method seems to perform less reliably than the other two methods.

[Figure 22: T-shaped color histogram in the RGB color space, showing the illuminant color vector and the diffuse color vector.]

3.3.2 Our Method for Estimating an Illuminant Color

In our experiment, the specular color vector, i.e., the row $K_S^T$ of the color matrix $K$, is estimated using a method similar to Lee's method for illuminant color estimation. First, several pixels of different colors in the image are manually selected. The observed reflection color at those selected pixels is a linear combination of the diffuse reflection component and the specular reflection component. By plotting the observed reflection color of each pixel in the x-y chromaticity diagram over the image sequence, we obtain several line segments in the x-y chromaticity diagram. The illuminant color can then be determined by the intersection of those line segments in the diagram.

This technique is not limited to the case of objects of uniform reflectance without surface texture. That is because we use the observed color change at each surface point, rather than the color change distributed over an object surface of uniform reflectance. Therefore, our estimation technique can be used for objects with non-uniform reflectance. However, our technique requires that there be multiple objects of different colors in the image. In other words, if the image contains objects of only one color, the light source color cannot be estimated. In that case, the illumination color must be obtained by measuring the color vector of the light source as a part of system calibration.
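A minimal sketch of this estimation step is given below, assuming that each selected pixel contributes a track of x-y chromaticities over the image sequence. Fitting each track with a line by PCA and intersecting the lines in the least-squares sense are illustrative choices; the thesis does not prescribe this particular numerical procedure.

```python
import numpy as np

def fit_line(points):
    """Fit a 2D line to chromaticity points by PCA; return (point_on_line, unit_direction)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def intersect_lines(lines):
    """Least-squares intersection of several 2D lines given as (point, direction):
    solve sum_k (I - d_k d_k^T)(x - p_k) = 0 for x."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in lines:
        P = np.eye(2) - np.outer(d, d)   # projector onto the line's normal space
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

def estimate_illuminant_chromaticity(pixel_tracks):
    """pixel_tracks: list of (m x 2) arrays of x-y chromaticities, one per selected pixel."""
    lines = [fit_line(track) for track in pixel_tracks]
    return intersect_lines(lines)
```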

3.4 Estimation of the Diffuse Reflection Color

By using the method described in Section 3.3.2, we can estimate the specular reflection color. This has to be done only once because the specular reflection color is determined by the illuminant color and does not depend on objects in the scene. However, the other row $K_D^T$ of the color matrix cannot be obtained in the same manner because it depends on the material of the object. To estimate the diffuse reflection color, we propose another estimation method based on the following observation.

The specular reflection component represented in the reflection model (EQ27) attenuates quickly as the angle $\alpha$ increases, due to the exponential function. Therefore, if two vectors $w_i^T = (I_{Ri}, I_{Gi}, I_{Bi})$, $i = 1, 2$, are sampled for sufficiently different $\alpha$, at least one of these vectors is equal to the color vector of the diffuse reflection component $K_D^T$. This vector has no specular reflection component.

It is guaranteed that both vectors $w_i$ exist in the row space of the color matrix $K$ spanned by the basis $K_D^T$ and $K_S^T$. Therefore, the desired color vector of the diffuse reflection component $K_D^T$ is the vector $w_i$ which subtends the largest angle with respect to the vector $K_S^T$ (Figure 23). The angle between the two color vectors is simply calculated as:

$\beta_i = \arccos\left( \frac{K_S^T w_i}{\left\| K_S \right\| \left\| w_i \right\|} \right)$ (EQ34)

Once we obtain the color matrix $K$, the geometry matrix $G$ can be calculated from (EQ33) (Figure 24). After the matrix $G$ has been obtained, the loci of the diffuse reflection component and the specular reflection component in the goniochromatic space can be extracted as shown in (EQ35) and (EQ36):

$M_{\mathrm{diffuse}} = G_D K_D^T$ (EQ35)

$M_{\mathrm{specular}} = G_S K_S^T$ (EQ36)
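The selection rule of (EQ34) translates directly into code. The following sketch assumes the measurement matrix M is an m x 3 NumPy array and simply skips rows where the pixel was dark or unobserved; these conventions are illustrative, not part of the original implementation.

```python
import numpy as np

def estimate_diffuse_color(M, K_S):
    """Pick the observed color that subtends the largest angle with the specular
    color vector K_S (EQ34); that color is taken as the diffuse color K_D.
    M is the m x 3 measurement matrix; near-zero rows are skipped."""
    K_S = np.asarray(K_S, dtype=float)
    best_angle, K_D = -1.0, None
    for w in np.asarray(M, dtype=float):
        norm = np.linalg.norm(w)
        if norm < 1e-6:
            continue                      # unobserved / dark sample
        cos_b = np.dot(K_S, w) / (np.linalg.norm(K_S) * norm)
        beta = np.arccos(np.clip(cos_b, -1.0, 1.0))
        if beta > best_angle:
            best_angle, K_D = beta, w
    return K_D
```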

[Figure 23: Estimation of the color vector $K_D$: the sampled color vectors $w_1$ and $w_2$ in the row space of $K$, the angles $\beta_1$ and $\beta_2$ subtended with $K_S$, and the corresponding red, green, and blue intensity curves against Theta [deg].]

[Figure 24: Geometry matrix $G$ (synthesized data): its two columns $G_D = (G_{D1}, G_{D2}, \ldots, G_{Dm})^T$ and $G_S = (G_{S1}, G_{S2}, \ldots, G_{Sm})^T$ plotted against Theta [deg].]
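Putting the pieces of Section 3.2 and Section 3.4 together, the separation at one pixel reduces to a single pseudoinverse, as in the sketch below. The NumPy representation of M (m x 3) and K (2 x 3, rows K_D and K_S) is an assumption made for this illustration, not the thesis code.

```python
import numpy as np

def separate_reflection_components(M, K):
    """Separate the two reflection components at one pixel.
    M : m x 3 measurement matrix (R, G, B columns over the image sequence)
    K : 2 x 3 color matrix whose rows are the diffuse color K_D and the
        specular color K_S (both assumed known; see Sections 3.3 and 3.4).
    Returns the geometry matrix G (EQ33) and the separated loci
    M_diffuse (EQ35) and M_specular (EQ36)."""
    K = np.asarray(K, dtype=float)
    K_pinv = np.linalg.pinv(K)               # 3 x 2 pseudoinverse K+
    G = np.asarray(M, dtype=float) @ K_pinv  # m x 2 geometry matrix
    M_diffuse = np.outer(G[:, 0], K[0])      # G_D K_D^T
    M_specular = np.outer(G[:, 1], K[1])     # G_S K_S^T
    return G, M_diffuse, M_specular
```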

3.5 Experimental Results

In order to demonstrate the feasibility of the algorithm outlined in this chapter, the algorithm was applied to color images of several kinds of objects: a shiny dielectric object, a matte dielectric object, and a metal object. The surface normals and the reflectance parameters of the objects were obtained using the algorithm. The algorithm was applied to the metal object to demonstrate that it also works in the case where only the specular reflection component exists. The algorithm was subsequently applied to each pixel of an entire image to extract the needle map of the object in the image, and the object shape was recovered from the needle map. Finally, the method for reflectance component separation was applied to a more complex dielectric object with non-uniform reflectance properties; the proposed method for estimating the specular reflection color was also applied in this last example.

3.5.1 Experimental Setup

A SONY CCD video camera module model XC-57, to which three color filters (#25, #58, #47) are attached, is placed at the top of a spherical light diffuser. A point light source attached to a PUMA 560 manipulator is moved around the diffuser on its equatorial plane. The whole system is controlled by a SUN SPARC workstation. The geometry of the experimental setup is shown in Figure 25. In our experiment, a lamp shade, whose diameter is R = 20 inches, is used as the spherical light diffuser [49]. The maximum dispersion angle ε of the extended light source is determined by the fixed diameter R and the distance H from the point light source to the surface of the diffuser (Figure 26). The object is placed inside the spherical diffuser.

It is important to note that the use of the light diffuser for generating an extended light source is not essential for the algorithm to separate the two reflection components. It is used only for avoiding camera saturation when input images are taken. With the light diffuser, highlights observed on objects become less bright and are distributed over larger areas of the objects' surfaces. The algorithm introduced in this chapter can be applied to images taken without a light diffuser when the objects are not very shiny.

[Figure 25: Geometry of the experimental setup: color camera, light source at angle θ, surface normal at θ_r, incidence angle θ_i, light intensity distribution, object, and diffuser.]

[Figure 26: Geometry of the extended light source (camera, point light source, object, diffuser; parameters H, R, φ, ε).]

As shown in [49], the distribution of the extended light source (Figure 26) is given by

$L(\phi) = \frac{C\,[(R+H)\cos\phi - R]}{\left[(R + H - R\cos\phi)^2 + (R\sin\phi)^2\right]^{3/2}}$ (EQ37)

This distribution is limited to the interval $-\epsilon < \phi < \epsilon$, where $\epsilon = \arccos(R/(R+H))$. The distribution has a rather complex formula and is somewhat difficult to use analytically. Fortunately, it has a profile very similar to a Gaussian distribution function [49]. Therefore, we can approximate this distribution by a Gaussian distribution function. The standard deviation of the Gaussian distribution function can be computed numerically by using (EQ37). We denote the standard deviation of the extended light source's distribution as $\sigma_e$. Finally, the reflection model (EQ27) for this experimental setup is given as

$I_m = K_{D,m} \cos(\theta - \theta_r) + \frac{K_{S,m}}{\cos\theta_r} \exp\left(-\frac{(\theta - 2\theta_r)^2}{4(2\sigma_\alpha^2 + \sigma_e^2)}\right)$ (EQ38)

where $\theta$ represents the angle between the viewing direction and the center of the extended light source (Figure 25). The light source direction $\theta$ is controlled by the robotic arm.

3.5.2 Estimation of Surface Normal and Reflectance Parameters

After the geometry matrix $G$ has been recovered, the two curves which represent the diffuse and the specular reflection components in (EQ38) are fitted to the separated diffuse and specular reflection components, respectively:

$A_1 \cos(\theta - A_2) + A_3$ (EQ39)

$B_1 \exp\left(-\frac{(\theta - B_2)^2}{B_3}\right)$ (EQ40)

$B_2/2$ and $A_2$ estimate the direction of the surface normal $\theta_r$, and $B_1$ and $A_1$ are the parameters of the specular and diffuse reflection components, respectively.
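The curve fitting of (EQ39) and (EQ40) can be carried out with any nonlinear least-squares routine. The sketch below uses scipy.optimize.curve_fit with rough initial guesses; both the initial guesses and the use of SciPy are assumptions made for illustration and are not taken from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def diffuse_model(theta, A1, A2, A3):
    # (EQ39): A1 * cos(theta - A2) + A3, angles in degrees
    return A1 * np.cos(np.deg2rad(theta - A2)) + A3

def specular_model(theta, B1, B2, B3):
    # (EQ40): B1 * exp(-(theta - B2)^2 / B3), angles in degrees
    return B1 * np.exp(-(theta - B2) ** 2 / B3)

def fit_components(theta, G_D, G_S):
    """Fit the separated diffuse (G_D) and specular (G_S) sequences; all three
    inputs are 1-D NumPy arrays over the light source direction theta.
    Initial guesses are rough heuristics; B2/2 and A2 estimate the surface
    normal direction theta_r."""
    pd, _ = curve_fit(diffuse_model, theta, G_D,
                      p0=[G_D.max(), theta[np.argmax(G_D)], 0.0])
    ps, _ = curve_fit(specular_model, theta, G_S,
                      p0=[G_S.max(), theta[np.argmax(G_S)], 10.0])
    return pd, ps
```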

3.5.3 Shiny Dielectric Object

A green plastic cylinder with a relatively smooth surface was used in this experiment (Figure 27). In this example, the light source color was measured directly because the object had uniform color, and therefore our estimation method described in Section 3.3.2 could not be applied. Figure 28 shows the measured intensities plotted in the goniochromatic space with the blue axis omitted. Note that the intensity values around the maximum intensity contain both the diffuse reflection component and the specular reflection component. On the other hand, the intensity values for θ < 60° and θ > 140° contain only the diffuse reflection component. The curve for θ < 60° and θ > 140° lies inside the diffuse reflection plane in the goniochromatic space, whereas the curve for 60° < θ < 140° does not lie inside the diffuse reflection plane. This is because the intensity values for 60° < θ < 140° are linear combinations of the diffuse color vector $K_D$ and the specular color vector $K_S$.

[Figure 27: Green shiny plastic cylinder.]

[Figure 28: Measured intensities in the goniochromatic space (GREEN, BLUE, THETA [deg]).]

The algorithm for separating the two reflection components, described in Section 3.2, was applied to the measured data. The red, green, and blue intensities are initially stored in the measurement matrix $M$ as its columns (EQ31). Then, the measurement matrix $M$ is decomposed into the geometry matrix $G$ and the color matrix $K$. The columns of the resulting geometry matrix $G$ are plotted in Figure 29. Figure 30 shows the result of the decomposition of the reflection in the goniochromatic space. It is evident from this figure that the measured intensity in the goniochromatic space (Figure 28) has been successfully decomposed into the diffuse and specular reflection components using our algorithm.

The diffuse reflection plane and the specular reflection plane are shown in Figure 31. This diagram is the result of viewing Figure 30 along the θ axis. Note that the slope of the specular reflection plane is 45° in the diagram. This is because the specular reflection color vector (EQ30) has been normalized so that its red, green, and blue components are equal (EQ41). The diffuse reflection plane is shifted toward the green axis because the color of the observed object is green in this experiment.

The result of the fitting procedure described in Section 3.5.2 is shown in Figure 32. From the result, we obtain the direction of the surface normal and the reflectance parameters as follows: the surface normal ($B_2/2$) is 52.09°, along with the parameter of the specular reflection component ($B_1$) and the parameter of the diffuse reflection component ($A_1$). The notations $A_1$, $B_1$, and $B_2$ follow (EQ39) and (EQ40).

[Figure 29: The two decomposed reflection components (specular and diffuse) plotted as intensity against Theta [deg].]

[Figure 30: Loci of the two reflection components (specular and diffuse) in the goniochromatic space.]

[Figure 31: Diffuse and specular reflection planes (Green-Red view).]

[Figure 32: Result of fitting: intensity against θ for the specular and diffuse reflection components.]

3.5.4 Matte Dielectric Object

A green plastic cylinder with a relatively rough surface was used in this experiment (Figure 33). The measured intensities are plotted in the goniochromatic space (Figure 34) in the same manner as explained in Section 3.2. Note that the width of the specular reflection component is larger than that in the previous experiment. This is mainly attributed to the different surface roughness of the two plastic cylinders.

[Figure 33: Green matte plastic cylinder.]

[Figure 34: Measured intensities in the goniochromatic space (GREEN, RED, THETA [deg]).]

The intensity is decomposed into the two reflection components according to the algorithm described in Section 3.2. The result of the decomposition is shown in Figure 35. The directions of the specular reflection plane and the diffuse reflection plane are the same as those in the case of the previous shiny green plastic cylinder. Figure 36 depicts the result of parameter estimation from the reflection components. The surface normal and the parameters of the two reflection components obtained are: the surface normal ($B_2/2$) is 49.61°, along with the parameter of the specular reflection component ($B_1$) and the parameter of the diffuse reflection component ($A_1$). Note that the estimated value of the parameter $B_3$, which is equal to $4(2\sigma_\alpha^2 + \sigma_e^2)$, is greater than that of the shiny plastic object (13.56). The difference is consistent with the fact that the matte object's surface roughness is greater than the shiny object's surface roughness.

[Figure 35: The two decomposed reflection components (diffuse and specular).]

[Figure 36: Result of fitting: intensity against θ for the diffuse and specular reflection components.]

3.5.5 Metal Object

The dichromatic reflection model [73] cannot be applied to non-dielectric objects such as metallic specular objects. As an example of such objects, an aluminum triangular prism was used in this experiment (Figure 37). This type of material has only the specular reflection component and no diffuse reflection component. The measured intensities shown in Figure 38 indicate that the reflection from the aluminum triangular prism possesses only the specular reflection component. This observation is justified by the result of the decomposition of the two reflection components (Figure 39): the diffuse reflection component is negligibly small compared to the specular reflection component.

[Figure 37: Aluminum triangular prism.]

[Figure 38: Loci of the intensity in the goniochromatic space (GREEN, RED, THETA [deg]).]

[Figure 39: The two decomposed reflection components (specular and diffuse).]

3.5.6 Shape Recovery

In the previous sections, the decomposition algorithm was applied to real color images in order to separate the two reflection components using the intensity change at a single pixel. In other words, the reflection components were separated locally. After this separation, the surface normal ($B_2/2$ or $A_2$) and the reflectance parameters ($B_1$ and $A_1$) at each pixel were obtained by nonlinear curve fitting of the two reflection component models ((EQ39), (EQ40)) to the decomposed reflection components. We repeated the same operation over all pixels in the image to obtain surface normals over the entire image. Then, the needle map and the depth map of the object in the image were obtained from those recovered surface normals.

We used a purple plastic cylinder as the observed object in this experiment. The image is shown in Figure 40. Results of the curve fitting of the diffuse reflection component were used to obtain surface normal directions. The resulting needle map is shown in Figure 41. The depth map of the purple plastic cylinder is obtained from the needle map by the relaxation method proposed by Horn and Brooks [30]. Figure 42 depicts the resulting depth map.
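A simple way to reproduce this needle-map integration step is to solve the discrete Poisson equation relating depth to the gradient field. The sketch below uses plain Jacobi relaxation with periodic borders as a stand-in for the Horn and Brooks relaxation cited above; the exact scheme, iteration count, and boundary handling here are assumptions for illustration, not the method used in the thesis.

```python
import numpy as np

def depth_from_gradients(p, q, iterations=2000):
    """Recover a depth map z from surface gradients p = dz/dx, q = dz/dy by
    Jacobi relaxation of the discrete Poisson equation laplacian(z) = px + qy.
    Periodic borders (np.roll) are used purely for brevity."""
    h, w = p.shape
    # divergence of the gradient field (central differences)
    px = (np.roll(p, -1, axis=1) - np.roll(p, 1, axis=1)) / 2.0
    qy = (np.roll(q, -1, axis=0) - np.roll(q, 1, axis=0)) / 2.0
    f = px + qy
    z = np.zeros((h, w))
    for _ in range(iterations):
        neighbors = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                     np.roll(z, 1, 1) + np.roll(z, -1, 1))
        z = (neighbors - f) / 4.0
    return z
```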

[Figure 40: Purple plastic cylinder.]

[Figure 41: Needle map.]

[Figure 42: Recovered object shape.]

3.5.7 Reflection Component Separation with Non-uniform Reflectance

In the previous examples, the proposed method was applied to relatively simple objects with uniform reflectance. In this example, our method was applied to a more complex object with non-uniform surface reflectance. Also, our method for estimating illuminant color, which was described in Section 3.3.2, was applied in this example.

First, for estimating the illuminant color, three pixels of different colors were manually selected in one of the input color images (Figure 44). Then, the reflection colors of those three pixels through the input color image sequence were plotted in the x-y chromaticity diagram, as shown in Figure 43. The locus of the observed color sequence at each of those selected image pixels forms a line segment in the diagram. Finally, the illuminant color was estimated as (r, g, b) = (0.353, 0.334, 0.313) by computing the intersection of those three line segments.

[Figure 43: Estimation of illuminant color in the x-y chromaticity diagram. The loci of pixels 1, 2, and 3 (manually selected in the image in Figure 44) intersect at the estimated illumination color (r, g, b) = (0.353, 0.334, 0.313).]

By using the pixel-based separation algorithm, we can easily generate images of the two reflection components. The algorithm was applied to all pixels of the input images locally, and each separated reflection component was used to generate the diffuse reflection image and the specular reflection image. Figure 44 shows one frame from the input image sequence. All pixels in the image are decomposed into the two reflection components by using the separation algorithm described in Section 3.2. The resulting diffuse reflection image and specular reflection image are shown in Figure 45 and Figure 46, respectively.

Note that the input image is successfully decomposed into the images of the two reflection components, even though the input image contains a complex object with non-uniform reflectance. This is because the proposed algorithm is pixel-based and does not require global information. In this kind of situation, a traditional separation algorithm based on the RGB color histogram would easily fail because clusters in the RGB color space become crowded and obscure, so that clustering in the RGB space becomes impossible. In contrast, since our algorithm is pixel-based and applied to each pixel separately, the two reflection components can be successfully separated even in the face of inconspicuous specular reflection.
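Because the separation is purely per-pixel, it vectorizes over whole images. The following sketch applies the same pseudoinverse projection to every pixel of an image sequence at once; the array shapes and the use of numpy.einsum are illustrative assumptions, not the original implementation.

```python
import numpy as np

def separate_image_sequence(images, K):
    """Apply the pixel-wise separation of Section 3.2 to every pixel of an
    image sequence.  images: m x H x W x 3 array; K: 2 x 3 color matrix
    (rows K_D, K_S).  Returns diffuse and specular image sequences of the
    same shape as the input."""
    m, h, w, _ = images.shape
    K = np.asarray(K, dtype=float)
    K_pinv = np.linalg.pinv(K)                       # 3 x 2
    M = images.reshape(m, -1, 3).astype(float)       # per-pixel measurement matrices
    G = np.einsum('mpc,ck->mpk', M, K_pinv)          # per-pixel geometry coefficients
    diffuse = np.einsum('mp,c->mpc', G[..., 0], K[0]).reshape(m, h, w, 3)
    specular = np.einsum('mp,c->mpc', G[..., 1], K[1]).reshape(m, h, w, 3)
    return diffuse, specular
```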

[Figure 44: Multicolored object. The three pixels of different colors (pixel 1, pixel 2, pixel 3) are manually selected in the image. This figure is shown in color in the Color Figures chapter.]

[Figure 45: Diffuse reflection image. This figure is shown in color in the Color Figures chapter.]

[Figure 46: Specular reflection image. This figure is shown in color in the Color Figures chapter.]

3.6 Summary

We proposed goniochromatic space analysis as a new framework for color image analysis, in which the diffuse reflection component and the specular reflection component of the dichromatic reflection model span subspaces. We presented an algorithm to separate the two reflection components at each pixel from a sequence of color images and to obtain the surface normal and the parameters of the Torrance-Sparrow reflection model. The significance of our method lies in its use of local (i.e., pixel-based), not global, information about the intensity values in the images. This characteristic separates our algorithm from previously proposed algorithms for segmenting the diffuse reflection component and the specular reflection component in the RGB color space.

Our algorithm has been applied to objects of different materials to demonstrate its effectiveness. We have successfully separated the two reflection components in the temporal-color space. Using the separation result, we have obtained surface normals and parameters of the two reflection components for objects with 2D surface normals. In addition, we were able to reconstruct the shape of the objects. Also, our separation algorithm was successfully applied to a more complex object with non-uniform reflectance.


Chapter 4
Object Modeling from Range and Color Images: Object Models Without Texture

In Chapter 3, we introduced a method for estimating object shape and surface reflectance parameters from a color image sequence taken with a moving light source. Unfortunately, the proposed method is limited in several aspects. An object shape cannot be recovered accurately if the object has a surface with high curvature. That is because the method can recover only surface normals; it cannot obtain the 3D shape of the object surface directly. Also, the method can recover the object surface shape only partially: the part of the object surface that is not seen from the viewpoint cannot be recovered. Those limitations motivated us to further extend our method for creating a complete model of a complex object.

In this chapter, we extend our object modeling method by creating a complete object model from a sequence of range and color images which are taken by changing the object posture. First, we review the previously developed methods related to our new method and examine their limitations.

4.1 Background

Techniques to measure object surface shape and reflectance properties together by using both range images and gray-scale (or color) images have been studied by several researchers.

Ikeuchi and Sato originally developed a technique to measure object shapes and reflection function parameters from a range image and intensity image pair [35]. The Torrance-Sparrow reflection model is used, and the Fresnel reflectance parameter in the specular component is assumed to be constant by restricting surface orientations to be less than 60° from the viewing direction. The following four parameters are determined: (i) the Lambertian strength coefficient, (ii) the incident orientation of the light source, (iii) the specular strength coefficient, and (iv) the roughness parameter of the specular reflection distribution. First, the surface shape is measured from the range image, and surface normals of the object surface are computed from the measured shape. Then, surface points which exhibit only the diffuse reflection component are identified by using a brightness criterion. Pixel intensities of the identified surface points with only the diffuse reflection component are used to estimate the Lambertian strength and the incident direction of the point light source by least-squares fitting. Criteria are also developed to identify pixels that are in shadow, or that exhibit the specular reflection component or interreflection. A least-squares procedure is applied to fit the specular strength and surface roughness parameters from the identified pixels with the specular reflection component. The main drawback of the technique is that it assumes uniform reflectance properties over the object surface. Additionally, only partial object shape is recovered because only one range image is used in the technique.

Baribeau, Rioux, and Godin [4] measured three reflectance parameters of the Torrance-Sparrow reflection model, which they call the diffuse reflectance of the body material, the Fresnel reflectance of the air-media interface, and the slope surface roughness of the interface. In their method, a polychromatic laser range sensor is used to produce a pair of range and color images. Unlike the technique developed by Ikeuchi and Sato, this method can capture more subtle reflectance properties of the object surface because it is capable of estimating the Fresnel reflectance parameter. However, the Baribeau et al. method still required uniform reflectance over each object surface, and only partial object shape was recovered. Also, their method was intended to be used for understanding images, e.g., region segmentation. Therefore, important features for object modeling were missing from their method. In particular, their method did not

guarantee that reflectance parameters are estimated at all points on the object surface.

Kay and Caelli [36] introduced another method that uses a range image and 4 or 8 intensity images taken under different illumination conditions. By increasing the number of intensity images, they estimated parameters of the Torrance-Sparrow reflection model locally for each image pixel. They classified the object surface into three groups: non-highlight regions, specular highlight regions, and rank-deficient regions. Based on this classification, a different solution method was applied to each region. Unlike the two techniques described above, Kay and Caelli's method can handle object surfaces with varying reflectance due to the use of multiple intensity images with different light source directions. However, it is reported that parameter estimation can be unstable, especially when the specular reflection component is not observed strongly. This prevents their method from being applied to a wide range of real objects.

In this thesis, we propose a new method to recover complete object surface shape and reflectance parameters from a sequence of range images and color images taken by changing the object's posture. Unlike previously introduced methods, our method is capable of estimating surface reflectance parameters of objects with non-uniform reflectance. Also, our method guarantees that all surface points are assigned appropriate reflectance parameters. This is desirable especially for the purpose of object modeling for computer graphics. In this chapter, we consider objects whose surfaces are uniformly painted in multiple colors; the surfaces of such objects can be segmented into regions of uniform color. Many real objects fall into this category. However, there are still other objects whose surfaces have detailed texture. Modeling of such objects will be discussed in the next chapter.

In our method, a sequence of range images is used to recover the entire shape of an object as a triangular mesh model. Then, a sequence of color images is mapped onto the recovered shape. As a result, we can determine an observed color change through the image sequence for all triangular patches of the object surface shape model. The use of three-dimensional shape information is important here because, without the object shape, the correspondence problem, i.e., determining where a surface point in one image appears in another image, cannot be solved easily. This problem did not arise in the method described in Chapter 3; it was solved automatically because the camera and the object were fixed, and only the light source was moved to take the color image sequence.

Subsequently, by using the algorithm introduced in Chapter 3, the observed color sequence is separated into the diffuse reflection component and the specular reflection component. Then, parameters of the Torrance-Sparrow reflection model are estimated reliably for the diffuse and specular reflection components.

Unlike the diffuse reflection component, special care needs to be taken in estimating the specular parameters. The specular reflection component can be observed only in a limited range of viewing directions; therefore, it is observed on only a small subset of the object surface. As a result, we cannot estimate the specular reflection parameters where the specular reflection component is not observed. Our approach avoids this problem by using a region segmentation of the object surface. Based on the assumption that each region of uniform diffuse color has uniform specular reflectance, we estimate the specular parameters for each region, i.e., not for each surface point. Then, the estimated specular parameters are assigned to all surface points within the region. Finally, color images of the object are synthesized using the constructed model to demonstrate the feasibility of the proposed approach.

The chapter is organized as follows. First, we explain our imaging system in Section 4.2. In Section 4.3, we describe the reconstruction of object shape from multiple range images. In Section 4.4, we explain the projection of color images onto the recovered object shape. In Section 4.5, we describe the estimation of reflectance parameters in our method. The estimated object shape and reflectance parameters are used to synthesize object images under arbitrary illumination/viewing conditions; several examples of synthesized object images are shown in Section 4.6. Finally, we summarize this chapter in Section 4.7.
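To make the region-based specular assignment described above concrete, the sketch below groups patches by a crude quantization of their diffuse color, pools the per-patch specular estimates inside each group, and writes the pooled estimate back to every patch. The quantization scheme, array shapes, and fallback values are assumptions for illustration only; the segmentation and estimation procedure actually used in this work is not the one shown here.

```python
import numpy as np

def assign_region_specular_parameters(patch_diffuse_colors, patch_specular_fits, n_bins=8):
    """Crude stand-in for region-based specular parameter assignment.
    patch_diffuse_colors : N x 3 array of diffuse colors in [0, 255]
    patch_specular_fits  : N x 2 array of per-patch (K_S, sigma_alpha) estimates,
                           NaN where the specular lobe was not observed.
    Returns an N x 2 array with one pooled estimate per color group."""
    keys = np.clip((patch_diffuse_colors / 256.0 * n_bins).astype(int), 0, n_bins - 1)
    labels = keys[:, 0] * n_bins * n_bins + keys[:, 1] * n_bins + keys[:, 2]
    out = np.zeros_like(patch_specular_fits, dtype=float)
    for lab in np.unique(labels):
        idx = np.where(labels == lab)[0]
        valid = patch_specular_fits[idx]
        valid = valid[~np.isnan(valid).any(axis=1)]
        # fall back to an arbitrary placeholder when no patch in the group
        # showed a usable specular lobe
        region_estimate = valid.mean(axis=0) if len(valid) else np.array([0.0, 1.0])
        out[idx] = region_estimate
    return out
```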

4.2 Image Acquisition System

The experimental setup for the image acquisition system used in our experiments is illustrated in Figure 47. The object whose shape and reflectance information is to be recovered is mounted on the end of a PUMA 560 manipulator. The object used in our experiment is a plastic toy dinosaur whose height is about 170 mm.

A range image is obtained using a light-stripe range finder with a liquid crystal shutter and a color CCD video camera [62]. The light-stripe range finder projects a set of stripes onto the scene. Each stripe has a distinct pattern, e.g., a binary code. The CCD video camera is used to acquire images of the scene as the pattern is projected. Each pattern corresponds to a different plane of the projected light. With knowledge of the relative positions of the camera and projector, the image location and projected light plane determine the (X, Y, Z) position of the point in the scene with respect to the CCD camera. The same color camera is used for digitizing color images. Therefore, pixels of the range images and the color images directly correspond. The range finder is calibrated to produce a 3 × 4 projection matrix $\Pi$ which represents the projection transformation between the world coordinate system and the image coordinate system. The location of the PUMA 560 manipulator with respect to the world coordinate system is also given by calibration. Therefore, the object location is given as a 4 × 4 transformation matrix $T$ for each digitized image.

A single xenon lamp, whose diameter is approximately 10 mm, is used as a point light source. The light source is located near the camera, and the light source direction is considered to be the same as the viewing direction. This light source location is chosen to avoid the problem of self-shadowing in our images. The gain and offset of the outputs from the video camera are adjusted so that the light source color becomes (R, G, B) = (1, 1, 1). Therefore, the specular reflection color is assumed to be known in this experiment. The camera and light source locations are fixed in our experiment. The approximate distance between the object and the camera is 2 m. Using the image acquisition system, a sequence of range and color images of the object is obtained as the object is rotated in fixed angle steps.

[Figure 47: Image acquisition system: light-stripe range finder, light source, object, color camera, and PUMA arm.]

4.3 Shape Reconstruction from Multiple Range Images

For generating a three-dimensional object shape from multiple range images, we developed a new method to integrate multiple range images using a volumetric representation [91]. Since shape reconstruction from multiple range images is an important step in our technique, we now review shape reconstruction techniques previously developed by other researchers and examine their characteristics. Then, we describe our shape reconstruction method.

The reconstruction of three-dimensional object shapes from multiple range images has been studied intensively in the past. However, all of the conventional shape reconstruction techniques we review here pay very little, if any, attention to object surface reflectance properties. Those techniques attempt to recover only object surface shapes; they do not recover surface reflectance properties, which are as important as the shapes for object modeling.

Turk and Levoy [86] developed a technique to combine multiple range images one by one, using a two-step strategy: registration and integration. Their technique uses a modified version of the iterated closest-point algorithm (ICP algorithm), which was originally developed by Besl and McKay [7]. After the registration procedure, two surface meshes composed of small triangular patches are integrated to produce one combined surface mesh. Turk and Levoy's method performs poorly if the surfaces are slightly misaligned or if there is significant noise in the data. Typically, the resulting surfaces have noticeable seams along the edges where they were pieced together. Turk and Levoy's method was motivated by another method developed by Soucy and Laurendeau [77], which uses a computationally intensive strategy for aligning all surface patches together.

Higuchi, Hebert, and Ikeuchi [24] developed a method for merging multiple range views of a free-form surface obtained from arbitrary viewing directions, with no initial estimate of the relative transformation among those viewing directions. The method is based on the Spherical Attribute Image (SAI) representation of free-form surfaces, which was originally introduced by Delingette, Hebert, and Ikeuchi in [13]. Although the Higuchi et al. technique does not require relative transformations between observed surface patches, it can handle only objects which are topologically equivalent to a sphere, i.e., objects with no holes. Also, it is difficult to produce object shapes of high resolution because of the SAI representation; the computational cost of the algorithm becomes unacceptably high when a SAI of high frequency is used. The Higuchi et al. method was further extended by Shum et al. [74] to improve its robustness by applying a principal component analysis with missing data technique for

simultaneously estimating the relative transformations. However, their algorithm still suffers from the same limitations.

Hoppe, DeRose, and Duchamp [26] introduced an algorithm to construct three-dimensional surface models from a cloud of points without spatial connectivity. The algorithm differs from others in that it does not require surface meshes as input. The algorithm computes a signed distance function from the points of the range images rather than from triangulated surfaces generated from the range images. The signed distance is computed at each node of a three-dimensional array, i.e., at each voxel, around the target object to produce a volumetric data set. Then, an iso-surface of zero distance is extracted by using the marching cubes algorithm [46]. Although their reliance on points rather than on triangulated surface patches makes their algorithm applicable to more general cases, using points rather than surfaces suffers from some practical problems. The main problem is that a surface is necessary to measure the signed distance correctly. To compensate for this problem, their algorithm locally infers a plane at a surface point from the neighboring points in the input data. Unfortunately, due to this local estimation of planes, their algorithm is sensitive to outliers in the input 3D points; therefore, it is not suitable in cases where the range data are less accurate and contain a significant amount of noise.

Curless and Levoy [12] proposed another technique similar to that of Hoppe et al. Curless and Levoy's technique differs in that triangulated surface patches from the range images are used instead of 3D points. For each voxel of a volumetric data set, they take a weighted average of the signed distances from the voxel center to the range image points whose image rays intersect the voxel. This is done by following the ray from the camera center to each range image point and incrementing the sum of weighted signed distances. Then, like the method by Hoppe et al., the marching cubes algorithm is applied to the resulting volumetric data set to extract the object surface as an iso-surface of zero distance. Unfortunately, Curless and Levoy's algorithm is still sensitive to noisy and extraneous data, although it performs significantly better than the one by Hoppe et al. In Curless and Levoy's algorithm, the weighted signed distance is averaged to produce an estimate of the true distance to the object surface. This certainly reduces some of the noise, but it still cannot overcome the effects of large errors in the data.

After having studied and used those previously developed methods for multiple range image integration, we found that, unfortunately, none of them gives satisfactory integration results, especially when the input range images contain a significant amount of noise and when the input surface patches are slightly misaligned. That motivated us to develop another method for integrating multiple range images using a volumetric representation [91].

4.3.1 Our Method for Merging Multiple Range Images

Our method consists of the following four steps:

1. Surface acquisition from each range image
The range finder in our image acquisition system cannot measure the object surface itself; it can produce only images of 3D points on the object surface. Because of this limitation, we need to convert the measured 3D points into a triangular mesh which represents the object surface shape. This is done by connecting neighboring range image pixels, based on the assumption that those points are connected by a locally smooth surface. If two neighboring points are closer in 3D distance than some threshold, then we consider them to be connected on the object surface.

2. Alignment of all range images
All of the range images are measured in the coordinate system fixed with respect to the range finder system, and they are not aligned to each other initially. Therefore, after we obtain the triangular surface meshes from the range images, we need to transform all of the meshes into a unique object coordinate system. For aligning all of the range images, we use the transformation matrix $T$ which represents the object location for each range image (Section 4.2). Suppose we select one of the range images as a key range image whose coordinate system is used as the world coordinate system. We refer to the transformation matrix for the key range image as $T_{merge}$. Then, all other range images can be transformed into the key range image's coordinate system by transforming all 3D points $P = (X, Y, Z, 1)$ as $P' = T_{merge}\, T_f^{-1}\, P$, where $f = 1 \ldots n$ is the range image frame number.

3. Merging based on a volumetric representation
After all of the range images are converted into triangular patches and aligned to a unique coordinate system, we merge them using a volumetric representation. First, we consider imaginary 3D volume grids around the aligned triangular patches. (Each volume grid element is usually called a voxel in the computer graphics field.) Then, in each voxel, we store the value $f(x)$ of the signed distance from the center point of the voxel, $x$, to the closest point on the object surface. The sign indicates whether the point is outside, $f(x) > 0$, or inside, $f(x) < 0$, the object surface, while $f(x) = 0$ indicates that $x$ lies on the surface of the object. The signed distance can be computed reliably by using the consensus surface algorithm [91]. In the algorithm, a quorum of consensus of locally coherent observations of the object surface is used to compute the signed distance reliably.
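As a toy illustration of step 3, the sketch below fills a voxel grid with a naive signed distance: each voxel takes the distance to its nearest mesh vertex, signed by that vertex's outward normal. This is not the consensus surface algorithm of [91]; the brute-force nearest-vertex search and the sign convention implementation are assumptions made for illustration only.

```python
import numpy as np

def signed_distance_grid(vertices, normals, grid_points):
    """Naive signed distance field: for each voxel center, take the distance to
    the nearest mesh vertex and sign it with the vertex's outward normal
    (positive outside, negative inside).
    vertices, normals : N x 3 arrays; grid_points : M x 3 voxel centers."""
    f = np.empty(len(grid_points))
    for i, x in enumerate(grid_points):
        d = vertices - x
        dist = np.linalg.norm(d, axis=1)
        j = np.argmin(dist)
        sign = -1.0 if np.dot(normals[j], x - vertices[j]) < 0 else 1.0
        f[i] = sign * dist[j]
    return f
```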


[Figure 49: Input color images (6 out of 120 frames are shown: frames 0, 20, 40, 60, 80, and 100). This figure is shown in color in the Color Figures chapter.]

4.3.2 Shape Recovery

The consensus surface method described in Section 4.3.1 was used for merging eight triangular surface meshes created from the input range images. The recovered object shape is shown in Figure 50. The object shape consists of 9943 triangular patches. In the process of merging the surface meshes, the object shape was manually edited to remove noticeable defects such as holes and spikes. Manual editing could be eliminated if more range images were used.


We represent world coordinates and image coordinates using homogeneous coordinates. A point on the object surface with Euclidean coordinates (X, Y, Z) is expressed by a column vector P = [X, Y, Z, 1]^T. An image pixel location (x, y) is represented by p = [x, y, 1]^T. As described in Section 4.2, the camera projection transformation is represented by a 3 x 4 matrix Π, and the object location is given by a 4 x 4 object transformation matrix T. We denote the object transformation matrix for the input color image frame f by T_f (f = 1, ..., n). Thus, using the projection matrix Π and the transformation matrix T_merge for the key range image in Section 4.3.1, the projection of a 3D point on the object surface into the color image frame f is given as

p_f = Π T_f T_merge^{-1} P   (f = 1, ..., n)   (EQ42)

where the last component of p_f has to be normalized to give the projected image location (x, y).

The observed color of the 3D point in the color image frame f is given as the (R, G, B) color intensity at the pixel location (x, y). If the 3D point is not visible in the color image (i.e., the point is facing away from the camera, or it is occluded), the observed color for the 3D point is set to (R, G, B) = (0, 0, 0). For determining the visibility efficiently, we used the Z-buffer algorithm ([15], for instance) in our analysis.

Ideally, all triangular patches are small enough to have uniform color on the image plane. However, the projection of a triangular patch on the image plane often corresponds to multiple image pixels of different colors. Therefore, we average the color intensities of all corresponding pixels and assign that average to the triangular patch. This approximation is acceptable as long as the object surface does not have fine texture compared with the resolution of the triangular patches. (In the next chapter, we will discuss another approach for the case where this assumption does not hold true.)

By applying the mapping procedure for all object orientations, we finally get a collection of triangular patches, each of which has a sequence of observed colors with respect to the object orientation. The result of the color image mapping is illustrated in Figure 51, which shows six frames as examples.
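The projection and color sampling of (EQ42) can be written compactly, as in the sketch below. This is a minimal illustration only: the matrices Pi, T_f, and T_merge and the image array are assumed inputs, the patch is sampled at its vertices and centroid instead of being rasterized, and the Z-buffer visibility test described above is omitted.

import numpy as np

def project_points(Pi, T_f, T_merge, points_h):
    """Project homogeneous 3D points (N, 4) into color frame f using (EQ42)."""
    M = Pi @ T_f @ np.linalg.inv(T_merge)         # combined 3x4 projection
    p = (M @ points_h.T).T                         # (N, 3) homogeneous image points
    return p[:, :2] / p[:, 2:3]                    # normalize by the last component

def sample_patch_color(image, Pi, T_f, T_merge, patch_vertices_h):
    """Average (R, G, B) of the pixels hit by one triangular patch (no Z-buffer test)."""
    pts = np.vstack([patch_vertices_h, patch_vertices_h.mean(axis=0)])
    xy = np.round(project_points(Pi, T_f, T_merge, pts)).astype(int)
    h, w = image.shape[:2]
    xy = xy[(xy[:, 0] >= 0) & (xy[:, 0] < w) & (xy[:, 1] >= 0) & (xy[:, 1] < h)]
    if len(xy) == 0:
        return np.zeros(3)                         # treated as "not visible" in this sketch
    return image[xy[:, 1], xy[:, 0]].reshape(-1, 3).mean(axis=0)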

Figure 51 View mapping result (6 out of 120 input color images are shown: frames 0, 20, 40, 60, 80, and 100). Object surface regions which are not observed in each color image are shown as white areas.

Based on the image mapping onto the recovered object shape, a sequence of observed colors is determined at each triangular patch of the object shape. The observed color is not defined if the triangular patch is not visible from the camera; in this case, the observed color is set to zero. Figure 52 illustrates a typical observed color sequence at a triangular patch with strong specularity. The specular reflection component can be observed strongly near image frame 67. When the specular reflection component exists, the output color intensity is a linear combination of the diffuse reflection component and the specular reflection component. The two reflection components are separated by using the algorithm introduced in Chapter 3. (The separation result will be shown in the next section.) The intensities are set to zero before image frame 39 and after image frame 92 because the triangular patch is not visible from the camera due to occlusion.

Another example, with weak specularity, is shown in Figure 53. In this example, the observed specular reflection is relatively small compared with the diffuse reflection component. As a result, estimating reflectance parameters for both the diffuse and specular reflection components together could be sensitive to various disturbances such as image noise. That is why the reflection component separation is introduced prior to parameter estimation in our analysis. By separating the two reflection components based on color, reflectance parameters can be estimated separately in a robust manner.

Figure 52 Intensity change with strong specularity (red, green, and blue intensities plotted against image frame number)

Figure 53 Intensity change with little specularity (red, green, and blue intensities plotted against image frame number)

4.5 Reflectance Parameter Estimation

4.5.1 Reflection Model

As Figure 47 illustrates, the illumination and viewing directions are fixed and are identical. The Torrance-Sparrow reflection model (EQ27) is modified for this particular experimental setup as

I_m = K_{D,m} cos θ + K_{S,m} (1 / cos θ) exp( -θ^2 / (2 σ_α^2) ),   m = R, G, B   (EQ43)

where θ is the angle between the surface normal and the viewing direction (or, equivalently, the light source direction) in Figure 54, K_{D,m} and K_{S,m} are constants for the diffuse and specular reflection components, and σ_α is the standard deviation of the facet slope α of the Torrance-Sparrow reflection model. The direction of the light source and the camera with respect to the surface normal is referred to as the sensor direction θ. As in our analysis in Chapter 3, only reflection bounced once from the light source is considered here. Therefore, the reflection model is valid only for convex objects, and it cannot represent reflection which bounces more than once (i.e., interreflection) on concave object surfaces.

Figure 54 Geometry for the simplified Torrance-Sparrow model (the incident and reflected beams coincide, so θ_i = θ_r = α)
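For reference, the simplified model (EQ43) can be written directly as a function of θ. The sketch below is only an illustration; the parameter values in the example call are made up for the example and are not values estimated in the thesis.

import numpy as np

def simplified_torrance_sparrow(theta, K_D, K_S, sigma_alpha):
    """Predicted intensity for one color band under (EQ43).

    theta       : angle (radians) between surface normal and viewing/light direction
    K_D, K_S    : diffuse and specular reflection constants
    sigma_alpha : standard deviation of the facet slope
    """
    diffuse = K_D * np.cos(theta)
    specular = K_S / np.cos(theta) * np.exp(-theta**2 / (2.0 * sigma_alpha**2))
    return diffuse + specular

# Example: intensity profile as a patch rotates away from the camera.
thetas = np.radians(np.linspace(1.0, 80.0, 80))
intensity = simplified_torrance_sparrow(thetas, K_D=0.6, K_S=2.0, sigma_alpha=0.1)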

4.5.2 Reflection Component Separation

The algorithm to separate the diffuse and specular reflection components was applied to the observed color sequence at each triangular patch. The red, green, and blue intensities of the observed color sequence are stored in the matrix M as its columns (EQ31). Then, the matrix G is computed from the matrix M and the matrix K, which is estimated as described in Section 3.3 and Section 3.4. Finally, the diffuse and specular reflection components are given as shown in (EQ35) and (EQ36). This reflection component separation is repeated for all triangular patches of the object.

Some of the separation results are shown in Figure 55 and Figure 56. Figure 55 shows the separated reflection components with strong specularity. (The measured color sequence is shown in Figure 52 in the previous section.) Another example of the reflection component separation is given in Figure 56. In that case, the specular reflection component is relatively small compared to the diffuse reflection component. That example indicates that the separation algorithm can be applied robustly even if the specularity is not observed strongly. After the reflection component separation, reflectance parameters can be estimated separately.

The separated reflection components at all triangular patches of a particular image frame can be used to generate the diffuse reflection image and the specular reflection image. The resulting diffuse and specular reflection images are shown in Figure 57 and Figure 58; image frames 0 and 60 were used to generate Figure 57 and Figure 58, respectively.

Figure 55 Separated reflection components with strong specularity (diffuse and specular intensities in the red, green, and blue bands plotted against image frame number)

Figure 56 Separated reflection components with little specularity (diffuse and specular intensities in the red, green, and blue bands plotted against image frame number)

Figure 57 Diffuse image and specular image: example 1

Figure 58 Diffuse image and specular image: example 2

4.5.3 Reflectance Parameter Estimation for Segmented Regions

In this section, we discuss how to estimate the parameters of the reflectance model for each triangular patch by using the separated reflection components. After the separation algorithm is applied, we obtain a sequence of diffuse reflection intensities and a sequence of specular reflection intensities for each triangular patch. This information is sufficient to estimate the reflectance parameters of the reflection model (EQ43) separately for the two reflection components.

As (EQ43) shows, the reflectance model is a function of the angle θ between the surface normal and the viewing direction. Therefore, for estimating the reflectance parameters K_{D,m}, K_{S,m}, and σ_α, the angle θ has to be computed as the object posture changes. Since the projection transformation matrix is already given and the object orientation is known in the world coordinate system, it is straightforward to compute a surface normal vector and a viewing direction vector (or an illumination vector) at the center of each triangular patch. Thus, the angle θ between the surface normal and the viewing direction vector can be computed.

After the angle θ is computed, the reflectance parameters for the diffuse reflection component (K_{D,m}) and the specular reflection component (K_{S,m} and σ_α) are estimated separately by the Levenberg-Marquardt method [60]. In our experiment, the camera output is calibrated so that the specular reflection color has the same value in the three color channels. Therefore, only one color band is used to estimate K_S in our experiment.
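The fit itself can be done with any Levenberg-Marquardt style least-squares solver. The sketch below uses scipy.optimize.least_squares on the separated sequences of one patch; the function names, the initial guesses, and the choice of scipy are illustrative assumptions, not the thesis implementation.

import numpy as np
from scipy.optimize import least_squares

def fit_diffuse(theta, diffuse_intensity):
    """K_D for one color band from the separated diffuse component: I = K_D cos(theta)."""
    # Linear in K_D, so a closed-form least-squares estimate suffices.
    c = np.cos(theta)
    return (c @ diffuse_intensity) / (c @ c)

def fit_specular(theta, specular_intensity, k_s0=1.0, sigma0=0.1):
    """K_S and sigma_alpha from the separated specular component of (EQ43)."""
    def residuals(params):
        k_s, sigma = params
        model = k_s / np.cos(theta) * np.exp(-theta**2 / (2.0 * sigma**2))
        return model - specular_intensity
    result = least_squares(residuals, x0=[k_s0, sigma0], method="lm")
    return result.x  # (K_S, sigma_alpha)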

By repeating the estimation procedure for all triangular patches, we can estimate the diffuse reflection component parameters for all triangular patches, provided those patches are illuminated in one or more frames of the image sequence. On the other hand, the specular reflection component can be observed only from a limited range of viewing directions. Due to this fact, the specular reflection component is observed only in a small subset of all triangular patches, and we cannot estimate the specular reflection component parameters for those patches in which it is not observed. Even if the specular reflection component is observed, the parameter estimation can become unreliable if the specular reflection is not sufficiently strong. To avoid this problem, we could increase the number of sampled object orientations and take more color images. However, that still cannot guarantee that all triangular patches show the specular reflection component, and taking more color images may not be practical since more sampled images require more measurement time and data processing time.

For the above reasons, we decided to assign the specular reflection component parameters based on region segmentation. In our experiments, it is assumed that the object surface can be segmented into a finite number of regions which have uniform diffuse color, and that all triangular patches within each region have the same specular reflection component parameters. The result of the region segmentation is shown in Figure 59 (segmented regions are represented as grey levels). By using the segmentation result, the specular reflection parameters of each region can be estimated from triangular patches with strong specularity.

For estimating the specular reflection component parameters, several triangular patches (e.g., ten patches in our experiment) with the largest specular reflection component are selected for each of the segmented regions. The triangular patches with strong specularity can be easily selected after the reflection component separation. Then, the specular reflection component parameters of the reflection model (EQ43) are estimated for each of the ten selected triangular patches. Finally, the average of the estimated parameters of the selected triangular patches is used as the specular reflection component parameters of the segmented region.

In our experiments, the four largest segmented regions were used for specular reflection parameter estimation, and the rest of the regions were not used. These unused regions were found to be located near or at the boundaries of the large regions. Hence, a surface normal of a triangular patch there does not necessarily represent the surface normal of the object surface at that location, which causes the estimation of the specular reflection parameters to be inaccurate. In addition, it is more likely that the specular reflection component is not seen in those small regions.

Figure 59 Result of the region segmentation (regions 0 through 4, represented as grey levels)

Table 1 Estimated parameters of the specular reflection component (columns: region #, K_S, σ_α)

4.6 Synthesized Images with Realistic Reflection

By using the recovered object shape and reflection model parameters, images of the object under arbitrary illumination conditions can be generated. In this section, some examples of such images are shown to demonstrate the ability of the proposed method to produce realistic images. Point light sources located far from the object are used for generating the images.

For comparing synthesized images with real images of the object, the object model was rendered with illumination and viewing directions similar to those in our experimental setup. The illumination and viewing directions for input color image frame 0 were used to create the image shown in Figure 60. The input color image is shown in Figure 49. It is important to see that region 2 shows less specularity than region 0 and region 1. (See Figure 59 for region numbers.) In addition, the specular reflection is widely distributed in region 2 because region 2 has a large reflectance parameter σ_α. Another example is shown in Figure 61; the object model is rendered under illumination and viewing conditions similar to those of input color image frame 60. Figure 62 shows the object illuminated by two light sources; the arrows in the image represent the illumination directions.

Figure 60 Synthesized image 1 (shown in color in the Color Figures chapter)

Figure 61 Synthesized image 2 (shown in color in the Color Figures chapter)

Figure 62 Synthesized image 3 (shown in color in the Color Figures chapter). The arrows in the image represent the illumination directions.

4.7 Summary

We developed a new method for estimating object surface shape and reflectance parameters from a sequence of range and color images of the object. A sequence of range and color images is taken by changing the object posture which, in our image acquisition system, is controlled by a robotic arm. First, the object shape is recovered from the range image sequence as a collection of triangular patches. Then, the sequence of input color images is mapped onto the recovered object shape to determine an observed color sequence at each triangular patch individually. The observed color sequence is separated into the diffuse and specular reflection components. Finally, parameters of the Torrance-Sparrow reflection model are estimated separately at each of the triangular patches. By using the recovered object shape and the estimated reflectance parameters associated with each triangular patch, highly realistic images of the real object can be synthesized under arbitrary illumination conditions. The proposed approach has been applied to real range and color images of a plastic object, and its effectiveness has been successfully

demonstrated by constructing synthesized images of the object under different illumination conditions.

Chapter 5
Object Modeling from Range and Color Images: Object Models With Texture

In Chapter 4, we introduced a method for creating a three-dimensional object model from a sequence of range and color images of the object. The object model is created as a triangular surface mesh, each of whose triangles is assigned parameters of the Torrance-Sparrow reflection model. The method is based on the assumption that the object surface can be segmented into a finite number of regions, each of which has uniform diffuse color and the same specular reflectance; all triangles within each region are then assigned the same specular reflectance parameters. However, this assumption does not hold for objects with detailed diffuse texture or varying specular reflectance. Therefore, in this chapter, we extend our object modeling method to objects with texture. In particular, our modeling method is further extended in the following two respects.

The first point is dense estimation of surface normals on the object surface. In Chapter 4, surface normals were computed as polygonal normals from a reconstructed triangular surface mesh model. Polygonal normals approximate real surface normals fairly well when object surfaces are relatively smooth and do not have high curvature points. However, the

accuracy of polygonal normals becomes poor when the object surface has high curvature points and the resolution of the triangular surface mesh model is low, i.e., when a small number of triangles is used to represent the object shape. In this chapter, rather than using polygonal normals, we compute surface normals densely over the object surface by using the lowest-level input, i.e., the 3D points from the range images. We consider regular grid points within each triangle (Figure 63). Then, a surface normal is estimated at each of the grid points. With dense surface normal information, we can now analyze subtle highlights falling onto a single triangle of the object shape model.

The second point is dense estimation of reflectance parameters within each triangle of the object shape model. In the previous chapter, we assumed uniform diffuse reflectance and specular reflectance within each triangle, based on the belief that the resolution of the object shape model is high enough. However, this strategy does not work well when the object has dense texture on its surface. One way to solve this problem is to increase the resolution of the object shape model. In other words, we could increase the number of triangles used to represent the object shape until each triangle approximately corresponds to a surface region of uniform reflectance. However, this is not a practical solution since it quickly increases the required storage for the object model. Instead, as with the dense estimation of surface normals, we estimate reflectance parameters at regular grid points within each triangle. The densely estimated reflectance parameters are then used together with the estimated surface normals for synthesizing object images with realistic shadings, including subtle highlights on object surfaces.

Figure 63 Object modeling with reflectance parameter mapping: a surface normal and reflectance parameters (n_x, n_y, n_z, K_{D,R}, K_{D,G}, K_{D,B}, K_S, σ_α) are stored at regular grid points within each triangle.

This chapter is organized as follows: Section 5.1 describes estimation of dense surface normals from measured 3D points. Section 5.2 describes our method for estimating the diffuse reflection parameters. Section 5.3 explains estimation of the specular reflection parameters in our object modeling method. Section 5.4 shows experimental results. Finally, Section 5.5 presents a summary of this chapter.

5.1 Dense Surface Normal Estimation

The marching cube algorithm used in our shape reconstruction algorithm generally produces a large number of triangles whose sizes vary significantly. Therefore, it is desirable to simplify the reconstructed object surface shape by reducing the number of triangles. We used the mesh simplification method developed by Hoppe et al. [27] for this purpose. One disadvantage of using the simplified object model is that a surface normal computed from the simplified model does not approximate the real surface normal accurately, even though the object shape is preserved reasonably well. As a result, small highlights observed within each triangle cannot be analyzed correctly, and therefore they cannot be reproduced in synthesized images. For this reason, we compute surface normals at regular grid points within each triangle, using the 3D points measured in the input range images.

The surface normal at a grid point P_g is determined from a least-squares best-fitting plane to all neighboring 3D points whose distances to the point P_g are shorter than some threshold (Figure 64). The surface normal is computed as an eigenvector of the covariance matrix of the neighboring 3D points, specifically, the eigenvector associated with the eigenvalue of smallest magnitude. The covariance matrix of n 3D points P_i = [X_i, Y_i, Z_i]^T, with centroid P̄ = [X̄, Ȳ, Z̄]^T, is defined as

C = Σ_{i=1}^{n} (P_i − P̄)(P_i − P̄)^T   (EQ44)

The surface normals computed at regular grid points within each triangle are then stored and later used for mapping dense surface normals onto the triangular mesh of the object shape. The mapped surface normals are used both for reflectance parameter estimation and for rendering color images of the object.
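A direct way to implement this estimate is shown below: gather the 3D points within a radius of the grid point, form the covariance matrix of (EQ44), and take the eigenvector of the smallest eigenvalue. This is a minimal sketch assuming numpy and scipy; the radius value and the KD-tree neighbor search are illustrative choices, and the consistent orientation of the normal (e.g., toward the sensor) is omitted. In practice the KD-tree would be built once for all grid points rather than per query.

import numpy as np
from scipy.spatial import cKDTree

def estimate_normal(grid_point, points, radius=2.0):
    """Surface normal at grid_point from neighboring measured 3D points, following (EQ44).

    grid_point : (3,) query location on the mesh
    points     : (N, 3) 3D points from the input range images
    radius     : neighborhood radius in the units of the range data
    """
    tree = cKDTree(points)
    idx = tree.query_ball_point(grid_point, r=radius)
    neighbors = points[idx]
    if len(neighbors) < 3:
        return None                       # not enough support to fit a plane
    centered = neighbors - neighbors.mean(axis=0)
    C = centered.T @ centered             # 3x3 covariance matrix (EQ44)
    eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues returned in ascending order
    return eigvecs[:, 0]                  # eigenvector of the smallest eigenvalue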

Figure 64 Surface normal estimation from the input 3D points (the normal is taken along the principal axis of the neighboring points)

5.3 Specular Reflection Parameter Estimation

Like the diffuse reflection parameters, the specular reflection parameters (K_{S,R}, K_{S,G}, K_{S,B}, and σ) are computed using the angle θ_r and the angle α in the reflection model (EQ27). However, there is a significant difference between estimation of the diffuse and specular reflection parameters. The diffuse reflection parameters can be estimated as long as the object surface is illuminated and viewed from the camera. On the other hand, the specular reflection component is usually observed only from a limited range of viewing directions. Therefore, the specular reflection component can be observed only at a small portion of the object surface in the input color image sequence; that is, we cannot estimate the specular reflection parameters for the rest of the object surface. Even if the specular reflection component is observed, the parameter estimation can become unreliable if the specular reflection component is not sufficiently strong, or if the separation of the two reflection components is not performed well.

For the above reasons, unlike the diffuse reflection parameter estimation, we estimate the specular parameters only at points on the object surface where the parameters can be computed reliably. Then we interpolate the estimated specular reflection parameters over the object surface to assign parameters to the rest of the object surface. For the specular reflection parameters to be estimated reliably, the following three conditions are necessary at a point on the object surface:

1. The two reflection components are separated reliably. Because the diffuse and specular reflection components are separated using the difference of the colors of the two components (Section 3.2), those color vectors should differ as much as possible. This can be examined by the saturation of the diffuse color (Figure 65). Since the light source color is generally close to white (saturation = 0), if the diffuse color has a high saturation value, the diffuse and specular reflection colors are different.

2. The magnitude of the specular reflection component is as large as possible.

3. The magnitude of the diffuse reflection component is as large as possible. Although this condition might seem to be unnecessary, we empirically found that the specular reflection parameters can be obtained more reliably if this condition is satisfied.

Figure 65 Diffuse saturation shown in the RGB color space (the diffuse and specular color vectors and the saturation of the diffuse color are illustrated)

Taking these three conditions into account, we select a fixed number of vertices with the largest values of

v = (diffuse saturation) x (specular intensity / max specular intensity) x (diffuse intensity / max diffuse intensity)

as suitable surface points for estimating the specular reflection parameters. After the specular reflection parameters K_S and σ are estimated at the selected vertices, the estimated values are linearly interpolated, based on distance on the object surface, so that the specular reflection parameters are obtained at regular grid points within each triangle of the object surface mesh. The obtained specular reflection parameters are then stored as two specular reflection parameter images (a K_S image and a σ image), in the same manner as the surface normal image.

5.4 Experimental Results

We applied our object modeling method to real range and color images taken by using the image acquisition system described in Section 4.2. The target object used in this experiment is a ceramic mug whose height is approximately 100mm. Using the image acquisition system, a sequence of range and color images of the object was obtained as the object was rotated at a fixed angle step. Twelve range images and 120 color images were used in this experiment. Figure 66 shows four frames of the input range images as triangular surface meshes. Figure 67 shows the sequence of input color images of the mug; six frames out of 120 are shown as examples. The volumetric method for merging multiple range images described in Section 4.3 was

applied to the input range image sequence to recover the object shape as a triangular mesh model. Figure 68 shows the result of the object shape reconstruction. In this example, 3782 triangles were generated by the marching cube algorithm.

Figure 66 Input range data (4 out of 12 frames are shown: frames 0, 3, 6, and 9)

Figure 67 Input color images (6 out of 120 frames are shown: frames 0, 20, 40, 60, 80, and 100)

Subsequently, the recovered object shape was simplified by using Hoppe's mesh simplification method [27]. In our experiment, the total number of triangles was reduced from 3782 to 488 (Figure 69). In the triangular mesh model initially generated by the marching cube algorithm, the size of each triangle varies significantly, which is typical for outputs of the marching cube algorithm. After the triangular mesh model was simplified, we can see that the sizes of the triangles in the simplified model are more regular, which is desirable for object modeling.

By using the simplified object shape model and all 3D points measured in the input range image sequence, we computed dense surface normals over the object surface. In this example, surface normals were estimated at grid points within each triangle. The estimated surface normals were then stored as a three-band surface normal image. The estimated surface normals are compared with polygonal normals computed from the simplified triangular mesh model in Figure 70. In the figure, surface normals at the center of each triangle are displayed; surface normals estimated from 3D points are shown in green, and polygonal normals are shown in red. As we can see in the figure, there is a significant difference between the estimated surface normals and the polygonal normals. Thus, reflectance parameter estimation would fail if polygonal normals were used instead of surface normals estimated from the 3D points.

Figure 68 Recovered object shape

Figure 69 Simplified shape model (the object shape model was simplified from 3782 to 488 triangles)

Figure 70 Estimated surface normals and polygonal normals (estimated surface normals are shown in green, polygonal normals in red; this figure is shown in color in the Color Figures chapter)

The sequence of input color images was mapped onto the simplified triangular mesh model of the object shape as described in Section 4.4. Figure 71 shows the result of the mapping; six out of 120 frames are shown in the figure. Then, as explained in Section 4.5.2, the diffuse reflection component and the specular reflection component were separated from the observed color sequence at each point on the object surface. The separation result was used for estimating the parameters of the Torrance-Sparrow reflection model given in (EQ27).

The diffuse reflection parameters were estimated at regular grid points within each triangle just as the surface normals were estimated. The resolution of the regular grid points was in our experiment, while the resolution was for the surface normal estimation. The higher resolution was necessary to capture details of the diffuse texture on the object surface. The resolution can be determined by the average number of pixels which fall onto one triangle in the color images. Resolutions higher than that average do not capture any more information than is present in the input color images. On the other hand, if the resolution is too low, object images synthesized by using the estimated diffuse reflectance parameters become blurred because high-frequency components in the input color images are lost. Figure 72 shows the result of the diffuse reflection parameter estimation, where the estimated parameters are visualized as surface texture on the mug.

Figure 71 Color image mapping result (6 out of 120 color images are shown: frames 0, 20, 40, 60, 80, and 100)

Figure 72 Estimated diffuse reflection parameters

For estimating the specular reflection parameters reliably, we selected suitable surface points on the object surface as described in Section 5.3. Figure 73 illustrates the 100 vertices selected for specular parameter estimation out of a total of 266 vertices. In our experiment, we used the vertices of the triangular mesh model as candidates for parameter estimation. However, the use of the triangular vertices as initial candidates is not essential to our method; without any changes, we could also use other points on the object surface as candidates. We found, though, that in most cases using only triangular vertices was enough to find suitable points for specular parameter estimation.

The specular parameters were then estimated at those selected vertices. Subsequently, the estimated values were linearly interpolated, based on distance on the object surface, so that the specular reflection parameters were obtained at grid points within each triangle. The resulting specular parameters were then stored as two specular reflection parameter images (a K_S image and a σ_α image), just as the estimated surface normals were stored in the surface normal image. For the specular parameter estimation, we used a lower resolution ( ) than for the diffuse reflection parameter estimation. This is because specular reflectance usually does not change as rapidly as the diffuse reflectance, i.e., the diffuse texture on the object surface. Therefore, this resolution was enough to capture the specular reflectance of the mug.

Figure 73 Selected vertices for specular parameter estimation (100 out of 266 vertices were selected)

Figure 74 Interpolated K_S and σ (the K_S image and the σ image)

Finally, using the reconstructed object shape, the surface normal image (Section 5.1), the diffuse reflection parameter image (Section 5.2), the specular reflection parameter image (Section 5.3), and the reflection model (EQ27), we synthesized color object images under arbitrary illumination/viewing conditions. Figure 75 shows synthesized images of the object with two point light sources. Note that the images reproduce highlights on the object surface naturally. Unlike with the object modeling method described in Chapter 4, diffuse texture on the object surface was satisfactorily reproduced in the synthesized images in spite of the reduced number of triangles in the object shape model.

For comparing synthesized images with the input color images of the object, the object model was rendered using the same illumination and viewing directions as some of the input color images. Figure 76 shows two frames of the input color image sequence as well as two synthesized images that were generated using the same illumination/viewing conditions as those used for the input color images. It can be seen that the synthesized images closely resemble the corresponding real images. In particular, highlights, which generally are a very important cue of surface material, appear naturally on the side and the handle of the mug in the synthesized images. However, we can see that the synthesized images are slightly more blurred than the original color images, e.g., at the eye of the painted fish in frame 50. That comes from slight

error in the measured object transformation matrix T (Section 4.2) due to imperfect calibration of the robotic arm. Because of the error in the object transformation matrix T, the projected input color images (Section 4.4) were not perfectly aligned on the object surface. As a result, the estimated diffuse reflection parameters were slightly blurred. This blurring effect could be avoided if, after a color image is projected onto the object surface, the color image were aligned with previously projected images by a local search on the object surface.5 However, we have not tested this idea in our implementation yet.

5.5 Summary

In this chapter, we extended our object modeling method to objects with detailed surface texture. In particular, to analyze and synthesize subtle highlights on the object surface, our object modeling method estimates surface normals densely over the object surface. The surface normals are computed directly from a cloud of 3D points measured in the input range images, rather than as polygonal normals, which gives a more accurate estimate of the surface normals. In addition, the parameters of the Torrance-Sparrow reflection model are also estimated densely over the object surface. As a result, fine diffuse texture and varying specular reflectance observed on the object surface can be captured and therefore reproduced in synthesized images. In particular, the specular reflection parameters were successfully obtained by identifying suitable surface points for estimation and by interpolating the estimated parameters over the object surface. Finally, highly realistic object images were synthesized using the recovered shape and reflectance information to demonstrate the feasibility of our method.

5. Personal communication with Richard Szeliski at Microsoft Corp. [81].

Figure 75 Synthesized object images (shown in color in the Color Figures chapter)

Figure 76 Comparison of input color images (frames 50 and 80) and synthesized images (shown in color in the Color Figures chapter)

Chapter 6
Reflectance Analysis under Solar Illumination

6.1 Background

Most algorithms for analyzing object shape and reflectance properties have been applied to intensity images taken in a laboratory, and reports of applications to real intensity images of outdoor scenes have been very limited. Intensity images synthesized or taken in a laboratory setup are well controlled and are less complex than those taken outside under sunlight. For instance, in an outdoor environment, there are multiple light sources of different colors and spatial distributions, namely the sunlight and the skylight. The sunlight can be regarded as a yellow point light source whose movement is restricted to the ecliptic.4 On the other hand, the skylight is a blue extended light source which appears to be almost uniform over the entire hemisphere. Moreover, there may be clouds in the sky, which makes modeling the skylight significantly more difficult.

Due to the sun's restricted movement, the problem of surface normal recovery becomes underconstrained under sunlight. For instance, if the photometric stereo method is applied to two intensity images taken outside at different times, two surface normals which are symmetric with respect to the ecliptic are obtained at each surface point. Those two surface normals cannot be distinguished locally because the two surface normal directions give us

4. Ecliptic: the great circle of the celestial sphere that is the apparent path of the sun among the stars, or of the earth as seen from the sun; the plane of the earth's orbit extended to meet the celestial sphere.

exactly the same brightness at the surface point. Another factor that makes reflectance analysis under solar illumination difficult is the multiple reflection components generated from the object surface. Reflection from object surfaces may have multiple reflection components such as the diffuse reflection component and the specular reflection component. In the previous chapters of this thesis, we used our algorithm to separate the two reflection components from an observed color sequence. In the case of solar illumination, we observe more than two reflection components on the object surface because both the sunlight and the skylight act as light sources. Therefore, additional care has to be taken to analyze color images taken in an outdoor environment.

In this chapter, we address the two issues involved in analyzing real outdoor intensity images taken under solar illumination: 1. the multiple reflection components including highlights, and 2. the unique solution for surface normals under sunlight. We analyze a sequence of color images of an object in an outdoor scene. Color images are taken at different times, e.g., every 15 minutes, on the same day. Then, for each of the two problems, we show a solution and demonstrate its feasibility by using real images.

This chapter is organized as follows. The reflectance model that we use for analyzing outdoor images under solar illumination is described in Section 6.2; the reflection model takes into account two light sources of different spectral and spatial distributions. Separation of the multiple reflection components under solar illumination is explained in Section 6.3 and Section 6.4. A method to obtain two sets of surface normals for the object surface, and to choose the correct set of surface normals, is discussed in Section 6.5. Experimental results from a laboratory setup and from the outdoor environment are shown in Section 6.6 and Section 6.7, respectively. A summary of this chapter is presented in Section 6.8.

6.2 Reflection Model Under Solar Illumination

In outdoor scenes, there are two main light sources of different spectral and spatial distributions: the sunlight and the skylight. The sunlight acts as a moving point light source with a finite size, while the skylight acts as an extended light source over the entire hemisphere. Both the sunlight and the skylight observed at the earth's surface are generated by a very complex mechanism [48]. Solar radiation striking the earth's surface from above the atmosphere is attenuated in passing through the air by two processes: absorption and scattering. Absorption removes light from the beam of light and converts it to heat. Absorption does

not occur uniformly across the spectrum, but only at certain discrete wavelength regions determined by the absorbing molecule's internal properties. Scattering, while not absorbing energy, redirects it out of the beam and away from its original direction; it takes place at all visible wavelengths. The probability that a single photon of sunlight will be scattered from its original direction by an air molecule is inversely proportional to the fourth power of the wavelength. The shorter the wavelength of the light is, the greater its chances are of being scattered. This means that, when we look in any part of the sky except directly toward the sun, we are more likely to see a blue photon of scattered sunlight than a red one; this causes the sky to appear blue. The result of this scattering process varies depending on the angular distance from the direct sunlight, which results in a significant change of the spectral distribution of the skylight over the sky (Figure 77). Also, the brightness of the sky is determined by the number of molecules in the line-of-sight: more air molecules mean a brighter sky. Therefore, the brightness of the skylight is not uniform over the sky, and the sky brightness increases to a maximum just above the horizon.

Another well-known behavior of sunlight is the color and brightness of the low sun. As the sun approaches the horizon, its color changes from white to bright yellow, orange, and even to red; the sun becomes dimmer and redder as it approaches the horizon. At the same time, the spectral distribution of the sunlight changes widely, depending on the sun's location in the sky (Figure 78). To make matters even more complicated, there are usually clouds in the sky, and the skylight then becomes a highly non-uniform extended light source. Therefore, it is very difficult to model both the sunlight and the skylight. Nevertheless, in order to analyze color images taken under solar illumination, it is necessary to have a reflection model and an illumination model which can represent the light reflected from the sunlight and the skylight on object surfaces.

Figure 77 Comparison of the spectra of sunlight and skylight [48]. The spectra were taken on the solar circle at angular distances from the sun of 10, 45, 90, and 135 degrees, respectively. All spectra have been scaled to have the same value at 500 nm.

Figure 78 Change of color of the sun with altitude [48]. These spectra show the sun as viewed through 1.0, 1.5, 2.0, and 4.0 air masses. With increasing air mass the sun becomes dimmer and redder.

In this thesis, as a first step, we decided to use a simpler illumination model to analyze color images taken under solar illumination. In our analysis, we model the skylight as an extended light source uniformly distributed over the sky. In addition, the sunlight is modeled as a moving point light source which has a different spectral distribution from the skylight. We consider that this rather simple illumination model approximates the real solar illumination reasonably well as long as the sun is not close to the horizon. Based on the simplified illumination model, the intensity of incident light from the sunlight and the skylight is represented as

L_i(θ_i, λ) = c_sun(λ) L_sun(θ_i) + c_sky(λ) L_sky   (EQ45)

where the angle θ_i represents the incident direction of the sun in the surface-normal-centered coordinate system in Figure 17, c(λ) is the spectral distribution of the incident light, and L(θ_i) is a geometrical term of the light incident onto the object surface. The subscripts sun and sky refer to the sunlight and the skylight, respectively. With this illumination model, the Torrance-Sparrow reflection model (EQ27) becomes

I_m = K^sun_{D,m} cos θ_i + K^sun_{S,m} (1 / cos θ_r) exp( -α^2 / (2 σ_α^2) ) + K^sky_m,   m = R, G, B   (EQ46)

Note that the diffuse and specular reflection components from the skylight become constant with respect to the direction of the sun and the viewing direction. The resulting reflection model is illustrated in Figure 79.

Figure 79 Three reflection components from solar illumination: the diffuse reflection component from the sunlight, the specular reflection component from the sunlight, and the diffuse plus specular reflection from the skylight.
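As a quick illustration of (EQ46), the function below evaluates the three terms for one color band. It is a small sketch only; the parameter values in the example call are made up for illustration and are not measurements from the thesis.

import numpy as np

def solar_reflection_model(theta_i, theta_r, alpha, K_D_sun, K_S_sun, sigma_alpha, K_sky):
    """Per-band intensity under the simplified solar illumination model (EQ46)."""
    diffuse_sun = K_D_sun * np.cos(theta_i)
    specular_sun = K_S_sun / np.cos(theta_r) * np.exp(-alpha**2 / (2.0 * sigma_alpha**2))
    return diffuse_sun + specular_sun + K_sky      # K_sky is the constant skylight term

# Example: one band, sun 30 degrees off the normal, viewing along the normal.
I = solar_reflection_model(theta_i=np.radians(30), theta_r=0.0,
                           alpha=np.radians(15), K_D_sun=0.7,
                           K_S_sun=1.5, sigma_alpha=0.1, K_sky=0.05)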

In our analysis, the reflectance model represented as (EQ46) is used both to remove the specular reflection component and for the shape recovery. By using the reflection model given as (EQ46), we analyze a sequence of color images of an object in an outdoor scene. In particular, we try to recover the object shape even if the object surface exhibits specularity. The color images are taken at different times (i.e., every 15 minutes) on the same day; therefore, the sun acts as a moving light source in the color image sequence. As shown in the reflection model, light reflected under solar illumination contains three reflection components: the diffuse reflection component from the sunlight, the specular reflection component from the sunlight, and the reflection component from the skylight. Therefore, we need to isolate those three reflection components so that object shapes can be recovered. First, the reflection component from the skylight is removed from the observed color images. Then, the diffuse and specular reflection components from the sunlight are separated by using our reflection component separation algorithm introduced in Chapter 3. Finally, we can recover the object shape from the resulting diffuse reflection component from the sunlight.

6.3 Removal of the Reflection Component from the Skylight

As stated in Section 6.2, the diffuse and specular reflection components from the skylight are constant with respect to the sun's direction θ_i and the viewing direction θ_r. Therefore, shadow regions from the sunlight should have uniform pixel intensities since they are illuminated only by the skylight. In other words, pixel intensities in those regions do not have the reflection components from the sunlight, but only the reflection component from the skylight, K^sky. The value of the reflection component due to the skylight, K^sky, can be obtained as the average pixel intensity in the shadow regions of constant pixel intensity. K^sky is then subtracted from all pixel intensities of the image to yield

I_m = K^sun_{D,m} cos θ_i + K^sun_{S,m} (1 / cos θ_r) exp( -α^2 / (2 σ_α^2) ),   m = R, G, B   (EQ47)

Then, the pixel intensity has only the diffuse and specular reflection components from the sunlight.
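In practice this step amounts to averaging the pixel colors inside a shadow region and subtracting that color from every pixel, as sketched below. The shadow-region mask is assumed to be given (in the experiment of Section 6.7 it is selected manually), and the per-frame estimate is an assumption made for the example.

import numpy as np

def remove_skylight(images, shadow_mask):
    """Subtract the skylight term K_sky of (EQ46), estimated from a shadow region.

    images      : (F, H, W, 3) float color image sequence
    shadow_mask : (H, W) boolean mask of a region illuminated only by the skylight
    """
    # Per-frame estimate of K_sky: the average color of the shadow region.
    K_sky = images[:, shadow_mask].mean(axis=1)       # (F, 3)
    corrected = images - K_sky[:, None, None, :]       # subtract from every pixel
    return np.clip(corrected, 0.0, None), K_sky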

6.4 Removal of the Specular Component from the Sunlight

After the reflection component from the skylight is subtracted from the observed color sequence at each image pixel, we apply our algorithm for separating the diffuse and specular reflection components, described in Section 3.2, to the resulting sequence of observed colors. That removes the specular reflection component from the observed color sequence. As a result, the pixel intensities in the image have only the diffuse reflection component from the sunlight, and they can be modeled by the equation

I_m = K^sun_{D,m} cos θ_i,   m = R, G, B   (EQ48)

Since the pixel intensity now has only the diffuse reflection component from the sunlight, the intensities in the three color bands are redundant for the purpose of shape recovery. Thus, only one of the three color bands is used in surface normal estimation:

I = K^sun_D cos θ_i   (EQ49)

6.5 Obtaining Surface Normals

6.5.1 Two Sets of Surface Normals

After the specular reflection removal, the input image sequence has only the diffuse reflection component from the sunlight. Usually, shape-from-shading and photometric stereo are used for recovering shape information from diffuse reflection images. Initially, those techniques were implemented for shape recovery in our experiments. However, we found that, unfortunately, neither of those techniques could yield correct object shapes. This problem is attributed to various sources of noise in the input images, such as incomplete removal of the specular reflection component. Shape-from-shading and photometric stereo use a very small number of images for surface normal computation, which leads to an erroneous object shape when the images contain slight errors in pixel intensities. Therefore, we decided to use another algorithm to determine surface normals from the input image sequence. The algorithm makes use of more images in the sequence, rather than

just a few of them. We describe the algorithm in this section.

Figure 80 Sun direction, viewing direction, and surface normal in the 3D case (the sun direction s, the viewing direction v, and the two candidate normals n_1 and n_2 on the great circle P_1 P_2 perpendicular to the ecliptic)

To represent the sun's motion in three-dimensional space, we consider the Gaussian sphere as shown in Figure 80. The ecliptic is represented as a great circle on the Gaussian sphere. The viewing direction v is fixed. The direction of the sun s is specified as a function of θ_s in the plane of the ecliptic.

Consider the intensity of one pixel as a function of the sun direction, I(θ_s). If the maximum intensity is observed when the sun is located at the direction θ_s, the surface normal of the image pixel should be located somewhere on the great circle P_1 P_2 which is perpendicular to the ecliptic. For obtaining robust estimates, the maximum pixel intensity I and the direction of the sun θ_s are found by fitting a second-degree polynomial to the observed pixel intensity sequence. According to the reflectance model (EQ49), the angle between the sun direction s and the surface normal directions n_1 and n_2 on the great circle P_1 P_2 is given by

ϕ = acos( I / K^sun_D )   (EQ50)

Here, the reflectance parameter K^sun_D has to be known for computing ϕ. If we assume that at least one surface normal on the object surface is the same as the sun direction s, the reflectance parameter K^sun_D is simply obtained as the intensity of that pixel, I = K^sun_D. The pixel in the image can be found simply as the brightest pixel. In a practical case, the estimation of the reflectance parameter is computed as the average of the brightest pixel intensities from multiple images of the input image sequence, for robustness.
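A compact way to carry out this per-pixel computation is sketched below: fit a parabola to the intensity samples around the peak to obtain the peak intensity and sun angle, then compute ϕ from (EQ50) and rotate the peak sun direction by plus and minus ϕ out of the ecliptic plane. The vector conventions (the ecliptic in the x-y plane, its normal along z) and the five-sample fitting window are assumptions for the example, not the thesis implementation.

import numpy as np

def candidate_normals(theta_s, intensities, K_D_sun):
    """Two candidate surface normals for one pixel from its intensity sequence.

    theta_s     : (F,) sun angles along the ecliptic (radians)
    intensities : (F,) diffuse intensities of the pixel (skylight and specularity removed)
    K_D_sun     : diffuse reflectance estimated from the brightest pixels
    """
    # Parabola fit around the brightest sample gives a sub-sample peak location.
    k = int(np.argmax(intensities))
    lo, hi = max(0, k - 2), min(len(theta_s), k + 3)
    a, b, c = np.polyfit(theta_s[lo:hi], intensities[lo:hi], 2)
    theta_peak = -b / (2.0 * a)
    I_peak = np.polyval([a, b, c], theta_peak)

    phi = np.arccos(np.clip(I_peak / K_D_sun, -1.0, 1.0))   # (EQ50)

    # Assume the ecliptic lies in the x-y plane; s is the sun direction at the peak.
    s = np.array([np.cos(theta_peak), np.sin(theta_peak), 0.0])
    z = np.array([0.0, 0.0, 1.0])                            # normal of the ecliptic plane
    n1 = np.cos(phi) * s + np.sin(phi) * z                   # rotate s by +phi out of the plane
    n2 = np.cos(phi) * s - np.sin(phi) * z                   # mirror solution at -phi
    return n1, n2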

6.5.2 Unique Surface Normal Solution

Due to the sun's restricted movement on the ecliptic, we cannot obtain a unique solution for the surface normal by applying photometric stereo to outdoor images taken at different times on the same day. This fact was pointed out by Woodham [95] when he introduced the photometric stereo method. As a result, there have been no attempts reported for recovering an object shape by the photometric stereo method applied to outdoor images. However, Onn and Bruckstein [58] recently studied photometric stereo applied to two images and showed that surface normals can be determined uniquely even if only two images are used, with the exception of some special cases.

By using the algorithm described in the previous section, two sets of surface normals n_1 and n_2 are obtained:

n_1 = (p_1, q_1, 1),  n_2 = (p_2, q_2, 1)   (EQ51)

We use the constraint which Onn and Bruckstein called the integrability constraint in order to choose the correct set of surface normals out of the two. The integrability constraint is applied as follows. First, we compute the two surface normals n_1 and n_2 for all pixels. Then, the object surface is segmented into subregions by defining a boundary where the two surface normals are similar; in practice, if the angle between n_1 and n_2 is within a threshold, the pixel is included in the boundary. Then, for each subregion R, two integrals are computed:

∫∫_{(x, y) ∈ R} ( ∂p_1/∂y − ∂q_1/∂x )^2 dx dy   (EQ52)

∫∫_{(x, y) ∈ R} ( ∂p_2/∂y − ∂q_2/∂x )^2 dx dy   (EQ53)

Theoretically, the correct set of surface normals produces an integral value equal to zero. In a practical case, the correct surface normal set can be chosen as the one with the integral value closer to zero. Onn and Bruckstein showed that the integrability constraint is always valid except for a few rare cases where the object surface can be represented as H(x, y) = F(x) + G(y) in a suitably defined coordinate system.
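The region-wise test of (EQ52) and (EQ53) can be implemented with discrete image gradients, as in the sketch below. The gradient-space maps p and q are assumed to be stored as images, and the discrete sum simply stands in for the continuous integral.

import numpy as np

def integrability_error(p, q, region_mask):
    """Discrete version of (EQ52)/(EQ53): sum of (dp/dy - dq/dx)^2 over a subregion."""
    dp_dy = np.gradient(p, axis=0)
    dq_dx = np.gradient(q, axis=1)
    residual = (dp_dy - dq_dx) ** 2
    return residual[region_mask].sum()

def choose_normal_set(p1, q1, p2, q2, region_mask):
    """Pick the candidate set whose integrability error is closer to zero in the region."""
    e1 = integrability_error(p1, q1, region_mask)
    e2 = integrability_error(p2, q2, region_mask)
    return 1 if e1 <= e2 else 2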

In our experiments, the exceptional case does not occur, so the integrability constraint can be used for obtaining a unique solution for surface normals.

6.6 Experimental Results: Laboratory Setup

In the previous sections, we described solutions to the issues which are essential for analyzing real color images taken under the sun: the separation of the reflection components from the two light sources, the sunlight and the skylight, and the unique solution for surface normals. We tested our solution for unique surface normals (Section 6.5) by using a color image sequence taken in a laboratory setup. A SONY CCD color video camera module model XC-711 was used to take all images. In our experimental setup, a dielectric object (a plastic dinosaur toy) was placed at the origin of the world coordinate system, and the color video camera was placed above the object. The sunlight was simulated by a small xenon lamp attached to a PUMA 560 manipulator which moves around the object in its equatorial plane. The skylight was not simulated in our experimental setup; the effect of the skylight and the separation of the reflection components from the skylight will be described in Section 6.7.

The sequence of color images was taken as the point light source was moved around the object from θ_s = -70° to θ_s = 70° in steps of 5°. The specular reflection component was removed from the input image sequence by using the same algorithm used in Section 3.2. In this experiment, the specular reflection color was directly measured rather than estimated as described in Section 3.3. The 8th frame of the resulting diffuse reflection image sequence is shown in Figure 81.

The algorithm for obtaining two sets of surface normals described in Section 6.5 was applied to the red band of the resulting diffuse reflection image sequence. The two computed sets of surface normals n_1 and n_2 are shown in Figure 82 as a needle diagram. Subsequently, the integrability constraint was applied to determine the correct set of surface normals uniquely. First, the object surface was segmented into subregions by defining a boundary where the two surface normals n_1 and n_2 are similar. The obtained boundary is shown in Figure 83. Theoretically, the boundary should be connected and narrow. However, in a practical case, the obtained boundary tends to be wide in order to guarantee its connectivity. Thus, a thinning operation, in our case the medial axis transformation,

was applied to narrow the boundary. Figure 84 shows the resulting boundary after the medial axis transformation.

Figure 81 Diffuse reflection component image (frame 8)

Figure 82 Two sets of surface normals


The extracted region of interest is shown in Figure 89. The next step was to remove the reflection component from the skylight. According to the reflection model under solar illumination (EQ46), the reflection component due to the skylight is represented as a constant value K^sky. The constant value K^sky can be estimated as the average pixel color of a uniform-intensity region which is in shadow from the sunlight. In our experiment, the region of constant pixel color was selected manually as shown in Figure 89. The measured pixel color within the region is (r, g, b) = (14.8, 17.2, 19.5), with variance (0.2, 0.3, 0.6). This pixel color vector was subtracted from the intensities of all pixels to eliminate the effects of the skylight. After this operation, the color images contain only the reflection components due to the sunlight. The resulting image is shown in Figure 90. It can be seen that the image has more contrast between an illuminated region and a shadow region, compared with the image that still contains the reflection component due to the skylight (Figure 89). All frames of the input color images were processed in the same manner to remove the reflection component due to the skylight.

Figure 88 Observed color image sequence of the water tank (six frames out of 23 are shown here)

Figure 89 Extracted region of interest (the region of constant color used for estimating the skylight component is marked)

Figure 90 Water tank image without the sky reflection component

After the removal of the reflection component from the skylight, the sequence of color images included two reflection components: the diffuse reflection component and the specular reflection component due to the sunlight, as modeled by (EQ47). The algorithm to separate the diffuse and specular reflection components was applied to the resulting color image sequence. At each pixel in the color images, the two reflection components were separated, and only the diffuse reflection component was used for further shape recovery. As an example, one frame of the resulting color image sequence is shown in Figure 91. The image includes only one reflection component, the diffuse reflection component from the sunlight, and the water tower appears to have a uniform reflectance.

Figure 91 Water tank image after highlight removal

The algorithm to determine surface normals uniquely by using an image sequence was applied to the red band of the resulting color image sequence. Figure 92 shows the recovered surface normals of the water tower. Note that surface normals are not obtained in the lower right part of the water tower. This is because, in that region, the maximum intensity is not observed at each pixel within the image sequence. To recover surface normals in that region, we would need to take an input image sequence over a longer period of time than this experiment encompassed. Alternatively, other techniques such as photometric stereo may be used for recovering surface normals in the region. Finally, the relaxation method for calculating height from surface normals was applied. The recovered shape of this part of the water tower is shown in Figure 93.

Figure 92 Surface normals
