
CHESTER F. CARLSON CENTER FOR IMAGING SCIENCE
COLLEGE OF SCIENCE
ROCHESTER INSTITUTE OF TECHNOLOGY
ROCHESTER, NY

CERTIFICATE OF APPROVAL

M.S. DEGREE THESIS

The M.S. Degree Thesis of Ying Chen has been examined and approved by two members of the Color Science faculty as satisfactory for the thesis requirement for the Master of Science degree.

Dr. Roy S. Berns, Thesis Advisor

Dr. James A. Ferwerda

Model Evaluation and Measurement Optimization for the Reproduction of Artist Paint Surfaces through Computer Graphics Renderings

Ying Chen
B.S. Zhejiang University, Hangzhou, China (2002)
M.S. Zhejiang University, Hangzhou, China (2005)

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Color Science in the Center for Imaging Science, Rochester Institute of Technology

February 2008

Signature of the Author

Accepted by Dr. Roy S. Berns, Coordinator, M.S. Degree Program

THESIS RELEASE PERMISSION FORM

CHESTER F. CARLSON CENTER FOR IMAGING SCIENCE
COLLEGE OF SCIENCE
ROCHESTER INSTITUTE OF TECHNOLOGY
ROCHESTER, NEW YORK

Title of Thesis: Model Evaluation and Measurement Optimization for the Reproduction of Artist Paint Surfaces through Computer Graphics Renderings

I, Ying Chen, hereby grant permission to the Wallace Memorial Library of Rochester Institute of Technology to reproduce my thesis in whole or in part. Any reproduction will not be for commercial use or profit.

Signature of the Author        Date

Model Evaluation and Measurement Optimization for the Reproduction of Artist Paint Surfaces through Computer Graphics Renderings

Ying Chen

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Color Science in the Center for Imaging Science, Rochester Institute of Technology

Abstract:

Light reflection models for computer graphics have been developed over the past several decades. For real paint surfaces, it is possible to model the bidirectional reflectance distribution function with simple models. A framework was established to evaluate two simple reflection models, Phong and Torrance-Sparrow, which were used to render artist paint surfaces under different illumination angles. An image acquisition system was devised to capture images under selected illumination angles. The parameters of the specular and the diffuse components were estimated from these image sequences. At the evaluation stage, both physically-based metrics and psychophysical techniques were used to evaluate the estimation accuracy of each model. For both methods, the comparison of the estimations of the two models showed that better estimations were obtained from the Torrance-Sparrow model for glossy samples; the estimation accuracies of the two models were almost the same for matte samples. In addition, based on analyses of the specular peak width and the histogram of the peak values, the optimized locations and minimal number of measurements were determined for four kinds of paint samples.

Acknowledgements

I would like to express my sincere gratitude to the following people for their support, guidance, and help with my master's research and thesis:

Dr. Roy S. Berns, my advisor, for giving me the opportunity to study at the Munsell Color Science Laboratory and work on this interesting research, and for his patient guidance, valuable advice, kind help, and stimulating encouragement throughout my research and thesis;

Mr. Lawrence A. Taplin, for his excellent advice and friendly help with my experiments, programming, and writing;

Dr. James A. Ferwerda, for his detailed review and constructive comments on my thesis;

The sponsor of my research, the Andrew W. Mellon Foundation, for its funding support;

Dr. Mark D. Fairchild, Dr. Ethan Montag, Dr. David R. Wyble, and Dr. Mitchell R. Rosen, for sharing their knowledge and giving me advice;

Ms. Colleen Desimone, Ms. Valerie Hemink, all the people at MCSL, and all my friends at RIT, for always giving me help and support.

Especially, I would like to give my thanks to my husband Shizhe, my parents, and my parents-in-law. Without your love, I could not have come this far. Thank you, all!

Table of Contents:

Acknowledgements
Table of Contents
List of Figures
List of Tables

1 Introduction
2 Background
  2.1 The Definition of BRDF
  2.2 ASTM Standards and Previous Work for BRDF Measurement
    2.2.1 ASTM Standards for Apparatus
    2.2.2 ASTM Standards for Normalization
    2.2.3 The Development of BRDF Measurement
  2.3 BRDF Models in Computer Graphics
    2.3.1 Physically-Based Models
    2.3.2 Empirical Models
  2.4 Digital Archiving and Realistic Rendering
    2.4.1 Goniospectral Imaging System at Chiba University
    2.4.2 Digital Archiving of Art Paintings at Osaka Electro-Communication University
    2.4.3 Digital Archiving of Artifacts at the University of Southern California
    2.4.4 Electric Display of Artistic Paintings at Sogang University
    2.4.5 Polynomial Texture Maps at HP Laboratories
3 Instrument Development
  3.1 The Gonio-Spectrophotometer Instrument
  3.2 The Simplification of the Measurement Instrument
  3.3 The Analyses of the Light Source
  3.4 Camera Descriptions and Set-up
    3.4.1 The Specification of the Control of the Camera
    3.4.2 The Image Processing Procedure
4 Data Collection of Objects
  4.1 Paint Samples with Uniform Surfaces
  4.2 Paint Samples on Canvas
  4.3 Varnished Samples with Brush Marks
  4.4 Impasto Paints
  4.5 The Gloss Levels of the Samples
5 Rendering Algorithms
  5.1 System Geometry
  5.2 Phong Model
  5.3 Torrance-Sparrow Model
  5.4 The Improvement of the Torrance-Sparrow Model
  5.5 Parameter Estimation of the Two Models
    5.5.1 Estimation for Offset Angles from Illumination Angles
    5.5.2 Parameter Estimation for the Uniform Samples
    5.5.3 Parameter Estimation for the Samples with Simple Surface Shapes
    5.5.4 Parameter Estimation for the Samples with Complicated Surface Shapes (Impasto Samples)
6 Model Evaluation
  6.1 Physically-Based Evaluation
    6.1.1 Computational Evaluation for Matte Samples
    6.1.2 Computational Evaluation for Glossy Samples
  6.2 Psychophysical Evaluation
    6.2.1 Color Management of the LCD Monitor
    6.2.2 Paired Comparison Experiments
    6.2.3 Experimental Results
7 Optimization for Measurement Geometry
  7.1 The Selections for Measurement Numbers and Locations
  7.2 Psychophysical Evaluation for Optimal Selection
  7.3 Computational Evaluation for Optimal Selection
  7.4 Optimization of the Measurement Numbers and Locations
8 Conclusions and Future Research
  8.1 Conclusions
  8.2 Future Research
    8.2.1 Model Development
    8.2.2 Higher Resolution Images
    8.2.3 Two-Dimensional BRDF Measurement
References
Appendices
  Appendix One: Rendered Images
  Appendix Two: MATLAB Code
    Main Function including Image Information
    Optimization of Parameters of Phong Model
    Optimization of Parameters of Torrance-Sparrow Model
    Selecting Patches from Raw Images
    HDR Images Merge
    Calibration of Gray Card

List of Figures:

Figure 2.1. BRDF expressed in terms of viewing and illumination angles.
Figure 2.2. The conventional gonioreflectometer designed by Murray-Coleman [Murray-Coleman 1990].
Figure 2.3. The four degrees of freedom provided in Murray's gonioreflectometer [Lun 1999].
Figure 2.4. The imaging gonioreflectometer at Lawrence Berkeley Laboratory [Ward 1992].
Figure 2.5. The imaging gonioreflectometer at LBL. Light reflected by the sample in a specific direction was focused by the hemisphere through a fisheye lens onto a CCD imaging array [Ward 1992].
Figure 2.6. The Field-Goniometer System in Switzerland [Sandmeier 1996].
Figure 2.7. Gonioreflectometer at Columbia University [Dana 1999].
Figure 2.8. The gonioreflectometer at Cornell University [Li 2006].
Figure 2.9. The Stanford spherical gantry [Levoy 2004].
Figure 2.10. The BRDF measurement gantry at Mitsubishi Electric Research Laboratories [Matusik 2003].
Figure 2.11. The plan of the NPL goniospectrophotometer [Pointer 2005]: 1. light source; 2. optical bench; 3. sample holder on translating and rotating stages; 4. detector (spectroradiometer); 5. camera; 6. support rail; 7. bench top.
Figure 2.12. The view of the NPL goniospectrophotometer [Pointer 2005].
Figure 2.13. The photograph of Leloup's goniospectroradiometer [Leloup 2006].
Figure 2.14. Goniospectral imaging system at Chiba University [Nakaguchi 2005].
Figure 2.15. Digital archiving system for painting objects at Osaka Electro-Communication University [Tominaga 2001].
Figure 2.16. Digital archiving system for artifacts at the University of Southern California [Hawkins 2001].
Figure 2.17. A possible BRDF measurement system with multiple viewing directions.
Figure 2.18. Light arrangement of the image acquisition system at Sogang University [Ju 2002].
Figure 2.19. Two devices for collecting the images at Hewlett-Packard Laboratories [Malzbender 2001].
Figure 3.1. The proposed imaging gonio-spectrophotometer to capture the BRDF of artwork at the Munsell Color Science Laboratory at RIT.
Figure 3.2. The theoretical simplified image acquisition system.
Figure 3.3. The practical simplified measurement system developed at the MCSL.
Figure 3.4. The schematic of the illumination optics of the measurement system.
Figure 3.5. The relative spectral power distribution of the light source.
Figure 3.6. The control interface of Exposure Mode I for the Nikon D1 CCD camera.
Figure 3.7. The control interface of Exposure Mode II for the Nikon D1 CCD camera.
Figure 3.8. The control interface of Data Storage for the Nikon D1 CCD camera.
Figure 3.9. The control interface of Mechanical Control for the Nikon D1 CCD camera.
Figure 3.10. The control interface of Image Processing for the Nikon D1 CCD camera.
Figure 3.11. The control interface of Camera Curves for the Nikon D1 CCD camera.
Figure 3.12. The resolution target used to determine the optimal focus.
Figure 3.13. The ISO OECF chart used to measure the OECF of the Nikon D1 CCD camera.
Figure 3.14. The calculated OECF look-up tables of the three channels of the Nikon D1 CCD camera.
Figure 3.15. The hat function [Eq. (3.4)] used to determine the weight of different pixel values.
Figure 3.16. Twenty-eight color patches used to build and evaluate the color management of the Nikon D1 CCD camera.
Figure 3.17. The CIELAB error vectors of the 28 patches used to calibrate the Nikon D1 CCD camera.
Figure 3.18. The histogram of ΔE00 values of the 28 patches used to calibrate the Nikon D1 CCD camera.
Figure 4.1. The uniform acrylic paints on opaque chart and glass.
Figure 4.2. The LENETA opacity contrast chart used as the substrate.
Figure 4.3. The acrylic paint samples painted on the two canvas substrates.
Figure 4.4. The varnished painted samples showing brush marks.
Figure 4.5. Four impasto paints with different colors and complex surface shapes.
Figure 5.1. Light reflection geometry in terms of surface tilt angles.
Figure 5.2. The plot of the specular component in the Phong model as it changes with the n factor.
Figure 5.3. Three cases for the calculation of the geometrical attenuation factor, G.
Figure 5.4. The plot of the D factor as it changes with the eccentricity of the ellipsoids.
Figure 5.5. The images of samples (5) and (6) used to show the relationship between model magnitude parameters and visual appearance of the images.
Figure 5.6. The BRDFs (specular components) of two selected pixels in sample (8) used to show the relationship between the relative radiance values under two illumination angles and parameters n and c.
Figure 5.7. The images of sample (8), including two selected parts, used to show the relationship between the visual appearances of the two parts under two illumination angles and parameters n and c.
Figure 5.8. Flowchart showing the calculation of the offset angle θ from the illumination angle θ_a.
Figure 5.9. The uniform gray card used to correct the non-uniformity of the incident angle.
Figure 5.10. The offset angles from the illumination angle θ_a for pixels.
Figure 5.11. Flowchart for the estimation of the model parameters.
Figure 5.12. The example pixel used to show the separation of the diffuse and specular components.
Figure 5.13. The calculation of n_0 of the pixel with the maximum relative radiance value in sample (3).
Figure 5.14. The calculation of n_0 of one example pixel in sample (3).
Figure 5.15. The calculation of φ_v0 of one example pixel in sample (3).
Figure 5.16. The optimization results of the two models for one example pixel in sample (3).
Figure 5.17. The histogram of A_d values of all the pixels of the gray card.
Figure 5.18. The optimization results of the two models for one example pixel in sample (5).
Figure 5.19. The example pixel used to show the complicated diffuse component of sample (8).
Figure 5.20. The fitting results of the Torrance-Sparrow model using different θ_n values.
Figure 5.21. The optimization results of the two models for one example pixel in sample (8).
Figure 6.1. The fitting results of the two models for sample (4), the matte neutral gray uniform sample painted on the glass.
Figure 6.2. The fitting results of the two models for sample (6), the matte sample painted on the canvas.
Figure 6.3. The real image and two rendered images of sample (6) under the -10° illumination angle.
Figure 6.4. The fitting results of the two models for sample (5), the canvas sample with middle gloss level.
Figure 6.5. The fitting results of the two models for sample (7).
Figure 6.6. The real image and two rendered images of sample (7) under the -15° illumination angle.
Figure 6.7. The fitting results of the two models for sample (1).
Figure 6.8. The real image and two rendered images of sample (1) under the 6° illumination angle.
Figure 6.9. The interface used in the display characterization.
Figure 6.10. Three 1-D interpolated LUTs of the LCD monitor.
Figure 6.11. The workflow used to build the color management model of the LCD monitor.
Figure 6.12. The end-to-end color management workflow used for the Nikon D1 CCD camera and the LCD monitor.
Figure 6.13. The histogram of ΔE00 values of the 168 color patches used to show the colorimetric accuracy of the display.
Figure 6.14. The user interface used in the paired comparison experiments.
Figure 6.15. Interval scales of rendering accuracy of the two models for all the illumination angles.
Figure 6.16. Interval scales of rendering accuracy of the two models for some specular illumination angles.
Figure 6.17. Interval scales of rendering accuracy of the two models for some diffuse illumination angles.
Figure 6.18. The real image and rendered images of sample (8) under the 25° illumination angle.
Figure 6.19. The real image and two rendered images of sample (5) under the 10° illumination angle.
Figure 7.1. The interval scales of rendering accuracy optimized with different groups of angle-number selection using the Torrance-Sparrow model for sample (7).
Figure 7.2. The interval scales of rendering accuracy optimized with different groups of angle-number selection using the Torrance-Sparrow model for sample (5).
Figure 7.3. The interval scales of rendering accuracy optimized with different groups of angle-number selection using the Torrance-Sparrow model for sample (3).
Figure 7.4. The interval scales of rendering accuracy optimized with different groups of angle-number selection using the Torrance-Sparrow model for sample (8).
Figure 7.5. The real image and two rendered images of sample (8) optimized using different groups of angle numbers.
Figure 7.6. The fitting results of one pixel of sample (8) used to show the fitting for the grazing angles.
Figure 7.7. The RMS values of the relative radiance of 2000 pixels optimized with different groups of angle-number selection using the Torrance-Sparrow model for sample (7).
Figure 7.8. The RMS values of the relative radiance of 2000 pixels optimized with different groups of angle-number selection using the Torrance-Sparrow model for sample (5).
Figure 7.9. The RMS values of the relative radiance of 2000 pixels optimized with different groups of angle-number selection using the Torrance-Sparrow model for sample (3).
Figure 7.10. The RMS values of the relative radiance of 2000 pixels optimized with different groups of angle-number selection using the Torrance-Sparrow model for sample (8).
Figure 7.11. The fitting results of sample (3) optimized with the group (2) selection.
Figure 7.12. The fitting results of sample (3) optimized with the group (6) selection.
Figure 7.13. The specular peak width and histogram of the locations of peak values for sample (7).
Figure 7.14. The specular peak width and histogram of the locations of peak values for sample (5).
Figure 7.15. The specular peak width and histogram of the locations of peak values for sample (3).
Figure 7.16. The specular peak width and histogram of the locations of peak values for sample (8).
Figure. Comparison of the real images and estimated images of sample (1).
Figure. Comparison of the real images and estimated images of sample (5).
Figure. Comparison of the real images and estimated images of sample (6).
Figure. Comparison of the real images and estimated images of sample (7).
Figure. Comparison of the real images and estimated images of sample (8).
Figure. Comparison of the real images and estimated images of sample (9).
Figure. Comparison of the real images and estimated images of sample (10).
Figure. Comparison of the real images and estimated images of sample (11).

List of Tables:

Table 2.1. The list of the symbols related to BRDF.
Table 2.2. Several common light reflection models used in computer graphics.
Table 3.1. The mean, standard deviation, 90th percentile, and maximum values of ΔE*ab and ΔE00 showing the color reproduction accuracy of the 28 patches used to calibrate the Nikon D1 CCD camera.
Table 4.1. The gloss properties of the paint samples.
Table 5.1. The parameters requiring estimation in each model for the uniform samples.
Table 5.2. The start values of optimization used to estimate As for the uniform sample (3).
Table 5.3. The start values of optimization used to estimate viewing angles and n (or c) in each model for an example pixel in the uniform sample (3).
Table 5.4. The optimized parameters in each model for an example pixel in the uniform sample (3).
Table 5.5. The parameters requiring estimation for the samples with simple surface shapes.
Table 5.6. The start values of optimization used to estimate As for the sample (5) with simple surface shape.
Table 5.7. The start values of optimization used to estimate viewing and tilt angles and n (or c) in each model for an example pixel in the sample (5) with simple surface shape.
Table 5.8. The optimized parameters in each model for an example pixel in the sample (5) with simple surface shape.
Table 5.9. The parameters requiring estimation in each model for the samples with complicated surface shapes (the impasto samples).
Table 5.10. The start values of optimization used to estimate As in each model for the sample (8) with complicated surface shape (the impasto sample).
Table 5.11. The start values of optimization used to estimate tilt angles and n (or c) in each model for an example pixel in the sample (8) with complicated surface shape (the impasto sample).
Table 5.12. The optimized parameters in each model for an example pixel in the sample (8) with complicated surface shape (the impasto sample).
Table 5.13. The estimated ratios of the specular component to the diffuse component.
Table 6.1. The RMS values of relative radiance errors for one pixel of five samples.
Table 6.2. The RMS values of relative radiance errors for all the pixels of all the samples.
Table 6.2. The digital count values of the patch used to characterize the LCD monitor.
Table 6.3. The mean, standard deviation, 90th percentile, and maximum values of ΔE*ab and ΔE00 showing the color reproduction accuracy of the 168 color patches used to calibrate the display monitor.
Table 7.1. The number and location of angles of each group of selection.
Table 7.2. The minimized numbers and the optimized locations of the measurement angles.

1 Introduction

For realistic scenes, the spectral and geometric properties of the light source, object, and observer determine appearance. Thus, the interplay of the lighting, viewing, and object properties must be considered in the digital reproduction of objects in display and print. Commonly, the photographer defines a specific set of geometric conditions, reducing the myriad geometric conditions to a single representation. Alternatively, if the data are available as a function of this interplay, known as the bidirectional reflectance distribution function (BRDF) [ASTM Standard E ], images can be rendered for a variety of geometries and, in combination, simulate the real-time viewing experience. For practical use, it is desirable to approximate the actual BRDF with a limited set of measurements and a reflection model.

A variety of reflection models have been proposed to calculate BRDF, including both physically-based and empirical models. The Phong [Phong 1975] and Ward [Ward 1992] models are two common empirical models, which were derived from measured data. Blinn [Blinn 1977] introduced the Torrance-Sparrow [Torrance 1967] physically-based light reflection model to computer graphics, and replaced the standard Gaussian distribution with ellipsoids of revolution [Trowbridge 1975] in modeling the microfacets. Over the past several decades, more physically-based models were proposed to model more complex optical effects, such as the He-Torrance model [He 1991]. In addition, Dana [Dana 1999] defined the bidirectional texture function (BTF) to describe the reflectance of textured surfaces. One difficulty of that measurement system was that the camera position had to be calibrated accurately, since the camera was moved to different locations. Malzbender [Malzbender 2001] at HP Labs

presented polynomial texture mapping to reconstruct the luminance of each pixel; however, the specular component was not directly modeled and was handled separately.

Among the studies on BRDF and BTF measurement and 3D image rendering, however, few were related to the reproduction of cultural heritage, and especially to research focusing on artist paint surfaces. Based on a literature review, several publications in this field were found. Hawkins [Hawkins 2001] proposed an approach to render cultural artifacts based on capturing the reflectance fields of the objects, but a large number of images were required. Tominaga [Tominaga 2001] also proposed a method to record and render art paintings; however, only a matte oil painting was tested in his research. Ju [Ju 2002] at Sogang University developed a method to reproduce artistic paintings for electronic display, but there was no detailed evaluation of the reflection models, especially no psychophysical evaluation.

The purpose of this research was to develop a practical apparatus for museums to record 2D artist paint surfaces under different illumination angles, and then render them with different light reflection models and evaluate their accuracy. Because of their mathematical simplicity and small number of parameters, the Phong and Torrance-Sparrow models were selected from the empirical and physically-based models, respectively, to estimate the specular and diffuse components. Thirteen paint samples with different gloss levels were selected to evaluate the computational accuracy of the models. Also, eight samples were selected for psychophysical evaluation. To determine the optimized measurement geometry, four different glossy samples were used to explore the relationship between rendering accuracy and measurement settings. Finally, the

number of lighting geometries needed to fit the model was minimized for each kind of measured sample.
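The contrast between the two selected models can be sketched in a few lines. The following is an illustrative sketch only (written in Python, whereas the thesis used MATLAB), restricted to in-plane geometry: the empirical Phong lobe is a cosine power about the mirror direction, while the simplified Torrance-Sparrow lobe uses a Gaussian microfacet distribution divided by the cosine of the viewing angle, which produces the off-specular peak at grazing angles. The Fresnel and geometric-attenuation factors are omitted, and the parameter names ks, n, and m are generic, not the thesis's notation.

```python
import math

def phong_specular(dev, n, ks=1.0):
    # Empirical Phong lobe: dev is the angular deviation (radians) of the
    # viewing direction from the mirror-reflection direction; larger n
    # gives a narrower, glossier highlight.
    return ks * max(0.0, math.cos(dev)) ** n

def torrance_sparrow_specular(dev, theta_v, m, ks=1.0):
    # Simplified Torrance-Sparrow lobe: a Gaussian facet-slope distribution
    # about the half-angle (dev/2 approximates the half-vector tilt from the
    # normal), divided by cos(theta_v). Fresnel and shadowing are omitted.
    D = math.exp(-((dev / 2.0) / m) ** 2)
    return ks * D / max(math.cos(theta_v), 1e-6)

# At the mirror direction both lobes equal ks, but the Torrance-Sparrow
# value grows toward grazing viewing angles (the off-specular peak),
# while the Phong lobe is symmetric about the mirror direction.
assert torrance_sparrow_specular(0.0, math.radians(80), 0.1) > \
       torrance_sparrow_specular(0.0, math.radians(10), 0.1)
```

This difference is exactly why the physically-based model can be expected to fit glossy samples better at oblique illumination, as the evaluation chapters later examine.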

2 Background

2.1 The Definition of BRDF

Consider the incident and reflected flux on a surface; the bidirectional reflectance distribution function (BRDF) is defined as the ratio of the directionally reflected radiance to the directionally incident irradiance. The BRDF is denoted symbolically as f_r, as shown in Eq. (2.1) [Nicodemus 1977]. All the symbols related to BRDF are listed in Table 2.1.

f_r(θ_i, φ_i; θ_r, φ_r) = dL_r(θ_i, φ_i; θ_r, φ_r; E_i) / dE_i(θ_i, φ_i)   [sr^-1]   (2.1)

The subscript i indicates quantities associated with the incident radiant flux; the subscript r indicates quantities associated with the reflected radiant flux; and d indicates a differential quantity.

BRDF, expressed in terms of viewing and illumination angles, is depicted in Figure 2.1. A surface element (dA) is illuminated from the incident direction (θ_i, φ_i) within the element of solid angle dω_i, with the reflection in the direction (θ_r, φ_r) centered within solid angle dω_r. According to the definition of BRDF in Eq. (2.1), the derived BRDF expression is shown as Eq. (2.2) [ASTM Standard E ].

f_r = (Θ_r / dω_r / dA / cosθ_r) / (Θ_i / dA) = Θ_r / (Θ_i dω_r cosθ_r)   (2.2)

Figure 2.1. BRDF expressed in terms of viewing and illumination angles.

Table 2.1. The list of the symbols related to BRDF.

Symbol | Term            | Unit-Dimension
θ      | Polar angle     | [rad]
φ      | Azimuth angle   | [rad]
E      | Irradiance      | [W m^-2]
L      | Radiance        | [W m^-2 sr^-1]
Θ      | Radiant flux    | [W]
dω     | Solid angle     | [sr]
dA     | Surface element | [m^2]
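To make Eq. (2.2) concrete, the following sketch (Python, with hypothetical measurement values) computes a BRDF from simulated flux readings and checks it against the one closed-form case where the answer is known: an ideal Lambertian surface of reflectance rho, whose BRDF is the constant rho/π.

```python
import math

def brdf_from_flux(flux_r, flux_i, domega_r, theta_r):
    # Eq. (2.2): BRDF from the reflected flux, the incident flux, the
    # detector solid angle, and the reflection polar angle (radians).
    # The result has units of sr^-1.
    return flux_r / (flux_i * domega_r * math.cos(theta_r))

# Synthetic check: an ideal Lambertian sample of reflectance rho has the
# constant BRDF rho/pi, so the flux it reflects into a small solid angle
# domega_r at polar angle theta_r is flux_i * (rho/pi) * cos(theta_r) * domega_r.
rho, flux_i, domega_r, theta_r = 0.8, 1.0, 1e-3, math.radians(30)
flux_r = flux_i * (rho / math.pi) * math.cos(theta_r) * domega_r
f_r = brdf_from_flux(flux_r, flux_i, domega_r, theta_r)
assert abs(f_r - rho / math.pi) < 1e-12   # recovers the known BRDF
```

The cos(θ_r) and dω_r factors are what make the BRDF a property of the surface alone, independent of how the detector happens to be placed.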

In practical measurement, BRDF should be defined with respect to wavelength; that is, BRDF should be a differential function of wavelength. For simplicity, the expressions above omit wavelength.

2.2 ASTM Standards and Previous Work for BRDF Measurement

Variations in BRDF measurement techniques resulted in a large range of variation in the measurement results. This led to the recommendation for standardized BRDF measurement [ASTM Standard E ].

2.2.1 ASTM Standards for Apparatus

The instrument design must provide four degrees of freedom among the light source, the sample holder, and the receiver assemblies. Considering the light source, a collimated or slightly converging beam can be used to measure BRDF. If the convergence angle is small, the uncertainty introduced by a non-unique angle of incidence is usually negligible. The sample holder should provide a secure mount for the sample that does not introduce any warp. Typically, the light source is fixed and the incident polar angle is changed by rotating the sample holder. If the receiver includes one degree of freedom for the reflected direction, the receiver should normally have provisions for rotating about an axis on the front face of the sample.

2.2.2 ASTM Standards for Normalization

To calculate BRDF, there are four methods for normalizing the reflected power to the incident power, i.e., for determining the incident flux. The two most typical methods follow.

1. Absolute Method

With the absolute method, the incident and the reflected flux are measured separately and then the ratio is calculated. A recommended way to measure the incident flux is to move the receiver onto the optical axis of the source with no sample in the holder [ASTM Standard E ]. This method requires a large detector dynamic range.

2. Relative Method

The relative method is based on knowing the BRDF of a reference sample. The reference sample should be uniform, isotropic, highly reflective, and diffuse. Knowing the BRDF f_r and the measured reflected flux Θ_r of the reference sample, the incident flux can be calculated as in Eq. (2.3).

Θ_i = Θ_r / (f_r dω_r cosθ_r)   (2.3)
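The relative method in Eq. (2.3) is simply Eq. (2.2) solved for the incident flux. A minimal round-trip sketch (Python, with hypothetical reference values) shows the idea:

```python
import math

def incident_flux_relative(flux_r_ref, f_r_ref, domega_r, theta_r):
    # Eq. (2.3): the incident flux recovered from the flux reflected by a
    # reference sample whose BRDF f_r_ref is known (relative method).
    return flux_r_ref / (f_r_ref * domega_r * math.cos(theta_r))

# Hypothetical values: a near-Lambertian white reference of reflectance
# 0.99 has a known BRDF of approximately 0.99/pi sr^-1.
f_r_ref = 0.99 / math.pi
true_flux_i, domega_r, theta_r = 2.5, 1e-3, math.radians(20)

# Simulate the measured reference flux via Eq. (2.2) rearranged, then
# recover the incident flux from it.
flux_r_ref = true_flux_i * f_r_ref * domega_r * math.cos(theta_r)
recovered = incident_flux_relative(flux_r_ref, f_r_ref, domega_r, theta_r)
assert abs(recovered - true_flux_i) < 1e-9
```

Because the reference flux and the sample flux are of comparable magnitude, this method avoids the large detector dynamic range that the absolute method demands.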

2.2.3 The Development of BRDF Measurement

1. Traditional Gonioreflectometer for BRDF Measurement

The gonioreflectometer is the traditional device for measuring BRDF. One typical gonioreflectometer, designed by Murray-Coleman and Smith [Murray-Coleman 1990], is shown in Figure 2.2. The mechanical design of this gonioreflectometer provided four degrees of freedom, as shown in Figure 2.3. In this figure, mθ_i, mφ_i, mθ_r, and mφ_r indicate the motions that change θ_i, φ_i, θ_r, and φ_r, respectively.

Figure 2.2. The conventional gonioreflectometer designed by Murray-Coleman [Murray-Coleman 1990].

In this system, an MR16 incandescent lamp was used as the light source to illuminate the sample. The luminance of the light source was adjusted to avoid overflow in the detector. The

photodetector was a simple photodiode type. The field of view of the detector was restricted to the sample and the nearby area. With this traditional gonioreflectometer, only the absolute BRDF was measured, without the 3D shape of the sample surface. In addition, only flat surface samples could be measured.

Figure 2.3. The four degrees of freedom provided in Murray's gonioreflectometer [Lun 1999].

2. Imaging Gonioreflectometer Designed by Ward

The development of the imaging gonioreflectometer made BRDF measurement more practical and simple by combining lighting simulation and computer graphics. The imaging gonioreflectometer developed at Lawrence Berkeley Laboratory (LBL) [Ward 1992] was the pioneering work in this field.

In the imaging gonioreflectometer, the light source was a 3-watt quartz-halogen lamp, which produced a well-collimated beam through an optically precise parabolic reflector. The two

incident angles were controlled mechanically by pivoting the light source arm at point C and the sample holder at point A, which are shown in Figure 2.4. The two most important components in this system were a half-silvered hemisphere and a CCD camera with a fisheye lens. These two components controlled the reflected directions with two degrees of freedom. The light reflected off the sample surface in holder A was collected by the hemispherical mirror and reflected back into the fisheye lens and onto the CCD array B, as shown in Figure 2.5. By focusing the lens at one half the hemisphere radius, near-perfect imaging of reflected angles took place.

Figure 2.4. The imaging gonioreflectometer at Lawrence Berkeley Laboratory [Ward 1992].

Figure 2.5. The imaging gonioreflectometer at LBL. Light reflected by the sample in a specific direction was focused by the hemisphere through a fisheye lens onto a CCD imaging array [Ward 1992].

With this imaging gonioreflectometer, the BRDF of both isotropic and anisotropic materials could be measured. In addition, with the measured data, BRDF models, and light simulation algorithms, images of the objects could finally be rendered. However, for this system, the ability to measure reflectance near grazing angles was limited because of the size and shape of the hemisphere.

3. Field-Goniometer System in Switzerland

The Swiss field-goniometer system was built to measure the BRDF of targets at the Earth's surface under natural atmospheric and illumination conditions [Sandmeier 1995, Turner 1998]. As shown in Figure 2.6, this field-goniometer consisted of an azimuth full-circle, a zenith semi-arc, and a motor-driven sled on which the GER3700 spectroradiometer was mounted. The azimuth arc consisted of twelve sockets on which a rail for the zenith arc was mounted. This structure provided two degrees of freedom for the detector. The light source was the sun; if an artificial light source was used, another zenith arc was added to change the illumination directions.

Figure 2.6. The Field-Goniometer System in Switzerland [Sandmeier 1996].

4. BRDF Measurement Device at Columbia University

As shown in Figure 2.7, a BRDF measurement device was developed by the Columbia Automated Vision Environment research group. The device was used to measure the BTF (bidirectional texture function) [Dana 1999]. There were five elements in this device: a robot arm to orient the samples, a halogen bulb with a Fresnel lens that produced a parallel beam, a control computer, a spectrophotometer, and a CCD color video camera. Throughout the measurement, the light source remained fixed. The sample had three degrees of freedom to control the illumination and viewing directions, and the camera had one additional degree of freedom; thus, the BRDF of the samples could be measured with four degrees of freedom. However, when the robot arm moved, both the incident angle and the reflection angle changed. This limitation made it impossible to keep the incident angle fixed during a BRDF scan. The BTF measurement system developed by Havran [Havran 2005] had the same configuration and the same problem.

Figure 2.7. Gonioreflectometer at Columbia University [Dana 1999].

5. BRDF Measurement at Cornell University
The BRDF measurement system at Cornell University was a typical system for measuring the BRDF of isotropic objects, whose reflectance is unchanged under rotation about the surface normal. The system was developed over several years [Foo 1997, Marschner 2000, Li 2006] at the Light Measurement Laboratory at Cornell University. As shown in Figure 2.8, the gonioreflectometer included four components [Li 2006]: a broadband light source with one degree of freedom; a positioning mechanism with three motor-controlled axes of rotation, providing three degrees of freedom; a fixed spectroradiometer detector; and a computer system. Although the whole system had four degrees of freedom, it could not measure anisotropic materials. Besides this gonioreflectometer, an image-based BRDF measurement system was also built [Marschner 2000], which could likewise only measure isotropic objects.
Figure 2.8. The gonioreflectometer at Cornell University [Li 2006].

6. Stanford Spherical Gantry
The Stanford spherical gantry was designed by Marc Levoy [Levoy 2004], as shown in Figure 2.9. This instrument was useful for a variety of measurements of the reflection and scattering properties of objects, ranging from painted statuettes to single human hair fibers. The measurement system contained two computer-controlled arms. The object platform had one degree of rotational freedom. The inner arm, labeled "camera" in the image, had two degrees of rotational freedom and carried a high-quality 3-CCD digital camera. The outer arm, labeled "light" in the image, had only one degree of rotational freedom and carried a focusable broadband point light source. Thus, the whole system had four degrees of freedom. Using a similar measurement apparatus at Cornell University, Marschner [Marschner 2005] performed optical scattering measurements of wood.
Figure 2.9. The Stanford spherical gantry [Levoy 2004].

7. BRDF Measurement Gantry at Mitsubishi Electric Research Laboratories
A BRDF measurement gantry was developed at Mitsubishi Electric Research Laboratories [Matusik 2003]. As shown in Figure 2.10, the components included a Hamamatsu SQ Xenon lamp, a QImaging Retiga 1300 camera and the measured sample. The light orbited the sample, which was placed at the center of rotation, while the camera remained stationary. As with the BRDF measurement system at Cornell University, only the BRDF of spherically homogeneous samples could be measured; thus, three degrees of freedom were sufficient for this system.
Figure 2.10. The BRDF measurement gantry at Mitsubishi Electric Research Laboratories [Matusik 2003].

8. NPL Goniospectrophotometer
The NPL (National Physical Laboratory) goniospectrophotometer was presented in an NPL report [Pointer 2005]. The plan and a view of this goniospectrophotometer are shown in Figure 2.11 and Figure 2.12, respectively.
Figure 2.11. The plan of the NPL goniospectrophotometer [Pointer 2005]. 1. Light source. 2. Optical bench. 3. Sample holder on translating and rotating stages. 4. Detector (spectroradiometer). 5. Camera. 6. Support rail. 7. Bench top.

Figure 2.12. A view of the NPL goniospectrophotometer [Pointer 2005].
The illumination components were mounted on an optical bench and included a LOT Oriel 100W Research stabilized tungsten halogen light source with an associated rear reflector and a condenser lens, used to provide a parallel beam of light on the sample. A Minolta CS-1000 telespectroradiometer was used as the detector. A low-resolution color CCD camera was mounted on the same stage as the detector, at an angle of 10° to its left side. Both the optical bench and the detector could be rotated on a support rail, and the sample holder could also be rotated; thus, this goniospectrophotometer had three degrees of freedom. Since it was designed to measure the spectral radiance of the samples, it could not measure absolute BRDF values.

9. Leloup's BRDF Goniospectroradiometer
Leloup [Leloup 2006] presented a goniospectroradiometer to measure the absolute BRDF values of objects, shown in Figure 2.13. Like the goniospectrophotometer at NPL, this system was designed to measure gonio-apparent samples. To measure BRDF, the detector position had two degrees of freedom and the sample had two degrees of freedom. A Xenon light source was used at a fixed location.
Figure 2.13. A photograph of Leloup's goniospectroradiometer [Leloup 2006].
2.3 BRDF Models in Computer Graphics
In the domain of computer graphics, one important calculation is to model the reflection properties given the position of the light source, the viewing location and the surface normal. Over the

past decades, numerous light reflection models have been developed to model the BRDF in computer graphics, including both physically-based and empirical models. Several models are listed in Table 2.2.
2.3.1 Physically-Based Models
Physically-based BRDF models were developed from the theory of optics and physics. Thus, each parameter in such a model has a physical meaning, representing either a characteristic of the material or its physical behavior. In 1967, Torrance and Sparrow created an analytical light reflection model of roughened surfaces based on geometrical optics [Torrance 1967]. In this model, the surface was assumed to be made up of small, randomly dispersed, mirror-like facets. A simple Gaussian function was used to represent the micro-facet distribution, and a term was added to account for shadowing and masking phenomena. This model explained the off-specular peaks that emerge as the angle of incidence increases away from the surface normal. Based on the work of Torrance and Sparrow, Blinn improved the performance of the model and introduced it to computer graphics. The improvement came from a different micro-facet distribution function, modeled as an ellipsoid of revolution, which was proposed by Trowbridge and Reitz [Trowbridge 1975]. This function modeled micro-facets that were not only randomly oriented but also randomly curved. In addition, it fit the measurement data better than the alternatives, making the modeling more accurate.
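The structure of a Torrance-Sparrow-style specular term can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation: the roughness parameter `m`, the Schlick approximation standing in for the full Fresnel term, and the omitted geometric attenuation are all simplifying assumptions made here.

```python
import math

def ts_specular(theta_i, theta_v, alpha, m, f0=0.04):
    """Sketch of a micro-facet specular term in the Torrance-Sparrow spirit:
    Fresnel * facet distribution * geometric attenuation / cos(view angle).
    alpha: angle between the mean surface normal and the facet normal;
    m: roughness of the Gaussian facet distribution. Names and the f0
    default are illustrative assumptions, not the thesis's notation."""
    D = math.exp(-(alpha / m) ** 2)                   # Gaussian micro-facet distribution
    F = f0 + (1 - f0) * (1 - math.cos(theta_i)) ** 5  # Schlick approximation to Fresnel
    G = 1.0                                           # shadowing/masking omitted in this sketch
    return F * D * G / max(math.cos(theta_v), 1e-6)
```

The division by cos(theta_v) is what produces the off-specular peaks the model is known for: as the viewing angle approaches grazing, the lobe value grows even for a fixed facet distribution.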

Building on the Torrance-Sparrow model, Cook and Torrance developed a more general reflection model describing both the directional distribution of the reflected light and the color shift that occurs as the reflectance changes with incident angle [Cook 1981]. Multiple distribution functions for the micro-facets were included in this model. In realistic rendering, this model contributed to distinguishing metals from dielectrics. He and Torrance presented a more comprehensive light reflection model in 1991 [He 1991]. Based on physical optics, the model extended the Cook-Torrance model: polarization and directional Fresnel effects were added, and the complicated subsurface scattering effect was introduced. Shirley [Shirley 1997] proposed a coupled model that used a physically-based specular coefficient and a heuristic matte component to produce the matte-specular tradeoff while remaining reciprocal and energy-conserving. The key feature of this model was that it coupled the matte and specular scaling coefficients, while maintaining the simplicity of the traditional models. The Beard-Maxwell model is another physical model, in which the surface is assumed to be a three-dimensional terrain of micro-facets of varying orientation [Maxwell 1973]. This model also includes a shadowing-and-obscuration term and a polarization effect. In 2002, the Nonconventional Exploitation Factors Data System (NEFDS) utilized a modified form of this model in order to accurately represent a wider range of materials [Westlund 2002].

2.3.2 Empirical Models
In contrast to the physically-based models, the empirical reflection models were derived to fit a BRDF well, regardless of the physical meaning of the parameters. From this point of view, these models were derived and applied mathematically rather than from analyses of optical and physical phenomena. One of the most common empirical models, and also the simplest, was proposed by Phong in 1975 [Phong 1975]. Only two parameters describe the specular component in this model, and its mathematical simplicity is the reason it is still very popular. As introduced in the previous section, Ward described the imaging gonioreflectometer developed by the Lighting Systems Research Group at Lawrence Berkeley Laboratory. Considering the complexity of physically-based models, Ward proposed an empirical model, which was derived by fitting the measured data from the gonioreflectometer [Ward 1992]. In 1997, starting from the Phong shading model, Lafortune showed that a BRDF could be fitted using sums of cosine lobes [Lafortune 1997]. This model also represented more complex reflection behaviors, including off-specular reflection and retro-reflection.
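The simplicity of the Phong model is easy to see in code. The sketch below, with illustrative parameter names, uses the classic form: a cosine diffuse term plus a specular lobe governed by only two parameters, a strength `ks` and an exponent `n`.

```python
import numpy as np

def phong(n_hat, l_hat, v_hat, kd, ks, n):
    """Minimal sketch of the Phong reflection model for unit vectors
    n_hat (surface normal), l_hat (to the light), v_hat (to the viewer).
    kd, ks, n are the diffuse strength and the two specular parameters."""
    # Mirror reflection of the light direction about the normal
    r_hat = 2.0 * np.dot(n_hat, l_hat) * n_hat - l_hat
    diffuse = kd * max(np.dot(n_hat, l_hat), 0.0)
    specular = ks * max(np.dot(r_hat, v_hat), 0.0) ** n
    return diffuse + specular
```

When the viewer looks exactly along the mirror direction the specular lobe is at its maximum; away from it, the lobe falls off as the n-th power of the cosine, so a larger `n` gives a tighter, glossier highlight.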

Table 2.2. Several common light reflection models used in computer graphics.
Physically-based:
- Torrance-Sparrow Analytical Model [Torrance 1967]
- Cook-Torrance Specular Micro-Facet Model [Cook 1981]
- He-Torrance Comprehensive Analytic Model [He 1991]
- Shirley Coupled Model [Shirley 1997]
- Modified Beard-Maxwell Bidirectional Reflectance Model [Westlund 2002]
Empirical:
- Phong Empirical Specular Model [Phong 1975]
- Ward Anisotropic Model [Ward 1992]
- Lafortune Generalized Cosine Lobe Model [Lafortune 1997]
2.4 Digital Archiving and Realistic Rendering
In order to develop an effective method to realistically render objects for the digital archiving of cultural heritage, it is necessary to develop a practical instrument to measure the reflection properties of these materials and to utilize a simplified yet accurate reflection model. Several different methods for the digital archiving of cultural heritage are reviewed below.
2.4.1 Goniospectral Imaging System at Chiba University
The goniospectral imaging system at the Department of Information and Imaging Sciences of Chiba University was developed originally to reproduce the optical properties of 3D objects [Haneishi 1998, Haneishi 2001]. Tonsho extended this system to record the shape information of an object [Tonsho 2002].

As shown in Figure 2.14, the goniospectral imaging system presented by Nakaguchi [Nakaguchi 2005] included a robot system to control the illumination, a 3D scanner to record the object shape, and a camera with multi-band filters. Different azimuth angles could be measured at a fixed polar angle [Haneishi 2001]. The direction of the camera remained fixed; thus, this goniospectral imaging system had only one degree of freedom. In addition, this system was used to measure the optical properties of paper and cloth samples [Akao 2004].
Figure 2.14. Goniospectral imaging system at Chiba University [Nakaguchi 2005].
2.4.2 Digital Archiving of Art Paintings at Osaka Electro-Communication University
Tominaga [Tominaga 2001] developed a 3-D measurement system at the Department of Engineering Informatics of Osaka Electro-Communication University, which was similar to that

at Chiba University. The main difference was that Tominaga's system focused on digitally archiving art paintings, while the system at Chiba University was used to measure 3D objects. This system is shown in Figure 2.15. A laser range finder was used to record the surface shape of the painting, and the location of the multi-band camera was fixed. Eight different azimuth incident angles could be measured at the fixed polar angle, with the incident direction determined using two specular mirrored balls. Similar to the system at Chiba University, the illumination directions were very limited and there was only one observation direction. In addition, no absolute BRDF values could be measured using these two systems.
Figure 2.15. Digital archiving system for painting objects at Osaka Electro-Communication University [Tominaga 2001].

2.4.3 Digital Archiving of Artifacts at the University of Southern California
The digital archiving measurement system at the University of Southern California Institute for Creative Technologies [Hawkins 2001] is shown in Figure 2.16. It consisted of a semicircular arm three meters in diameter that rotated about a vertical axis through its endpoints. Twenty-seven evenly spaced xenon strobe lights were attached to the arm, and rotating the arm provided 64 directions along the longitude, giving a total of 1728 different lighting directions. The location of the camera was fixed, so there was only one view direction.
Figure 2.16. Digital archiving system for artifacts at the University of Southern California [Hawkins 2001].
To provide different view directions, one possible system was presented by Hawkins [Hawkins 2001], as shown in Figure 2.17. Besides the illumination arc, a camera arc was added. The system allowed the lights and the cameras to be positioned with high precision and

repeatability, resulting in a very efficient system. However, the costs of the light stage apparatus in these two systems were much higher because of the multiple strobe lights.
Figure 2.17. A possible BRDF measurement system with multiple viewing directions.
2.4.4 Electric Display of Artistic Paintings at Sogang University
To effectively display artistic paintings under different illumination conditions, the image acquisition system at the Department of Media Technology of Sogang University [Ju 2002] was developed, as shown in Figure 2.18. In this system, the camera was fixed and the light sources were arranged in a circular arc; only one degree of freedom was provided for the illumination directions.

Figure 2.18. Light arrangement of the image acquisition system at Sogang University [Ju 2002].
2.4.5 Polynomial Texture Maps at HP Laboratories
In 2001, Malzbender at Hewlett-Packard Laboratories presented a new form of texture mapping to reconstruct surface color under varying lighting conditions [Malzbender 2001]. The two devices used to collect the images are shown in Figure 2.19. The object was kept static, and the strobe light sources were located at different positions. The camera was also static, mounted at the apex of the dome, with the samples placed on the floor. The advantage of the fixed camera was that it avoided spatial calibration of the camera and image registration. With the method of polynomial texture mapping, the sample surface could be reproduced by interpolating over lighting direction. The drawbacks of this method were that it was object-specific and that the specular component needed to be reproduced separately.
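Malzbender's published method fits, per pixel, a six-term biquadratic polynomial in the projected light-direction components. The sketch below illustrates that fit with assumed variable names (`lu`, `lv` for the projected light direction); it is a sketch of the published idea, not HP's implementation.

```python
import numpy as np

def fit_ptm_pixel(lu, lv, intensity):
    """Least-squares fit of a 6-term biquadratic polynomial texture map
    for a single pixel. lu, lv: projected light-direction components for
    each captured image; intensity: the observed pixel values."""
    A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])
    coeffs, *_ = np.linalg.lstsq(A, intensity, rcond=None)
    return coeffs

def eval_ptm_pixel(coeffs, lu, lv):
    """Reconstruct the pixel value for a (possibly new) light direction."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 * lu**2 + a1 * lv**2 + a2 * lu * lv + a3 * lu + a4 * lv + a5
```

Once the six coefficients are stored per pixel, relighting under any direction reduces to evaluating the polynomial, which is why interpolation over lighting direction is so cheap; the smooth polynomial, however, cannot capture sharp specular highlights, matching the drawback noted above.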

Figure 2.19. Two devices for collecting the images at Hewlett-Packard Laboratories [Malzbender 2001].

3 Instrument Development
The first stage in this research was to set up a practical instrument for museums to capture the BRDF properties of paint surfaces. This chapter describes how the measurement instrument was set up and simplified, and how the camera was controlled and managed.
3.1 The Gonio-Spectrophotometer Instrument
As introduced in Chapter 2, the ideal instrument for measuring the BRDF properties of materials should provide four degrees of freedom among the light source, the sample holder and the receptors. Beyond this requirement, the instrument should be as compact as possible because of the limited space in many museums. Thus, an imaging-based instrument was proposed and will be developed in the future at the Munsell Color Science Laboratory. The structure of the instrument is shown in Figure 3.1. In this measurement system, the light position had two degrees of freedom through the rotation of arms #2 and #3, both of which were designed to rotate through 360 degrees. The detector was a spectral camera, which provided one degree of freedom; its control arm, arm #1, could also be rotated to any position. Another degree of freedom was provided by moving and rotating the sample. As shown, the sample could be translated vertically and horizontally, rotated through 360 degrees in its own plane, and moved along the direction perpendicular to the sample plane. With this measurement instrument, the

reflection and scattering properties of the sample could be measured over the entire three-dimensional space.
Figure 3.1. The proposed imaging gonio-spectrophotometer to capture the BRDF of artwork at the Munsell Color Science Laboratory at RIT.

3.2 The Simplification of the Measurement Instrument
As reviewed in Chapter 2, most systems developed to digitally archive artifacts had fixed camera locations and varied the illumination position. The advantage of this kind of system is that it avoids the image registration required when the camera direction varies. In addition, to further simplify the measurement system, the illumination was limited to one degree of freedom. Thus, the structure of the system was proposed as shown in Figure 3.2. The camera could be fixed on a tripod and the sample attached to the holder; to avoid calibration problems, the holder was also fixed. In order to capture an image at a 0° illumination angle, there was a small angle between the optical axis of the camera and the sample normal. The illumination could be rotated with the arm through a full 360° in the horizontal plane; since only reflection properties were measured, rotation through 180° was sufficient.
Figure 3.2. The theoretical simplified image acquisition system.

Following the proposal of the simplified system, a practical measurement system was developed at the Munsell Color Science Laboratory, shown in Figure 3.3. Different from the theoretical design, the camera and the sample holder in the practical system were fixed on an optical table to improve the stability of the measurement system.
Figure 3.3. The practical simplified measurement system developed at the MCSL.
3.3 The Analyses of the Light Source
The selection of a light source was based on several important factors that affect the system geometry and measurement accuracy. The following paragraphs focus on the requirements and the performance of the light source used in this research.

According to the ASTM standard [ASTM Standard E ], a light source with collimated or slightly converging radiation can be considered for use in the system. The convergence angle should therefore be as small as possible to decrease the uncertainty introduced by a non-unique angle of incidence. Secondly, illumination uniformity over each small measurement area is required. Thirdly, the light source should be spectrally and radiometrically stable during the whole measurement. In the current illumination system, as shown in Figure 3.4, points A and B define the center and edge points of the sample. It was reasonable to assume that the output power distribution of the fiber optics was Gaussian. Thus, most of the light energy was focused on the center part of the output beam, and the small solid angles dω_A and dω_B could be considered to contribute little to the uncertainty of the angles. Therefore, the incident angle of each pixel was determined from the center beam. Under the same illumination angle, the incident angles i_A and i_B were different; the calculation of the incident angles for pixels at different positions will be introduced in Chapter 5. In addition, the relative spectral power distribution of the light source was measured using a PR650 spectroradiometer and is shown in Figure 3.5.

Figure 3.4. The schematic of the illumination optics of the measurement system.
Figure 3.5. The relative spectral power distribution of the light source.

3.4 Camera Descriptions and Set-up
In the measurement system, a Nikon D1 CCD camera was used. The Camera Control Pro software controlled the camera to capture images of the samples. The interface of the software is shown in Figures 3.6 through 3.11. As shown in each figure, the exposure mode, the storage and mechanical settings, and the image processing were controlled using Camera Control Pro.
Figure 3.6. The control interface of Exposure Mode I for the Nikon D1 CCD camera.

Figure 3.7. The control interface of Exposure Mode II for the Nikon D1 CCD camera.
Figure 3.8. The control interface of Data Storage for the Nikon D1 CCD camera.

54 Figure 3.9. The control interface of Mechanical Control for the Nikon D1 CCD camera. Figure The control interface of Image Processing for the Nikon D1 CCD camera. 38

Figure 3.11. The control interface of Camera Curves for the Nikon D1 CCD camera.
3.4.1 The Specification of the Control of the Camera
The first stage was to set the exposure mode. The manual mode was used and the aperture was set to f/11, yielding good sharpness, a low noise level and appropriate exposure. The shutter speed was determined from the luminance values of the target. Since some targets had high gloss levels, multiple pictures were taken with different exposure times in order to generate high dynamic range radiance maps. For the ISO sensitivity setting, there were four options: 200, 400, 800 and 1600. ISO 200 was used since the light conditions were adequate and it did not increase the electrical noise. In addition, the appropriate white balance control was used. Secondly, the raw image data were saved for future processing. The single-capture mode was used and the focus was controlled manually; the resolution target used to determine the optimal focus is shown in Figure 3.12. Finally, the sharpening option was turned

off and the tone compensation was set to a linear curve, so that the saved images were as raw as possible. The output image format was a 12-bit NEF file, the raw image format defined by Nikon. For later import into MATLAB, each NEF file was processed in Photoshop and saved as a 16-bit TIFF file.
Figure 3.12. The resolution target used to determine the optimal focus.
3.4.2 The Image Processing Procedure
1. The OECF Measurement of the Camera
In order to convert the image RGB values to the relative radiance values used in the light reflection models, the OECF (opto-electronic conversion function) [ISO 14524] of the camera was measured and calculated. The ISO OECF chart was used in the measurement, as

shown in Figure 3.13. In addition, a Halon sample was made and taped above the OECF chart, giving a total of 13 luminance levels for the OECF measurement.
Figure 3.13. The ISO OECF chart used to measure the OECF of the Nikon D1 CCD camera.
Two strobe lights were used as the illumination for the OECF measurement, with the illumination angle set to 45 degrees. Before the measurement, the camera was adjusted to sharp focus. In the measuring procedure, three different exposure times were used to calculate the OECF at a fixed aperture of f/11; they corresponded to overexposed, optimal, and underexposed settings. Under the overexposed setting, the image was captured with the maximum luminance levels clipped. With the optimal exposure setting, the digital count of the maximum luminance level was close to the maximum digital count, but not clipped. With the data from the three exposure levels, the full range of the 16-bit digital counts could be covered. Also, to perform dark correction, three dark images with over-,

optimal and underexposed settings were captured. Furthermore, using the same position and illumination setting, the spectral radiance distributions of the 13 patches were measured using the PR650 spectroradiometer, which also calculated the integrated radiance values of the 13 patches automatically. In the calculation stage, the digital count values of the 13 patches under the three exposure settings were normalized from 0 to 1 by dividing them by 65535. The relative radiance values Rad under the different exposure settings were rescaled according to Eq. (3.1):

Rad_s = (t_o f_s^2) / (t_s f_o^2) · Rad_o    (3.1)

where t is the exposure time, f is the f-number of the camera, and the subscripts s and o stand for the rescaled image and the optimal image, respectively. In this research, the f-number was the same for all the captured images, and only the exposure time changed. Three OECF look-up-tables (LUTs) for the three channels were then linearly interpolated from the measured data, since the middle part of the OECF curve should be very close to a linear relationship, as shown in Figure 3.14. To fit the curves, a gamma value for each of the three channels was calculated using Eq. (3.2):

Rad = (d / d_max)^γ    (3.2)

where Rad is the relative radiance calculated from Eq. (3.1), d represents the digital count value of one channel, and d_max is the maximum digital count value, 65535. The γ value of each channel was calculated using the MATLAB lsqcurvefit non-linear data fitting function.
Figure 3.14. The calculated OECF look-up-tables of the three channels of the Nikon D1 CCD camera.
However, the gamma model was not accurate enough to represent the OECF of the camera. Thus, three LUTs were finally used, which were accurate enough to convert between the digital counts of raw images and relative radiance.
2. The Generation of HDR Radiance Maps
As mentioned before, to capture the images of glossy samples, multiple images of a sample were captured and merged into a high dynamic range (HDR) radiance map. The algorithm used to create HDR images is shown in Eq. (3.3) [Khan 2006].

Rad(i, j) = [ Σ_{e=1}^{N} w(C_e(i, j)) · OECF^{-1}(C_e(i, j)) / t_e ] / [ Σ_{e=1}^{N} w(C_e(i, j)) ]    (3.3)

where Rad(i, j) is the relative radiance value of the pixel at location (i, j) in the image; C_e(i, j) is the digital count value of pixel (i, j) at the e-th exposure setting; N is the number of exposure settings for this image and t_e is the exposure time; OECF^{-1} is the inverse OECF LUT of one channel of the camera, from which the relative radiance value is read based on the digital count value; and w() is a weight function, shown in Eq. (3.4). This weighting function [Reinhard 2005] decreases the contribution of values from underexposed and overexposed images. The hat function used as the weighting function is plotted in Figure 3.15, where C is the pixel value in the image:

w(C) = 1 − |2C − 1|    (3.4)
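The weighted merge of Eq. (3.3) with the hat function of Eq. (3.4) can be sketched as follows. This is a minimal sketch: the linear `inv_oecf` used in the test stands in for the per-channel LUTs built in the previous section, and the aperture term is omitted since the f-number was fixed.

```python
import numpy as np

def hat_weight(C):
    # Hat function of Eq. (3.4): 1 - |2C - 1|, down-weighting the
    # under- and over-exposed ends of the normalized [0, 1] count range
    return 1.0 - np.abs(2.0 * C - 1.0)

def merge_hdr(images, exposure_times, inv_oecf):
    """Weighted HDR merge in the spirit of Eq. (3.3). `images` holds the
    normalized digital counts of the same scene at N exposure settings;
    `inv_oecf` maps counts to relative radiance (the thesis used
    per-channel LUTs; any callable works in this sketch)."""
    num = np.zeros_like(np.asarray(images[0], dtype=float))
    den = np.zeros_like(num)
    for C, t in zip(images, exposure_times):
        C = np.asarray(C, dtype=float)
        w = hat_weight(C)
        num += w * inv_oecf(C) / t   # radiance estimate from this exposure
        den += w                     # accumulated weights
    return num / np.maximum(den, 1e-9)
```

Because each exposure's radiance estimate is divided by its own exposure time before the weighted average, exposures of a constant scene agree with one another, and the hat weight simply decides how much each exposure is trusted per pixel.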

Figure 3.15. The hat function [Eq. (3.4)] used to determine the weight of different pixel values.
3. Color Management of the Camera
In order to establish the color management of the camera, a colorimetry-based method was used to build the relationship between the camera space and CIEXYZ values [Day 2000]. Twenty-eight color patches, shown in Figure 3.16, were selected as the calibration target to determine the 3-by-3 transformation matrix. With the three OECF look-up-tables, the digital count values from the camera were converted to linearized R, G and B signals; the tristimulus values of the pixels were then obtained using the transformation matrix. The calculation is shown in Eq. (3.5).

R = OECF_r^{-1}(d_R / 65535)
G = OECF_g^{-1}(d_G / 65535)
B = OECF_b^{-1}(d_B / 65535)
[X_Est, Y_Est, Z_Est]^T = M · [R, G, B]^T    (3.5)

where the linearized R, G and B values of the 28 color patches were calculated from the three inverse OECF LUTs; d represents the digital count of one channel for the 28 color patches; and the subscript Est indicates that the X, Y and Z values are estimated tristimulus values. The 3-by-3 transformation matrix M was first calculated using Eq. (3.6), in which pinv denotes the pseudo-inverse and the subscript Pat indicates that the measured X, Y and Z values of the 28 patches were used to build the transformation matrix. The measured tristimulus values of the 28 patches were calculated from the spectral reflectance values of the patches, which were measured using GretagMacbeth's SpectroScan spectrophotometer. The coefficients of the transformation matrix were then optimized using the MATLAB fminunc nonlinear optimization function, in which the average CIEDE2000 color difference between the measured and estimated values of the 28 color patches was minimized.

M = [X_Pat; Y_Pat; Z_Pat] · pinv([R; G; B])    (3.6)
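The pseudo-inverse step of Eq. (3.6) is a one-line least-squares fit. The sketch below (Python standing in for the thesis's MATLAB; array shapes are an assumption of this sketch) shows the initial matrix estimate; the subsequent CIEDE2000 refinement is not reproduced here.

```python
import numpy as np

def fit_rgb_to_xyz(rgb_lin, xyz_pat):
    """Initial 3x3 camera matrix via the pseudo-inverse, as in Eq. (3.6):
    M = XYZ_Pat * pinv(RGB). Inputs are 3 x n arrays holding the linearized
    camera signals and the measured tristimulus values of the n calibration
    patches (n = 28 in this work)."""
    return xyz_pat @ np.linalg.pinv(rgb_lin)
```

The pseudo-inverse minimizes the summed squared XYZ error over the patches, which is why the thesis follows it with a perceptual (CIEDE2000) optimization: squared XYZ error is not uniform in perceived color difference.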

Figure 3.16. Twenty-eight color patches used to build and evaluate the color management of the Nikon D1 CCD camera.
The 28 patches were also used as the verification target to evaluate the color reproduction accuracy of the camera. The CIELAB error vector plots and the histogram of the ΔE_00 values of the 28

patches are shown in Figures 3.17 and 3.18, respectively. In Table 3.1, the mean, standard deviation, 90th percentile, and maximum values of ΔE*_ab and ΔE_00 were calculated. According to these results, the color reproduction accuracy of most patches was acceptable. Although the maximum values were large, they do not represent the average accuracy. Since no comparison experiment was performed between real objects and rendered images, this color accuracy was sufficient.
Figure 3.17. The CIELAB error vectors of the 28 patches used to calibrate the Nikon D1 CCD camera.

Figure 3.18. The histogram of ΔE_00 values of the 28 patches used to calibrate the Nikon D1 CCD camera.
Table 3.1. The mean, standard deviation, 90th percentile, and maximum values of ΔE*_ab and ΔE_00 showing the color reproduction accuracy of the 28 patches used to calibrate the Nikon D1 CCD camera. Columns: Mean, Standard Deviation, 90th Percentile, Maximum; rows: ΔE*_ab and ΔE_00.
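The four statistics in Table 3.1 can be computed in a few lines; the sketch below (function name is illustrative, sample standard deviation assumed) shows the summary for an arbitrary list of color differences.

```python
import numpy as np

def color_error_summary(delta_e):
    """Mean, sample standard deviation, 90th percentile, and maximum of a
    set of color differences, matching the columns of Table 3.1."""
    delta_e = np.asarray(delta_e, dtype=float)
    return {
        "mean": float(delta_e.mean()),
        "std": float(delta_e.std(ddof=1)),
        "p90": float(np.percentile(delta_e, 90)),
        "max": float(delta_e.max()),
    }
```

Reporting the 90th percentile alongside the maximum is what supports the remark above: a single large outlier inflates the maximum without moving the percentile much.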

4 Data Collection of Objects
To better discover the reflection and scattering properties of artist paints, a set of representative samples was created and collected. These samples varied in surface shape, paint material, application method and gloss.
4.1 Paint Samples with Uniform Surfaces
First, four samples with uniform surfaces were prepared; images of the four samples are shown in Figure 4.1. For samples (1) and (2), the LENETA opacity chart was used as the substrate, shown in Figure 4.2. This chart is designed with one black stripe in the center and two white stripes at the top and the bottom, so the opacity of a paint sample can be evaluated by calculating the reflectance difference between the paint film over the black and white substrates. In addition, to make the samples as uniform as possible, a BYK-Gardner drawdown applicator with a 10-mil gap was used; at this thickness, the two acrylic paints were opaque. After drying completely, 50-by-50 mm samples were cut from the charts.
(1) (2) (3) (4)
Figure 4.1. The uniform acrylic paints on the opacity chart and on glass.

Figure 4.2. The LENETA opacity contrast chart used as the substrate.
Samples (3) and (4) were painted in 2001 at the Munsell Color Science Laboratory. These neutral gray paints were painted on glass, at a thickness that also made the samples opaque. Of these four samples, samples (1) and (3) had higher gloss levels than the other two. Thus, the uniform samples covered different colors, substrates and gloss levels.

4.2 Paint Samples on Canvas
Since artist paints on canvas are very common, two acrylic paint samples were painted on canvas. The two canvas boards had different texture structures: the canvas used for sample (5) was made of medium-coarse cotton, while the cotton of sample (6) was coarser, with more obvious grain. To show the texture of the canvas, a soft-bristle brush was used to paint the samples, and the two samples were cut from the canvas after drying completely. The paint brushed on sample (5) was the same as that used for sample (1), and samples (2) and (6) used the same matte paint. Therefore, the samples showing canvas texture covered different gloss levels and texture styles.
(5) (6)
Figure 4.3. The acrylic paint samples painted on the two canvas substrates.
4.3 Varnished Samples with Brush Marks
Varnish is also typically applied to paint in order to increase the gloss level and unify the gloss of the entire work of art. Therefore, one sample with varnish over brushed paint was prepared, as shown in Figure 4.4. To make the sample opaque, one layer of drawdown acrylic paint was

applied on the contrast chart at first. After the first layer dried, a brush was used to apply a second layer of paint. Finally, GOLDEN Mineral Spirits Acrylic (MSA) Gloss varnish was applied with a brush. To preserve the surface shape of the brush marks, a very thin layer of varnish was used.

Figure 4.4. The varnished painted sample showing brush marks.

4.4 Impasto Paints

Another consideration for the paint database was to make impasto paint samples. Four impasto samples were made, shown in Figure 4.5. These samples were also painted on canvas; on some parts of the samples, the canvas could still be seen. On the parts with thick paint, the surface shapes were very complicated and differed from each other.

Figure 4.5. Four impasto paints with different colors and complex surface shapes.

4.5 The Gloss Levels of the Samples

To further examine the effect of gloss level on the experimental results, it was necessary to quantify and classify the gloss of the samples. The gloss values of samples (1)-(4) could be

measured using a glossmeter. In this research, the BYK-Gardner micro-TRI-gloss glossmeter was used to measure gloss. According to the ASTM standard, the glossmeter provides three measurement angles for evaluating gloss: 20°, 60° and 85°. Before measurement, the glossmeter was calibrated using a supplied black tile. For the other samples, which lacked uniform surfaces, the gloss levels were determined by visual observation. Thus, the gloss levels of all the samples were determined and classified, as shown in Table 4.1.

Table 4.1. The gloss properties of the paint samples.

Sample Number    Gloss Level    Gloss Value (85°)    Gloss Value (60°)    Gloss Value (20°)
(1)              High
(2)              Low
(3)              Medium
(4)              Low
(5)              Medium
(6)              Low
(7)              High
(8)-(11)         Medium

5 Rendering Algorithms

As introduced in the background chapter, many light reflection models have been developed to model the BRDF in terms of the position of the light source, the viewing direction and the surface shape of the object. In this research, two simple reflection models, the Phong model and the Torrance-Sparrow model, were selected to model the reflection and scattering properties of the paint surfaces. Therefore, in this chapter, the mathematical formulae and computational methods of the two models are introduced and analyzed.

5.1 System Geometry

Before introducing the mathematical models, it is necessary to analyze the light reflection geometry of the current measurement system. Figure 5.1 depicts the light reflection geometry of a complex paint sample surface. Two normal directions are shown in the figure: the sample normal Z and the surface normal N^ of an element dA. The incident and view directions were specified by θ_i and (θ_v, φ_v), respectively. The element surface normal in terms of the sample normal was represented by two tilt angles, θ_n and φ_n. The illumination angle θ_a of the sample surface changed only in the XZ plane. All the angles were defined as either positive or negative, since the same angle magnitudes could occur on either side of the Z-axis. The angles defined in this reflection geometry differ from those in the traditional BRDF specification; the purpose of this definition was to simplify the mathematical calculation. Thus, in the system geometry, the

variable θ_a was known, being read from the illumination arm. In addition, as introduced in Chapter 3, the incident angles differed for pixels at different positions on the sample. Before parameter estimation of the two models, the offset angle θ of each pixel from the illumination angle θ_a was calculated using a uniform gray card. The detailed calculation is described in Section 5.5.

Figure 5.1. Light reflection geometry in terms of surface tilt angles.

5.2 Phong Model

The Phong model [Phong 1975] controls three magnitude parameters and a surface shininess to determine the gonio-radiometric values. Thus, the relative radiance in the Phong model was expressed as a function of the angles depicted in Figure 5.1, shown in Eq. (5.1).

Y(i,j) = Ae + Ad·cosθ_i + As·(cosθ_s)^n
cosθ_i = cos(θ_a − θ − θ_n)·cosφ_n
cosθ_s = cosθ_r·cosθ_v + sinθ_r·sinθ_v·cos(φ_r − φ_v)
θ_r = θ_a − θ − 2θ_n    (5.1)

where Ae, Ad and As are the magnitude parameters of the ambient, diffuse and specular components; φ_r is the angle between the XZ plane and the perfect mirror reflection direction; θ_r is the angle between the sample normal and the projection of the perfect mirror reflection on the XZ plane; θ_s is the angle between the view direction and the perfect mirror reflection direction of the incident light; n describes the measured shininess of the surface; and (i, j) specifies the location of the pixel on the sample, with all per-pixel angles and parameters carrying this index. According to the mathematical formula of the Phong model in Eq. (5.1), with the known variables θ_a and θ, the parameters θ_n, φ_n, θ_v, φ_v, n and the three magnitude values need to be estimated. In addition, Figure 5.2 illustrates the relationship between the n factor and the specular component in the Phong model. It can be seen that as the n factor increases, the specular component falls off more rapidly and the observed gloss of the rendered samples increases.
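The three-term structure named by Eq. (5.1), an ambient term, a Lambertian diffuse term, and a cos^n specular lobe, can be sketched numerically. This is a minimal Python illustration, not the thesis code; the function name and the sample numbers are hypothetical.

```python
import numpy as np

def phong_radiance(theta_i, theta_s, Ae, Ad, As, n):
    # Eq. (5.1) structure: ambient + diffuse (cosine law) + specular lobe.
    # theta_i: incident angle at the facet; theta_s: angle between the view
    # direction and the perfect mirror reflection direction (radians).
    return Ae + Ad * np.cos(theta_i) + As * np.cos(theta_s) ** n

# A larger shininess n makes the lobe fall off faster away from the mirror
# direction, which reads as a tighter, glossier highlight.
off_mirror = np.radians(5.0)                     # 5 degrees off-peak
y_matte = phong_radiance(0.0, off_mirror, 0.02, 0.5, 1.0, 10)
y_gloss = phong_radiance(0.0, off_mirror, 0.02, 0.5, 1.0, 2000)
```

Here y_gloss is far smaller than y_matte at the same off-peak angle, matching the qualitative behavior plotted in Figure 5.2.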

Figure 5.2. The specular component of the Phong model plotted for different values of the n factor.

5.3 Torrance-Sparrow Model

Based on geometrical optics, Torrance and Sparrow [Torrance 1967] derived a theoretical model for roughened surfaces. In this model, the surface element is assumed to consist of small, randomly dispersed mirror-like facets. The model is described by Eq. (5.2):

Y(i,j) = Ad·cosθ_i + As·(D·G·F)/cosθ_vn
cosθ_i = cos(θ_a − θ − θ_n)·cosφ_n
cosθ_vn = cosθ_n·cosθ_v + sinθ_n·sinθ_v·cos(φ_n − φ_v)    (5.2)

where θ_vn is the angle between the viewing direction and the surface normal; D is the standard Gaussian distribution function of the direction of the microfacets; and F is the

Fresnel reflection, which is a function of θ_i and the refractive index n. G is defined as the geometrical attenuation factor and represents the amount of light remaining after shadowing and masking; it is a function of θ_a, θ_n, φ_n, θ_v and φ_v. As shown in Figure 5.3, three cases were considered in the Torrance-Sparrow model to calculate the factor G, as summarized by Blinn [Blinn 1977]. The vector H in the figure is defined as the bisector of the incident light direction L and the view direction V. As illustrated in Figure 5.3, the first case in subplot (a) shows no light being intercepted by the V-shaped grooves. In subplots (b) and (c), either the incident light or the reflected light is intercepted by the grooves, so the masking and shadowing effects occur. Thus, the G factor for the three cases can be calculated with Eq. (5.3):

G_a = 1
G_b = 2·cosθ_nh·cosθ_vn / cosθ_vh
G_c = 2·cosθ_nh·cosθ_i / cosθ_vh    (5.3)

where cosθ_nh and cosθ_vh can be expressed as Eq. (5.4):

cosθ_vh = sqrt[(1 + cos(θ_a − θ − θ_v)·cosφ_v)/2]
cosθ_nh = (cosθ_i + cosθ_vn)/(2·cosθ_vh)    (5.4)
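The half-angle relations of Eq. (5.4) and the three-case minimum of Eq. (5.3) can be checked with a short Python sketch. The function names are hypothetical; the formulas follow the reconstruction above, with all angles in radians.

```python
import math

def halfway_cosines(theta_a, theta, theta_v, phi_v, cos_i, cos_vn):
    # Eq. (5.4): H bisects L and V, so cos(theta_vh) is the half-angle
    # cosine of the L-V separation, and N.H follows from N.L, N.V and V.H.
    cos_lv = math.cos(theta_a - theta - theta_v) * math.cos(phi_v)
    cos_vh = math.sqrt((1.0 + cos_lv) / 2.0)
    cos_nh = (cos_i + cos_vn) / (2.0 * cos_vh)
    return cos_nh, cos_vh

def attenuation_G(cos_nh, cos_vn, cos_vh, cos_i):
    # Eq. (5.3): G_a = 1 (no interception), G_b (reflected light masked),
    # G_c (incident light shadowed); the smallest of the three applies.
    g_b = 2.0 * cos_nh * cos_vn / cos_vh
    g_c = 2.0 * cos_nh * cos_i / cos_vh
    return min(1.0, g_b, g_c)

# Sanity check: at normal incidence and normal viewing, H, N, L and V
# coincide, nothing is masked or shadowed, and G = 1.
cos_nh, cos_vh = halfway_cosines(0.0, 0.0, 0.0, 0.0, 1.0, 1.0)
g_normal = attenuation_G(cos_nh, 1.0, cos_vh, 1.0)
```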

Figure 5.3. Three cases for the calculation of the geometrical attenuation factor, G.

5.4 The Improvement of the Torrance-Sparrow Model

Another facet distribution function, which models the microfacets as ellipsoids of revolution, was proposed by Trowbridge and Reitz [Trowbridge 1975] and provided a better match to experimental data than the distribution in the Torrance-Sparrow model. This facet distribution function was used in this research and is shown in Eq. (5.5):

D = [c² / ((cosθ_nh)²·(c² − 1) + 1)]²    (5.5)

where c is the eccentricity of the ellipsoids. Figure 5.4 depicts how the D factor changes with the value of c: as c decreases, the D factor falls off faster, which also results in a faster change of the gloss component.

Figure 5.4. The D factor plotted for different eccentricities of the ellipsoids.

To better explain how the model parameters relate to the visual appearance of the samples, Figures 5.5 through 5.7 were plotted. Figure 5.5 shows, for samples (5) and (6), the magnitude values of the diffuse component of the three channels and of the specular component of the Phong model: Adr,

Adg, Adb and As. For each of the two samples, only one kind of paint material was applied, so the magnitude values of the diffuse component were the same for all pixels. Figure 5.5 also shows the images of samples (5) and (6) under illumination angles of 0° and -55°. At the 0° illumination angle, both diffuse and specular components were present. Thus, with a high magnitude value of the specular component, a shiny region can be seen on sample (5). In contrast, the magnitude value of the specular component of sample (6) was very small, only one fifth of that of sample (5), so no glossy or highlighted region appears in its image. Therefore, the larger the magnitude value of the specular component, the more glossy the sample appears. Under an illumination angle without a specular component, such as -55° in Figure 5.5, the color of each pixel was determined by the ratio of the diffuse components of the three channels.

In Figures 5.6 and 5.7, the BRDFs (including only the specular components) and images of two selected pixels in sample (8) under two illumination angles are shown, to describe the relationship between the visual appearance of the samples and the parameters n and c. First, the relative radiance values of the two pixels under the two illumination angles are illustrated in Figure 5.6. Pixel 1 had a larger n and a smaller c value than pixel 2. As introduced for the n and c factors above, increasing n or decreasing c makes the specular component diminish faster. Thus, under the 10° illumination angle, the relative radiance value of pixel 1 was much higher than that of pixel 2. Under the other illumination angle, 20°, the relative radiance value of pixel 1 was close to 0, while that of pixel 2 was about 1.5. Furthermore, in Figure 5.7, pixels 1 and 2 fell within the highlight regions of areas 1 and 2 under the 10° illumination angle.

However, in the image under the 20° illumination angle, the highlight could still be found in area 2, but not in area 1.

Figure 5.5. The images of samples (5) and (6), used to show the relationship between the model magnitude parameters and the visual appearance of the images.
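The falloff behavior of the Trowbridge-Reitz distribution of Eq. (5.5), which drives the n/c contrast just described, can be checked numerically. A minimal sketch, with a hypothetical function name and illustrative angle values:

```python
import math

def trowbridge_reitz_D(cos_nh, c):
    # Eq. (5.5): facet distribution for ellipsoids of revolution with
    # eccentricity c; D peaks at 1 when H coincides with the facet normal.
    return (c * c / (cos_nh ** 2 * (c * c - 1.0) + 1.0)) ** 2

# Smaller eccentricity c -> faster falloff away from the facet normal,
# i.e. a narrower, glossier-looking specular lobe (Figure 5.4).
off = math.cos(math.radians(10.0))     # H tilted 10 degrees from the normal
d_narrow = trowbridge_reitz_D(off, 0.2)
d_wide = trowbridge_reitz_D(off, 0.5)
```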

Figure 5.6. The BRDFs (specular components) of two selected pixels in sample (8), used to show the relationship between the relative radiance values under two illumination angles and the parameters n and c.

Figure 5.7. The images of sample (8), including two selected areas, used to show the relationship between the visual appearance of the two areas under two illumination angles and the parameters n and c.

5.5 Parameter Estimation of the Two Models

5.5.1 Estimation of Offset Angles from Illumination Angles

Before parameter estimation of the two models, the offset angle θ of each pixel from the illumination angle θ_a had to be calculated. A uniform diffuse gray card with 0° tilt angles, Gray 6.5 Color-aid paper, was used to estimate θ. The flowchart of the calculation and the image of the gray card are shown in Figures 5.8 and 5.9, respectively. As shown in subplot 1 of Figure 5.8, a series of images of the gray card under different illumination angles was captured first. Then, for each pixel, the illumination angles including only diffuse components were separated from the other illumination angles, since the gray card was not a perfect reflecting diffuser. One simple way to do this was to assume that there was no specular component at large illumination angles; it was then easy to set up a threshold illumination angle for one kind of material, which was used to determine the diffuse illumination angles. In addition, a start value of θ, θ_0, was determined from the estimated illumination angle of the highlight peak, θ_ah. When θ_v and φ_v were equal to 0, θ would be half of θ_ah. Because the values of θ_v and φ_v were very small, it was reasonable to use half of θ_ah as θ_0, as shown in Eq. (5.6):

θ_0(i,j) = θ_ah(i,j)/2 = mean(selected angles)(i,j)/2    (5.6)

In the above equation, the selected angles were the angles including specular components, under which the relative radiance values were higher than the 80th percentile of all the values. For example, in subplot 2 of Figure 5.8, five angles marked with blue circles were selected, and

θ_0 was equal to 0. Thirdly, with the threshold angle and θ_0, the diffuse illumination angles could be determined using Eq. (5.7):

Diffuse Illumination Angles = {θ_a : |θ_a − θ_0(i,j)| > 35°}    (5.7)

According to Eq. (5.1), the start values of the magnitude parameters of the ambient and diffuse components, Ae_0 and Ad_0, were determined using linear regression, since θ_n and φ_n were equal to 0. Finally, as shown in Figure 5.8, the values of Ae, Ad and θ were optimized using the nlinfit function in MATLAB. The objective of the optimization was to minimize the RMS error of the relative radiance between the measured and estimated values; this objective was used for all the optimizations in this research. Thus, for each pixel, the offset angle θ from the illumination angle θ_a was obtained. The distribution of the offset angles across pixels is shown in Figure 5.10: the values of the offset angles were positive on the left part of the sample and negative on the right part.
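The per-pixel flow just described, pick θ_0 from the highlight peak (Eq. 5.6), keep only far-from-peak diffuse angles (Eq. 5.7), then fit Y = Ae + Ad·cos(θ_a − θ), can be sketched in Python. The thesis used MATLAB's nlinfit; here a coarse grid search over θ with a linear least-squares solve for Ae and Ad stands in for it, and all numbers in the synthetic check are hypothetical.

```python
import numpy as np

def estimate_offset_angle(theta_a_deg, Y):
    # Eq. (5.6): angles whose radiance exceeds the 80th percentile hold the
    # specular peak; the start offset theta_0 is half their mean.
    theta_a_deg = np.asarray(theta_a_deg, float)
    Y = np.asarray(Y, float)
    peak = theta_a_deg[Y >= np.percentile(Y, 80)]
    theta0 = peak.mean() / 2.0
    # Eq. (5.7): angles far from the peak are treated as purely diffuse.
    diffuse = np.abs(theta_a_deg - theta0) > 35.0
    ta, yd = np.radians(theta_a_deg[diffuse]), Y[diffuse]
    best = None
    for theta in np.radians(np.arange(theta0 - 10.0, theta0 + 10.01, 0.1)):
        # Linear regression for Ae, Ad at this candidate offset theta.
        X = np.column_stack([np.ones_like(ta), np.cos(ta - theta)])
        coef = np.linalg.lstsq(X, yd, rcond=None)[0]
        rms = float(np.sqrt(np.mean((X @ coef - yd) ** 2)))
        if best is None or rms < best[0]:
            best = (rms, coef[0], coef[1], float(np.degrees(theta)))
    return best[1], best[2], best[3]    # Ae, Ad, offset angle (degrees)

# Synthetic pixel: true Ae = 0.05, Ad = 0.8, offset = 3 degrees, plus a
# narrow specular spike near theta_a = 6 degrees.
angles = np.arange(-60.0, 61.0, 5.0)
Ysyn = (0.05 + 0.8 * np.cos(np.radians(angles - 3.0))
        + 2.0 * np.exp(-((angles - 6.0) / 4.0) ** 2))
ae_hat, ad_hat, th_hat = estimate_offset_angle(angles, Ysyn)
```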

Figure 5.8. Flowchart of the calculation of the offset angle θ from the illumination angle θ_a.

Figure 5.9. The uniform gray card used to correct the non-uniformity of the incident angle.

Figure 5.10. The offset angles from the illumination angle θ_a for all pixels.

Based on the value of θ of each pixel and Eqs. (5.1) and (5.2), the parameters of both the Phong and Torrance-Sparrow models were estimated according to the flowchart shown in Figure 5.11. The detailed workflow used to estimate the model parameters for the different kinds of samples is given in the following three sections.

Figure 5.11. Flowchart of the estimation of the model parameters.

5.5.2 Parameter Estimation for the Uniform Samples

As described in Chapter 4, four uniform samples, samples (1)-(4), were used in the current research. For these samples, the tilt angles θ_n and φ_n were equal to 0. Thus, the parameters of the uniform samples that needed to be estimated are shown in Table 5.1. Although Eq. (5.2) contains no ambient factor Ae, this factor was included in the final calculation for the Torrance-Sparrow model to minimize the effect of the ambient term on the comparison of the two models. Another advantage of using Ae was that the same optimization results for the diffuse component could be used for both models; thus, only one optimization was required, and the calculation time was reduced. In addition, in this research, the refractive index n of acrylic resin, 1.5, was used.

Table 5.1. The parameters requiring estimation in each model for the uniform samples.

Model               Magnitude     Offset Angle    View Direction    Gloss Change    Refractive Index
Phong               Ae, Ad, As    θ               θ_v, φ_v          n               -
Torrance-Sparrow    Ae, Ad, As    θ               θ_v, φ_v          c               n = 1.5

The parameter θ calculated from the gray card image sequence was not used in the parameter estimation for the uniform samples. For the most accurate estimation, θ was recalculated for each pixel of each uniform sample. First, the diffuse and specular components were separated, and then the values of Ae, Ad and θ were estimated. The calculation method was exactly the same as that used in the estimation of the offset angle θ. As an example, the separation result for one pixel is shown in Figure 5.12.

Figure 5.12. An example pixel used to show the separation of the diffuse and specular components.

For the estimation of the specular component, the first step was to estimate As. The pixel of the sample with the maximum relative radiance value was used for this. When As was estimated, the start value of φ_v greatly affected the estimation: increasing the start value of φ_v increased the estimated value of As. Therefore, φ_v was set to 0. Although the resulting value of As might not be the true value, the final image rendering was affected very little, because even if the value of As changed, the optimization of the viewing angles made the estimated radiance values best fit the measured values. Then, using either the fmincon or the nlinfit optimization

function, the parameters As, θ_v and n (or c) were estimated at the same time. The start values of As and θ_v for the Phong model, As_0 and θ_v0, were determined using Eq. (5.8):

speY(i_max, j_max) = Measured Data(i_max, j_max) − (Ae + Ad·cosθ_i)(i_max, j_max)
As_0 = max[speY(i_max, j_max)] = max[As·(cosθ_s)^n](i_max, j_max)
θ_v0(i,j) = (θ_a − θ(i,j))_max    (5.8)

In Eq. (5.8), the subscript (i_max, j_max) means that the pixel of the sample with the maximum relative radiance value was used in the estimation. According to Eq. (5.1), the specular component speY of this pixel was obtained by subtracting the diffuse and ambient components from the measured data. With the estimated diffuse parameters of this pixel, As_0 should be the maximum value of the specular component, since cosθ_s should equal 1 at this incident angle. The start value θ_v0 was also determined in the equation: based on the mathematical derivation, (θ_a − θ(i,j))_max is the incident angle at which the specular component has its maximum value. This formula was used for all the pixels.

For the Torrance-Sparrow model, As_0 and θ_v0 were estimated using Eq. (5.9):

speY(i_max, j_max) = Measured Data(i_max, j_max) − (Ae + Ad·cosθ_i)(i_max, j_max)
As_0 = max[speY(i_max, j_max)] = max[As·(G·F)/cosθ_vn](i_max, j_max)
θ_v0(i,j) = (θ_a − θ(i,j))_max    (5.9)

θ_v0 was evaluated first based on the equation. Then, similarly to the Phong model, As_0 was determined as the maximum value of the specular component of the pixel with the maximum relative

radiance value. The D factor is equal to 1 at the angle at which the specular component has its maximum value.

The simplest way to calculate the start value of the parameter n (or c) was to use the illumination angle at which the relative radiance value equaled half of the peak value. In Figure 5.13, the calculation of the parameter n_0 of sample (3) is illustrated as an example. In the top subplot, with the calculated values of As_0 and θ_v0, the threshold angle θ_T was determined as 2.5°, at which the relative radiance value equaled half of the peak value. At the illumination angle θ_T, the interpolated value of (cosθ_s)^n was the closest to half of the maximum (cosθ_s)^n value. In this calculation, since the measured data were limited, linearly interpolated values were used: the relative radiance value was interpolated at every 0.5° of illumination angle within the range of measured illumination angles. In the bottom subplot of Figure 5.13, a two-dimensional look-up table was used to determine the n_0 value. In the table, the value of (cosθ_s)^n changes with the n value along the column index and with the illumination angle along the row index. The first step was to find the (cosθ_s)^n value closest to the half-peak value at the illumination angle θ_T, and then to determine the n value, here 2000, from the corresponding column index. For the Torrance-Sparrow model, the overall calculation procedure was the same as for the Phong model, the difference being that (cosθ_s)^n was replaced by the D factor.

Finally, with the calculated start values As_0, θ_v0 and n_0 (or c_0), the parameters As, θ_v and n (or c) were optimized. The start values of the three parameters for each model for sample (3) are shown in Table 5.2. In the table, lb and ub represent the lower and upper bounds of the variables. If the nlinfit function was used, no bounds were set for the optimization.
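The start-value extraction of Eq. (5.8) amounts to subtracting the fitted ambient and diffuse terms from the measured curve of the brightest pixel and locating the residual maximum. A Python sketch with a hypothetical function name and illustrative synthetic numbers:

```python
import numpy as np

def specular_start_values(theta_a_deg, Y, Ae, Ad, theta_deg):
    # Eq. (5.8) sketch: isolate the specular component speY of the brightest
    # pixel by subtracting the ambient and diffuse terms, then take its
    # maximum as As0 and the peak's (theta_a - theta) as theta_v0.
    theta_a = np.radians(np.asarray(theta_a_deg, float))
    spe = np.asarray(Y, float) - (Ae + Ad * np.cos(theta_a - np.radians(theta_deg)))
    k = int(np.argmax(spe))
    return float(spe[k]), float(theta_a_deg[k] - theta_deg)

# Synthetic brightest pixel: a cos^200 lobe of magnitude 1.5 centered at an
# 8-degree illumination angle on a flat (theta = 0) pixel.
a = np.arange(-30.0, 31.0, 2.0)
Ysyn = 0.05 + 0.8 * np.cos(np.radians(a)) + 1.5 * np.cos(np.radians(a - 8.0)) ** 200
As0, theta_v0 = specular_start_values(a, Ysyn, 0.05, 0.8, 0.0)
```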

Figure 5.13. The calculation of n_0 for the pixel with the maximum relative radiance value in sample (3).
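The look-up-table step of Figure 5.13 can be sketched in simplified form: rather than a full angle-by-n table, one column of lobe values at the half-peak angle θ_T is enough to pick the candidate n whose falloff matches. Function name and grid are hypothetical.

```python
import math
import numpy as np

def n0_from_half_peak(theta_T_deg, n_grid=None):
    # Simplified stand-in for the 2-D LUT of Figure 5.13: for each candidate
    # n, the normalized lobe value at theta_T is cos(theta_T)^n (the peak is
    # 1); the n whose value is closest to 0.5 becomes the start value n0.
    if n_grid is None:
        n_grid = np.arange(50, 8001, 10)      # candidate shininess values
    column = np.cos(np.radians(theta_T_deg)) ** n_grid
    return int(n_grid[int(np.argmin(np.abs(column - 0.5)))])

# If the lobe falls to half its peak 2.5 degrees off the mirror direction,
# the recovered n0 satisfies cos(2.5 deg)^n0 ~ 0.5.
n0 = n0_from_half_peak(2.5)
```

The same search applies to the Torrance-Sparrow start value c_0 by tabulating the D factor instead of (cosθ_s)^n.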

Table 5.2. The start values of the optimization used to estimate As for the uniform sample (3).

Model               Setup    As                θ_v (°)              n or c
Phong               start                                           2000
                    lb       29.9 (start/2)    (start − 10)         1
                    ub       (start × 2)       (start + 10)         8000 (start × 4)
Torrance-Sparrow    start
                    lb       (start/2)         (start − 10)         0
                    ub       (start × 2)       7.45 (start + 10)    (start × 4)

For the other pixels of the samples, without the maximum relative radiance value, the half-peak value could not be used to determine the value of n_0 (or c_0), because the value of φ_v0 was unknown. Since the parameter n (or c) is related to how fast the specular component falls off from the highlight peak, the boundary illumination angles of the specular component were used instead; that is, the specular component becomes 0 at the boundary illumination angle. In the real calculations, the specular component might not be perfectly 0, so if the value of the specular component was small enough, the corresponding illumination angle was considered to have no specular component. The top subplot in Figure 5.14 shows the two boundary illumination angles of one example pixel in sample (3). As described in the previous paragraph, for each of the boundary illumination angles, the 2-D LUT was used to calculate one n_0 value, and the average of the two n_0 values was used. For the example pixel in Figure 5.14, the value of n_0 was determined as 113. For the

pixels without the maximum relative radiance value, the values of c_0 in the Torrance-Sparrow model were determined using the same method.

Figure 5.14. The calculation of n_0 for one example pixel in sample (3).

With the estimated θ_v0 and n_0 (or c_0), the value of φ_v0 was calculated using a 1-D LUT. Figure 5.15 shows how the difference between the estimated and measured specular components changes with the candidate value of φ_v0; the value with the smallest difference was used as φ_v0.

Figure 5.15. The calculation of φ_v0 for one example pixel in sample (3).

In Table 5.3, the start and bound values of the three parameters for one example pixel in sample (3) are shown. With these, all the required parameters of each pixel were finally estimated. The optimization results for the example pixel are shown in Table 5.4. In the table, Ae and Ad are the average values over all the pixels, so Ae, Ad and As were the same for all the pixels, and

the parameter Ad includes the values for the three channels. In addition, the BRDFs calculated for the two models with the optimized parameters are shown in Figure 5.16.

Table 5.3. The start values of the optimization used to estimate the viewing angles and n (or c) in each model for an example pixel in the uniform sample (3).

Model               Setup    θ_v (°)    φ_v (°)    n or c
Phong               start
                    lb
                    ub                  (start + 10)    452
Torrance-Sparrow    start
                    lb
                    ub                  (start + 10)

Table 5.4. The optimized parameters in each model for an example pixel in the uniform sample (3).

Model               Ae    Ad (R, G, B)    As    θ (°)    θ_v (°)    φ_v (°)    n or c
Phong
Torrance-Sparrow

Figure 5.16. The optimization results of the two models for one example pixel in sample (3).

5.5.3 Parameter Estimation for the Samples with Simple Surface Shapes

Because the surface shapes of samples (5)-(7) were made by the canvas surface or brush marks, they were classified as samples with simple surface shapes. For these samples, all the parameters requiring estimation are shown in Table 5.5.

Table 5.5. The parameters requiring estimation for the samples with simple surface shapes.

Model               Magnitude     Offset Angle    View Direction    Tilt Angle    Gloss Change    Refractive Index
Phong               Ae, Ad, As    θ               θ_v               θ_n, φ_n      n               -
Torrance-Sparrow    Ae, Ad, As    θ               θ_v               θ_n, φ_n      c               n = 1.5

According to Eqs. (5.1) and (5.2), the small-area tilt angle of each pixel was estimated using the diffuse component. However, because the illumination angles changed only in the XZ plane, only the tilt angle in the XZ plane (Figure 5.1), θ_n, was estimated using the diffuse component. In addition, for these three samples, only one kind of paint material was applied, so the values of Ad should be the same for all the pixels. Thus, theoretically, if the values of Ad·cos(φ_n) of all the pixels were obtained, the maximum value could be used as Ad (corresponding to φ_n equal to 0), and then φ_n of each pixel could be estimated. However, due to the illumination non-uniformity and the fitting errors, the values of φ_n could not be calculated using this method. As shown in Figure 5.17, the estimated Ad values of different pixels of the red channel of the gray card differed, although φ_n was equal to 0 for every pixel. Therefore, for samples (5)-(7), the average value of Ad·cos(φ_n) over all the pixels was used as the Ad value of each channel for all the pixels, and the value of φ_n for each pixel was estimated from the specular component. Although this method lowered the estimation accuracy, it decreased the amount of data storage. In addition, it resulted in acceptable estimation accuracy of the diffuse components for most pixels, as will be shown in the results of the psychophysical experiment described in Chapter 7.

Figure 5.17. The histogram of the Ad values of all the pixels of the gray card.

Finally, the parameters Ae, Ad and θ_n were estimated using the diffuse component. The calculation workflow was the same as that used in the estimation of the offset angle θ from the gray card. The only difference was that the θ calculated from the gray card was used, so Eq. (5.10) was used to determine the diffuse angles instead of Eq. (5.7):

Diffuse Illumination Angles = {θ_a : |θ_a − θ(i,j) − θ_n0(i,j)| > 35°}    (5.10)

In the above equation, θ_n0 is the start value of θ_n, determined using Eq. (5.6). The start values of the viewing angle were then calculated using Eq. (5.11):

Phong: θ_v0(i,j) = [θ_a − θ(i,j) − 2θ_n]_max
Torrance-Sparrow: θ_v0(i,j) = [2θ_n − (θ_a − θ(i,j))]_max    (5.11)

The start and bound values used to estimate As for sample (5) are shown in Table 5.6. The pixel of the sample with the maximum relative radiance value was used for this. Because the start values

of φ_n and φ_v greatly affected the estimation of As, the values of these two angles were set to 0. The method and procedure used to calculate As_0 were the same as those used for the uniform samples. Then, using either the fmincon or the nlinfit optimization function, the parameters As, θ_v and n (or c) were estimated at the same time.

Table 5.6. The start values of the optimization used to estimate As for sample (5), with a simple surface shape.

Model               Setup    As    θ_v (°)    n or c
Phong               start
                    lb
                    ub
Torrance-Sparrow    start
                    lb
                    ub

With the estimated As value, the start value θ_v0 of each pixel was calculated using Eq. (5.11). The value of n_0 (or c_0) was calculated using the method described for the uniform samples, and φ_n0 was calculated using a 1-D LUT, just like the calculation of φ_v0 for a uniform sample. The viewing angle θ_v, the tilt angle φ_n and the n (or c) value were then obtained by optimization. Because both φ_n and φ_v lie in the YZ plane, these two values could not be estimated together; therefore, since φ_v was a small value, it was set to 0. In Table 5.7, the start and bound values of the three parameters are shown for one example pixel in sample (5).

The optimization results of the example pixel are shown in Table 5.8, and the optimization results of the BRDFs of the two models are shown in Figure 5.18.

Table 5.7. The start values of the optimization used to estimate the viewing and tilt angles and n (or c) in each model for an example pixel in sample (5), with a simple surface shape.

Model               Setup    θ_v (°)    φ_n (°)    n or c
Phong               start
                    lb
                    ub
Torrance-Sparrow    start
                    lb
                    ub

Table 5.8. The optimized parameters in each model for an example pixel in sample (5), with a simple surface shape.

Model               Ae    Ad (R, G, B)    As    θ (°)    θ_n (°)    θ_v (°)    φ_n (°)    n or c
Phong
Torrance-Sparrow

Figure 5.18. The optimization results of the two models for one example pixel in sample (5).

5.5.4 Parameter Estimation for the Samples with Complicated Surface Shapes (Impasto Samples)

As introduced in Chapter 4, the impasto samples (8)-(11) had very complicated surface shapes. For these samples, all the parameters requiring estimation are shown in Table 5.9.

Table 5.9. The parameters requiring estimation in each model for the samples with complicated surface shapes (the impasto samples).

Model               Magnitude    Offset Angle    Tilt Angles    Gloss Change    Refractive Index
Phong               Ad, As       θ               θ_n, φ_n       n               -
Torrance-Sparrow    Ad, As       θ               θ_n, φ_n       c               n = 1.5

First, the parameters Ad and θ_n were estimated according to the method used for the samples with simple surface shapes. The only difference for the samples with complicated surface shapes was that the parameter Ae was not used for either model. Because multiple layers of paint were applied to these impasto samples, subsurface effects of the complicated surfaces could introduce a non-uniform diffuse contribution [Hanrahan 1993]. Therefore, for the pixels with complicated surface shapes, the diffuse components did not follow a cosine curve, but a more complicated curve; if Ae was included, both Ae and Ad would be estimated incorrectly. Figure 5.19 shows an example pixel with a complicated diffuse component, along with the angles used to estimate the parameters of the diffuse component. If both Ae and Ad were included for this pixel, the estimated values of Ae and Ad were -12 and 13, which are obviously wrong. If Ae was excluded from the estimation, the value of Ad was 1.7. Therefore, in the stage of estimating the diffuse component for the samples with complicated surface shapes, only Ad and θ_n were estimated for each pixel.

Figure 5.19. An example pixel used to show the complicated diffuse component of sample (8).
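The failure mode described above, where adding an ambient term to a non-cosine diffuse curve drives the fit to nonsensical values, can be reproduced with a small Python sketch. The curve shape and all numbers here are hypothetical, chosen only to mimic a diffuse lobe steeper than a cosine.

```python
import numpy as np

# Hypothetical impasto-like pixel: a "diffuse" curve steeper than a cosine,
# standing in for subsurface effects (here cos^4 scaled to peak at 1.7).
angles = np.radians(np.arange(-60.0, 61.0, 5.0))
y = 1.7 * np.cos(angles) ** 4

# Fit WITH an ambient term: regression on [1, cos(theta)] pushes Ae negative
# to compensate for the steep falloff, so both estimates are distorted.
X = np.column_stack([np.ones_like(angles), np.cos(angles)])
ae, ad_with_ambient = np.linalg.lstsq(X, y, rcond=None)[0]

# Fit WITHOUT the ambient term: a single-coefficient regression on
# cos(theta) stays in a physically sensible range.
c = np.cos(angles)
ad_alone = float(c @ y / (c @ c))
```

As in the thesis's -12/13 example, the ambient coefficient turns negative and the diffuse magnitude is inflated, while dropping Ae keeps Ad reasonable.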

θ_n0(i,j) = (θ_a − θ(i,j))_max / 2    (5.12)

Taking sample (8) as the example with a complicated surface shape, the start and bound values used to estimate As are shown in Table 5.10. The method used to calculate As_0 was the same as that used for the uniform samples. The value of θ_n needed to be estimated again to fit the specular component better, so the value of θ_v was assumed to be 0; the formula used to calculate θ_n0 is shown in Eq. (5.12). In addition, because the start values of φ_n and φ_v greatly affected the estimation of As, these two angles were also set to 0. In Figure 5.20, an example pixel is used to show the difference between the value of θ_n estimated from the diffuse component and that estimated from the specular component. The left subplot shows the fitting result obtained with θ_n estimated from the specular component; the right subplot shows the fitting result obtained with θ_n estimated from the diffuse component. Although both fits were poor for some angles, the left one was still better than the right one. Therefore, for each pixel, based on the estimated value of As, the values of θ_n, φ_n and n (or c) were optimized.

Figure 5.20. The fitting results of the Torrance-Sparrow model using different θ_n values.

Table 5.10. The start values of the optimization used to estimate As in each model for sample (8), with a complicated surface shape (an impasto sample).

Model               Setup    As    θ_n (°)    n or c
Phong               start
                    lb
                    ub
Torrance-Sparrow    start
                    lb
                    ub

The start and bound values of the three parameters were calculated using the method described for the uniform samples. In Table 5.11, the start values of one example pixel in sample (8) are shown. The optimization results for this pixel are shown in Table 5.12, and the optimized fitting of the BRDF is shown in Figure 5.21.

Figure 5.21. The optimization results of the two models for one example pixel in sample (8).

Table 5.11. The start values of the optimization used to estimate the tilt angles and n (or c) in each model for an example pixel in sample (8), with a complicated surface shape (an impasto sample).

Model               Setup    θ_n (°)    φ_n (°)    n or c
Phong               start
                    lb
                    ub
Torrance-Sparrow    start
                    lb
                    ub

Table 5.12. The optimized parameters in each model for an example pixel in sample (8), with a complicated surface shape (an impasto sample).

Model               Ad (R, G, B)    As    θ (°)    θ_n (°)    φ_n (°)    n or c
Phong
Torrance-Sparrow

Based on the estimation results for the parameters Ad and As, the ratios of these two parameters were calculated and are shown in Table 5.13. For samples (8)-(11), the values of Ad differed from pixel to pixel, so the average values of Ad were used. The ratio values indicate the relative gloss levels of the samples.

Table 5.13. The estimated ratios of the specular component to the diffuse component. Columns: Phong model As/Ad, Torrance-Sparrow model As/Ad; rows: samples (1)-(11).

According to the description above, both the fmincon and nlinfit non-linear optimization functions were used to estimate the specular component. At the stage of comparing the two models, the function fmincon was used. Its advantages were that several variables were optimized at one time and higher accuracy was obtained. However, with this function the optimization searches along each variable in every direction, so the searches and the function evaluations at each iteration took some time. Since the optimization was performed pixel by pixel, one image took a long time to finish. To improve the

calculation speed, the function nlinfit was used for the optimization at the stage of minimizing the number of measurements. Although the accuracy was not as high as the results from the fmincon function, the final results were still visually acceptable. These results will be shown in Chapter 7.
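The speed/accuracy trade-off between a general multi-variable optimizer and a faster per-pixel fit can be illustrated with a minimal Python stand-in for the MATLAB fmincon/nlinfit workflow. The Phong-style lobe, the synthetic pixel, and the coarse exponent scan below are all assumptions for illustration, not the thesis's actual fitting code:

```python
# Sketch (stand-in for the MATLAB fmincon/nlinfit fits): per-pixel estimation
# of the Phong specular exponent n by minimizing the RMS error between
# measured and modeled radiance.  A coarse 1-D scan replaces the real
# nonlinear optimizer purely for illustration.
import math

def phong_specular(theta_i_deg, As, n, theta_peak_deg=0.0):
    """Phong-style specular lobe: As * cos(theta_i - theta_peak)^n."""
    c = math.cos(math.radians(theta_i_deg - theta_peak_deg))
    return As * max(c, 0.0) ** n

def fit_exponent(angles_deg, measured, As, candidates=range(1, 201)):
    """Return the exponent n (from `candidates`) with the lowest RMS error."""
    def rms(n):
        errs = [phong_specular(a, As, n) - m for a, m in zip(angles_deg, measured)]
        return math.sqrt(sum(e * e for e in errs) / len(errs))
    return min(candidates, key=rms)

# Synthetic pixel: data generated with n = 50 should be recovered exactly.
angles = [-10, -6, -4, -2, 0, 2, 4, 6, 10]
data = [phong_specular(a, 1.0, 50) for a in angles]
print(fit_exponent(angles, data, 1.0))  # 50
```

Looping such a fit over every pixel of an image is exactly what made the fmincon approach slow, and why the lighter nlinfit-style fit was preferred once accuracy was shown to be visually acceptable.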

6 Model Evaluation

With the workflow of parameter estimation, the parameters of the two models were obtained. Thus, the radiance maps and images of the measured sample under any illumination angle could be calculated. In this research, both physically-based metrics and psychophysical techniques were used to compare the accuracy of the two models. In this chapter, the evaluation results of the two methods are presented and analyzed.

6.1 Physically-Based Evaluation

The most direct evaluation of the rendering accuracy was to calculate the difference in the relative radiance values or digital counts between the real and rendered images. The fitting results of the red channel of one pixel from five samples with different gloss levels are shown in Figures 6.1, 6.2, 6.4, 6.5 and 6.7. In addition, the RMS (root mean square) values of the relative radiance errors between the measured and fitted data are shown in Tables 6.1 and 6.2, where a deeper background color in a row indicates a higher gloss level for that sample. In Table 6.1, the RMS values were calculated for the example pixels of five samples; Table 6.2 shows the average RMS values over all pixels of all the measured samples. In the column labeled All, the data were calculated over all illumination angles. The illumination angles under which most pixels included only the diffuse component were separated out as diffuse angles; the other angles were treated as specular angles. For the data in the Specular or Diffuse columns, the calculations included only the data at the specular or diffuse angles, respectively.
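The All / Specular / Diffuse RMS metric can be sketched in a few lines. The angle partition and the sample values below are invented for illustration and are not the thesis's measured data:

```python
# Sketch: RMS of relative radiance errors, computed over all angles and
# separately over "specular" and "diffuse" angle subsets, mirroring the
# All / Specular / Diffuse columns of Tables 6.1 and 6.2.  The angle split
# and data used here are assumptions for illustration.
import math

def rms_error(measured, fitted):
    errs = [m - f for m, f in zip(measured, fitted)]
    return math.sqrt(sum(e * e for e in errs) / len(errs))

def rms_by_angle_group(angles, measured, fitted, specular_angles):
    spec = [i for i, a in enumerate(angles) if a in specular_angles]
    diff = [i for i, a in enumerate(angles) if a not in specular_angles]
    pick = lambda seq, idx: [seq[i] for i in idx]
    return {
        "All": rms_error(measured, fitted),
        "Specular": rms_error(pick(measured, spec), pick(fitted, spec)),
        "Diffuse": rms_error(pick(measured, diff), pick(fitted, diff)),
    }

angles = [-45, -6, 0, 6, 45]
measured = [0.10, 0.80, 1.00, 0.75, 0.12]
fitted = [0.10, 0.70, 0.90, 0.80, 0.12]
out = rms_by_angle_group(angles, measured, fitted, specular_angles={-6, 0, 6})
print(round(out["Diffuse"], 6))  # 0.0
```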

6.1.1 Computational Evaluation for Matte Samples

As illustrated in Figures 6.1 and 6.2, the fitting results of the two models for the matte samples (4) and (6) were compared. Combined with the data in Table 6.1, it can be seen that the fitting accuracies of the two models were almost the same. The RMS values in Table 6.1 also show very similar fitting results for both the specular and diffuse components. As shown in Figure 6.3, the real captured image and the rendered images from the two models of sample (6) under a -10 degree illumination angle were compared. There was almost no visual difference between the two rendered images, and both show good prediction of the real image. Thus, the overall performances of the two models for the matte samples were very similar, and both provided satisfactory results.

Figure 6.1. The fitting results of the two models for sample (4), the matte neutral gray uniform sample painted on glass.

Figure 6.2. The fitting results of the two models for sample (6), the matte sample painted on canvas.

Table 6.1. The RMS values of relative radiance errors for one pixel of five samples. Columns: Sample Number; All, Specular and Diffuse, each with T-S and Phong sub-columns. Rows: (4) Matte, (6) Matte, (5) Middle Gloss, (7) High Gloss, (1) High Gloss. (Row shading legend: matte, middle-gloss and high-gloss samples.)

Real Image | Phong Model | Torrance-Sparrow Model
Figure 6.3. The real image and the two rendered images of sample (6) under a -10° illumination angle.

6.1.2 Computational Evaluation for Glossy Samples

Unlike the matte samples, the samples with medium and high gloss levels showed large differences between the two models. For the angles contained within the black ellipses in Figures 6.4 and 6.5 (enlarged in the two bottom subplots), although the Torrance-Sparrow model did not provide a perfect fit, its fitting results were much better. This indicates that both models tended to over-predict the shininess of the sample, but the simple Phong model over-predicted it severely. This result agrees with the experimental results of Tonsho [Tonsho 2002]. In Table 6.1, it can be seen that the prediction of the Phong model for sample (7) was much worse than for sample (5), suggesting that the higher the gloss level of the sample, the more severe the over-prediction.

When the high-dynamic-range images of the samples with higher gloss levels were rendered on the display monitor, some pixels were still clipped. If there were no clipping in the images, the shadow details and colors of some parts would be mostly lost, which would give a very poor visual result; therefore, an appropriate clipping function was used in order to optimize the displayed image [Johnson 2003]. Thus, to better fit the data for sample (1), the sample with the highest gloss level, some clipped data (digital counts clipped to 255) in the output image were excluded from the optimization data for both models. The fitting results are shown in Figure 6.7. The two points in the figure with the highest relative radiance values were two clipped data points; for these two illumination angles, relative values greater than 15 were considered clipped. Although there were great differences between the real and estimated radiance values, both the estimated and the real digital count values for these two angles were 255. Because there was no error between the real and rendered images for these two illumination angles of this pixel, the RMS contributions of the two points were set to zero. Thus, the optimized n (or c) value was smaller than the true value, but it improved the fitting accuracy for the values without overexposure. From the two bottom subplots of Figure 6.7, even without the clipped data in the optimization, the Torrance-Sparrow model still showed much better fitting results. In addition, Figures 6.6 and 6.8 show the real image and the two rendered

images of samples (7) and (8) under particular illumination angles. It is obvious that the rendered images from the Phong model could not reproduce some details of the highlights.

Figure 6.4. The fitting results of the two models for sample (5), the canvas sample with a middle gloss level.

Figure 6.5. The fitting results of the two models for sample (7).

Real Image | Phong Model | Torrance-Sparrow Model
Figure 6.6. The real image and the two rendered images of sample (7) under a -15° illumination angle.

Figure 6.7. The fitting results of the two models for sample (1).

Real Image | Phong Model | Torrance-Sparrow Model
Figure 6.8. The real image and the two rendered images of sample (1) under a 6° illumination angle.

Moreover, the calculated RMS values for all the samples can be found in Table 6.2; they are consistent with the analysis above. For the samples with higher gloss levels, both models showed worse fits for the specular components, but the Torrance-Sparrow model clearly outperformed the Phong model.

Table 6.2. The RMS values of relative radiance errors for all the pixels of all the samples. Columns: Sample Number; All, Specular and Diffuse, each with T-S and Phong sub-columns. Rows: (2) Matte, (4) Matte, (6) Matte, (3) Middle Gloss, (5) Middle Gloss, (8) Middle Gloss, (9) Middle Gloss, (10) Middle Gloss, (11) Middle Gloss, (1) High Gloss, (7) High Gloss. (Row shading legend: matte, middle-gloss and high-gloss samples.)

6.2 Psychophysical Evaluation

Compared with the computational evaluation of the models, the evaluation by observers is the more important one. Therefore, psychophysical experiments were designed and performed to determine whether the computational errors would result in visible artifacts and which model the observers would prefer.

6.2.1 Color Management of the LCD Monitor

Since the psychophysical experiments were performed on the IBM T221 LCD monitor, it was necessary to characterize the monitor in order to perform a colorimetric reproduction on the display and best reproduce the colors of the samples [Murphy 2005]. The LCD monitor was characterized using the PR-650 spectroradiometer in a dark room. The monitor's maximum resolution was used for the characterization; a different resolution was used at the psychophysical evaluation stage. During the characterization, a square patch was displayed at the monitor center, as shown in Figure 6.9, and the background of the whole interface was set to black to decrease the effect of flare. First, the digital count of the patch was varied from 0 to 255 in increments of 25 for each channel, with the other two channels held at 0, and the three channels were also varied together to generate a neutral ramp. As shown in Table 6.2, 44 patch colors were generated for the red, green, blue and neutral ramps. In addition, the digital count of each channel was varied from 0 to 255 in increments of 64; combining the changes of the three channels generated another 124 colors. In total, 168 patches were generated to characterize the monitor. The spectral radiance values of each color patch were measured using the PR-650. Finally, the

tristimulus values of each color patch were calculated and used to build the display color management model.

Table 6.2. The digital count values of the patches used to characterize the LCD monitor. Columns: dR, dG, dB, patch number. Rows: single-channel and neutral ramps from 0 to 255 in steps of 25 (11 levels each), and three-channel combinations from 0 to 255 in steps of 64.

Figure 6.9. The interface used in the display characterization.

Since the colorimetric values of the illumination for the camera were different from the white point of the monitor, the display white point was not used directly to build the color management model. Therefore, the von Kries chromatic adaptation transformation was applied between the tristimulus values under the camera illumination and the display white point. The corresponding colors from the camera illumination to the display white point were calculated using Eq. (6.1):

[X; Y; Z]_Dis = M_CAT02^-1 M_VK M_CAT02 [X; Y; Z]_Cam,  [R; G; B] = M_CAT02 [X; Y; Z]  (6.1)

where the subscripts Cam and Dis represent the tristimulus values under the camera illumination and the display white point, respectively. M_VK is the matrix used to apply the von Kries chromatic adaptation, and M_CAT02 is the matrix used to transform between tristimulus values and the CAT02 space. Both matrices are shown in Eq. (6.2):

M_VK = diag( R_w,Dis / R_w,Cam,  G_w,Dis / G_w,Cam,  B_w,Dis / B_w,Cam )

M_CAT02 = [ 0.7328  0.4296  -0.1624;  -0.7036  1.6975  0.0061;  0.0030  0.0136  0.9834 ]  (6.2)
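A minimal sketch of the Eq. (6.1) transform follows. The white points used here are illustrative placeholders, not the thesis's measured camera and display whites:

```python
# Sketch of the von Kries chromatic adaptation in Eq. (6.1): XYZ under the
# camera illumination are transformed to CAT02 cone-like RGB, scaled by the
# ratio of destination to source white, and transformed back.  The white
# points below are illustrative, not the thesis's measured values.
import numpy as np

M_CAT02 = np.array([[0.7328, 0.4296, -0.1624],
                    [-0.7036, 1.6975, 0.0061],
                    [0.0030, 0.0136, 0.9834]])

def von_kries_cat02(xyz, white_src, white_dst):
    rgb_w_src = M_CAT02 @ np.asarray(white_src, float)
    rgb_w_dst = M_CAT02 @ np.asarray(white_dst, float)
    m_vk = np.diag(rgb_w_dst / rgb_w_src)        # Eq. (6.2), symbolic form
    m = np.linalg.inv(M_CAT02) @ m_vk @ M_CAT02  # Eq. (6.1)
    return m @ np.asarray(xyz, float)

# Sanity check: the source white must map exactly onto the destination white.
w_cam = [95.0, 100.0, 108.0]    # illustrative camera-illumination white
w_dis = [95.04, 100.0, 108.88]  # illustrative display white (close to D65)
print(np.allclose(von_kries_cat02(w_cam, w_cam, w_dis), w_dis))  # True
```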

In Eq. (6.2), the subscript w represents the white point of the camera or display system; the R, G and B values in CAT02 space were calculated using Eq. (6.1). Eq. (6.3) was used to build the display color management model: the measured tristimulus values of the 168 color patches were transformed to the radiometric scalars R, G and B. The 3-by-3 transformation matrix is the primary matrix, whose coefficient start values were the maximum tristimulus values of the three channels of the LCD monitor. The start values of X_k,min, Y_k,min and Z_k,min were the tristimulus values of the black-level radiant output, which is effectively the flare output of the LCD monitor. The start values of the primary and flare matrices are listed in Eq. (6.4).

[R; G; B]_Est = [ X_r,max  X_g,max  X_b,max;  Y_r,max  Y_g,max  Y_b,max;  Z_r,max  Z_g,max  Z_b,max ]^-1 ( [X; Y; Z] - [X_k,min; Y_k,min; Z_k,min] )  (6.3)

[X; Y; Z]_Est = [ X_r,max  X_g,max  X_b,max;  Y_r,max  Y_g,max  Y_b,max;  Z_r,max  Z_g,max  Z_b,max ] [R; G; B]_Est + [X_k,min; Y_k,min; Z_k,min]  (6.5)

With the radiometric scalar values estimated from Eq. (6.3), the estimated tristimulus values were calculated using Eq. (6.5). The MATLAB fmincon nonlinear optimization function was then used to optimize the coefficients of the primary and flare matrices. The optimization
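The forward and inverse display model of Eqs. (6.3) and (6.5) can be sketched as follows. The primary matrix and flare vector below are placeholders, not the measured or optimized Eq. (6.6) values:

```python
# Sketch of the display model in Eqs. (6.3)-(6.5): tristimulus values are
# the primary matrix times the radiometric scalars plus the black-level
# (flare) term, and the inverse recovers the scalars.  The primary matrix
# and flare values here are placeholders, not the thesis's measurements.
import numpy as np

P = np.array([[41.0, 35.0, 18.0],   # columns are [X Y Z] of the full-on
              [21.0, 71.0,  7.0],   # R, G, B primaries (placeholder values)
              [ 2.0, 12.0, 95.0]])
flare = np.array([0.3, 0.3, 0.4])   # black-level output X_k,min, Y_k,min, Z_k,min

def rgb_to_xyz(rgb):                 # forward model, Eq. (6.5)
    return P @ np.asarray(rgb, float) + flare

def xyz_to_rgb(xyz):                 # inverse model, Eq. (6.3)
    return np.linalg.inv(P) @ (np.asarray(xyz, float) - flare)

# Round trip: scalars -> XYZ -> scalars should be the identity.
rgb = np.array([0.2, 0.5, 0.8])
print(np.allclose(xyz_to_rgb(rgb_to_xyz(rgb)), rgb))  # True
```

In the thesis, the coefficients of P and the flare vector are exactly what fmincon adjusts to minimize the average CIEDE2000 difference over the 168 patches.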

repeated the calculations described in Eqs. (6.3) and (6.5) to minimize the average CIEDE2000 color difference between the measured and estimated values of the 168 color patches. The optimized coefficients of the primary and flare matrices are given in Eq. (6.6). With Eq. (6.6), the optimized radiometric scalars were calculated, and then three 1-D look-up tables between the radiometric scalars and the corresponding digital counts were built using linear interpolation. The three LUTs are shown in Figure 6.10. Thus, the whole color management model of the LCD monitor was built. In addition, the workflow used to build the display model is shown in Figure 6.11.

Figure 6.10. The three 1-D interpolated LUTs of the LCD monitor.
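The per-channel 1-D LUT step can be sketched as a piecewise-linear inversion of the measured response. The gamma-like sample curve below is made up for illustration, not the measured LCD data:

```python
# Sketch of the per-channel 1-D LUTs: the measured (digital count,
# radiometric scalar) pairs are inverted by linear interpolation so that a
# target scalar can be mapped back to a digital count.  The sample curve
# below is a made-up gamma-like response, not the measured LCD data.

counts  = [0, 64, 128, 192, 255]
scalars = [(c / 255.0) ** 2.2 for c in counts]   # monotone response

def scalar_to_count(s, scalars=scalars, counts=counts):
    """Piecewise-linear inverse LUT (an analogue of MATLAB's interp1)."""
    if s <= scalars[0]:
        return counts[0]
    for (s0, c0), (s1, c1) in zip(zip(scalars, counts),
                                  zip(scalars[1:], counts[1:])):
        if s <= s1:
            return c0 + (s - s0) * (c1 - c0) / (s1 - s0)
    return counts[-1]

# A scalar measured exactly at a LUT knot maps back to that digital count.
print(round(scalar_to_count((128 / 255.0) ** 2.2)))  # 128
```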

Figure 6.11. The workflow used to build the color management model of the LCD monitor.

Combining the chromatic adaptation and display color management models with the color management model of the digital camera described in Chapter 3, the end-to-end color management workflow was built, as shown in Figure 6.12.

Figure 6.12. The end-to-end color management workflow used for the Nikon D1 CCD camera and the LCD monitor.

In the camera-to-display color management workflow, the color accuracy of two parts was evaluated: the camera system and the display monitor. As introduced in Chapter 3, 28 patches were used to evaluate the color reproduction accuracy of the digital camera. For the display monitor, the 168 patches used to characterize it were also used to evaluate the colorimetric accuracy. The histogram of the ΔE00 values of

the 168 patches is shown in Figure 6.13. In addition, the mean, standard deviation, 90th percentile and maximum values of ΔE*ab and ΔE00 were calculated, as shown in Table 6.3. These results show that the overall colorimetric accuracy of the display model was good, and much better than the camera's colorimetric accuracy. The whole camera-to-display system might therefore have poor colorimetric reproduction accuracy for some colors, which would result from the camera system model. However, as discussed in Chapter 3, since the psychophysical experiments in this research did not compare real objects with rendered images, the color accuracy would not affect the comparison results.

Figure 6.13. The histogram of ΔE00 values of the 168 color patches, showing the colorimetric accuracy of the display.

Table 6.3. The mean, standard deviation, 90th percentile and maximum values of ΔE*ab and ΔE00, showing the color reproduction accuracy of the display for the 168 color patches used to calibrate the monitor. Columns: Color Reproduction Accuracy, Mean, Standard Deviation, 90th Percentile, Maximum; rows: ΔE*ab, ΔE00.

6.2.2 Paired Comparison Experiments

The paired comparison experiments were designed and performed on the LCD monitor. The goal of this experiment was to determine the preferred model under certain illumination angles for the samples. Eight samples, samples (1)-(8), were used. For each sample, twenty illumination angles needed to be evaluated by the observers. Because only two models were compared, there were 180 pairs in total. Figure 6.14 shows the user interface of the experiment. Three images were shown at a time, one on the top and two on the bottom. The top image was the real image of the sample; the bottom two images were rendered from the fitting results of the two models. The left-right arrangement of the two rendered images was random, and for each sample the images of the different illumination angles were shown in random order. The instructions for the observers were as follows:

"Compare the bottom two images with the top image. Select the image that looks most similar to the top image based on the overall image quality, including the highlights, shadows and colors."

Figure 6.14. The user interface used in the paired comparison experiments.

6.2.3 Experimental Results

Using Thurstone's Law of Comparative Judgments [Thurstone 1927], Case V, the preferred selections from the observers were collected into a frequency matrix that recorded the number of selections for each model. A proportion matrix was then obtained by dividing the frequencies by the total number of observations. Finally, the proportions were converted into Z-scores, and the interval scales were obtained by averaging each column of

the Z-scores. In addition, the confidence intervals [Montag 2004] were calculated using Eq. (6.7):

CI = R ± 1.38 / √N  (6.7)

where R is the interval scale from the calculation and N is the number of observers. In this experiment there were eighteen observers, including students, faculty and visiting scientists in the Munsell Color Science Laboratory, so N was equal to 18. The evaluation results of the eighteen observers are shown in Figure 6.15. The error bar of each interval scale represents the 95% confidence interval. The interval scales in Figure 6.15 illustrate the rendering accuracy of the two models compared with the original photographs of the samples under all the test illumination angles, including the angles with and without specular components. Furthermore, some specular angles and some diffuse angles were selected from all the angles, and the interval scales of these two groups of angles were also calculated; they are shown in Figures 6.16 and 6.17. In these figures, an overlap of the error bars of the two models indicates uncertainty in the observers' selections, i.e., no preferred choice between the two images. Conversely, if there is no overlap between two interval scales, there is a significant difference between the two images.
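The Case V computation and the Eq. (6.7) confidence interval can be sketched as follows; the 2-by-2 frequency matrix below is invented for illustration and is not the experiment's observer data:

```python
# Sketch of Thurstone Case V scaling for a paired comparison: frequencies ->
# proportions -> z-scores -> interval scales, plus the 95% confidence
# half-width 1.38 / sqrt(N) of Eq. (6.7).  The frequency matrix below is a
# made-up example.
import math
from statistics import NormalDist

def case_v_scales(freq, n_observers):
    """freq[i][j] = number of times stimulus j was chosen over stimulus i."""
    n = len(freq)
    nd = NormalDist()
    scales = []
    for j in range(n):
        # z-score of the proportion of times j beat each other stimulus i.
        zs = [nd.inv_cdf(freq[i][j] / n_observers) for i in range(n) if i != j]
        scales.append(sum(zs) / len(zs))
    half_width = 1.38 / math.sqrt(n_observers)   # Eq. (6.7)
    return scales, half_width

# Two models, 18 observers: model B chosen 14 times, model A only 4 times.
freq = [[0, 14],
        [4, 0]]
scales, hw = case_v_scales(freq, 18)
print(scales[1] > scales[0])  # True: model B gets the higher interval scale
```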

Figure 6.15. Interval scales of rendering accuracy of the two models for all the illumination angles.

Figure 6.16. Interval scales of rendering accuracy of the two models for some specular illumination angles.

Figure 6.17. Interval scales of rendering accuracy of the two models for some diffuse illumination angles.

For the samples with higher gloss levels, such as samples (1), (3), (5) and (7), the Torrance-Sparrow model produced higher visual accuracy, in line with its higher computational accuracy. Interestingly, the preference for the Torrance-Sparrow model was found not only for the images including specular components but also for the images without them. These differences could result from some pixels under certain diffuse illumination angles whose specular components were not zero and affected the visual appearance, and the observers were sensitive to the differences between these images. Moreover, for sample (5) there was no significant difference between the two models for the specular component, since it was difficult to see the difference in the glossy part on the canvas texture. For the matte samples (2), (4) and (6), the two models had similar rendering accuracy. Although sample (8) was estimated to have a medium gloss level, there was very little difference in the

visual accuracies between the two models. Figure 6.18 shows an example image of sample (8) to illustrate this. In the figure, the area in the black rectangle of the top image was enlarged to full size, and three images containing only this area are shown in the bottom three sub-images. From the three full-size images, it was found that the images from the Torrance-Sparrow model showed better highlight detail on the red part. However, it was very difficult for the observers to notice this. The reason is that, because of the complicated surface shape of the sample, the highlight details missed by the Phong model were not concentrated in one large area of the image; in this situation, errors in the highlights were less salient to the observers. If the surface shape, color and material were uniform or simple, as for samples (1) and (7) (Figures 6.6 and 6.8), it was easier to detect the poorly rendered highlight. For sample (5), the glossy painting on canvas, although its surface shape was not as complicated as that of sample (8) and only one material was applied, the highlights were still confined to small areas; in addition, the highlight of the canvas sample was scattered into many parts. Thus, it was also not easy to detect the highlight difference between the two models. Figure 6.19 shows the real image and the two rendered images of sample (5) under a 10° illumination angle. The black rectangles mark the areas where the Torrance-Sparrow model provides a closer appearance, but the observers appeared not to be sensitive to these differences. Therefore, compared with the Phong model, the Torrance-Sparrow model provided better visual accuracy for samples with simple surface shape and image content and uniform color.

Phong Model | Real Image | Torrance-Sparrow Model
Figure 6.18. The real image and rendered images of sample (8) under a 25° illumination angle.

From the analyses above, the Torrance-Sparrow model provided better overall accuracy in both the physically-based calculation and the psychophysical evaluation. Thus, this model was used for the research and experiments described in Chapter 7.

Real Image | Phong Model | Torrance-Sparrow Model
Figure 6.19. The real image and the two rendered images of sample (5) under a 10° illumination angle.

7 Optimization for Measurement Geometry

One of the purposes of this research was to optimize the measurement geometry and minimize the number of measurements in order to save time and data storage space. Thus, it was necessary to explore the numbers and locations of the measurements so that the optimized measurement geometry could be determined. This chapter describes how to minimize the number of measurements and optimize the measurement locations for different samples. According to the analyses of Chapter 6, the samples with high gloss levels were more difficult to fit; thus, four glossy samples, samples (3), (5), (7) and (8), were selected as the research targets. In addition, because of its higher fitting accuracy, the Torrance-Sparrow model was used for the calculations.

7.1 The Selection of Measurement Numbers and Locations

To explore how the fitting accuracy changes as the number of measurements decreases, several groups of angle selections were determined. Five groups of selections were used at the first stage, as shown in Table 7.1. The maximum number of measurements was 23 and the minimum was 8; at the measurement stage there were 42 illumination angles in total. First, group selection (5) was determined. In this group, all the measured illumination angles between -10° and 10° were selected, since the highlight peaks of most pixels of the samples were within this range. For the angles smaller than -10° or larger than 10°, half of the measured angles were selected. This

selection could also cover the illumination angles containing the highlight peak of most pixels. Secondly, based on group (5), the group selections (4) and (3) were determined by removing angles between -45° (or 45°) and -6° (or 6°); the purpose was to keep more of the illumination angles that include the highlight peak and the shadow parts. Thirdly, in group selection (2), four more angles within ±6° were removed from group (3) in order to explore the effect of a small number of specular angles. In addition, unlike group (3), the angles with absolute values greater than 15° were changed to six other angles, so that the fitting results from different combinations of diffuse angles could be examined. Finally, the minimum number of angles used to fit the models was determined; it should not be less than 8 for the samples with complicated surface shape, because nonlinear least-squares fitting is used for parameter estimation.

Table 7.1. The number and location of the angles in each group selection.
(1) -75, -50, -6, 0, 4, 14, 50, 75 — 8 angles
(2) -75, -50, -30, -15, -6, 0, 6, 14, 30, 50, … — 11 angles
(3) -86, -65, -45, -15, -6, -4, -2, 0, 2, 4, 6, 14, 45, 65, 84 — 15 angles
(4) -86, -65, -45, -25, -15, -8, -6, -4, -2, 0, 2, 4, 6, 8, 14, 25, 45, 65, 84 — 19 angles
(5) -86, -65, -45, -35, -25, -15, -10, -8, -6, -4, -2, 0, 2, 4, 6, 8, 10, 14, 25, 35, 45, 65, … — 23 angles

Thus, with the determined angle selections, five groups of Torrance-Sparrow model parameters were estimated using the nlinfit optimization function in MATLAB, and five groups of rendered images for different illumination angles were generated.

7.2 Psychophysical Evaluation for Optimal Selection

With the five groups of rendered images, the paired comparison experiments were performed again. The user interface was the same as that used in the first experiment (Figure 6.14), and the top image was again the real image of the sample. Unlike the first experiments, for each sample the real image was also used as one of the bottom images. The purpose was to detect whether the rendered images were accurate enough: if so, a rendered image should not be distinguishable from the real image. Therefore, there were six groups of images, corresponding to the five groups of rendered images from the five angle selections and one group of real images. The bottom two images were selected from these six groups for a certain illumination angle of one sample. The selection and presentation of any two images were random, and the left and right positions were randomly arranged. In this experiment, seven illumination angles were selected for each sample, including four specular angles, two diffuse angles and one grazing angle. The grazing angle was a large illumination angle under which there were shadows in the images. Seventeen observers performed the experiments. Because the real image was one of the test images, it was possible for the observers to make a unanimous judgment when selecting the real image, in which case infinite Z-score values appeared. If so, a logistic function was used instead of the Z-score transformation to calculate the interval scales. This function is shown as Eq. (7.1).

V = ln[ f_ij / (N - f_ij) ]  (7.1)

where f_ij is the frequency with which stimulus i is chosen over stimulus j, and N is the number of observations. The results from the experiment for all the samples are shown in Figures 7.1-7.4. The red points are the interval scales representing the rendering accuracy, and the error bars are the 95% confidence intervals. A large overlap of the error bars of the real and reproduced images indicates satisfactory accuracy of the reproduced image, since there was no significant visual difference between the real and reproduced images. Thus, for all the samples under all the illumination angles except the grazing angles, a satisfactory angle selection could be found. From the results for all six angles without the grazing angle, for samples (7), (5) and (3), the samples without complicated surface shape, the minimum angle selection with satisfactory accuracy was group (1), with eight measurements. But for sample (8) (the impasto sample), eight angles were not enough, and group (2) was the minimum selection. In Figure 7.5, two rendered images from groups (1) and (5) are shown with the real image under an illumination angle including specular components. For the group (1) angle selection, because only eight values were used to fit the BRDF curve, the results were not visually satisfactory: the image rendered from the group (1) selection in Figure 7.5 shows that some parts of the highlight were rendered incorrectly. In Figure 7.1, for sample (7) (the brushed sample), no angle selection was good enough for the two tested diffuse angles. The reason is that some pixels under these two angles still included specular components, for which
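The logistic alternative of Eq. (7.1) can be sketched directly. Note that a fully unanimous f_ij = N would still diverge, so in practice the function is applied where the frequencies are finite on both sides; that caveat is an observation added here, not a statement from the thesis:

```python
# Sketch of the logistic transform in Eq. (7.1), used when unanimous choices
# make the z-score infinite: V = ln(f_ij / (N - f_ij)), where f_ij is the
# frequency with which stimulus i is chosen over j and N is the number of
# observations.
import math

def logistic_scale(f_ij, n_obs):
    return math.log(f_ij / (n_obs - f_ij))

# A 50/50 split maps to 0; more wins than losses maps to a positive value.
print(logistic_scale(8.5, 17))     # 0.0
print(logistic_scale(12, 17) > 0)  # True
```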

the fitting was not accurate enough, and the observers were sensitive to the differences in these pixels.

Figure 7.1. The interval scales of rendering accuracy optimized with different groups of angle selections using the Torrance-Sparrow model for sample (7). (Four panels: all without grazing angles; specular angles; diffuse angles; grazing angles.)

Figure 7.2. The interval scales of rendering accuracy optimized with different groups of angle selections using the Torrance-Sparrow model for sample (5). (Four panels: all without grazing angles; specular angles; diffuse angles; grazing angles.)

Figure 7.3. The interval scales of rendering accuracy optimized with different groups of angle selections using the Torrance-Sparrow model for sample (3). (Four panels: all without grazing angles; specular angles; diffuse angles; grazing angles.)

Figure 7.4. The interval scales of rendering accuracy optimized with different groups of angle selections using the Torrance-Sparrow model for sample (8). (Four panels: all without grazing angles; specular angles; diffuse angles; grazing angles.)

Real Image | Group (5) | Group (1)
Figure 7.5. The real image and two rendered images of sample (8) optimized using different groups of angle numbers.

For all the samples, the data at the grazing angles were fitted with low accuracy, so the rendering accuracies under the tested grazing angle were very low. Two factors accounted

for this. One was that the relative radiance values at the grazing angles were very low, so their fitting weights were also low. Thus, the data at the smaller illumination angles, with larger relative radiance values, contributed much more to the RMS values and had priority to be fitted better. In addition, for some pixels the values at the grazing angles were so small that the fitting results were clipped to zero. The other factor causing low fitting accuracy was real self-shadows in the pixels with complicated BRDF curves. One example is shown in Figure 7.6: the fits for the data under the grazing angles were poor, because the shadow values could not be modeled using the simple cosine function.

Figure 7.6. The fitting results of one pixel of sample (8), showing the fit for the grazing angles.

According to the above analyses, for all four glossy samples under all illumination angles except the grazing angles, acceptable visual accuracy was found. This demonstrates that the parameter modeling methods, including some simplified calculations, were reasonable. In addition, as introduced in Chapter 5, the MATLAB nlinfit function was used for all the optimization described in this chapter. Thus, with much faster calculation and good visual accuracy, this function should be used instead of the fmincon function in future research.

7.3 Computational Evaluation for Optimal Selection

To evaluate the accuracy of all the illumination angles for the five groups of angle selections, the RMS values of the relative radiance errors of 2000 pixels between the measured and fitted data were calculated. The results are shown as the blue points in Figures 7.7-7.10. For sample (8) (the impasto sample), the fitting accuracy of group (2) was again satisfactory with a small number of angles, confirming the result of the psychophysical experiment. However, for samples (7), (5) and (3), the samples without complicated surface shape, unlike the experimental results, the RMS values of the group (1) selection were not low enough for all the illumination angles without grazing angles. This means that some images under certain illumination angles from group (1) could not be fitted well, and these images were not used in the psychophysical experiments. For samples (7) (the brushed sample) and (3) (the uniform sample), the fitting results of group (2) showed low RMS values with 11 measurement angles. However, for sample (5) (the canvas sample) this was not the case: none of the groups with smaller angle numbers provided high accuracy, so more groups of selections should be tested. In

addition, for the grazing angles, the RMS values of group (2) were almost the highest. But in actual rendering, the grazing angles are not preferred over other angles. Moreover, according to the psychophysical results, even for the group selections with the highest interval scales, the renderings were significantly different from the real images under grazing illumination angles. Therefore, it was reasonable to select the group with greater fitting accuracy for the specular and diffuse angles at the cost of lower accuracy for the grazing angles.

Figure 7.7. The RMS values of the relative radiance of 2000 pixels optimized with different groups of angle selection using the Torrance-Sparrow model for sample (7) (panels: all angles without grazing angles; specular angles; diffuse angles; grazing angles).

Figure 7.8. The RMS values of the relative radiance of 2000 pixels optimized with different groups of angle selection using the Torrance-Sparrow model for sample (5) (panels: all angles without grazing angles; specular angles; diffuse angles; grazing angles).

Figure 7.9. The RMS values of the relative radiance of 2000 pixels optimized with different groups of angle selection using the Torrance-Sparrow model for sample (3) (panels: all angles without grazing angles; specular angles; diffuse angles; grazing angles).

Figure 7.10. The RMS values of the relative radiance of 2000 pixels optimized with different groups of angle selection using the Torrance-Sparrow model for sample (8) (panels: all angles without grazing angles; specular angles; diffuse angles; grazing angles).

For sample (3) (the uniform sample), one may notice that the RMS values of group (2) were much lower than those of groups (3), (4) and (5). It seems unreasonable that lower RMS values were generated from a smaller number of angles. As introduced in Chapter 6, when the RMS values were calculated, if both the real value and the model estimate were greater than the exposure threshold value, the RMS error was set to zero. However, at the parameter

estimation stage, if more clipped data were used, the fitting accuracy of the unclipped data decreased, so the overall RMS values increased. The fitting results of one example pixel from sample (3) are shown in Figures 7.11 and 7.12, which illustrate the fits obtained with the group (2) and group (6) selections. The blue points in the figures are the angles used to estimate the parameters. In the final rendering, data greater than 10 in the figures were clipped, so their RMS errors were zero; only the points with values smaller than 10 contributed to the RMS calculation. Therefore, the RMS values of group (2) were smaller than those of group (6). For samples (7) (the brushed sample) and (8) (the impasto sample), the overall RMS values of group (2) were also lower than those of group (3). For sample (5) (the canvas sample), this occurred for the diffuse angles. According to the calculation for each illumination angle, this result was caused mainly by the RMS values from the illumination angles close to ±30 degrees; the missing ±30 degree angles in the group (4) selection produced the higher RMS values. This also suggests that the ±30 degree angles were important for achieving higher parameter-estimation accuracy for these three samples. Furthermore, the situation described above for sample (3) rarely occurred for the other three samples, indicating that sample (3) contains many more pixels with large specular components.
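The clipping rule used in these RMS calculations can be stated compactly. The sketch below is an assumed Python reformulation (the function name and array layout are hypothetical; the threshold of 10 is the value quoted above): pairs in which both the measurement and the model estimate exceed the exposure threshold contribute zero error.

```python
import numpy as np

def clipped_rms(measured, estimated, threshold=10.0):
    # Pairs where BOTH the measured value and the model estimate exceed the
    # exposure threshold are clipped in the final rendering, so their error
    # is set to zero; all other pairs contribute normally to the RMS.
    measured = np.asarray(measured, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    error = measured - estimated
    both_clipped = (measured > threshold) & (estimated > threshold)
    error[both_clipped] = 0.0
    return float(np.sqrt(np.mean(error ** 2)))

# The first two pairs are clipped and contribute nothing to the RMS.
rms = clipped_rms([12.0, 15.0, 2.0, 4.0], [11.0, 14.0, 2.5, 3.5])
```

A selection that drives more pixels above the threshold at the bright angles can therefore report a lower RMS than a selection that fits those pixels without clipping, which is the effect observed for group (2).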

Figure 7.11. The fitting results of sample (3) optimized with the group (2) selection.

Figure 7.12. The fitting results of sample (3) optimized with the group (6) selection.

7.4 Optimization of the Measurement Numbers and Locations

To further explore the factors relating the fitting accuracy to the measurement locations and numbers, the specular peak widths and the histograms of the peak locations of the four samples were plotted, as shown in Figures 7.13 through 7.16. According to the histograms of the peak locations for samples (7), (5) and (8), which have nonuniform surfaces, some peaks fell in the angle ranges from -35 to -25 and from 25 to 35 degrees. For these three samples, the half-width values of the

specular peak of most pixels were about 5 degrees. Thus, if no angles between ±15 and ±45 degrees were used for the parameter estimation, the specular components of these pixels could not be estimated. This is why the group (3) selection had lower accuracy than group (2). For sample (5) (the canvas sample), since more peaks fell within ±10 degrees, two specular illumination angles were ultimately added instead of two grazing angles to estimate the parameters. The RMS accuracy of this selection is shown as the red circle in Figure 7.8: the accuracy for the specular angles improved while the accuracy for the diffuse and grazing angles matched that of the group (2) selection. For sample (3) (the uniform sample), since most of the peaks were between -2 and 6 degrees, two diffuse angles could be removed from the group (2) selection. The resulting accuracy, shown as the red circle in Figure 7.9, was satisfactory for all the illumination angles except the grazing angles. Therefore, the optimized angle selections for all four samples were determined, as shown in Table 7.2. The number of angles for each sample was smaller than or equal to 11. Based on all the results and analyses in this chapter, the optimized locations and minimized numbers of measurements for a given sample were highly related to the histogram of the peak locations and the specular peak width values. For the current sample collection, the minimum specular peak width of each sample was calculated. Therefore, for the measurement system used in this research, if an approximate histogram of the peak locations of a sample can be obtained, the optimized angle selection can be determined. For the uniform samples, the canvas samples and the simple brushed samples, the histogram of the peak locations should be very

close to those of the representative samples (3), (5) and (7). For impasto samples with complicated surface shapes, statistical data from more impasto samples are needed to determine the optimum number and locations of the measurements.

Figure 7.13. The specular peak width (a) and the histogram of the peak locations (b) for sample (7).

Figure 7.14. The specular peak width (a) and the histogram of the peak locations (b) for sample (5).

Figure 7.15. The specular peak width (a) and the histogram of the peak locations (b) for sample (3).

Figure 7.16. The specular peak width (a) and the histogram of the peak locations (b) for sample (8).
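The two quantities plotted in these figures — the peak location and the specular half width of each pixel — can be computed roughly as follows. This is an illustrative Python sketch, not the thesis's MATLAB implementation: the half width here is taken as half the full width at half maximum, found by linear interpolation, and the synthetic pixel data are assumptions.

```python
import numpy as np

def specular_peak_stats(angles_deg, radiance):
    # Locate the specular peak of one pixel's angular radiance samples and
    # estimate its half width (half the full width at half maximum) by
    # walking outward from the peak and interpolating the crossings.
    angles_deg = np.asarray(angles_deg, dtype=float)
    radiance = np.asarray(radiance, dtype=float)
    i = int(np.argmax(radiance))
    half = radiance[i] / 2.0
    left = right = None
    for j in range(i, 0, -1):                    # walk toward smaller angles
        if radiance[j - 1] < half:
            t = (half - radiance[j - 1]) / (radiance[j] - radiance[j - 1])
            left = angles_deg[j - 1] + t * (angles_deg[j] - angles_deg[j - 1])
            break
    for j in range(i, len(radiance) - 1):        # walk toward larger angles
        if radiance[j + 1] < half:
            t = (half - radiance[j + 1]) / (radiance[j] - radiance[j + 1])
            right = angles_deg[j + 1] + t * (angles_deg[j] - angles_deg[j + 1])
            break
    width = None if left is None or right is None else (right - left) / 2.0
    return angles_deg[i], width

# A synthetic pixel: diffuse cosine plus a specular lobe peaking near 6 degrees.
angles = np.arange(-75.0, 76.0, 1.0)
radiance = 0.3 * np.cos(np.radians(angles)) + 2.0 * np.exp(-((angles - 6.0) / 5.0) ** 2)
peak_angle, half_width = specular_peak_stats(angles, radiance)
```

Aggregating peak_angle over all pixels gives the histogram of peak locations, and the minimum of half_width over pixels bounds how coarsely the specular region can be sampled.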

Table 7.2. The minimized numbers and the optimized locations of the measurement angles.

Sample   Locations of angles (degrees)                    Number of angles
(7)      -75, -50, -30, -15, -6, 0, 6, 14, 30, 50, 75     11
(5)      -65, -30, -15, -8, -4, 0, 4, 8, 14, 30, 65       11
(3)      -65, -30, -15, -6, 0, 6, 14, 30, 65              9
(8)      -75, -50, -30, -15, -6, 0, 6, 14, 30, 50, 75     11
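The relationship just described — optimal angles follow the peak-location histogram and the minimum half width — can be turned into a rough selection heuristic. The sketch below is an assumption of this edit, not the procedure actually used in the thesis: it samples densely (at the minimum half-width spacing) across the range where peaks occur and keeps only a few coarse angles for the diffuse and grazing regions.

```python
import numpy as np

def propose_angles(peak_locations_deg, min_half_width_deg,
                   coarse=(-75.0, -50.0, -30.0, 30.0, 50.0, 75.0)):
    # Dense sampling (spacing <= the minimum specular half width) across the
    # observed range of peak locations, plus a sparse set of diffuse/grazing
    # angles; a heuristic consistent with, but not identical to, Table 7.2.
    lo = min(peak_locations_deg) - min_half_width_deg
    hi = max(peak_locations_deg) + min_half_width_deg
    dense = np.arange(lo, hi + 1e-9, min_half_width_deg)
    return sorted(set(np.round(dense, 1)) | set(coarse))

# Sample (3)-like statistics: peaks between -2 and 6 deg, ~5-deg half widths.
angles = propose_angles([-2.0, 0.0, 3.0, 6.0], 5.0)
```

For these sample (3)-like inputs the heuristic yields 10 angles, comparable to the 9 listed in Table 7.2; a sample with a wider peak histogram would automatically receive more dense angles.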

8 Conclusions and Future Research

8.1 Conclusions

In this research, a digital camera with a simple one-degree-of-freedom measurement system was built to record the relative radiance values of artist paint surfaces as a function of illumination angle. To explore the relationship between the radiance values of the paint surface and the illumination angle, two simple light reflection models were used: the empirical Phong model and the physically based Torrance-Sparrow model. Thirteen paint samples, varying in gloss level, painting material and color, were prepared and used to evaluate the accuracy of the two models. Besides computational accuracy, paired comparison experiments were performed to explore the visual accuracy of the two models. Finally, the minimization and optimization of the numbers and locations of measurements were analyzed. Several conclusions can be drawn, listed in the following paragraphs. According to the computational results, both the Phong and Torrance-Sparrow models provided satisfactory accuracy for the matte samples. For the glossy samples, neither model was highly accurate, and errors increased with gloss level; however, the Torrance-Sparrow model provided much better computational accuracy for the glossy samples. According to the psychophysical results described in Chapter 6, the Torrance-Sparrow model also had higher visual accuracy for glossy samples. But for the samples with different materials and

complex surface shapes, the two models provided similar visual accuracy. For matte samples, both models also produced similar renderings. Based on the discussions in Chapter 7, the number of measurement geometries in the current measurement system could be minimized for different samples, and the measurement angle locations could be optimized. The minimization and optimization were mainly determined by the minimum half width of the highlight peak and the histogram of the peak locations. The optimized measurement geometry will thus be very helpful for saving measurement time and data storage. In addition, since the MATLAB nlinfit function provides good visual accuracy and faster computation, it should be used in future research.

8.2 Future Research

Based on the discussions and conclusions in this thesis, future research can be developed to further improve the rendering accuracy of the samples.

8.2.1 Model Development

Although the Torrance-Sparrow model provided much better accuracy than the Phong model, its fitting results still worsened as the gloss levels of the samples increased. Therefore, more complicated micro-facet distribution functions should be tested to find better fits for glossy samples. According to the literature review, candidates might include some of the functions compared in the Trowbridge paper [Trowbridge 1975], the Cook-Torrance specular micro-facet model [Cook 1981], the He-Torrance comprehensive analytic model [He

1991], the Lafortune generalized cosine lobe model [Lafortune 1997], polynomial texture mapping [Malzbender 2001], the bidirectional sub-surface scattering reflectance distribution function (BSSRDF) model [Jensen 2001], and the modified Beard-Maxwell bidirectional reflectance model [Westlund 2002]. In addition, besides the ellipsoid of revolution proposed by Trowbridge, other shapes for the surface of revolution might provide better fits. Moreover, because the self-shadowing could not be fitted with a simple cosine function, it should be modeled separately, as shown in the example in Figure 7.6. As described in Chapter 7, for pixels with small relative radiance values under large illumination angles (not in the shadow or mask areas), the fitting weights were low and some estimated values were clipped to zero. To improve the fitting accuracy of these pixels, one solution could be to change the objective function; for example, the ratios of the measured to the estimated radiances could be used in place of the absolute errors, although this would lower the fitting accuracy of the values under smaller illumination angles. Thus, an appropriate weight for fitting the values under large illumination angles needs to be found, so that the rendering accuracy under both large and small illumination angles will be acceptable.

8.2.2 Higher Resolution Images

For real artist paintings in museums, some paintings have complicated surface shapes. If a higher-resolution camera is used and the sample size stays the same, the area covered by one pixel decreases. Thus, the complexity of the surface shape within a pixel decreases and the surface normal distribution of the microfacets becomes easier to model [Tan 2008]. This

could result in a simpler BRDF. As illustrated in Figures 5.19 and 5.20, the BRDF curves were not well fit by the current models. Therefore, if the measured area can be recorded with more pixels using a higher-resolution digital camera, the BRDF curves could fit the models better. However, the increased resolution will also increase data storage, so appropriate resolutions for different kinds of samples should be determined.

8.2.3 Two-Dimensional BRDF Measurement

Since the current measurement system is a simplified one-degree-of-freedom system, the four-degree-of-freedom BRDF of the samples cannot be measured. Therefore, to measure the complete BRDF properties, the proposed MCSL imaging gonio-spectrophotometer should be developed and tested.

9 References

[Akao 2004] Akao Y, Tsumura N, Herzog PG, Miyake Y, Hill B. Gonio-Spectral Imaging of Paper and Cloth Samples Under Oblique Illumination Conditions Based on Image Fusion Techniques. J. of Imaging Science and Technology, 48(3), 2004.

[ASTM Standard E] Standard Practice for Angle Resolved Optical Scatter Measurement on Specular or Diffuse Surfaces. American Society for Testing and Materials.

[Blinn 1977] Blinn JF. Models of Light Reflection for Computer Synthesized Pictures. Proceedings of ACM SIGGRAPH 77, 1977.

[Cook 1981] Cook RL and Torrance KE. A Reflectance Model for Computer Graphics. Computer Graphics (Proceedings of ACM SIGGRAPH 81), 15(3), pp. 307-316, 1981.

[Dana 1999] Dana KJ, Ginneken BV, Nayar SK and Koenderink JJ. Reflectance and Texture of Real-World Surfaces. ACM Transactions on Graphics, 18(1), pp. 1-34, January 1999.

[Day 2000] Day EA. The Effects of Multi-channel Visible Spectrum Imaging on Perceived Spatial Image Quality and Color Reproduction Accuracy. Master's Thesis, Rochester Institute of Technology, April 2000.

[Foo 1997] Foo SC. A Gonioreflectometer for Measuring the Bidirectional Reflectance of Material for Use in Illumination Computation. M.S. Thesis, Cornell University, 1997.

[Haneishi 1998] Haneishi H, Iwanami T, Tsumura N and Miyake Y. Goniospectral Imaging of 3D Objects. The Sixth Color Imaging Conference: Color Science, Systems and Applications, November 1998.

[Haneishi 2001] Haneishi H, Iwanami T, Honma T, Tsumura N and Miyake Y. Goniospectral Imaging of Three-Dimensional Objects. J. of Imaging Science and Technology, 45(5), 2001.

[Hanrahan 1993] Hanrahan P and Krueger W. Reflection from Layered Surfaces due to Subsurface Scattering. SIGGRAPH 93 Conference Proceedings, August 1993.

[Havran 2005] Havran V, Neumann A, Zotti G, Purgathofer W and Seidel H. On Cross-Validation and Resampling of BRDF Data Measurement. Proceedings of the 21st Spring Conference on Computer Graphics, Budmerice, Slovakia, May 2005.

[Hawkins 2001] Hawkins T, Cohen J and Debevec P. A Photometric Approach to Digitizing Cultural Artifacts. 2nd International Symposium on Virtual Reality, Archaeology, and Cultural Heritage, Glyfada, 2001.

[He 1991] He XD, Torrance KE, Sillion FX and Greenberg DP. A Comprehensive Physical Model for Light Reflection. ACM SIGGRAPH Computer Graphics, 25(4), July 1991.

[ISO 14524] ISO 14524, Photography - Electronic still-picture cameras - Methods for measuring optoelectronic conversion functions (OECFs), 1st Edition.

[Jensen 2001] Jensen HW, Marschner S, Levoy M and Hanrahan P. A Practical Model for Subsurface Light Transport. Proceedings of SIGGRAPH 2001, Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH, 2001.

[Johnson 2003] Johnson GM and Fairchild MD. Rendering HDR Images. Proceedings of the IS&T/SID 11th Color Imaging Conference, 2003.

[Ju 2002] Ju DY, Yoo J-H, Seo KC, Sharp G and Lee SW. Image-Based Illumination for Electronic Display of Artistic Paintings. Ann Arbor, MI: University of Michigan, Report nr CSE-TR, 2002.

[Khan 2006] Khan EA, Akyuz AO and Reinhard E. Ghost Removal in High Dynamic Range Images. IEEE International Conference on Image Processing, Atlanta, USA, August 2006.

[Leloup 2006] Leloup F, De Waele T, Versluys J, Hanselaer P, Pointer MR and KaHo St.-Lieven. Full 3D BSDF Spectroradiometer. ISCC/CIE Expert Symposium, May 2006.

[Li 2006] Li H, Foo SC, Torrance KE and Westin SH. Automated Three-axis Gonioreflectometer for Computer Graphics Applications. Optical Engineering, 45(4), April 2006.

[Lun 1999] Lun K. A Method of Light Reflectance Measurement. Master's Thesis, The University of British Columbia, April 1999.

[Malzbender 2001] Malzbender T, Gelb D and Wolters H. Polynomial Texture Maps. The 28th Annual Conference on Computer Graphics and Interactive Techniques, 2001.

[Marschner 2000]

Marschner SR, Westin SH, Lafortune EPF and Torrance KE. Image-based Bidirectional Reflectance Distribution Function Measurement. Appl. Opt., 39(16), June 2000.

[Matusik 2003] Matusik W, Pfister H, Brand M and McMillan L. A Data-Driven Reflectance Model. ACM Transactions on Graphics, 22(3), July 2003.

[Montag 2004] Montag ED. Louis Leon Thurstone in Monte Carlo: Creating Error Bars for the Method of Paired Comparison. IS&T/SPIE Symposium on Electronic Imaging: Science and Technology, SPIE 5294, 2004.

[Murphy 2005] Murphy EP. A Testing Procedure to Characterize Color and Spatial Quality of Digital Cameras Used to Image Cultural Heritage. Master's Thesis, Rochester Institute of Technology, February 2005.

[Murray-Coleman 1990] Murray-Coleman JF, Smith AM. The Automated Measurement of BRDFs and Their Application to Luminaire Modeling. Journal of the Illuminating Engineering Society, Winter 1990.

[Nakaguchi 2005] Nakaguchi T, Kawanishi M, Tsumura N and Miyake Y. Optimization of Camera and Illumination Direction on Goniospectral Imaging Method. J. of The Society of Photographic Science and Technology of Japan, 68(6), 2005.

[Nicodemus 1977] Nicodemus FE, Richmond JC, Hsia JJ, Ginsberg IW, Limperis T. Geometrical Considerations and Nomenclature for Reflectance. U.S. Department of Commerce, National Bureau of Standards, October 1977.

[Pointer 2005] Pointer MR, Barnes NJ, Clarke PJ and Shaw MJ. A New Goniospectrophotometer for Measuring Gonio-Apparent Materials. NPL Report, March 2005.

[Reinhard 2005] Reinhard E, Ward G, Pattanaik S and Debevec P. High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. Morgan Kaufmann, 2005.

[Sandmeier 1995] Sandmeier S, Sandmeier W, Itten KI, Schaepman ME and Kellenberger TW. The Swiss Field-Goniometer System. Proceedings of IGARSS 95, IEEE International Geoscience and Remote Sensing Symposium, July 1995.

[Sandmeier 1996] Sandmeier S, Sandmeier W, Itten KI, Schaepman ME and Kellenberger TW. Acquisition of Bidirectional Reflectance Data Using the Swiss Field-Goniometer System (FIGOS). Proc. of EARSeL Symposium, Basel, Switzerland, 1996.

[Shirley 1997] Shirley P, Hu H, Smits B and Lafortune EP. A Practitioners' Assessment of Light Reflection Models. Pacific Graphics 97, October 1997.

[Tan 2008] Tan P, Lin S, Quan L and Guo B. Filtering and Rendering of Resolution-Dependent Reflectance Models. IEEE Transactions on Visualization and Computer Graphics, 14(2), March 2008.

[Thurstone 1927] Thurstone LL. A Law of Comparative Judgment. Psychological Review, 34, 1927.

[Tominaga 2001]

Tominaga S, Matsumoto T and Tanaka N. 3D Recording and Rendering of Art Paintings. Proceedings of the Ninth Color Imaging Conference, Scottsdale, AZ, IS&T, 2001.

[Tonsho 2002] Tonsho K, Akao Y, Tsumura N and Miyake Y. Development of Gonio-photometric Imaging System for Recording Reflectance Spectra of 3D Objects. Proceedings of SPIE, 4663, 2002.

[Trowbridge 1975] Trowbridge TS, Reitz KP. Average Irregularity Representation of a Rough Surface for Ray Reflection. Journal of the Optical Society of America, May 1975.

[Turner 1998] Turner M and Brown J. The Sandmeier Field Goniometer: A Measurement Tool for Bi-Directional Reflectance. NASA Commercial Remote Sensing Verification and Validation Symposium, August 1998.

[Torrance 1967] Torrance KE and Sparrow EM. Theory for Off-Specular Reflection from Roughened Surfaces. Journal of the Optical Society of America, 57, 1967.

[Ward 1992] Ward GJ. Measuring and Modeling Anisotropic Reflection. SIGGRAPH 92: Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, USA: ACM Press, 1992.

[Westlund 2002] Westlund HB and Meyer GW. A BRDF Database Employing the Beard-Maxwell Reflection Model. Graphics Interface, May 2002.

10 Appendices

10.1 Appendix One: Rendered Images

Figure 10.1. Comparison of the real images and estimated images of sample (1) (columns: illumination angle; real images; images estimated by the Phong model; images estimated by the Torrance-Sparrow model).

Figure 10.2. Comparison of the real images and estimated images of sample (5).

Figure 10.3. Comparison of the real images and estimated images of sample (6).

Figure 10.4. Comparison of the real images and estimated images of sample (7).

Figure 10.5. Comparison of the real images and estimated images of sample (8).

Figure 10.6. Comparison of the real images and estimated images of sample (9).

Figure 10.7. Comparison of the real images and estimated images of sample (10).

Figure 10.8. Comparison of the real images and estimated images of sample (11).


Draft from Graphical Models and Image Processing, vol. 58, no. 5, September Reflectance Analysis for 3D Computer Graphics Model Generation page 1 Draft from Graphical Models and Image Processing, vol. 58, no. 5, September 1996 Reflectance Analysis for 3D Computer Graphics Model Generation Running head: Reflectance Analysis for 3D CG Model

More information

INFOGR Computer Graphics. J. Bikker - April-July Lecture 10: Shading Models. Welcome!

INFOGR Computer Graphics. J. Bikker - April-July Lecture 10: Shading Models. Welcome! INFOGR Computer Graphics J. Bikker - April-July 2016 - Lecture 10: Shading Models Welcome! Today s Agenda: Introduction Light Transport Materials Sensors Shading INFOGR Lecture 10 Shading Models 3 Introduction

More information

CS6670: Computer Vision

CS6670: Computer Vision CS6670: Computer Vision Noah Snavely Lecture 21: Light, reflectance and photometric stereo Announcements Final projects Midterm reports due November 24 (next Tuesday) by 11:59pm (upload to CMS) State the

More information

DIRS Technical Report Tech Report #

DIRS Technical Report Tech Report # Rochester Institute of Technology Technical Memorandum Topic: NEFDS Beard-Maxwell BRDF Model Implementation in Matlab Primary Author: Matt Montanaro Collaborators: Carl Salvaggio Scott Brown David Messinger

More information

CENG 477 Introduction to Computer Graphics. Ray Tracing: Shading

CENG 477 Introduction to Computer Graphics. Ray Tracing: Shading CENG 477 Introduction to Computer Graphics Ray Tracing: Shading Last Week Until now we learned: How to create the primary rays from the given camera and image plane parameters How to intersect these rays

More information

Principles of Appearance Acquisition and Representation

Principles of Appearance Acquisition and Representation Principles of Appearance Acquisition and Representation SIGGRAPH 2008 Class Notes Tim Weyrich Princeton University USA Jason Lawrence University of Virginia USA Hendrik Lensch Max-Planck-Institut für Informatik

More information

Fundamentals of Rendering - Reflectance Functions

Fundamentals of Rendering - Reflectance Functions Fundamentals of Rendering - Reflectance Functions CMPT 461/761 Image Synthesis Torsten Möller Reading Chapter 8 of Physically Based Rendering by Pharr&Humphreys Chapter 16 in Foley, van Dam et al. Chapter

More information

Traditional Image Generation. Reflectance Fields. The Light Field. The Light Field. The Light Field. The Light Field

Traditional Image Generation. Reflectance Fields. The Light Field. The Light Field. The Light Field. The Light Field Traditional Image Generation Course 10 Realistic Materials in Computer Graphics Surfaces + BRDFs Reflectance Fields USC Institute for Creative Technologies Volumetric scattering, density fields, phase

More information

Radiometry. Radiometry. Measuring Angle. Solid Angle. Radiance

Radiometry. Radiometry. Measuring Angle. Solid Angle. Radiance Radiometry Radiometry Computer Vision I CSE5A ecture 5-Part II Read Chapter 4 of Ponce & Forsyth Solid Angle Irradiance Radiance BRDF ambertian/phong BRDF Measuring Angle Solid Angle By analogy with angle

More information

Shading / Light. Thanks to Srinivas Narasimhan, Langer-Zucker, Henrik Wann Jensen, Ravi Ramamoorthi, Hanrahan, Preetham

Shading / Light. Thanks to Srinivas Narasimhan, Langer-Zucker, Henrik Wann Jensen, Ravi Ramamoorthi, Hanrahan, Preetham Shading / Light Thanks to Srinivas Narasimhan, Langer-Zucker, Henrik Wann Jensen, Ravi Ramamoorthi, Hanrahan, Preetham Phong Illumination Model See Shirley, Ch 10 and http://en.wikipedia.org/wiki/phong_shading

More information

CSE 167: Lecture #7: Color and Shading. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2011

CSE 167: Lecture #7: Color and Shading. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2011 CSE 167: Introduction to Computer Graphics Lecture #7: Color and Shading Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2011 Announcements Homework project #3 due this Friday,

More information

Illumination. Illumination CMSC 435/634

Illumination. Illumination CMSC 435/634 Illumination CMSC 435/634 Illumination Interpolation Illumination Illumination Interpolation Illumination Illumination Effect of light on objects Mostly look just at intensity Apply to each color channel

More information

Radiometry & BRDFs CS295, Spring 2017 Shuang Zhao

Radiometry & BRDFs CS295, Spring 2017 Shuang Zhao Radiometry & BRDFs CS295, Spring 2017 Shuang Zhao Computer Science Department University of California, Irvine CS295, Spring 2017 Shuang Zhao 1 Today s Lecture Radiometry Physics of light BRDFs How materials

More information

Computer Graphics (CS 543) Lecture 8 (Part 1): Physically-Based Lighting Models

Computer Graphics (CS 543) Lecture 8 (Part 1): Physically-Based Lighting Models Computer Graphics (CS 543) Lecture 8 (Part 1): Physically-Based Lighting Models Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) BRDF Evolution BRDFs have evolved historically

More information

Announcements. Light. Properties of light. Light. Project status reports on Wednesday. Readings. Today. Readings Szeliski, 2.2, 2.3.

Announcements. Light. Properties of light. Light. Project status reports on Wednesday. Readings. Today. Readings Szeliski, 2.2, 2.3. Announcements Project status reports on Wednesday prepare 5 minute ppt presentation should contain: problem statement (1 slide) description of approach (1 slide) some images (1 slide) current status +

More information

Philpot & Philipson: Remote Sensing Fundamentals Interactions 3.1 W.D. Philpot, Cornell University, Fall 12

Philpot & Philipson: Remote Sensing Fundamentals Interactions 3.1 W.D. Philpot, Cornell University, Fall 12 Philpot & Philipson: Remote Sensing Fundamentals Interactions 3.1 W.D. Philpot, Cornell University, Fall 1 3. EM INTERACTIONS WITH MATERIALS In order for an object to be sensed, the object must reflect,

More information

Global Illumination. CMPT 361 Introduction to Computer Graphics Torsten Möller. Machiraju/Zhang/Möller

Global Illumination. CMPT 361 Introduction to Computer Graphics Torsten Möller. Machiraju/Zhang/Möller Global Illumination CMPT 361 Introduction to Computer Graphics Torsten Möller Reading Foley, van Dam (better): Chapter 16.7-13 Angel: Chapter 5.11, 11.1-11.5 2 Limitation of local illumination A concrete

More information

Mahdi M. Bagher / Cyril Soler / Nicolas Holzschuch Maverick, INRIA Grenoble-Rhône-Alpes and LJK (University of Grenoble and CNRS)

Mahdi M. Bagher / Cyril Soler / Nicolas Holzschuch Maverick, INRIA Grenoble-Rhône-Alpes and LJK (University of Grenoble and CNRS) Mahdi M. Bagher / Cyril Soler / Nicolas Holzschuch Maverick, INRIA Grenoble-Rhône-Alpes and LJK (University of Grenoble and CNRS) Wide variety of materials characterized by surface reflectance and scattering

More information

Chapter 7. Conclusions and Future Work

Chapter 7. Conclusions and Future Work Chapter 7 Conclusions and Future Work In this dissertation, we have presented a new way of analyzing a basic building block in computer graphics rendering algorithms the computational interaction between

More information

Mu lt i s p e c t r a l

Mu lt i s p e c t r a l Viewing Angle Analyser Revolutionary system for full spectral and polarization measurement in the entire viewing angle EZContrastMS80 & EZContrastMS88 ADVANCED LIGHT ANALYSIS by Field iris Fourier plane

More information

UWE has obtained warranties from all depositors as to their title in the material deposited and as to their right to deposit such material.

UWE has obtained warranties from all depositors as to their title in the material deposited and as to their right to deposit such material. Sohaib, A., Farooq, A., Smith, L., Smith, M. and Broadbent, L. () BRDF of human skin in the visible spectrum., (). pp. 0-. ISSN 00- Available from: http://eprints.uwe.ac.uk/ We recommend you cite the published

More information

Understanding Variability

Understanding Variability Understanding Variability Why so different? Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic aberration, radial distortion

More information

02 Shading and Frames. Steve Marschner CS5625 Spring 2016

02 Shading and Frames. Steve Marschner CS5625 Spring 2016 02 Shading and Frames Steve Marschner CS5625 Spring 2016 Light reflection physics Radiometry redux Power Intensity power per unit solid angle Irradiance power per unit area Radiance power per unit (solid

More information

A Multiscale Analysis of the Touch-Up Problem

A Multiscale Analysis of the Touch-Up Problem A Multiscale Analysis of the Touch-Up Problem Journal: CIC18 Manuscript ID: Draft Presentation Type: Oral Date Submitted by the Author: n/a Complete List of Authors: Ferwerda, James; Rochester Institute

More information

Announcement. Lighting and Photometric Stereo. Computer Vision I. Surface Reflectance Models. Lambertian (Diffuse) Surface.

Announcement. Lighting and Photometric Stereo. Computer Vision I. Surface Reflectance Models. Lambertian (Diffuse) Surface. Lighting and Photometric Stereo CSE252A Lecture 7 Announcement Read Chapter 2 of Forsyth & Ponce Might find section 12.1.3 of Forsyth & Ponce useful. HW Problem Emitted radiance in direction f r for incident

More information

Overview. Hierarchy. Level of detail hierarchy Texture maps Procedural shading and texturing Texture synthesis and noise.

Overview. Hierarchy. Level of detail hierarchy Texture maps Procedural shading and texturing Texture synthesis and noise. Overview Level of detail hierarchy Texture maps Procedural shading and texturing Texture synthesis and noise Hierarchy Physics Computer Graphics Geometrical optics Macro-structures Transport Micro-structures

More information

Ray Tracing: Special Topics CSCI 4239/5239 Advanced Computer Graphics Spring 2018

Ray Tracing: Special Topics CSCI 4239/5239 Advanced Computer Graphics Spring 2018 Ray Tracing: Special Topics CSCI 4239/5239 Advanced Computer Graphics Spring 2018 Theoretical foundations Ray Tracing from the Ground Up Chapters 13-15 Bidirectional Reflectance Distribution Function BRDF

More information

Shading. Brian Curless CSE 557 Autumn 2017

Shading. Brian Curless CSE 557 Autumn 2017 Shading Brian Curless CSE 557 Autumn 2017 1 Reading Optional: Angel and Shreiner: chapter 5. Marschner and Shirley: chapter 10, chapter 17. Further reading: OpenGL red book, chapter 5. 2 Basic 3D graphics

More information

Imaging Sphere Measurement of Luminous Intensity, View Angle, and Scatter Distribution Functions

Imaging Sphere Measurement of Luminous Intensity, View Angle, and Scatter Distribution Functions Imaging Sphere Measurement of Luminous Intensity, View Angle, and Scatter Distribution Functions Hubert Kostal, Vice President of Sales and Marketing Radiant Imaging, Inc. 22908 NE Alder Crest Drive, Suite

More information

GEOG 4110/5100 Advanced Remote Sensing Lecture 2

GEOG 4110/5100 Advanced Remote Sensing Lecture 2 GEOG 4110/5100 Advanced Remote Sensing Lecture 2 Data Quality Radiometric Distortion Radiometric Error Correction Relevant reading: Richards, sections 2.1 2.8; 2.10.1 2.10.3 Data Quality/Resolution Spatial

More information

Radiometry. Reflectance & Lighting. Solid Angle. Radiance. Radiance Power is energy per unit time

Radiometry. Reflectance & Lighting. Solid Angle. Radiance. Radiance Power is energy per unit time Radiometry Reflectance & Lighting Computer Vision I CSE5A Lecture 6 Read Chapter 4 of Ponce & Forsyth Homework 1 Assigned Outline Solid Angle Irradiance Radiance BRDF Lambertian/Phong BRDF By analogy with

More information

Multi angle spectroscopic measurements at University of Pardubice

Multi angle spectroscopic measurements at University of Pardubice Multi angle spectroscopic measurements at University of Pardubice Petr Janicek Eliska Schutzova Ondrej Panak E mail: petr.janicek@upce.cz The aim of this work was: to conduct the measurement of samples

More information

Sung-Eui Yoon ( 윤성의 )

Sung-Eui Yoon ( 윤성의 ) CS380: Computer Graphics Illumination and Shading Sung-Eui Yoon ( 윤성의 ) Course URL: http://sglab.kaist.ac.kr/~sungeui/cg/ Course Objectives (Ch. 10) Know how to consider lights during rendering models

More information

Radiance, Irradiance and Reflectance

Radiance, Irradiance and Reflectance CEE 6100 Remote Sensing Fundamentals 1 Radiance, Irradiance and Reflectance When making field optical measurements we are generally interested in reflectance, a relative measurement. At a minimum, measurements

More information

DIRSIG4 Reflectance Properties. Introduction. Contents. From DirsigWiki

DIRSIG4 Reflectance Properties. Introduction. Contents. From DirsigWiki DIRSIG4 Reflectance Properties From DirsigWiki Contents 1 Introduction 1.1 Material Measurements 1.2 DHR Integration 2 Reflectance Models 2.1 NEF Beard-Maxwell BRDF 2.2 Phong BRDF 2.3 Priest-Germer BRDF

More information

Capturing light. Source: A. Efros

Capturing light. Source: A. Efros Capturing light Source: A. Efros Review Pinhole projection models What are vanishing points and vanishing lines? What is orthographic projection? How can we approximate orthographic projection? Lenses

More information

Lighting affects appearance

Lighting affects appearance Lighting affects appearance 1 Source emits photons Light And then some reach the eye/camera. Photons travel in a straight line When they hit an object they: bounce off in a new direction or are absorbed

More information

Announcements. Radiometry and Sources, Shadows, and Shading

Announcements. Radiometry and Sources, Shadows, and Shading Announcements Radiometry and Sources, Shadows, and Shading CSE 252A Lecture 6 Instructor office hours This week only: Thursday, 3:45 PM-4:45 PM Tuesdays 6:30 PM-7:30 PM Library (for now) Homework 1 is

More information

Today. Global illumination. Shading. Interactive applications. Rendering pipeline. Computergrafik. Shading Introduction Local shading models

Today. Global illumination. Shading. Interactive applications. Rendering pipeline. Computergrafik. Shading Introduction Local shading models Computergrafik Thomas Buchberger, Matthias Zwicker Universität Bern Herbst 2008 Today Introduction Local shading models Light sources strategies Compute interaction of light with surfaces Requires simulation

More information

rendering equation computer graphics rendering equation 2009 fabio pellacini 1

rendering equation computer graphics rendering equation 2009 fabio pellacini 1 rendering equation computer graphics rendering equation 2009 fabio pellacini 1 physically-based rendering synthesis algorithms that compute images by simulation the physical behavior of light computer

More information

The Rendering Equation. Computer Graphics CMU /15-662

The Rendering Equation. Computer Graphics CMU /15-662 The Rendering Equation Computer Graphics CMU 15-462/15-662 Review: What is radiance? Radiance at point p in direction N is radiant energy ( #hits ) per unit time, per solid angle, per unit area perpendicular

More information

Topic 9: Lighting & Reflection models 9/10/2016. Spot the differences. Terminology. Two Components of Illumination. Ambient Light Source

Topic 9: Lighting & Reflection models 9/10/2016. Spot the differences. Terminology. Two Components of Illumination. Ambient Light Source Topic 9: Lighting & Reflection models Lighting & reflection The Phong reflection model diffuse component ambient component specular component Spot the differences Terminology Illumination The transport

More information

Light Tec Scattering measurements guideline

Light Tec Scattering measurements guideline Light Tec Scattering measurements guideline 1 Our Laboratory Light Tec is equipped with a Photometric Laboratory (a dark room) including: Goniophotometers: REFLET 180S. High specular bench (10 meters),

More information

dq dt I = Irradiance or Light Intensity is Flux Φ per area A (W/m 2 ) Φ =

dq dt I = Irradiance or Light Intensity is Flux Φ per area A (W/m 2 ) Φ = Radiometry (From Intro to Optics, Pedrotti -4) Radiometry is measurement of Emag radiation (light) Consider a small spherical source Total energy radiating from the body over some time is Q total Radiant

More information

Scientific imaging of Cultural Heritage: Minimizing Visual Editing and Relighting

Scientific imaging of Cultural Heritage: Minimizing Visual Editing and Relighting Scientific imaging of Cultural Heritage: Minimizing Visual Editing and Relighting Roy S. Berns Supported by the Andrew W. Mellon Foundation Colorimetry Numerical color and quantifying color quality b*

More information

Switzerland ABSTRACT. Proc. of SPIE Vol N-1

Switzerland ABSTRACT. Proc. of SPIE Vol N-1 Two-dimensional refractive index profiling of optical fibers by modified refractive near-field technique A. El Sayed* a,b, Soenke Pilz b, Manuel Ryser a, Valerio Romano a,b a Institute of Applied Physics,

More information

Shading. Reading. Pinhole camera. Basic 3D graphics. Brian Curless CSE 557 Fall Required: Shirley, Chapter 10

Shading. Reading. Pinhole camera. Basic 3D graphics. Brian Curless CSE 557 Fall Required: Shirley, Chapter 10 Reading Required: Shirley, Chapter 10 Shading Brian Curless CSE 557 Fall 2014 1 2 Basic 3D graphics With affine matrices, we can now transform virtual 3D objects in their local coordinate systems into

More information

Point Spread Function of Specular Reflection and Gonio-Reflectance Distribution 1

Point Spread Function of Specular Reflection and Gonio-Reflectance Distribution 1 Journal of Imaging Science and Technology R 59(1): 010501-1 010501-10, 2015. c Society for Imaging Science and Technology 2015 Point Spread Function of Specular Reflection and Gonio-Reflectance Distribution

More information

Micro-scale Surface and Contaminate Modeling for Polarimetric Signature Prediction

Micro-scale Surface and Contaminate Modeling for Polarimetric Signature Prediction Micro-scale Surface and Contaminate Modeling for Polarimetric Signature Prediction M.G. Gartley, S.D. Brown and J.R. Schott Digital Imaging and Remote Sensing Laboratory Chester F. Carlson Center for Imaging

More information

Discussion. Smoothness of Indirect Lighting. History and Outline. Irradiance Calculation. Irradiance Caching. Advanced Computer Graphics (Spring 2013)

Discussion. Smoothness of Indirect Lighting. History and Outline. Irradiance Calculation. Irradiance Caching. Advanced Computer Graphics (Spring 2013) Advanced Computer Graphics (Spring 2013 CS 283, Lecture 12: Recent Advances in Monte Carlo Offline Rendering Ravi Ramamoorthi http://inst.eecs.berkeley.edu/~cs283/sp13 Some slides/ideas courtesy Pat Hanrahan,

More information

Topic 9: Lighting & Reflection models. Lighting & reflection The Phong reflection model diffuse component ambient component specular component

Topic 9: Lighting & Reflection models. Lighting & reflection The Phong reflection model diffuse component ambient component specular component Topic 9: Lighting & Reflection models Lighting & reflection The Phong reflection model diffuse component ambient component specular component Spot the differences Terminology Illumination The transport

More information

Lighting. Figure 10.1

Lighting. Figure 10.1 We have learned to build three-dimensional graphical models and to display them. However, if you render one of our models, you might be disappointed to see images that look flat and thus fail to show the

More information

Radiometry Measuring Light

Radiometry Measuring Light 1 Radiometry Measuring Light CS 554 Computer Vision Pinar Duygulu Bilkent University 2 How do we see? [Plato] from our eyes flows a light similar to the light of the sun [Chalcidius, middle ages] Therefore,

More information

Acquisition and Representation of Material. Appearance for Editing and Rendering

Acquisition and Representation of Material. Appearance for Editing and Rendering Acquisition and Representation of Material Appearance for Editing and Rendering Jason Davis Lawrence A Dissertation Presented to the Faculty of Princeton University in Candidacy for the Degree of Doctor

More information

Re-rendering from a Dense/Sparse Set of Images

Re-rendering from a Dense/Sparse Set of Images Re-rendering from a Dense/Sparse Set of Images Ko Nishino Institute of Industrial Science The Univ. of Tokyo (Japan Science and Technology) kon@cvl.iis.u-tokyo.ac.jp Virtual/Augmented/Mixed Reality Three

More information

Illumination & Shading: Part 1

Illumination & Shading: Part 1 Illumination & Shading: Part 1 Light Sources Empirical Illumination Shading Local vs Global Illumination Lecture 10 Comp 236 Spring 2005 Computer Graphics Jargon: Illumination Models Illumination - the

More information

Global Illumination The Game of Light Transport. Jian Huang

Global Illumination The Game of Light Transport. Jian Huang Global Illumination The Game of Light Transport Jian Huang Looking Back Ray-tracing and radiosity both computes global illumination Is there a more general methodology? It s a game of light transport.

More information

w Foley, Section16.1 Reading

w Foley, Section16.1 Reading Shading w Foley, Section16.1 Reading Introduction So far, we ve talked exclusively about geometry. w What is the shape of an object? w How do I place it in a virtual 3D space? w How do I know which pixels

More information

Lighting and Reflectance COS 426

Lighting and Reflectance COS 426 ighting and Reflectance COS 426 Ray Casting R2mage *RayCast(R3Scene *scene, int width, int height) { R2mage *image = new R2mage(width, height); for (int i = 0; i < width; i++) { for (int j = 0; j < height;

More information

Measuring Light: Radiometry and Cameras

Measuring Light: Radiometry and Cameras Lecture 11: Measuring Light: Radiometry and Cameras Computer Graphics CMU 15-462/15-662, Fall 2015 Slides credit: a majority of these slides were created by Matt Pharr and Pat Hanrahan Simulating a pinhole

More information

Illumination and Shading

Illumination and Shading Illumination and Shading Computer Graphics COMP 770 (236) Spring 2007 Instructor: Brandon Lloyd 2/14/07 1 From last time Texture mapping overview notation wrapping Perspective-correct interpolation Texture

More information

Computer Graphics. Illumination and Shading

Computer Graphics. Illumination and Shading () Illumination and Shading Dr. Ayman Eldeib Lighting So given a 3-D triangle and a 3-D viewpoint, we can set the right pixels But what color should those pixels be? If we re attempting to create a realistic

More information

Lecture 22: Basic Image Formation CAP 5415

Lecture 22: Basic Image Formation CAP 5415 Lecture 22: Basic Image Formation CAP 5415 Today We've talked about the geometry of scenes and how that affects the image We haven't talked about light yet Today, we will talk about image formation and

More information

A Study of Scattering Characteristics for Microscale Rough Surface

A Study of Scattering Characteristics for Microscale Rough Surface Rose-Hulman Institute of Technology Rose-Hulman Scholar Graduate Theses - Physics and Optical Engineering Graduate Theses Spring 5-2014 A Study of Scattering Characteristics for Microscale Rough Surface

More information