Faceting Artifact Analysis for Computer Graphics


Lijun Qu and Gary Meyer
Department of Computer Science and Engineering and Digital Technology Center, University of Minnesota

Abstract
The faceting signal, defined in this paper as the difference signal between a rendering of the original geometric model and a rendering of a simplified version of the geometric model, is responsible for the faceting artifacts commonly observed in renderings of coarse geometric models. We analyze the source of the faceting artifacts and develop a perceptual metric for the visibility of the faceting signal. To illustrate the utility of the faceting signal, we describe two applications. First, we present a perceptually guided level of detail system that accounts for lighting, texturing, distance, and other factors that affect the performance of the human visual system. Second, we calculate the lower bound contrast of a texture that can be used to hide the faceting artifacts by using the masking properties of the human visual system.

1 Introduction
Faceting artifacts due to tessellation are commonly present in computer generated pictures. To combat this problem, the computer graphics community has taken two approaches. First, geometric models of higher resolution can be used. The faceting artifacts will disappear as the screen projection of each element of the geometric model reaches the scale of a pixel in the image. Second, shading models superior to flat shading can be used. For example, Phong shading decreases the faceted appearance because it includes a better normal approximation as part of the shading calculation. However, both approaches require more computation and thus penalize the frame rate. Visual perception provides another avenue for solving this problem without compromising the rendering speed. This solution can be accomplished by carefully selecting the surface texture so as to obscure the faceting artifacts.
Visual masking has long been used to hide certain artifacts in computer graphics. Bolin and Meyer [3] show that fewer samples are required in a ray tracing program when the image contains many frequencies, because the image content can hide the sampling noise. Ferwerda et al. [9] present a model of visual masking for computer graphics and show that surface texture can be used to mask faceting artifacts. However, most approaches use the principles of visual masking without identifying the artifact signal that is to be hidden. Knowing the artifact signal is critical for the application of visual masking: in order to apply its principles, we need to understand the frequency composition of both the artifact signal and the signal that serves as the masker. In this paper, we develop the concept of the faceting signal, defined as the difference signal between two images, one produced with the polygonal representation and another made with the ideal representation of a geometric model. The faceted appearance in a computer generated picture can be characterized by the faceting signal. It is important to understand the nature of the faceting signal, for example, its contrast, amplitude, and frequency distribution. This information can reveal solutions to the faceting problem. Even though understanding the nature of the faceting signal is very important, it remains an open research topic. In this paper, we develop the concept of the faceting signal and create a perceptual metric for determining its visibility. We present two applications that can benefit from this analysis. First, we present a level of detail system in which a level of detail model is selected for display based on the visibility of its faceting signal. Second, we predict the minimum contrast necessary for a texture to successfully hide the faceting signal for a geometric model.
1.1 Contributions
The contributions of this paper are the following:
- Develop the concept of the faceting signal.
- Analyze the sources that affect the faceting signal.
- Develop a perceptual metric for determining the visibility of the faceting signal.

- Develop a level of detail management system guided by the faceting signal.
- Develop a surface texturing algorithm directed by the faceting signal.

2 Prior Work
Error analysis in computer graphics: The computer graphics community often has to make a compromise between rendering speed and image quality. Not only does error analysis provide an estimate of the error in a computer generated image, it also often sheds light on how to control the error. Crow [7] analyzed the aliasing artifacts in computer generated images and presented several solutions to the problem of anti-aliasing. Error analysis has also been proposed for global illumination algorithms. Lischinski et al. [15] described a method for determining a posteriori bounds and estimates for local and total errors in radiosity solutions. Arvo et al. [1] proposed a framework for estimating errors in global illumination algorithms.

Visual difference metrics (VDMs): Visual difference metrics [8, 16] were originally developed in the imaging science industry to evaluate the effects of changing parameters in an imaging system. Several metrics incorporating most aspects of VDMs were developed for realistic image synthesis [9, 5]. These metrics have proven to be very useful for computer graphics applications. Recently, Ramanarayanan et al. [22] proposed the concept of visual equivalence and developed a metric for it.

Perceptually guided rendering: The computer graphics community has a long history of exploiting the properties of the human visual system to accelerate expensive global illumination algorithms [4, 28, 27]. Not only does visual perception provide a way to allocate computational resources, it also serves as a stopping condition for expensive algorithms. Other research uses the masking properties of the visual system to get away with less computation. Walter et al.
[29] describe a system that accelerates global illumination by using texture masking.

Perceptually Guided Level of Detail: More recently, researchers have begun to apply visual perception to polygonal techniques. Watson et al. [31] studied the visual fidelity of polygonal models. Reddy [24, 25] explored extensively the properties of the visual system and their application in level of detail systems. Luebke et al. [18] developed a mesh simplification algorithm guided by the contrast sensitivity function of the human visual system. Williams et al. [32] extended the result of Luebke et al. [18] to lit and textured models. Other properties of the visual system have also been explored in level of detail systems [6, 11, 12, 21]. Please refer to the LOD book [17] for more details.

Figure 1. The shading profile for a cylinder using flat shading, Gouraud shading, and Phong shading.

3 Faceting Artifact Analysis
We define the faceting signal as the difference signal between a rendering of a highly detailed geometric model, serving as the gold standard, and a rendering of a simplified version of the geometric model under the same image synthesis conditions. The faceting signal of an object is captured by placing a sphere of cameras around the ideal polygonal representation of the object and around a simplified version of the object, taking images of each object under the same image synthesis conditions, and subtracting each pair of images. The difference between two such images is the faceting signal for that particular viewing configuration. Similar to [24, 14], we place the sphere of cameras at the vertices of a rhombicuboctahedron to capture the faceting signal of a geometric model. Figure 1 shows the shading profile for a scanline of a regularly tessellated cylinder using different shading models. We assume a directional light source shines perpendicular to the cylinder surface. Figures 2(a), (c), and (e) illustrate the faceting signals for different shading models.
In this section, we explain how the faceting signal changes as a function of surface curvature, lighting, shading model, and textures, and we discuss some perceptual aspects that affect the visibility of the faceting signal. A cylinder is used to simplify the illustration.

3.1 Surface Curvature
It is well known that the silhouette edges of an object are the most problematic for low resolution models. The faceting signal can be used to explain why silhouette areas can be an issue for low resolution models. Figure 1 shows a scanline of the faceting signal of a cylinder using flat shading, Gouraud shading, and Phong shading. For flat shading, it is obvious that the steps become narrower and higher along the silhouettes. Bump mapping [2] allows for simulating

Figure 2. The faceting signals corresponding to different shading models and their frequency distributions.

the details of an object using a lower resolution model but has problems with silhouettes. Several techniques [26, 33] have been developed to address this problem. We can illustrate why the silhouettes are problematic by analyzing the faceting signal of a cylinder. We derive two equations that describe how the faceting signal changes as a function of surface location and surface curvature. We assume a directional light source shines perpendicular to the cylinder surface. Let us further assume that the cylinder sits on top of the xz plane and that the center of the cylinder is located at the origin. We focus our analysis on the diffuse component of the shading I for the cylinder:

I(x, r) = n · l = √(r² - x²) / r

where x runs from the center of the cylinder to the right. Taking the derivative with respect to x, we have:

dI(x) = -x / (r·√(r² - x²)) dx = -(1/r) · dx / √((r/x)² - 1)    (1)

dI(x) represents the faceting signal as a function of x. Its absolute value increases as x increases. This explains why the amplitude of the faceting signal increases along the silhouettes, as shown in Figure 2(a). Taking the derivative with respect to the radius r, we have:

dI(r) = x² / (r²·√(r² - x²)) dr    (2)

dI(r) represents the faceting signal as a function of r. Equation 2 is a decreasing function of r, which explains why the amplitude of the faceting signal becomes smaller as the radius of the cylinder becomes bigger (smaller curvature).

3.2 General Lighting Level
The magnitude of the faceting signal becomes stronger as the light source becomes brighter. This is due to the linearity of the shading intensity with respect to the lighting intensity in the lighting equation.
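The closed forms of Equations 1 and 2 in Section 3.1 can be sanity-checked numerically. Below is a minimal sketch (NumPy; all function names are hypothetical) that compares Equation 1 against a finite difference and confirms the two trends discussed in the text:

```python
import numpy as np

def diffuse_intensity(x, r):
    """Diffuse shading n.l for a cylinder of radius r, light perpendicular to the axis."""
    return np.sqrt(r * r - x * x) / r

def dI_dx(x, r):
    """Closed-form derivative of the shading with respect to x (Equation 1)."""
    return -x / (r * np.sqrt(r * r - x * x))

def dI_dr(x, r):
    """Closed-form derivative of the shading with respect to r (Equation 2)."""
    return x * x / (r * r * np.sqrt(r * r - x * x))

# Finite-difference check of Equation 1.
x, r, h = 0.5, 1.0, 1e-6
fd = (diffuse_intensity(x + h, r) - diffuse_intensity(x - h, r)) / (2 * h)
assert abs(fd - dI_dx(x, r)) < 1e-5

# |dI/dx| grows toward the silhouette (larger x), so the faceting steps get taller there.
assert abs(dI_dx(0.9, 1.0)) > abs(dI_dx(0.1, 1.0))

# dI/dr shrinks as the radius grows (smaller curvature, weaker faceting signal).
assert dI_dr(0.5, 4.0) < dI_dr(0.5, 2.0)
```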
By using the linearity of Fourier analysis, we can conclude that in the frequency domain the magnitude of the frequencies increases as the intensity of the general lighting level increases.

3.3 Shading Model
Different shading models can have a dramatic impact on the faceting signal. Figure 2 shows pictures of the faceting signals and their frequency distributions for three shading models: flat shading, smooth shading, and Phong shading. Flat shading causes clear faceting artifacts; smooth shading shows fewer faceting artifacts; Phong shading is almost equivalent to the ideal signal and is much better than the previous two shading methods in terms of faceting artifacts. This is due to the fact that different shading models use different surface normal approximations to calculate the pixel intensities.

3.4 Texture Mapping
Textures on the surface can also have an impact on the visibility of the faceting signal. Texture mapping can be considered a way to alter the faceting signal. There are different ways that a texture can be mapped onto a surface and thus change the faceting signal. For example, texels are multiplied with the shading of a fragment for diffuse color texturing. When a diffuse color texture is applied to a surface, due to this multiplication, the magnitude of the faceting signal is reduced, thus decreasing the visibility of the faceting signal.

3.5 Perceptual Aspects
The viewing distance of an observer also impacts the visibility of the faceting signal by affecting the perceived frequency of the faceting signal (in units of cycles per visual

degree). As the viewing distance increases, the visual angle subtended by an object becomes smaller. Therefore, its size on the image plane becomes smaller. This means that the frequencies of the object become higher, and the frequencies of its faceting signal also become higher. Since the human visual system is less sensitive to stimuli of high frequencies, the faceting artifacts are less visible as the viewing distance increases. This effect explains why a coarse geometric object looks better as the viewing distance increases. Other parameters, such as the adaptation state of the observer and eccentricity, also affect the visibility of the faceting signal.

4 Characterization of the Faceting Signal

4.1 Root Mean Square Metric for the Faceting Signal
The root mean square image metric has traditionally been used for image processing tasks because it is relatively easy to deal with mathematically. Lindstrom and Turk [14] observe that the goal of mesh simplification is to produce a simplified model that is visually similar to the original model, and they therefore propose using the root mean square image metric to prioritize edge collapse operations. In our faceting signal analysis framework, the facetness of a coarse geometric model can be described using the root mean square image difference:

d_RMS(M1, M2) = √( (1/mn) Σ_{i=1}^{m} Σ_{j=1}^{n} (y_ij - y1_ij)² )

where M1 and M2 represent two versions of a geometric model, y_ij represents the pixel value of a rendering of the first model at pixel location (i, j), and y1_ij represents the pixel value of a rendering of the second model at pixel location (i, j).

4.2 A Perceptual Metric for the Faceting Signal
Despite its advantages, the root mean square error metric does not correlate well with the visibility of the faceting signal.
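The root mean square metric of Section 4.1 is straightforward to implement. The sketch below (NumPy; the function name is hypothetical) computes it for two renderings stored as arrays:

```python
import numpy as np

def d_rms(img1, img2):
    """Root mean square difference between two m-by-n renderings (Section 4.1)."""
    y0 = np.asarray(img1, dtype=float)
    y1 = np.asarray(img2, dtype=float)
    return np.sqrt(np.mean((y0 - y1) ** 2))

# Two toy "renderings": identical images give zero error,
# and a constant offset of 0.1 gives an RMS error of exactly 0.1.
a = np.full((4, 4), 0.5)
assert d_rms(a, a) == 0.0
assert abs(d_rms(a, a + 0.1) - 0.1) < 1e-12
```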
In this section, we propose a perceptual metric for determining the visibility of the faceting signal, taking into account the luminance nonlinearity and contrast sensitivity function of the human visual system. A perceptual metric can further take into account other viewing parameters, such as the lighting adaptation level and observer distance, which all affect the visibility of the faceting signal. The structure of our perceptual metric is closely related to visual difference metrics [8, 16]. It consists of three stages: amplitude nonlinearity, contrast sensitivity function, and psychometric function. The result of contrast sensitivity function filtering is passed through the psychometric function and turned into a probability of visibility. These three stages are discussed in the following.

Luminance Nonlinearity Processing: It is well known that visual sensitivity and the perception of lightness are nonlinear functions of luminance. The amplitude nonlinearity describes the sensitivity variations as a function of background luminance. Various models have been proposed in the literature, such as the log model and various power laws. In this paper, we have chosen to use the cube root model of the nonlinearity of the visual system, which can be described as:

r_o = r_i^(1/3)

where the output value r_o is obtained by taking the cube root of the input value r_i.

Contrast Sensitivity Function: The contrast sensitivity function of the human visual system describes the variations in visual sensitivity as a function of spatial frequency. In our implementation, we have chosen to use the contrast sensitivity function described by Daly [8]. This function models the sensitivity of the visual system as a function of radial spatial frequency ρ in c/deg, orientation θ in degrees, light adaptation level l in cd/m², image size i² in visual degrees, lens accommodation due to viewing distance d in meters, and eccentricity e in degrees.
S(ρ, θ, l, i², d, e) = P · min[ S1(ρ / (r_a · r_e · r_θ), l, i²), S1(ρ, l, i²) ]

where

r_a = 0.856 · d^0.14, where d is the viewing distance in meters,
r_e = 1 / (1 + k·e), where e is the eccentricity in visual degrees and k = 0.24,
r_θ = ((1 - ob)/2) · cos(4θ) + (1 + ob)/2, where ob = 0.78, and

S1(ρ, l, i²) = ((3.23(ρ²·i²)^(-0.3))^5 + 1)^(-0.2) · A_l·ε·ρ·e^(-B_l·ε·ρ) · √(1 + 0.06·e^(B_l·ε·ρ))

where A_l = 0.801(1 + 0.7/l)^(-0.2) and B_l = 0.3(1 + 100/l)^0.15.

Psychometric Function: The psychometric function describes the probability of detection as a function of the signal contrast [19]. An example of the psychometric function is shown below:

P(c) = 1 - e^(-(c/α)^β)    (3)

where P(c) is the probability of detecting a signal of contrast c, α is the detection threshold, and β describes the

slope of the psychometric function and is largely invariant across many tasks.

Discussion: Even though our perceptual metric is based on psychophysical data, we still carried out a few test cases that allowed us to verify the validity of our visibility measure experimentally. Figure 3 shows a patch of a sine wave and its visibility maps for viewing distances of 1, 2, and 3 meters. Brighter regions in the visibility map indicate a higher probability of visibility. For a sine wave, the peaks and troughs are shown on the visibility map. As the distance increases, the peaks and troughs become narrower, which indicates that visibility decreases as the viewing distance increases. The average probability of visibility for each viewing distance is 0.71, 0.52, and 0.41, respectively. Besides viewing distance, we also performed experiments to verify that our metric works under different luminance values.

Figure 3. A sine wave and its visibility map at viewing distances of 1, 2, and 3 meters.

As an example of a more complicated faceting signal and its visibility map, Figure 4 shows the faceting signal (left) of the Stanford Bunny and its visibility map (right) at a viewing distance of 1 meter. From the visibility map, we can observe that the silhouette edges are more visible.

Figure 4. The left image shows an example of the faceting signal and the right image shows its visibility map.

Figure 5(a) shows how the magnitude of the faceting signal, calculated with the root mean square image difference metric, changes as the model size becomes smaller. Figure 5(b) shows how the probability of visibility of the faceting signal, calculated with the perceptual metric developed in Section 4.2, changes as the model size changes. Both error measures become larger as the model size becomes smaller. Different curves represent the error metric for the faceting signal at different camera viewing distances.
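The psychometric stage of the metric (Equation 3) translates directly into code. A minimal sketch; the slope β = 3.5 used below is a typical value assumed for illustration, not a value stated in this paper:

```python
import math

def psychometric(c, alpha, beta=3.5):
    """Probability of detecting a signal of contrast c (Equation 3):
    P(c) = 1 - exp(-(c / alpha)^beta)."""
    return 1.0 - math.exp(-((c / alpha) ** beta))

# At the threshold contrast alpha the detection probability is 1 - 1/e (about 0.63).
assert abs(psychometric(0.01, alpha=0.01) - (1.0 - math.exp(-1.0))) < 1e-12
# Detection probability rises monotonically with contrast.
assert psychometric(0.02, alpha=0.01) > psychometric(0.005, alpha=0.01)
```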
The two figures show a similar trend as the model size decreases and as the camera moves farther away from the model. One of the differences is that the perceptual metric takes into account the observer viewing distance while the root mean square error metric does not. The root mean square error metric does take into account general lighting intensities.

5 Level of Detail Selection
The perceptual metric described above has a direct application in perceptually guided level of detail management. A level of detail system mostly deals with three major issues: creation, selection, and switching. LOD creation includes generating several representations of a geometric model, including both static level of detail models and dynamic data structures from which a level of detail model can be extracted at runtime. Selection deals with the problem of selecting the most appropriate model based on a given set of rendering and viewing parameters, or a given error threshold. Switching deals with how to change from one LOD model to another while minimizing the switching artifacts. In this section, our effort is focused on the selection problem of the LOD system. We describe a LOD system that uses our visibility measure in the selection of level of detail models.

5.1 Faceting Signal Visibility Pre-Computation
Similar to the perceptually guided level of detail system described by Reddy [24], our perceptually driven level of detail system has an off-line preprocessing stage and a runtime stage. The preprocessing stage employs our perceptually guided visibility metric and calculates a table of average probabilities of visibility for use at LOD runtime. We pre-compute the average probability of visibility at different lighting intensities, observer distances, and camera distances. Please note that we take the average lighting intensity of an environment because the space of lighting is large and can be difficult to pre-calculate.
One of the fundamental differences between our approach and Reddy's is that our system computes the visibility of the faceting signal while Reddy's system computes the visibility of the image content. The off-line pre-computation process can be described as:

for each lighting intensity

Figure 5. The rms (left) and the probability of visibility (right) of the faceting signal for the Stanford Bunny of different model sizes at different camera distances.

    for each observer viewing distance
        for each camera distance
            for each level of detail model
                render the ideal and the current model
                find the faceting signal
                find its visibility
            end for
        end for
    end for
end for

In order to make the computation tractable, we quantize the lighting intensity, observer viewing distance, and camera distance into a set of bins based on system requirements. For example, if we understand beforehand that the observer will be looking at the object at a certain distance, we will focus the pre-computation on that distance range. Similarly, we quantize the lighting intensity, the camera distance to the object, etc. Figure 5 (right) shows the average probability of visibility for the Stanford Bunny at different camera distances.

5.2 LOD Runtime
During runtime, all the parameters, such as observer distance, camera distance, and general lighting intensity, are found. Then, given a visibility threshold, these parameters are used to index into the table to find the appropriate level of detail. One of the advantages of our system is that the user can choose a perceptually meaningful threshold. Our system also allows for specifying the observer's viewing distance. The visibility map pre-calculation is the most expensive part of our system. For example, for the Stanford Bunny model at seven different resolutions (simplified from the original using QSlim [10]), on a machine with a 1.8 GHz Xeon CPU and 1 GB of memory, it takes about 10 minutes to calculate the visibility of the faceting signal at three different camera distances, one lighting intensity level, 24 viewing positions, and one observer distance.
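The runtime selection step described above amounts to quantizing the viewing parameters and indexing a pre-computed table. A minimal sketch, with made-up bin edges and visibility values (all names and numbers are hypothetical):

```python
import bisect

# Hypothetical pre-computed table: visibility[(light_bin, observer_bin, camera_bin)]
# holds the average probability of visibility of the faceting signal for each
# LOD model (index 0 = coarsest). All values below are made up for illustration.
visibility = {
    (0, 0, 0): [0.90, 0.55, 0.20, 0.05],
}

LIGHT_BINS = [100.0]     # cd/m^2, upper edges of the quantization bins
OBSERVER_BINS = [2.0]    # observer viewing distance, meters
CAMERA_BINS = [5.0]      # camera distance, meters

def quantize(value, edges):
    """Map a continuous runtime parameter to the index of its pre-computed bin."""
    return bisect.bisect_left(edges, value)

def select_lod(light, observer_dist, camera_dist, threshold):
    """Return the coarsest LOD whose faceting signal visibility falls below
    the user-chosen, perceptually meaningful threshold."""
    key = (quantize(light, LIGHT_BINS),
           quantize(observer_dist, OBSERVER_BINS),
           quantize(camera_dist, CAMERA_BINS))
    for lod, p in enumerate(visibility[key]):
        if p <= threshold:
            return lod               # first (coarsest) acceptable model
    return len(visibility[key]) - 1  # fall back to the finest model

assert select_lod(50.0, 1.0, 3.0, threshold=0.3) == 2
```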
6 Surface Texturing
In this section, we discuss another application of faceting signal analysis: determining surface textures that minimize the faceting artifacts for bicubic subdivision surfaces. We draw upon research in visual masking to make the faceting artifacts less visible by using a texture as a masker that obscures the faceting signal. We select bicubic subdivision surfaces as our model because it is easy to calculate and control the major frequencies of their faceting signals. We choose to use Perlin noise [20] for the surface textures because it is easy to control its frequencies. According to the theory of visual masking, we seek textures that are similar in frequency and orientation to the faceting signal. Furthermore, the texture must have a certain contrast to be able to hide the faceting signal. The following section derives such a lower bound contrast for the Perlin texture.

6.1 Visual Masking
Visual masking refers to the phenomenon where a foreground signal decreases the visibility of a background signal. In visual masking experiments, the foreground signal used to mask the background signal is called the masker, and the background signal is called the signal [13]. This is a very strong phenomenon which has been explored both in computer graphics for realistic image synthesis and outside of computer graphics in areas such as auditory perception. Several vision models that incorporate masking have been developed in the imaging science industry [8, 16] and more recently in computer graphics [9]. Visual masking studies show that the foreground signal has stronger masking properties when it is similar in frequency and orientation to the

background signal. Furthermore, the higher the contrast of the foreground signal, the stronger the visual masking effect it produces. In the case of faceting artifacts, the background signal is the faceting signal, and the foreground signal is the texture. Visual masking has proven to be useful in computer graphics in the area of realistic image synthesis [23, 29].

6.2 Surface Texture Contrast Lower Bound
Let us represent the ideal shading profile for a geometric model as

s1(t) = i1(t) + a(t)·cos(ωt)

where i1(t) is the piecewise constant approximation of the ideal signal and a(t)·cos(ωt) is the localized frequency component of the faceting signal. In a similar manner, let us represent the texture as

s2(t) = i2(t) + b(t)·cos(ωt)

where i2(t) is the localized average intensity of the texture, and b(t)·cos(ωt) is the localized frequency component of the texture. The problem is to find the condition under which b(t)·cos(ωt) can mask out the faceting signal a(t)·cos(ωt). We assume the signal and the masker have the same frequency ω and phase. For clarity, we drop t from this point onwards.

We now analyze what happens for diffuse texture mapping, where the texture is modulated by the lighting. This can be represented as a pointwise multiplication of the shading profile s1(t) and the texture s2(t). In this case, the intensity profile for the rendered image is:

s(t) = s1·s2 = i1·i2 + (i1·b + i2·a)·cos(ωt) + a·b·cos²(ωt)
     = i1·i2 + (i1·b + i2·a)·cos(ωt) + ab/2 + (ab/2)·cos(2ωt)

where ab/2 + (ab/2)·cos(2ωt) can be omitted because they are higher order terms (a and b are small compared to i1 and i2). Therefore, we have

s(t) = i1·i2 + (i1·b + i2·a)·cos(ωt)

where i1·i2 represents the localized average intensity after texture mapping, i2·a·cos(ωt) is the difference signal to be masked, and i1·b·cos(ωt) is the masker. Given i1, i2, a, and b, we can find their relationship using experimental data from psychophysical studies. Since all this experimental data is in units of contrast, we have to convert i1, i2, a, and b into contrasts. The Michelson contrast is calculated using the equation

m = (L_max - L_min) / 2L

where L_max and L_min are the maximum and minimum intensity of the signal, and L is the average intensity. Using the above equation, the contrasts of the faceting signal and the texture are, respectively,

m1 = 2·i2·a / (2·i1·i2) = a/i1,    m2 = 2·i1·b / (2·i1·i2) = b/i2

In our implementation, we have followed the visual masking function proposed by Watson [30]. A similar visual masking function was used in [29]. This formula describes the relationship between the normalized increment threshold Δc and the normalized background contrast c as

Δc(c) = C · max(1, (c/C)^w)    (4)

In the contrast masking experiment, an observer is asked to discriminate two images of different contrasts. One image has contrast c, and the second image has contrast c + Δc. The increment Δc is varied to find the Δc at which the observer can just discriminate c + Δc from c. When the background contrast is zero, one measures the contrast detection threshold C. w is a constant with value 0.7. Using Equation 4, we can compute the lower bound contrast for the texture to be able to mask the difference signal:

m2 = m1 + Δc = m1 + C · max(1, (m1/C)^w)    (5)

6.3 Results
In this section, we report some promising results for bicubic subdivision surfaces. For subdivision surfaces, the major frequency of the faceting signal can be easily calculated by counting the number of subdivisions in object space. We therefore generate a Perlin texture of the same frequency and use this texture as a diffuse color texture. Please also note that it is important to match the orientation of the faceting signal and the orientation of the texture in order to achieve the maximum masking effect. We can then use Equation 5 to calculate the minimum texture contrast. Figure 6(a) shows a rendering of the Utah teapot with faceting artifacts. We have picked two spots on the teapot, marked by a green dot and a yellow dot, as shown in Figure 6(a). m1 is the contrast of the faceting signal, and m2 is the minimum contrast of the texture computed using Equation 5. The following table shows the values of these two parameters at the two spots.

Location      m1      m2
green dot
yellow dot
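The contrast conversion and the masking bound in Equations 4 and 5 translate directly into code. The sketch below uses hypothetical function names and an assumed detection threshold C = 0.01 for illustration:

```python
def michelson_contrast(L_max, L_min, L_mean):
    """Michelson contrast m = (L_max - L_min) / (2 * L_mean)."""
    return (L_max - L_min) / (2.0 * L_mean)

def masking_threshold(c, C, w=0.7):
    """Increment threshold from the masking function (Equation 4):
    delta_c(c) = C * max(1, (c / C)^w)."""
    return C * max(1.0, (c / C) ** w)

def texture_contrast_lower_bound(m1, C=0.01, w=0.7):
    """Minimum texture contrast m2 needed to mask a faceting signal of
    contrast m1 (Equation 5)."""
    return m1 + masking_threshold(m1, C, w)

# For the faceting signal s1 = i1 + a*cos(wt): L_max = i1 + a, L_min = i1 - a,
# mean = i1, so m1 = a / i1, as derived in the text.
i1, a = 0.8, 0.04
m1 = michelson_contrast(i1 + a, i1 - a, i1)
assert abs(m1 - a / i1) < 1e-12

# Below the detection threshold C the required increment is just C itself.
assert texture_contrast_lower_bound(0.005) == 0.005 + 0.01
# Above threshold the required increment grows as (m1 / C)^0.7.
assert abs(texture_contrast_lower_bound(m1) - (m1 + 0.01 * (m1 / 0.01) ** 0.7)) < 1e-12
```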

We see that the texture in Figure 6(b) can just mask out the faceting artifacts at the green dot. The texture in Figure 6(c) has enough contrast that it can mask out the faceting artifacts all the way to the yellow dot. Although it was possible to find a texture contrast that would mask the faceting artifacts across the entire top half of the teapot, as can be seen in Figures 6(b) and 6(c), this texture contrast was not sufficient to obscure the artifacts on the bottom portion of the teapot. This is because the direction of minimum curvature for the bottom part of the teapot is orthogonal to the direction of minimum curvature for the top portion. Because the texture being used only has frequency information in one direction, masking of the faceting artifacts in the bottom half of the teapot can only be accomplished by additional subdivision. Figure 6(d) shows the result of further subdividing the lower portion of the teapot to eliminate the faceting artifacts.

7 Conclusions and Future Work
In this paper, we introduced the concept of the faceting signal, which describes the artifact signal induced by coarse geometric models. We analyzed how it is affected by surface curvature, lighting, shading model, etc. We then described two metrics that characterize its visibility: the traditional root mean square error metric and a perceptually driven visibility metric. The importance of the faceting signal lies in the fact that it represents the artifact signal for coarse geometric models. The introduction of the faceting signal offers opportunities for designing better perceptually guided algorithms for computer graphics, especially algorithms that take advantage of visual masking, where identifying the artifact signal is critical to coming up with a good masker. We also included two examples that illustrate the utility of the faceting signal. First, we developed a perceptually guided level of detail selection system that selects the most appropriate geometric model.
Second, we developed a surface texturing algorithm that calculates the lower bound contrast for a texture that can serve as a successful masker. For future work, we would like to explore more applications that can benefit from the analysis of the faceting signal. It would also be interesting to extend our surface texturing algorithm to more complicated surfaces and textures.

Acknowledgements
We would like to thank the Stanford computer graphics group for making the Bunny model available and Michael Garland for his QSlim software.

References
[1] J. Arvo, K. Torrance, and B. Smits. A framework for the analysis of error in global illumination algorithms. In SIGGRAPH 94: Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pages 75-84, 1994.
[2] J. F. Blinn. Simulation of wrinkled surfaces. In SIGGRAPH 78: Proceedings of the 5th annual conference on Computer graphics and interactive techniques, 1978.
[3] M. R. Bolin and G. W. Meyer. A frequency based ray tracer. In SIGGRAPH 95: Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, 1995.
[4] M. R. Bolin and G. W. Meyer. A perceptually based adaptive sampling algorithm. In SIGGRAPH 98: Proceedings of the 25th annual conference on Computer graphics and interactive techniques, 1998.
[5] M. R. Bolin and G. W. Meyer. Visual difference metric for realistic image synthesis. In B. E. Rogowitz and T. N. Pappas, editors, Human Vision and Electronic Imaging IV, Proc. SPIE Vol. 3644, 1999.
[6] R. Brown, L. Cooper, and B. Pham. Visual attention-based polygon level of detail management. In GRAPHITE 03: Proceedings of the 1st international conference on Computer graphics and interactive techniques in Australasia and South East Asia, pages 55 ff., 2003.
[7] F. C. Crow. The aliasing problem in computer-generated shaded images. Commun. ACM, 20(11):799-805, 1977.
[8] S. Daly. The visible differences predictor: an algorithm for the assessment of image fidelity. 1993.
[9] J. A. Ferwerda, P. Shirley, S. N. Pattanaik, and D. P. Greenberg. A model of visual masking for computer graphics. In ACM SIGGRAPH Conference Proceedings, 1997.
[10] M. Garland and P. S. Heckbert. Surface simplification using quadric error metrics. In SIGGRAPH 97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques, 1997.
[11] S. Howlett, J. Hamill, and C. O'Sullivan. Predicting and evaluating saliency for simplified polygonal models. ACM Trans. Appl. Percept., 2(3):286-308, 2005.
[12] C. H. Lee, A. Varshney, and D. W. Jacobs. Mesh saliency. ACM Trans. Graph., 24(3), 2005.
[13] G. E. Legge and J. M. Foley. Contrast masking in human vision. Journal of the Optical Society of America, Dec. 1980.
[14] P. Lindstrom and G. Turk. Image-driven simplification. ACM Trans. Graph., 19(3):204-241, 2000.
[15] D. Lischinski, B. Smits, and D. P. Greenberg. Bounds and error estimates for radiosity. In SIGGRAPH 94: Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pages 67-74, 1994.
[16] J. Lubin. A visual discrimination model for imaging system design and evaluation. In Vision Models for Target Detection and Recognition. World Scientific, 1995.
[17] D. Luebke, B. Watson, J. D. Cohen, M. Reddy, and A. Varshney. Level of Detail for 3D Graphics. Elsevier Science Inc., 2002.
[18] D. P. Luebke and B. Hallen. Perceptually-driven simplification for interactive rendering. In Proceedings of the 12th Eurographics Workshop on Rendering Techniques, 2001.
[19] J. Nachmias. On the psychometric function for contrast detection. Vision Research, 1981.

Figure 6. Image (a) shows the teapot with faceting artifacts. Images (b) and (c) show textures of different contrasts mapped onto the teapot. Image (d) shows that the bottom half of the teapot requires more tessellation than the top half.

Mesh Simplification. Mesh Simplification. Mesh Simplification Goals. Mesh Simplification Motivation. Vertex Clustering. Mesh Simplification Overview Mesh Simplification Mesh Simplification Adam Finkelstein Princeton University COS 56, Fall 008 Slides from: Funkhouser Division, Viewpoint, Cohen Mesh Simplification Motivation Interactive visualization

More information

Texture Mapping. Brian Curless CSE 457 Spring 2015

Texture Mapping. Brian Curless CSE 457 Spring 2015 Texture Mapping Brian Curless CSE 457 Spring 2015 1 Reading Required Angel, 7.4-7.10 Recommended Paul S. Heckbert. Survey of texture mapping. IEEE Computer Graphics and Applications 6(11): 56--67, November

More information

3/1/2010. Acceleration Techniques V1.2. Goals. Overview. Based on slides from Celine Loscos (v1.0)

3/1/2010. Acceleration Techniques V1.2. Goals. Overview. Based on slides from Celine Loscos (v1.0) Acceleration Techniques V1.2 Anthony Steed Based on slides from Celine Loscos (v1.0) Goals Although processor can now deal with many polygons (millions), the size of the models for application keeps on

More information

The Traditional Graphics Pipeline

The Traditional Graphics Pipeline Final Projects Proposals due Thursday 4/8 Proposed project summary At least 3 related papers (read & summarized) Description of series of test cases Timeline & initial task assignment The Traditional Graphics

More information

INFOGR Computer Graphics. J. Bikker - April-July Lecture 10: Ground Truth. Welcome!

INFOGR Computer Graphics. J. Bikker - April-July Lecture 10: Ground Truth. Welcome! INFOGR Computer Graphics J. Bikker - April-July 2015 - Lecture 10: Ground Truth Welcome! Today s Agenda: Limitations of Whitted-style Ray Tracing Monte Carlo Path Tracing INFOGR Lecture 10 Ground Truth

More information

Graphics and Interaction Rendering pipeline & object modelling

Graphics and Interaction Rendering pipeline & object modelling 433-324 Graphics and Interaction Rendering pipeline & object modelling Department of Computer Science and Software Engineering The Lecture outline Introduction to Modelling Polygonal geometry The rendering

More information

A Real-time System of Crowd Rendering: Parallel LOD and Texture-Preserving Approach on GPU

A Real-time System of Crowd Rendering: Parallel LOD and Texture-Preserving Approach on GPU A Real-time System of Crowd Rendering: Parallel LOD and Texture-Preserving Approach on GPU Chao Peng, Seung In Park, Yong Cao Computer Science Department, Virginia Tech, USA {chaopeng,spark80,yongcao}@vt.edu

More information

Scanner Parameter Estimation Using Bilevel Scans of Star Charts

Scanner Parameter Estimation Using Bilevel Scans of Star Charts ICDAR, Seattle WA September Scanner Parameter Estimation Using Bilevel Scans of Star Charts Elisa H. Barney Smith Electrical and Computer Engineering Department Boise State University, Boise, Idaho 8375

More information

Simplification of Meshes into Curved PN Triangles

Simplification of Meshes into Curved PN Triangles Simplification of Meshes into Curved PN Triangles Xiaoqun Wu, Jianmin Zheng, Xunnian Yang, Yiyu Cai Nanyang Technological University, Singapore Email: {wuxi0006,asjmzheng,xnyang,myycai}@ntu.edu.sg Abstract

More information

Rasterization. MIT EECS Frédo Durand and Barb Cutler. MIT EECS 6.837, Cutler and Durand 1

Rasterization. MIT EECS Frédo Durand and Barb Cutler. MIT EECS 6.837, Cutler and Durand 1 Rasterization MIT EECS 6.837 Frédo Durand and Barb Cutler MIT EECS 6.837, Cutler and Durand 1 Final projects Rest of semester Weekly meetings with TAs Office hours on appointment This week, with TAs Refine

More information

Texture. Texture Mapping. Texture Mapping. CS 475 / CS 675 Computer Graphics. Lecture 11 : Texture

Texture. Texture Mapping. Texture Mapping. CS 475 / CS 675 Computer Graphics. Lecture 11 : Texture Texture CS 475 / CS 675 Computer Graphics Add surface detail Paste a photograph over a surface to provide detail. Texture can change surface colour or modulate surface colour. Lecture 11 : Texture http://en.wikipedia.org/wiki/uv_mapping

More information

A Comparison of Mesh Simplification Algorithms

A Comparison of Mesh Simplification Algorithms A Comparison of Mesh Simplification Algorithms Nicole Ortega Project Summary, Group 16 Browser Based Constructive Solid Geometry for Anatomical Models - Orthotics for cerebral palsy patients - Fusiform

More information

CS 475 / CS 675 Computer Graphics. Lecture 11 : Texture

CS 475 / CS 675 Computer Graphics. Lecture 11 : Texture CS 475 / CS 675 Computer Graphics Lecture 11 : Texture Texture Add surface detail Paste a photograph over a surface to provide detail. Texture can change surface colour or modulate surface colour. http://en.wikipedia.org/wiki/uv_mapping

More information

Simple Lighting/Illumination Models

Simple Lighting/Illumination Models Simple Lighting/Illumination Models Scene rendered using direct lighting only Photograph Scene rendered using a physically-based global illumination model with manual tuning of colors (Frederic Drago and

More information

Mach band effect. The Mach band effect increases the visual unpleasant representation of curved surface using flat shading.

Mach band effect. The Mach band effect increases the visual unpleasant representation of curved surface using flat shading. Mach band effect The Mach band effect increases the visual unpleasant representation of curved surface using flat shading. A B 320322: Graphics and Visualization 456 Mach band effect The Mach band effect

More information

G 2 Interpolation for Polar Surfaces

G 2 Interpolation for Polar Surfaces 1 G 2 Interpolation for Polar Surfaces Jianzhong Wang 1, Fuhua Cheng 2,3 1 University of Kentucky, jwangf@uky.edu 2 University of Kentucky, cheng@cs.uky.edu 3 National Tsinhua University ABSTRACT In this

More information

Ambien Occlusion. Lighting: Ambient Light Sources. Lighting: Ambient Light Sources. Summary

Ambien Occlusion. Lighting: Ambient Light Sources. Lighting: Ambient Light Sources. Summary Summary Ambien Occlusion Kadi Bouatouch IRISA Email: kadi@irisa.fr 1. Lighting 2. Definition 3. Computing the ambient occlusion 4. Ambient occlusion fields 5. Dynamic ambient occlusion 1 2 Lighting: Ambient

More information