Stylized Rendering in Lugaru


Michael Lester    Cary Laxer
Rose-Hulman Institute of Technology
e-mail: lesterwm@siggraph.org, laxer@rose-hulman.edu

Figure 1: A screenshot from the final stylized version of Lugaru.

Abstract

We analyze a variety of techniques that can be applied to real-time applications in order to convey a stylized look, and present the results of applying these techniques posthumously to the 3D action game Lugaru. Also known as Non-Photorealistic Rendering (NPR), stylized rendering provides the opportunity for a fresh look to games with dated graphics. In this paper, we look at three classes of NPR techniques that can be relatively easily implemented into already-released applications: Toon Shading, Silhouette Edge Rendering, and Surface Angle Silhouetting. Working together, these techniques can achieve a compelling style without reducing performance.

Contents

1 Related Work
2 Toon Shading
  2.1 Previous Work
  2.2 Our Technique
  2.3 Implementation
  2.4 Future Work
    2.4.1 X-Toon
    2.4.2 Gooch Shading
3 Silhouette Edge Rendering
  3.1 Implementation
    3.1.1 Cross Filter
    3.1.2 Sobel Filter
  3.2 Future Work
4 Surface Angle Silhouetting
  4.1 Previous Work
  4.2 Implementation
  4.3 Future Work
5 Remaining Work

1 Related Work

A thorough survey of the stylized rendering field has been conducted by Akenine-Moller et al. in their soon-to-be-classic Real-Time Rendering [2008]. Our research closely follows several of the topics covered in that text. For new entrants to the field of stylized rendering or NPR, their work is a must read.

While Akenine-Moller presents stylized rendering from a theory perspective, several papers have been published from the industry that describe implementation details, similar to this paper. Mitchell et al. describe the fascinating approach which Valve took while implementing their lighting pipeline in Team Fortress 2. Prince of Persia (2008) uses techniques quite similar to those covered in this paper, and to great effect [St-Amour 2010]. Another game which applies similar stylized techniques is Gearbox Software's Borderlands [Thibault and Cavanaugh 2010].

2 Toon Shading

Probably the most widespread form of stylized rendering, Toon Shading reduces the amount of visual detail on objects in order to emphasize their key features. Basically, the model is rendered in a limited set of solid colors with solid divisions between them. By removing visual clutter, such as high-frequency textures or complex shading, only visually relevant information is emphasized. The Link character depicted in Figure 2 from The Legend of Zelda: The Wind Waker is a great example of toon shading in games. In his book Understanding Comics [1993], McCloud notes that traditional animators use this technique to achieve what he calls "amplification through simplification". Reducing shading detail in characters, especially their faces, allows a wider audience to relate to the character.
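The quantization at the heart of toon shading can be sketched on the CPU. The following Python fragment (ours, purely for illustration; it is not the paper's shader code) snaps a smoothly varying diffuse intensity to a fixed set of solid bands:

```python
# Toon shading reduces a smooth diffuse term to a few solid bands.
# Minimal CPU-side sketch; function name and band count are ours.

def toon_quantize(intensity, bands):
    """Snap a diffuse intensity in [0, 1] to the nearest of `bands` levels."""
    intensity = min(max(intensity, 0.0), 1.0)
    step_count = bands - 1
    return round(intensity * step_count) / step_count

# Eleven smoothly varying intensities collapse to just 3 discrete levels:
samples = [toon_quantize(x / 10.0, bands=3) for x in range(11)]
assert sorted(set(samples)) == [0.0, 0.5, 1.0]
```

The solid divisions between bands are exactly the hard light/shadow boundaries the techniques below produce with a 1D texture lookup.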

Figure 2: An example of Toon Shading from the game The Legend of Zelda: The Wind Waker. (Image courtesy of Nintendo Co.)

2.1 Previous Work

Lake et al. were among the first to propose the use of this technique in real-time applications [2000]. They refer to their approach as "hard shading". Instead of smoothly shading an object from light to dark areas as in Gouraud or Phong shading, hard shading finds the transitional boundary and shades each side with a lit or unlit color. This has the effect of quantizing the shading for an object. Thus, instead of a smooth transition between areas of light and shadow, there is a hard boundary. Since these boundaries follow the object's contours, shape is clearly perceived while unnecessary or distracting information is reduced. Equation (1) shows the traditional shading equation used for smooth shading:

C_i = a_g a_m + a_l a_m + max(L · n, 0) d_l d_m    (1)

Here, C_i is the vertex color, a_g is the coefficient of global ambient light, a_l and d_l are the ambient and diffuse coefficients of the light source, and a_m and d_m are the ambient and diffuse coefficients of the object's material. L is the unit vector from the vertex to the light source, and n is the unit vector normal to the surface at the vertex. L · n results in the cosine of the angle formed between the two vectors.

Instead of simply shading the object with the result of equation (1), hard shading uses the L · n term to access a 1D texture map. In the simplest form, this texture contains just two texels, one for the lit color and one for the shadowed color (for an example texture, see Figure 3). This texture is precomputed before runtime. The shadowed texel color can be computed by replacing L · n with 0, which is equivalent to illumination only from ambient light. The lit texel is similarly calculated by replacing L · n with 1. At runtime, L · n is used to access the texture, and the result of the texture lookup is the color at that vertex. Thus, if the light direction is perpendicular to or pointing
away from the surface normal, then L · n is equal to 0 and the dark texel is accessed. But if the surface normal points in the direction of the light source, L · n is greater than 0.5 and the lit texel is accessed.

2.2 Our Technique

Because the hard shading technique proposed by Lake et al. was created before the advent of the programmable pipeline, there are many improvements to be had. Previously, both the lit and unlit diffuse colors of the model were embedded in the precomputed 1D texture map. The problem with this approach is that a different texture is required for every material in the game. Modern games may have several thousand materials, so this is clearly unfavorable. Instead, our 1D texture, or "shade map", stores what we call the shadow term. When the object is shaded, its unlit diffuse color, or albedo, is multiplied by the shadow term. This results in a color equivalent to the original hard shading technique. However, a single 1D texture can be used to shade multiple objects, because their diffuse color is not stored in the texture itself.

As suggested by Lake, by using 1D texture maps of much higher resolution than two texels, artists gain more control over the shading style [2000]. We take advantage of this in several ways. With increased resolution, the hard edge between lit and unlit areas can be softened while still remaining distinct. The additional texels also allow for more than one shading edge. For example, the texture map in Figure 4 produces 3 discrete lighting steps: highlight, normal, and shadowed. This allows a much broader range of styles to be implemented.

Previously, the clamped L · n term was used to access the texture map. However, because the dot product varies from -1 to 1, this severely limits the control we have over the shading of shadowed areas. Because any dot product less than zero will be clamped to zero, surfaces that are pointing away from the light source will always have a dot product of 0 and thus access the darkest area of the shade map. Thus, the dynamic range
of the shade map is used only to control surfaces that are facing towards the light source. We can easily achieve full control over the shadowed areas by a simple bias and scale of the L · n term. Our shading equation then becomes:

C_i = a_g a_m + a_l a_m + shademap((1 + L · n) / 2) d_l d_m    (2)

Figure 3: L · n is used to access the 1D texture map [Lake et al. 2000].

Note that the shade of shadowed areas can now be controlled through the shade map, giving the texture artist greater control over the style of the object. The result of the shade map access is multiplied by the object's albedo to calculate the shaded pixel color. The values in the shade map can vary only within the range [0, 1]. This means that when shading objects, their albedo can only

be darkened or unchanged. By introducing a scale, s, to the shade map color, we allow for the possibility of true highlights, where the object's albedo is actually brightened. Because a scale applied to 0 is still 0, this effectively increases the shade map's dynamic range, allowing it to range from 0 to s. Our final shading equation is then:

C_i = a_g a_m + a_l a_m + shademap((1 + L · n) / 2) · s · d_l d_m    (3)

Figure 4: A 3-shade texture can be used to provide 3 distinct shading intensities.

2.3 Implementation

While the original hard shading algorithm only ran on each vertex, our technique is implemented in the pixel shader, which means it calculates the lighting for each pixel in the scene. This reduces the dependency on the number of polygons in an object and allows even rough models to be shaded appropriately.

The original Lugaru rendering engine relied entirely on textures; there were no lighting calculations performed at all. As such, the ambient and diffuse terms were rarely set correctly for any given model. Instead, these terms were essentially baked into the material texture. Because Lugaru takes place entirely in outdoor environments, there is only a single light source, the sun. Knowing this allows us to further simplify our lighting equation. We can assume that a_l and d_l will never vary, and thus remove them from our equation and bake them into the material's diffuse texture. This greatly simplifies our lighting equation, which now becomes:

C_i = diffusetexture(u, v) · shademap((1 + L · n) / 2) · s    (4)

The scale variable, s, determines the amount by which the shade map can brighten the diffuse texture. We found that a scale of 1.5 produces good looking results, as the original textures were fairly dark. Fully lit areas of objects, especially characters with high-frequency textures, tend to take on a glowing appearance. The 1.5 scale tends to wash out some of the detail of high-frequency textures, which fortunately mimics HDR effects such as bloom. In fact, the scale introduced to our lighting
equation is, in a very rough way, imitating HDR. We are trading reduced resolution in our shade map for increased dynamic range. This allows us to simulate a bloom effect almost for free. GLSL shader code for calculating the pixel color:

//Calculate the LdotN term then bias
//and scale it to the range [0, 1]
intensity = dot(light[0].position, normalVec);
intensity = (intensity + 1.0) / 2.0;

//Get our diffuse color from the material texture
tap = texture2D(diffuseTex, gl_TexCoord[0].st);
diffuse = tap.rgb;

//Use the intensity to access the shade map,
//then scale by 1.5 to achieve "glow" effect
shadowTerm = texture1D(toonTex, intensity).rgb * 1.5;

//Shade the pixel to get our final color
color = diffuse * shadowTerm;

Figure 5: The bunny on the left was shaded with traditional Phong lighting, while the bunny on the right was shaded with our toon shading technique using the texture from Figure 4.

You can see the results of this shading model in Figure 1. The scale produces a pleasant glow, while the hard line between light and shadow effectively describes the characters' contours. Because of the high-frequency diffuse textures featured by most objects in the game, this technique is not true toon shading, but similar goals are still accomplished. The discrete shading helps to simplify the object, while the hard line between light and shadow emphasizes the contours and shape of the object. Visual noise is reduced by the glow effect, which helps to focus the player's attention on more relevant information. There are still many improvements to be made, but we are pleased with the result.

2.4 Future Work

2.4.1 X-Toon

Barla et al. introduce the idea of adding a second dimension to the shade map [2006]. The horizontal axis still corresponds to the shade as in the traditional 1D shade map, but the vertical axis corresponds to what Barla calls "tone detail"; each step along this axis is essentially its own shade map. Thus the 2D texture can be thought of as a stack of traditional 1D shade maps. An arbitrary attribute,
such as depth, can be used to access the vertical dimension. In that case, as the object gets farther away, a different shade map is used to shade the object (see Figure 6). This allows for many effects, such as depth-of-field, specular highlights, or level-of-abstraction (LOA). By implementing this technique to extend our current algorithm, we could greatly simplify the shading of distant character models. This could be used to make distant enemies more clearly visible or unimportant characters less so. We could also achieve better LOA for the distant terrain, having it morph more smoothly into the fog color or skybox color as it becomes farther away.

Figure 6: An example of a 2D shade map. The second dimension controls the level of abstraction [Barla et al. 2006].

2.4.2 Gooch Shading

While the hard shading technique effectively thresholds the diffuse lighting value, we can also use our shade map to warp it. By introducing color into the shade map, we can alter the hue of objects based on their illumination. But why is this helpful? Traditional artists discovered long ago that warm hues appear to pop off the page while cool hues tend to recede. Gooch et al. propose introducing an artificial hue shift to objects in order to take advantage of this observation [1998]. So instead of simply shading shadowed areas dark and lit areas bright, we give dark areas a cool blue hue and bright areas a warm red hue. Valve used this technique to great success in their recent game Team Fortress 2 [Mitchell et al. 2007].

Figure 7: A temperature shift (varying the hue from warm to cool) makes lit surfaces pop and shadowed areas recede [Gooch et al. 1998].

This technique could be added quite easily to our own: the shade map can be edited so that the left side is cool and the right is warm. However, the amount of temperature shift that we would want for each object would most likely vary, which means a new shade map would need to be authored for each of these variations. This extension would, especially in conjunction with silhouette rendering, provide even more stylized abstraction.

3 Silhouette Edge Rendering

Silhouette Edge Rendering is perhaps the most thoroughly researched topic related to stylized rendering. There are several different classes of algorithms that each contain a multitude of techniques. Any given technique can typically be classified as one of the following:

Geometric Silhouetting: Render the front faces normally, but render the backfaces in such a way as to make the silhouette edges visible. This includes techniques such as shell expansion [Hart et al. 2001] or z-bias [Raskar and Cohen 1999].

Image-based Silhouetting: Execute image-space edge detection filters on the depth and normal maps. Composite the results onto the original image.

Silhouette Edge Detection: Traverse the edge list for each object, marking those that are a silhouette edge. Render the silhouette edges while ignoring internal edges.

Hybrid Silhouetting: Detect the silhouette edges, then perform image-space operations upon them, such as linking and smoothing [Northrup and Markosian 2000].

Since our research focused on implementing stylization techniques posthumously into Lugaru, we required a silhouette edge rendering algorithm that had the fewest possible dependencies on the game data and logic. Image-based techniques meet this criterion, as only the normal and depth maps are needed. Unlike the other classes of techniques, no preprocessing of the data is needed, which greatly simplifies the implementation. Another advantage of image-based techniques is that they work with all kinds of objects, such as N-Patch primitives or alpha-blended textures. This is especially important in our implementation, as Lugaru relies heavily on alpha-blended textures to render foliage.

Image-based edge detection is applicable to many fields other than computer graphics, and as such, it has been the focus of much research. However, for most techniques the general approach is the same; most of the variation occurs in the actual edge detection filter applied to the image or images. The basic algorithm is thus:

1. Render the world-space normals and z-depths to a texture (or textures).
2. Draw a screen-filling quad textured with the normal and depth map.
3. Apply an edge detection filter to the image.
4. Composite the filtered image with the normally rendered scene.

Edge detection filters work by calculating the gradient of the image. By analyzing the gradient of the image, first order (or higher) discontinuities can be found. With regards to the depth and normal maps, these discontinuities correspond to silhouette and contour edges.
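The four steps above can be exercised in miniature on the CPU. This Python sketch (illustrative only, not the game's GPU implementation) treats a depth buffer as a 2D grid, flags first-order discontinuities with a simple forward-difference filter, and leaves compositing to the caller:

```python
# Miniature CPU version of the image-space pipeline: flag first-order
# depth discontinuities with a forward-difference gradient proxy.

def detect_edges(depth, threshold):
    h, w = len(depth), len(depth[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Forward differences, clamped at the image border.
            dx = abs(depth[y][x] - depth[y][min(x + 1, w - 1)])
            dy = abs(depth[y][x] - depth[min(y + 1, h - 1)][x])
            if dx + dy > threshold:  # cheap stand-in for gradient magnitude
                edges[y][x] = 1
    return edges

# A flat foreground object (depth 0.2) in front of a far background (0.9).
depth = [
    [0.9, 0.9, 0.9, 0.9],
    [0.9, 0.2, 0.2, 0.9],
    [0.9, 0.2, 0.2, 0.9],
    [0.9, 0.9, 0.9, 0.9],
]
edges = detect_edges(depth, threshold=0.5)
assert edges[1][1] == 0  # interior of the object: planar, no edge
assert edges[1][0] == 1  # silhouette boundary against the background
```

Only depth discontinuities are shown here; the real filters below also sample a normal map so that interior creases (same depth, different orientation) are caught too.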

3.1 Implementation

There are many different edge detection algorithms, each with their own advantages and performance costs. For our artistic purposes we wanted relatively thick lines, but wanted to keep the performance cost to a minimum. On modern hardware, the most common choke point is texture accesses. This means we need to keep the kernel size to a minimum while still achieving reasonable results. Following these criteria, we went through a myriad of different techniques, two of which are described here.

3.1.1 Cross Filter

First we followed an implementation given by Card and Mitchell [2002]. First, sample the depth and normal map 5 times in a cross pattern. The differences between the outer samples and the central sample are accumulated. If this sum is larger than a certain threshold, this point must be near an edge. Conversely, points on a planar surface will have a sum close to 0, and thus will not be marked as an edge. For the normal map samples, the difference is accumulated multiplicatively as the dot product of the central normal and the sample normal. The GLSL shader code is as follows:

float sampleDepth = texture2D(depthTex, gl_TexCoord[0].st).r;
vec3 sampleNormal = texture2D(normalTex, gl_TexCoord[0].st).rgb;

for (i = 0; i < 4; i++) {
    float depth = texture2D(depthTex, gl_TexCoord[0].st + offsets[i]).r;
    vec3 norm = texture2D(normalTex, gl_TexCoord[0].st + offsets[i]).rgb;

    dDepth += abs(sampleDepth - depth);
    dNorm *= step(0.0, dot(sampleNormal, norm) - normalThreshold);
}
dDepth = 1.0 - dDepth;

//Composite with both depth and normal edges
color *= dDepth * dNorm;

This algorithm is easily understood and implemented, but unfortunately didn't produce satisfactory results (see Figure 8). Due to the low precision and sample size of the normal map, there were many misses. Malan suggests using a multisampled normal map with a sample offset of just half a pixel (with linear interpolation enabled), so that each texture access will sample from the 4 surrounding texels [2009]. This effectively results in a 20 pixel neighborhood being sampled. Unfortunately, we did not have access to a multisampled buffer, so this enhancement could not be studied. Another problem which emerged was that of false edges being detected where there were none. These occurred at pyramid points on objects with low tessellation. This is most likely due to the way the difference between normal vectors is accumulated.

3.1.2 Sobel Filter

Deciding to increase the number of texture samples offers the potential to reduce or eliminate the problems of the last technique. Thus, we next tried a 3x3 Sobel filter. This filter is actually made up of two separable filters, one horizontal and one vertical, which are run over the image and then composited together. First, 9 samples are taken in a 3x3 grid. Then these samples are weighted by either the horizontal or vertical kernel, pictured in Figure ??, and then summed. The sums from the horizontal and vertical filters at each point yield the gradient vector. This vector points in the direction of the greatest change which, when filtering a depth map, will most likely be an edge. The magnitude of the gradient vector is the rate of change in that direction. Thus, if the magnitude is above a certain threshold, the point is on an edge.

Figure 8: False edges and misses produced by the initial edge detection filter.

Unfortunately, the implementation is not quite as straightforward. The unoptimized GLSL shader code is detailed below:

vec3 centerNormal = texture2D(normalTex, gl_TexCoord[0].st).rgb;

//fetch the 3x3 neighbourhood and use the RGB
//vector's length as intensity value
for (int i = 0; i < 3; i++) {
    for (int j = 0; j < 3; j++) {
        normSample = texture2D(normalTex, gl_TexCoord[0].st + offsets[3*i + j]).rgb;
        depthSample = texture2D(depthTex, gl_TexCoord[0].st + offsets[3*i + j]).r;

        normI[i][j] = dot(normSample, centerNormal);
        depthI[i][j] = depthSample;
    }
}

//calculate the convolution values for both masks
for (int i = 0; i < 2; i++) {
    float depthDP3 = dot(G[i][0], depthI[0]) + dot(G[i][1], depthI[1]) + dot(G[i][2], depthI[2]);
    float normDP3 = dot(G[i][0], normI[0]) + dot(G[i][1], normI[1]) + dot(G[i][2], normI[2]);

    depthCnv[i] = depthDP3 * depthDP3;
    normCnv[i] = normDP3 * normDP3;
}

float normEdge = step(0.08, 0.5 * sqrt(normCnv[0] + normCnv[1]));
float depthEdge = step(0.001, sqrt(depthCnv[0] + depthCnv[1]));

return 1.0 - (normEdge + depthEdge);

The Sobel filter is well known for being good at detecting first order discontinuities while disregarding those of higher frequency. This works well in our case because depth maps are typically very low frequency. As we expected, the Sobel filter performs better than our original filter. However, there are still several obvious unpleasantries. With this implementation, edges that are nearly horizontal are often missed, while vertical edges are almost always detected. This is most likely due to an implementation problem. Despite the greater number of samples, the normal map filter still misses quite a few edges. Since we are still filtering a single-sampled normal map, the same precision problem still lingers. Using 9 samples allows us to detect edges from samples that are farther away, resulting in increased line width. But increased line width means increased aliasing. There are many potential techniques that can be used to remedy this; these are discussed in Section 3.2.

Figure 9: A Sobel filter eliminates false hits, but still misses many edges, especially those that are nearly horizontal.

3.2 Future Work

One of the biggest problems with our current edge rendering technique is that the lines are very thin and contribute very little to the final composited scene. Card and Mitchell suggest using morphological image processing to dilate the lines for a stronger effect [Card and Mitchell 2002]. In addition to strengthening the silhouette rendering effect, dilation could also cover some of the missed edges, though considering the current state, this is unlikely.

A more direct approach to reducing the amount of misses is to increase the precision of the depth and normal maps. Cavanaugh presents a very effective method of encoding the depth texture to maximize its performance when filtered [Thibault and Cavanaugh 2010]. As for the normal map, the best way to increase precision is to use a multisampled buffer. The cost of multisampling is slightly amortized by the fact that the edges detected from this normal map will be semi-antialiased [Malan 2009].

Another method of antialiasing the edge lines can be adapted from the technique known as temporal anti-aliasing. The idea is that every frame, the sample offsets are rotated by a random amount. This effectively jitters the samples such that for each frame, the edge line is drawn with a slightly different aliasing pattern. When viewed at real-time frame rates, these aliasing patterns will merge and appear antialiased. Again, like introducing a multisampled normal buffer, the cost of introducing temporal antialiasing is amortized, because if all the edges of a scene are antialiased, then the whole scene is effectively antialiased. An extension of this would be to implement full Morphological Antialiasing (MLAA) [Reshetov 2009], but we will leave further research into that topic as an exercise for the reader.

4 Surface Angle Silhouetting

While technically Surface Angle Silhouetting is a form of Silhouette Edge Detection, we make the distinction here due to the vast difference in implementation details. In this technique, surfaces that are nearly edge-on to the viewing angle are said to be silhouette edges and are drawn dark. This form of silhouetting can be implemented in a single pass, requires only a single dot product and texture fetch, and is thus extremely fast and efficient.

4.1 Previous Work

Figure 10: Surfaces that are viewed at glancing angles are drawn dark, producing artistic outlines [Gooch et al. 1999].

This technique was first researched by Gooch et al. for the purposes of technical rendering [1999]. To determine if a surface is edge-on, the dot product of the surface normal and the view vector is computed. If the dot product is near zero, the surface is edge-on. Gooch originally proposed implementing this technique by adding a dark edge line to the environment map as a preprocess. Since N · V is used to access the environment map, values close to 0 will access the edge and be darkened. Since the edge line is uniform around the outside of the map, this is equivalent to using N · V to access a 1D texture [Akenine-Moller et al. 2008].

Because the silhouette edge thickness depends on the curvature of the object, line width can vary greatly. This can be both a feature and a downfall. For highly tessellated objects, such as those pictured in Figure 10, this technique produces artistic looking outlines. But for objects of low or no curvature, such as cubes, this technique fails. This dependency on curvature also makes the style almost impossible to control. Due to these limitations, Surface Angle Silhouetting is often neglected as a stylization technique. For example, Wu considered using it in the game Cel Damage, but found that it only produced pleasant results for one quarter of the in-game models [Wu 2002].
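The 1D-texture equivalence described above can be sketched on the CPU. In this Python fragment (ours, for illustration; the texture contents and names are invented), a small 1D "texture" whose first texel is dark is indexed by the clamped N · V term, darkening surfaces that are nearly edge-on to the viewer:

```python
# Sketch of the N·V silhouette test: a 1D edge texture indexed by the
# clamped dot product of surface normal and view vector. Illustrative
# only; the 8-texel texture and function names are ours.

def dot3(a, b):
    return sum(x * y for x, y in zip(a, b))

# 8-texel edge texture: N·V below 1/8 lands in the dark texel.
EDGE_TEX = [0.0] + [1.0] * 7

def silhouette_term(normal, view):
    """Nearest-neighbour lookup of EDGE_TEX by clamped N·V."""
    n_dot_v = max(dot3(normal, view), 0.0)
    texel = min(int(n_dot_v * len(EDGE_TEX)), len(EDGE_TEX) - 1)
    return EDGE_TEX[texel]

view = (0.0, 0.0, 1.0)
assert silhouette_term((0.0, 0.0, 1.0), view) == 1.0   # facing the viewer: lit
assert silhouette_term((1.0, 0.0, 0.0), view) == 0.0   # edge-on: darkened
assert silhouette_term((0.0, 0.0, -1.0), view) == 0.0  # back-facing clamps to 0
```

Widening the dark band at the start of the texture widens the silhouette line, which is exactly why line width ends up tied to surface curvature: low-curvature surfaces cross the N · V threshold over a large screen area.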

4.2 Implementation

The implementation of this technique can be simplified even further than accessing a 1D texture. Since our edges are sharp (a surface is either on-edge or it is not), we have no need for the extra resolution that a texture provides. Instead, we simply use boolean logic: if the viewing angle term is less than a certain threshold, the surface is on-edge and it is darkened. The GLSL shader code is presented below:

float surfaceAngleCheck(vec3 N, vec3 V) {
    float viewAngle = dot(N, V);

    //If the viewAngle is less than 0,
    //set edge to 0
    float edge = step(0.0, viewAngle);
    return edge;
}

//in main()
//if the pixel's surface is on-edge, draw it black
color *= surfaceAngleCheck(normalVec, viewVec);

As mentioned previously, this technique only produces pleasing results for objects with relatively high curvature. Thus it is only enabled for character models, and disabled for all other objects. The results can be seen in Figure 11. Notice that, due to the low amount of tessellation in the character model, there are large polygonal areas that have been marked as edges, especially on the feet. Otherwise, on more finely tessellated areas of the model, the technique produces width-varying, artistic looking strokes.

4.3 Future Work

Performing the simplification from a 1D texture to boolean logic does preclude some extra functionality. For instance, we could use the 1D texture to implement view-dependent toon shading effects. This leads us back to the extension discussed in Section 2.4.1: if we use N · V to access the extra dimension of the shade map, we could draw a dark line at the bottom of the texture to implement Surface Angle Silhouetting, in addition to other view-dependent effects.

5 Remaining Work

Although these techniques help to stylize the otherwise standard rendering of Lugaru, there is still much more to be done. Rendering techniques are only one part of the equation; in order to truly convey a style, all aspects of the game need to work in unison. The game's textures are probably
the most immediately noticeable aspect, and are therefore primarily responsible for conveying a certain style. With stylized rendering but hyper-realistic textures, the game's shading is at odds with its textures, and the player receives mixed signals as to the style the game is trying to convey. This is true of other aspects as well. If a player's actions follow a comic book style, e.g. a visual sound effect and a freeze frame when an enemy is punched, the player would expect the sound effects to be similarly styled. Thus, retroactively altering Lugaru's rendering engine is not enough to completely stylize the game. To truly convey a style, the textures, models, gameplay, sound effects, etc. all need to be updated. While this may seem a monumental task, there is a plethora of research related to stylizing or abstracting realistic textures and models. Only once all aspects have been retuned with that same style in mind can the game be considered stylized.

Figure 11: Surface angle silhouetting on the bunny character. The varying line width results in artistic-looking silhouette edge strokes.

References

Akenine-Möller, T., Haines, E., and Hoffman, N. 2008. Real-Time Rendering, 3rd Edition. A K Peters, Ltd., Natick, MA, USA.

Barla, P., Thollot, J., and Markosian, L. 2006. X-toon: an extended toon shader. In Proceedings of the 4th International Symposium on Non-Photorealistic Animation and Rendering, ACM, New York, NY, USA, NPAR '06, 127–132.

Card, D., and Mitchell, J. L. 2002. Non-photorealistic rendering with pixel and vertex shaders.

Gooch, A., Gooch, B., Shirley, P., and Cohen, E. 1998. A non-photorealistic lighting model for automatic technical illustration. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, ACM, New York, NY, USA, SIGGRAPH '98, 447–452.

Gooch, B., Sloan, P.-P. J., Gooch, A., Shirley, P., and Riesenfeld, R. 1999. Interactive technical illustration. In Proceedings of the 1999 Symposium on Interactive 3D Graphics, ACM, New York, NY, USA, I3D '99, 31–38.

Hart, E., Gosselin, D., and Isidoro, J. 2001. Vertex shading with Direct3D and OpenGL. Game Developers Conference.

Lake, A., Marshall, C., Harris, M., and Blackstein, M. 2000. Stylized rendering techniques for scalable real-time 3D animation. In Proceedings of the 1st International Symposium on Non-Photorealistic Animation and Rendering, ACM, New York, NY, USA, NPAR '00, 13–20.

Malan, H. 2009. Graphics techniques in Crackdown, Mar.

McCloud, S. 1993. Understanding Comics. Harper Collins Publishers, New York, NY, USA.

Mitchell, J. L., Francke, M., and Eng, D. 2007. Illustrative rendering in Team Fortress 2. In ACM SIGGRAPH 2007 Courses, ACM, New York, NY, USA, SIGGRAPH '07, 19–32.

Northrup, J. D., and Markosian, L. 2000. Artistic silhouettes: a hybrid approach. In Proceedings of the 1st International Symposium on Non-Photorealistic Animation and Rendering, ACM, New York, NY, USA, NPAR '00, 31–37.

Raskar, R., and Cohen, M. 1999. Image precision silhouette edges. In Proceedings of the 1999 Symposium on Interactive 3D Graphics, ACM, New York, NY, USA, I3D '99, 135–140.

Reshetov, A. 2009. Morphological antialiasing. In Proceedings of the Conference on High Performance Graphics 2009, ACM, New York, NY, USA, HPG '09, 109–116.

St-Amour, J.-F. 2010. The illustrative rendering of Prince of Persia. Stylized Rendering in Games, SIGGRAPH 2010 Courses, July.

Thibault, A., and Cavanaugh, S. 2010. Making concept art real for Borderlands. Stylized Rendering in Games, SIGGRAPH 2010 Course Notes, July.

Wu, D. 2002. Personal communication.

Figure 12: Final Result

Figure 13: A screenshot from Lugaru showing stylization in action

Figure 14: Comparison between normal and stylized rendering