Artistic Rendering of Function-based Shape Models


Artistic Rendering of Function-based Shape Models

by Shunsuke Suzuki
Faculty of Computer and Information Science, Hosei University
n00k1021@k.hosei.ac.jp

Supervisor: Alexander Pasko

March 2004

Abstract

We explore approaches to artistic (non-photorealistic) rendering of function-based 3D models described in the HyperFun language. Although ray tracing is originally a technique for photorealistic rendering, we use it to render HyperFun models in artistic styles. Examples of artistic rendering are watercolor painting, pen or pencil sketch, and cartoon-style rendering. We tested several techniques and propose a method for rendering in the pencil sketch style. In this method, we generate the 3D model's silhouette by analyzing the angle between its surface normal vector and the view vector. The pencil dots and strokes for interior shading are generated by dithering, by random dithering, or by straight line segments whose direction is the cross product of the surface normal vector and the view vector. For interior shading, we studied which techniques generate more artistic images. Thus, we obtained synthetic artistic images by utilizing the principle of ray tracing together with different shading techniques.

Chapter 1 Introduction

Realistic rendering is commonly considered a goal of many computer graphics techniques. However, non-photorealistic (or artistic) rendering, which generates hand-made-looking images, has also been under intensive research recently. The difference between realistic and non-photorealistic images is similar to the difference between photos and hand-made illustrations. At present, it is quite easy to create realistic images using different computer graphics techniques. On the other hand, if we can produce artistic images automatically, the utility value of computer graphics will grow and computer graphics will become more attractive. In this work, we consider approaches to artistic rendering of function-based 3D models in HyperFun by utilizing the principle of ray tracing. HyperFun is a 3D function-based modeling language. Models are defined in the form f(x, y, z) >= 0. Considering (x, y, z) as a point of 3-dimensional space, a HyperFun model consists of the set of points satisfying this rule (the points for which the function returns a value of 0 or greater). For example, the rule for modeling a sphere with radius 5 centered at the coordinates (0, 0, 0) is f(x, y, z) = 5^2 - (x^2 + y^2 + z^2) >= 0. The sphere is the set of points that satisfy this rule. The points where the value of f(x, y, z) is exactly 0 form the model's surface, and the points where the value is bigger than 0 form the model's interior. It is also possible to combine several simple objects defined this way into a complex model with a Constructive Solid Geometry (CSG) expression. CSG combines simple parts using set operations to make a complex model.
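The defining rule above can be sketched in a few lines of code. This is only an illustration of the F-rep convention, not the HyperFun implementation itself (HyperFun uses R-functions for set operations; the min/max used here is just their simplest instance, and the function names are ours):

```python
def sphere(x, y, z, r=5.0):
    # F-rep convention: f > 0 inside, f == 0 on the surface, f < 0 outside
    return r * r - (x * x + y * y + z * z)

def union(f, g):
    # simplest set-theoretic union of two defining functions
    return lambda x, y, z: max(f(x, y, z), g(x, y, z))

def intersection(f, g):
    # simplest set-theoretic intersection
    return lambda x, y, z: min(f(x, y, z), g(x, y, z))

# the center lies in the interior, (5, 0, 0) on the surface, (6, 0, 0) outside
assert sphere(0, 0, 0) > 0
assert sphere(5, 0, 0) == 0
assert sphere(6, 0, 0) < 0
```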

HyperFun is described in detail in [8]. We also use OpenGL as a tool for implementing the proposed rendering algorithms. Our software first inputs and compiles a HyperFun model; then it traces rays emitted from each pixel of the image and looks for an intersection point between the ray and the model surface. If a ray-surface intersection point is found, we apply different shading algorithms to generate a non-photorealistic image element at this point. A shading algorithm starts by distinguishing between the model's silhouette and the interior points of the model projection in order to render them differently. The image is complete when ray tracing is finished. A number of approaches have recently been developed for the generation of artistic images of 3D models. An animation method for pen-and-ink illustration was proposed in [3]. The algorithm of [4] represents a non-photorealistic image using geograftals. The work [5] addresses the generation of oriental paintings. Methods of extracting a model's silhouettes are presented in [1, 6]. The method of [2] uses Bezier functions to express pen strokes. Thus, there is much related research; we consider the main difference between that research and ours to be that we generate non-photorealistic images of function-based 3D models using ray tracing. We combine the above techniques to generate artistic images of complex function-based models. In the following chapters, the algorithm for extracting a model's silhouette is described first. In chapter 3, interior shading is described: first using standard dithering, then a random-dithering method, and finally the stroke generation method, which is characteristic of the pencil sketch style. The images of complex models generated using the proposed algorithms are shown and briefly discussed in chapter 4; the processing time for a sphere model and a tiger model is examined in chapter 5; and conclusions and a discussion of future work are given in chapter 6.

Chapter 2 Finding Silhouette

To render non-photorealistic images, we first find the model's silhouette. To do this, we use the view vector v = (0, 0, -1) and the surface normal vector grad f at each point of the model's surface. We assume that the view vector is always perpendicular to the screen. A surface point where the view vector is perpendicular to the normal vector is a silhouette point. In other words, if the cosine of the angle between the normal vector and the view vector is 0, the point belongs to the silhouette. For a surface point p, the condition is

F(p) = f_z(p) / sqrt(f_x(p)^2 + f_y(p)^2 + f_z(p)^2) = 0,

where f_x, f_y, f_z are the partial derivatives of f. If F(p) = 0, p is a silhouette point. As the equation shows, we only look for points where the z component of the surface normal vector is 0. This algorithm is shown in Fig. 1. If the cosine value is equal to 0, we consider the point a silhouette point and put a black point at the corresponding pixel of the image. However, there is one problem. If only the points with the cosine value exactly equal to 0 are selected, the generated silhouette curve is very thin and invisible to the naked eye. Therefore, as a solution, we consider the points with the cosine value between 0 and some given threshold (typically 0.2) as silhouette points, and we change the threshold according to the given shape. An image of the extracted silhouette curve of a 3D sphere model is shown in Fig. 2.

Figure 1. When we reach a point p of the surface, we check the cosine of the angle between the surface normal vector grad f and the view vector v. If the value is 0, the point is a silhouette point.

Figure 2. A 3D sphere model and its extracted silhouette: (a) tested 3D model; (b) extracted silhouette points.
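The silhouette test of chapter 2 can be sketched as follows. This is a minimal illustration under our own assumptions: the gradient is estimated by central finite differences (the thesis does not specify how the normal is obtained), and the function and threshold names are ours:

```python
import math

def normal(f, x, y, z, h=1e-4):
    # central finite-difference gradient of the defining function f
    return ((f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
            (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
            (f(x, y, z + h) - f(x, y, z - h)) / (2 * h))

def is_silhouette(f, p, threshold=0.2):
    # view vector is (0, 0, -1): the silhouette condition only involves
    # the z component of the unit normal, |n_z| / |n| <= threshold
    nx, ny, nz = normal(f, *p)
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return abs(nz) / length <= threshold

def sphere(x, y, z, r=5.0):
    return r * r - (x * x + y * y + z * z)

# on the sphere's equator (z = 0) the normal is perpendicular to the view,
# at the pole it is parallel to it
assert is_silhouette(sphere, (5.0, 0.0, 0.0))
assert not is_silhouette(sphere, (0.0, 0.0, 5.0))
```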

Chapter 3 Interior Shading

The method for rendering interior shade is very important for non-photorealistic image generation. We test three possible interior shading techniques: 1. dithering; 2. random dithering; 3. generating strokes. Dithering and random dithering are techniques for the pen or pencil dots style, and generating strokes is a technique for the pencil sketch style. We describe them in turn.

3.1. Dithering

Dithering is usually used for halftone image generation on black/white output devices such as dot matrix printers. Here, we apply this technique for interior shading with pencil or pen dots. To apply dithering, we assign the 64 values from 0 to 63, without duplication, to the elements of an 8 x 8 matrix. This matrix is called a dither matrix. The recursive formula for generating the n x n dither matrix is

D^(n) = | 4 D^(n/2) + D^(2)_00 U^(n/2)   4 D^(n/2) + D^(2)_01 U^(n/2) |
        | 4 D^(n/2) + D^(2)_10 U^(n/2)   4 D^(n/2) + D^(2)_11 U^(n/2) |.

U^(n) is defined as an n x n matrix of 1s, that is,

U^(n) = | 1 1 ... 1 |
        | 1 1 ... 1 |
        |    ...    |
        | 1 1 ... 1 |,

and the base case is

D^(2) = | 0 2 |
        | 3 1 |.

Consequently, the 8 x 8 dither matrix D^(8) generated from the formula is

D^(8) = |  0 32  8 40  2 34 10 42 |
        | 48 16 56 24 50 18 58 26 |
        | 12 44  4 36 14 46  6 38 |
        | 60 28 52 20 62 30 54 22 |
        |  3 35 11 43  1 33  9 41 |
        | 51 19 59 27 49 17 57 25 |
        | 15 47  7 39 13 45  5 37 |
        | 63 31 55 23 61 29 53 21 |.

[7] describes how to construct the dither matrix in detail. Ray tracing is then applied to the central point of each 8 x 8 mega-pixel. For the ray-surface intersection point at the center of the mega-pixel, we calculate the angle between the normal vector grad f and the light source vector l, and take the cosine value E of that angle. As E changes from 0 to 1, the intensity of the mega-pixel is set to 64E (changing from 0 to 64). We compare the intensity with each element of the dither matrix and put a black point at each pixel whose dither-matrix element is bigger than the intensity; otherwise we leave the pixel untouched. This algorithm is illustrated in Fig. 3.

Figure 3. The case of 4 x 4 dithering. When the intensity is 8, black points are put at the seven pixels where the element of the dither matrix is bigger than the intensity.

The dithering effect can be explained as follows: the number of rays emitted from the light source and hitting the surface is proportional to the cosine of the angle between the normal vector and the light source vector. If the direction to the light source is nearly perpendicular to the normal, the cosine is small and few rays hit the surface; otherwise the cosine is large and many rays hit it. Therefore, for mega-pixels with low intensity, many black points are put among the 64 pixels. We also tried a 4 x 4 matrix for dithering, but could not get a good result because of the limited number of intensity levels: 4 x 4 dithering has only 16 intensity levels, whereas 8 x 8 dithering has 64. Images generated using 8 x 8 and 4 x 4 dithering are shown in Fig. 4. The main problem with dithering is the regular shading patterns, visible in Fig. 4, which are not acceptable for artistic drawing simulation. With 4 x 4 dithering, the regular patterns are even more pronounced than with 8 x 8 dithering.
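The recursive construction of the dither matrix and the shading rule can be sketched as follows. This is our own illustration of the standard construction described in [7]; the function names are ours, and the "bigger than the intensity" test follows the Fig. 3 example:

```python
def bayer(n):
    # recursive construction of the n x n dither matrix (n a power of two)
    d2 = [[0, 2], [3, 1]]
    if n == 2:
        return [row[:] for row in d2]
    half = bayer(n // 2)
    h = n // 2
    m = [[0] * n for _ in range(n)]
    for bi in range(2):          # which 2 x 2 block of the result
        for bj in range(2):
            for i in range(h):
                for j in range(h):
                    # D^(n) block = 4 D^(n/2) + D^(2)_{bi,bj} U^(n/2)
                    m[bi * h + i][bj * h + j] = 4 * half[i][j] + d2[bi][bj]
    return m

def shade_megapixel(intensity, d):
    # a pixel is blackened where its dither-matrix element is strictly
    # bigger than the mega-pixel intensity
    return [[d[i][j] > intensity for j in range(len(d))] for i in range(len(d))]

D8 = bayer(8)
assert D8[0] == [0, 32, 8, 40, 2, 34, 10, 42]
# all 64 values from 0 to 63 appear exactly once
assert sorted(v for row in D8 for v in row) == list(range(64))
# the Fig. 3 example: 4 x 4 dithering at intensity 8 blackens seven pixels
assert sum(v for row in shade_megapixel(8, bayer(4)) for v in row) == 7
```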

Figure 4. A 600 x 600 image of a sphere model rendered with dithering. The main light source position is set to the XYZ coordinates (60, 60, 100) and the sub light source position to (-60, -60, 50), with the Z axis pointing toward the viewer: (a) the entire image with 8 x 8 dithering; (b) enlarged part of that image; (c) the entire image with 4 x 4 dithering; (d) enlarged part of that image.

3.2. Random Dithering

The difference from standard dithering is that we change the fixed dither matrix into a variable matrix. Whenever we apply dithering, each element of the dither matrix is randomly perturbed with a prescribed distribution. We assume a variation range for each element of the dither matrix and a uniform distribution of the values inside the variation range. Then we compare the intensity with each element of the dither matrix using the same procedure as for 8 x 8 dithering. We tested various values of the variation range and obtained the best artistic images when the value is 10 or 5. The pseudocode of random dithering with the dither matrix element D and the variation range Delta is as follows:

For each pixel of a mega-pixel, excepting silhouette parts and background parts
    If D + Delta * (2 * rand() / RAND_MAX - 1.0) is bigger than the intensity
        Put a black point on the pixel
    Else
        Do not put a black point on the pixel.

An image generated using random dithering is shown in Fig. 5. As one can observe, the effect of regular shading patterns is significantly reduced compared with Fig. 4. Therefore, we can express a more artistic shade for pen or pencil dots.
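The perturbation step of the pseudocode above can be sketched as a single predicate. This is an illustrative translation (the function name and the injectable random source are our additions for testability); with a variation range of 0 it reduces to ordinary ordered dithering:

```python
import random

def random_dither_black(element, intensity, variation=10, rng=random.random):
    # perturb the dither-matrix element uniformly in [-variation, +variation],
    # then apply the same "bigger than the intensity" test as ordered dithering
    jitter = variation * (2.0 * rng() - 1.0)
    return element + jitter > intensity

# with variation 0 the decision is exactly the ordered-dithering one
assert random_dither_black(9, 8, variation=0)
assert not random_dither_black(5, 8, variation=0)
```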

Figure 5. A sphere model image generated using 8 x 8 random dithering: (a) the entire image with a variation range of 10; (b) enlarged part of that image; (c) the entire image with a variation range of 20; (d) enlarged part of that image.

3.3. Generating Strokes for Interior Shading

To render shapes in the pencil sketch style, it is very important to generate strokes for interior shading. First, we must determine the direction of a stroke. We use the method proposed in [1] to calculate the direction of strokes. The stroke

direction vector s is calculated as the cross product of the view vector v = (0, 0, -1) and the normal vector grad f at each point of the surface:

s = v x grad f.

This direction is tangent to the curve of constant illumination (the isophote). We generate strokes per 8 x 8 mega-pixel, because we want to represent the intensity by the number of parallel strokes instead of by dithering. To get the stroke direction vector, we calculate the cross product of the view vector and the normal vector at the central point of each mega-pixel. Then we need to generate a straight line segment from the point (x, y) to the point (x + s_x, y + s_y), going in the stroke direction, and put a black point at every pixel located on the line. Here there is a problem: since the coordinate values of the pixels are integers, almost no pixel lies exactly on the line. Therefore, an approximation is used to draw a stroke: if the distance between the line and the pixel point is less than 0.5, a black point is put at the pixel. This is the basic stroke generation procedure. In addition, since the view vector is always (0, 0, -1), the Z coordinate of the stroke direction vector is always 0, so we work only with the XY coordinates. We also have to decide the number of strokes to be drawn in each mega-pixel. For this we reuse the dither matrix introduced before: the number of strokes is determined by the mega-pixel's intensity, i.e., by the number of pixels to which dithering would assign a black point. In addition, when two or more strokes exist in one mega-pixel, the strokes are placed by applying parallel translation in the X and Y directions at a random distance from the basic stroke, in order to prevent strokes from overlapping. Furthermore, the bigger the intensity, the paler the stroke color. The rule for determining the number of strokes drawn in each mega-pixel is shown in Table 1. This algorithm is shown in Fig. 6, and an image with interior shading using strokes is shown in Fig. 7.

Table 1. The rule for determining the number of strokes drawn in each mega-pixel.

Black points:  0-9    10-32  33-48  49-64
Intensity:     64-55  54-32  31-16  15-0
Strokes:       0      1      2      3

Figure 6. The algorithm of interior shading by strokes.

Figure 7. Rendering of a sphere model using interior shading with strokes: (a) the entire image; (b) enlarged part of the image.
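The stroke direction and the line approximation described above can be sketched as follows. This is an illustration under our own naming; the rasterizer approximates the "distance less than 0.5" rule by nearest-pixel rounding of samples along the segment, which is not necessarily the exact procedure used in the thesis:

```python
def stroke_direction(n):
    # s = v x n with v = (0, 0, -1):
    # s = (v_y*n_z - v_z*n_y, v_z*n_x - v_x*n_z, v_x*n_y - v_y*n_x)
    #   = (n_y, -n_x, 0), so only the XY part matters
    nx, ny, nz = n
    return (ny, -nx)

def rasterize_stroke(x, y, sx, sy, steps=16):
    # sample the segment from (x, y) to (x + sx, y + sy) and round each
    # sample to the nearest integer pixel (within 0.5 of the ideal line)
    return sorted({(round(x + sx * t / steps), round(y + sy * t / steps))
                   for t in range(steps + 1)})

# the stroke is perpendicular to the normal's XY projection, i.e. tangent
# to the curve of constant illumination
sx, sy = stroke_direction((1.0, 2.0, -3.0))
assert sx * 1.0 + sy * 2.0 == 0.0
# a horizontal stroke covers the expected pixels
assert rasterize_stroke(0, 0, 4, 0) == [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
```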

Chapter 4 Experiments

In this chapter, we show and discuss images of complex constructive shape models in HyperFun (Fig. 8). We could render all models using silhouette detection and strokes. However, small dots appear in the upper part of the chair image, and one part is entirely black. The main problem of all the images is that the silhouette width is uneven. The silhouette rendering problem appears especially at flat parts and slightly curved parts of the models. The reason is that silhouettes are extracted only according to the normal vector direction; other shape characteristics of the object are not taken into consideration. Usually, the threshold on the cosine of the angle between each normal vector and the view vector is set large in order to extract all silhouettes, and so both thin and thick lines appear. In images (a), (b) and (d), the range for the cosine is set from 0 to 0.3; for image (c), it is set from 0 to 0.4. For a simple sphere, by contrast, the range is set from 0 to 0.2.

Figure 8. Rendering complex models in the pencil sketch style: (a) chair, (b) ant, (c) tiger and (d) sample HyperFun model.

Chapter 5 Processing Time

In this chapter, we examine the processing time of our rendering for a sphere model and a tiger model. Most of the processing time is spent finding ray-surface intersection points. The factor with the greatest influence on this time is the Lipschitz value.

5.1. Lipschitz Value

The Lipschitz value is closely related to the ray casting step. If the Lipschitz value is big, the ray casting step will be small. That is, the ray-surface intersection point is found more precisely and the precision of the image is better, but the processing time becomes long.

5.2. Processing Time According to the Lipschitz Value

The processing time and the precision of the image both change with this value; there is a trade-off. If the Lipschitz value is big, the processing time is very long and the precision of the image is very good; if not, the time is short and the precision is bad. The graph of the processing time of the sphere model against the Lipschitz value is shown in Fig. 9. For the sphere model, the processing time changes with the Lipschitz value, but the precision of the image does not. Therefore, we consider that simple models such as a sphere do not require changing the Lipschitz value. We also show the graph of the processing time of the tiger model against the Lipschitz value in Fig. 10. For the tiger model, the processing time changes

according to the Lipschitz value, and the precision of the image changes a little. Therefore, we consider that complex models need the Lipschitz value changed in order to increase the precision of the image, although the processing time becomes very long.

Figure 9. The processing time of the sphere model against the Lipschitz value. When the Lipschitz value is 1, the processing time is recorded as 0, because the model is not rendered.

Lipschitz:   1  10     30     50     100   500  1000   2000
Time (sec):  0  12.16  19.77  27.73  44.9  134  194.1  238.9

Figure 10. The processing time of the tiger model against the Lipschitz value.

Lipschitz:   1       3       7       10      50      100
Time (min):  38.141  104.74  126.05  128.74  128.75  128.77
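The relation between the Lipschitz value and the ray casting step can be sketched by a simple ray marcher. This is only an illustration under our own assumptions, not the thesis implementation: we use a distance-like defining function (rather than the algebraic F-rep function) so that |f(p)| / L is a safe step size for a function with Lipschitz constant L, and all names are ours:

```python
import math

def march_ray(f, origin, direction, lipschitz, t_max=100.0, eps=1e-4):
    # ray marching: for f with Lipschitz constant L, the surface cannot be
    # closer than |f(p)| / L, so that is a safe step; a larger assumed L
    # means smaller steps (more precise but slower), as described above
    t = 0.0
    while t < t_max:
        x = origin[0] + direction[0] * t
        y = origin[1] + direction[1] * t
        z = origin[2] + direction[2] * t
        d = f(x, y, z)
        if abs(d) < eps:
            return t  # ray parameter of the intersection point
        t += max(abs(d) / lipschitz, eps)
    return None  # no intersection: background pixel

def sphere_dist(x, y, z):
    # signed distance to a unit sphere at the origin (Lipschitz constant 1)
    return math.sqrt(x * x + y * y + z * z) - 1.0

# a ray from (0, 0, -3) along +z hits the sphere at t = 2
t = march_ray(sphere_dist, (0.0, 0.0, -3.0), (0.0, 0.0, 1.0), lipschitz=1.0)
assert t is not None and abs(t - 2.0) < 1e-3
```

Overestimating the Lipschitz constant only shrinks the steps, trading time for safety, which matches the trade-off observed in Figs. 9 and 10.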

Chapter 6 Conclusion and Future Work

In this work, we studied artistic rendering of 3D function-based models in HyperFun using ray tracing as the basic algorithm. In order to generate non-photorealistic images in the pen or pencil sketch style, we divided the model projection on the 2D plane into three parts: 1. silhouettes; 2. interior; 3. background. If a ray-surface intersection point is found, we apply a rendering technique for silhouettes (chapter 2) or for interior shading (chapter 3). If the ray does not hit the surface, the pixel is rendered as background. In our experiments, a clear silhouette can be extracted from simple models such as a sphere and a cube. However, with complex models, there were parts where the algorithm could not extract silhouettes clearly, with the width of the silhouette area changing uncontrollably. We consider that normal vector direction analysis alone is not enough for extracting silhouettes. We have to improve the extraction of silhouettes by adding curvature analysis, since this problem often occurred in flat and slightly curved regions of shapes. Let us discuss the three techniques for interior shading considered in chapter 3. With standard 8 x 8 dithering, satisfying results could not be obtained. The reason is that the use of a fixed dither matrix causes regular patterns to appear in the image: the intensity of adjacent mega-pixels does not change, so the same pattern repeats. With random dithering, since a variable dither matrix was used, the problem with the regular patterns was solved. The reason is that each element of the dither matrix changes even if the adjacent intensity does not, so there is almost no repetition of the same pattern. Although strokes are not generated with this technique, we can obtain images close to artistic rendering using pencil-dot shading. Moreover, we

tested various values of the variation range of each element of the dither matrix. A variation range of 10 results in the most convincing images with random dithering: if the value of the variation range is too small, the image hardly changes compared with standard dithering, and if the value is too large, the image becomes grayish. However, from the viewpoint of the pencil sketch style, satisfying results are not obtained with random dithering. By generating strokes for interior shading, we could render images similar to pencil sketches. The reason is that not only can strokes be generated, but the correct direction of the strokes can also be reliably estimated. However, we could not generate acceptable stroke shading for models such as a cube that have only flat parts with an almost constant normal vector direction. This problem is the same as the silhouette problem, because strokes are generated only according to the normal vector direction. We have to improve the stroke generation technique by adding another algorithm, since this problem often occurred in flat parts. We discussed the processing time of simple and complex models in chapter 5. We consider that simple models do not require changing the Lipschitz value: even if we raise it, the precision of the image does not change and the processing time only grows. For complex models, the Lipschitz value greatly influences both the precision of the image and the processing time: when the value is raised, the processing time becomes very long, but the precision of the image is better. We also have to shorten the processing time. This work concerns rendering in the pencil sketch style in only two colors, white and black. We consider it possible to render images in the color pencil style and in the watercolor painting style using ray tracing.
Our future work is to render various stroke types such as brush strokes and, more generally, to render artistic images simulating various materials and artist tools. Thus, much work remains to be done.

References

[1] D. Bremer, J. Hughes, Rapid approximate silhouette rendering of implicit surfaces, Implicit Surfaces '98, University of Washington, Seattle, 1998, pp. 155-164.
[2] T. Nishita, S. Takita, E. Nakamae, A display algorithm of brush strokes using Bezier functions, Computer Graphics International, June 1993, pp. 244-257.
[3] T. Haga, H. Johan, T. Nishita, Animation method for pen-and-ink illustrations using stroke coherency, CAD/Graphics 2001, August 2001, pp. 333-343.
[4] M. Kaplan, B. Gooch, E. Cohen, Interactive artistic rendering, NPAR '00, University of Utah, June 2000, pp. 67-74.
[5] Y. Jung Yu, D. Hoon Lee, Interactive rendering technique for realistic oriental painting, Journal of WSCG '03, 2003, pp. 538-545.
[6] A. Hertzmann, Introduction to 3D non-photorealistic rendering: silhouettes and outlines, SIGGRAPH '99 Course Notes, August 1999.
[7] J. Foley, A. van Dam, S. Feiner, J. Hughes, Computer Graphics: Principles and Practice, Addison-Wesley, 1995, pp. 568-573.
[8] V. Adzhiev, R. Cartwright, E. Fausett, A. Ossipov, A. Pasko, V. Savchenko, HyperFun project: a framework for collaborative multidimensional F-rep modeling, Eurographics/SIGGRAPH Implicit Surfaces '99, University of Bordeaux, 1999, pp. 59-69.