A model to blend renderings

Vincent Boyer and Dominique Sobczyk
L.I.A.S.D., Université Paris 8
September 15, 2006

Abstract. We propose a model to blend renderings: different rendering techniques are mixed within the same frame. The method runs in real time, during the rendering process, using GPU programming. Moreover, the rendering techniques used and the key points defined by the user can be changed interactively. In this paper we present the model, a new non-photorealistic rendering technique, and images produced by our method.

Key words: Non-Photorealistic Rendering, GPU

1. Introduction

This paper presents a method to blend different renderings. The aims of this model are both a better-looking and a more informative image. Rendering is the process of generating an image from a given model. It can be achieved with software programs (generally through a graphics programming library) and/or GPU programs. Renderings are usually classified into two main categories: photo-realistic and, by opposition, non-photorealistic. Photo-realistic renderings do not allow the produced image to be enriched; non-photorealistic renderings, by contrast, try to enhance the transmission of information in a picture [10], [2]. Based on the idea that a new rendering helps the user visualize 3D models, much previous work has been done. We present some of it, organized by topic:

Deformation of the 3D mesh according to the viewpoint. Rademacher [8] proposed view-dependent models, which are useful especially to cartoonists. A generalization of view-dependent deformations was presented by Martin et al. [7]. Among other works, Wood et al. [12] proposed the generation of panoramas for a given 3D scene and a camera path.

Edge extraction. Many well-known works extract edge features such as silhouettes, boundaries and creases from 3D models [6], [1], [4], [9]. Generally, a preprocessing step converts the 3D model into another data structure from which edges are obtained more easily (for example, the half-edge data structure maintains edge adjacency). The main applications of these techniques are architectural and technical illustration.

Artistic shading. Hatching [11], [3], cartoon shading [5] and cel-shading are examples of non-photorealistic shading. They replace Gouraud or Phong shading to convey the three-dimensional structure of objects in an image. Note that cel-shading often also includes outline shading.

As one can see, each of these techniques has its favorite applications, advantages and drawbacks. The motivation of this paper can be summarized in one question: why use one and only one rendering to produce an image? This paper presents a model to combine different renderings in the same frame. Each rendering and its influence on the scene is chosen by the user and can be changed dynamically. In the following, we present the general algorithm of our model and detail the solution to each subproblem. Then images produced by our system are discussed, and in conclusion the limitations and future work are presented.

2. The model

Our model mixes different renderings at the same time. The user specifies which renderings are used and where. The main process of our model operates on fragments: for a given fragment, different renderings can be applied simultaneously and must be blended. Due to the real-time constraint, the computation is done on the GPU. The specifications given by the user are:

1. key points, which are points of 3D space. In the following, n is the number of key points given by the user;
2. the renderings used and, for each of them, four distances:
   - dr0: the minimal distance at which this rendering is used;
   - dr1: the minimal distance at which the use of this rendering is maximal. Between dr0 and dr1 the rendering is shaded;
   - dr2: the maximal distance at which the use of this rendering is maximal;
   - dr3: the maximal distance at which this rendering is used. Between dr2 and dr3 the rendering is shaded.

The left part of figure 1 presents four key points given by the user and their visualization in the scene; the right part shows the distances for a texture rendering entered by the user in the interface and the result in the scene. The distances dr0 and dr1 are equal to 0.0, dr2 is equal to 0.5 and dr3 is equal to 0.8. Thus the texture is applied in the range 0.0 to 0.5 from the key points, and in the range 0.5 to 0.8 it is applied shaded with black. In this example we give four key points, one rendering and its associated distances. Note that the number of key points, their coordinates, the number of renderings used, the renderings chosen and the distances can all be modified dynamically by the user.

The general form of our algorithm is, for a given fragment F:

1. compute the closest distance d to the key points;
2. for each rendering, compare the distances given by the user with d and compute a weight v;
3. compute the color of each rendering and weight it by v;
4. sum the colors previously obtained and clamp each component between 0 and 1.
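To make steps 3 and 4 concrete, here is a minimal self-contained C++ sketch of the final combination stage for one fragment. The paper performs this inside a fragment shader; the types and the function name blendRenderings are ours:

    #include <algorithm>
    #include <vector>

    // Minimal RGB color for this sketch.
    struct Color { float r, g, b; };

    // Given, for one fragment, the color produced by each active rendering
    // and its weight v (computed from the user distances, detailed below),
    // sum the weighted colors and clamp each component to [0, 1].
    // Assumes colors and weights have the same size.
    Color blendRenderings(const std::vector<Color>& colors,
                          const std::vector<float>& weights) {
        Color out{0.f, 0.f, 0.f};
        for (std::size_t i = 0; i < colors.size(); ++i) {
            out.r += weights[i] * colors[i].r;
            out.g += weights[i] * colors[i].g;
            out.b += weights[i] * colors[i].b;
        }
        out.r = std::clamp(out.r, 0.f, 1.f);
        out.g = std::clamp(out.g, 0.f, 1.f);
        out.b = std::clamp(out.b, 0.f, 1.f);
        return out;
    }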

Fig. 1. Example of key points and distances given by the user.

These computations are carried out entirely on the GPU, so we obtain real-time renderings. Note that, at present, fragment shaders do not permit loops over data that do not reference a texture; so even though we present the algorithms with loops for the reader, we unroll them. This limitation should disappear with future generations of graphics cards and shaders.

Distance to key points. We propose three modes for the key points:

- single points: the distance d is the minimal Euclidean distance between the fragment and the key points;
- points of a polyline: the key points form a polyline and d is the minimal distance between the fragment and this polyline;
- points of a polyline loop: the n key points are considered as the points of a polyline, but the segment P(n-1)P0 is also part of it.

In the polyline modes, the algorithm used to compute d for a fragment F is, for each segment PiPi+1 (i ∈ [0; n-2]) of the polyline, with u the vector PiPi+1 and w the vector PiF:

    if u · w < 0, then di is the Euclidean distance between Pi and F;
    else if u · w > ‖u‖², then di is the Euclidean distance between Pi+1 and F;
    else di = ‖u × w‖ / ‖u‖;
    d = min over i of the di

where · denotes the dot product of two vectors, × the cross product, and ‖u‖ the norm of the vector u. In the loop mode, the same algorithm is applied with the additional segment P(n-1)P0. A sketch of this distance computation is given below.
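The following self-contained C++ sketch implements this point-to-segment distance and its minimum over a polyline or polyline loop. Vec3 and the helper names are ours; the paper unrolls the equivalent shader loop rather than writing it as below:

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  cross(Vec3 a, Vec3 b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }
    static float norm(Vec3 a) { return std::sqrt(dot(a, a)); }

    // Distance from fragment position f to segment [p, q], following the
    // three cases of the polyline algorithm above.
    float segmentDistance(Vec3 f, Vec3 p, Vec3 q) {
        Vec3 u = sub(q, p);                           // u = PiPi+1
        Vec3 w = sub(f, p);                           // w = PiF
        float t = dot(u, w);
        if (t < 0.f)        return norm(w);           // closest to Pi
        if (t > dot(u, u))  return norm(sub(f, q));   // closest to Pi+1
        return norm(cross(u, w)) / norm(u);           // perpendicular distance
    }

    // d = min over all polyline segments; a loop adds segment [P(n-1), P0].
    // Assumes n >= 2.
    float polylineDistance(Vec3 f, const Vec3* pts, int n, bool loop) {
        float d = segmentDistance(f, pts[0], pts[1]);
        for (int i = 1; i < n - 1; ++i)
            d = std::min(d, segmentDistance(f, pts[i], pts[i + 1]));
        if (loop) d = std::min(d, segmentDistance(f, pts[n - 1], pts[0]));
        return d;
    }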

Fig. 2. Example of polyline and polyline loop use.

Figure 2 shows the influence of the chosen mode on the computation of the distance d: the polyline mode is presented on the left and the polyline loop mode on the right, while the single points mode is presented on the left of figure 1.

Compute the weighted value for each rendering. For each rendering used, we compute a coefficient v depending on its four associated distances (dr0, dr1, dr2, dr3). v is then applied to weight the rendering, and the sum of the weighted renderings on a fragment mixes the renderings on it. The algorithm is, for each rendering:

    if d ∈ [dr0; dr3] then
        if d < dr1 then v = (d - dr0) / (dr1 - dr0)
        else if d ≤ dr2 then v = 1.0
        else v = (dr3 - d) / (dr3 - dr2)
    else v = 0

Note that for a given fragment it is possible to have zero, one or more active renderings. Each rendering is weighted independently, and v can be viewed as the alpha channel of a rendering on a fragment.
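In C++ this piecewise-linear weight can be sketched as follows (a direct transcription of the algorithm above; the function name renderingWeight is ours). The degenerate cases dr0 = dr1 and dr2 = dr3, which occur in the figure 1 example, are harmless because the ramp branches are then unreachable:

    // Coefficient v of one rendering for a fragment at distance d from the
    // key points, given the four user distances dr0 <= dr1 <= dr2 <= dr3.
    float renderingWeight(float d, float dr0, float dr1, float dr2, float dr3) {
        if (d < dr0 || d > dr3) return 0.f;           // rendering not used
        if (d < dr1)  return (d - dr0) / (dr1 - dr0); // shaded ramp-in
        if (d <= dr2) return 1.f;                     // full contribution
        return (dr3 - d) / (dr3 - dr2);               // shaded ramp-out
    }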

Fig. 3. Computing the coefficients and rendering the scene.

Finally the fragment color is clamped between 0 and 1. We also include the ability to blend the image with the background. Figure 3 illustrates this: the top right screenshot shows the use of blending; in the bottom image, the distance dr3 of the texture rendering has been changed, and fragments are rendered with texture, wb_dist and material when d = 0.55.

Rendering. Texture rendering, material rendering, toon shading and cel-shading (using texture and outline) have been implemented in the GPU fragment shader. The vertex shader is used to transform vertex coordinates and to compute the data needed by the fragment shader. We also implement a new rendering, named wb_dist for "white to black distance". This rendering is very simple and is based on the coefficient previously computed: the fragment color components receive the coefficient v. If the fragment is in [dr1; dr2], the color is white and alpha is 1; otherwise the fragment color is an achromatic color and its alpha lies in [0; 1[ depending on the distance d. For example, this allows us to create a halo effect when dr1 is close to dr2, which can be used to mark a key point or a path as shown in figure 4. Other effects, such as a NASA-aerogel look, can also easily be produced.
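As a sketch, one plausible reading of wb_dist in C++, reusing the renderingWeight function from the previous sketch and assuming that the achromatic gray level and its alpha both equal the coefficient v (the paper only states that alpha lies in [0; 1[ and depends on d):

    struct RGBA { float r, g, b, a; };

    // Hypothetical sketch of the wb_dist ("white to black distance") rendering.
    // Inside [dr1, dr2] the fragment is pure white and opaque; elsewhere in
    // [dr0, dr3] it is an achromatic gray whose level and alpha follow v.
    // Requires renderingWeight from the previous sketch.
    RGBA wbDist(float d, float dr0, float dr1, float dr2, float dr3) {
        float v = renderingWeight(d, dr0, dr1, dr2, dr3);
        if (d >= dr1 && d <= dr2)
            return {1.f, 1.f, 1.f, 1.f};  // white, alpha = 1 (v = 1 here anyway)
        return {v, v, v, v};              // gray level = alpha = v, in [0; 1[
    }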

Fig. 4. Halo and aerogel rendering.

3. Images and Results

We present images produced with our model. They were generated on a PC with a Pentium IV 3.6 GHz, 1 GB of memory and an NVIDIA Quadro FX 1400 graphics card. The top of figure 5 presents the first part of a path described with key points. As one can see, the halo rendering and the mix of renderings help the user to obtain the essential information needed. The bottom of the figure presents two other views where the renderings used have been changed: in the first, textured rendering and then material rendering are used according to the distance to the polyline, and in the second it is the opposite (i.e. material and then textured rendering). The minimal frame rate obtained for this scene, composed of 20,000 triangles, is 27 frames per second with 4 renderings (textured, material, wb_dist and cel-shading) used simultaneously and 4 key points. In fact, the number of key points has no influence on the frame rate (i.e. the time needed for the distance computation is negligible).

4. Conclusion

We have proposed a model to blend renderings based on GPU programming. A collection of rendering shaders has been implemented and a new rendering has been included; other ones can easily be added. The user can apply particular renderings to each part of the scene. This makes it possible to keep only the advantages of the renderings and never their drawbacks. Moreover, every parameter can be modified dynamically by the user, and this is very intuitive. As shown in the examples, the model can be applied immediately to a car journey. Future work will propose other interpolations for computing the coefficients (non-linear interpolations) and will automatically import other renderings.

Fig. 5. Other views of San Diego.

References

[1] A. Gooch, B. Gooch, P. Shirley, and E. Cohen. A non-photorealistic lighting model for automatic technical illustration. Proceedings of SIGGRAPH 98, 1998.
[2] B. Gooch and A. Gooch. Non-Photorealistic Rendering. A K Peters, 2001.
[3] Aaron Hertzmann and Denis Zorin. Illustrating smooth surfaces. Proceedings of SIGGRAPH 2000, July 2000. Held in New Orleans, Louisiana.
[4] Victoria Interrante, Henry Fuchs, and Stephen Pizer. Illustrating transparent surfaces with curvature-directed strokes. IEEE Visualization 96, pages 211-218, October 1996.
[5] Adam Lake, Carl Marshall, Mark Harris, and Marc Blackstein. Stylized rendering techniques for scalable real-time 3D animation. In Non-Photorealistic Animation and Rendering 2000 (NPAR 00), Annecy, France, June 5-7, 2000.
[6] Lee Markosian, Michael A. Kowalski, Samuel J. Trychin, Lubomir D. Bourdev, Daniel Goldstein, and John F. Hughes. Real-time nonphotorealistic rendering. Proceedings of SIGGRAPH 97, pages 415-420, August 1997. Held in Los Angeles, California.
[7] D. Martin, S. Garcia, and J. C. Torres. Observer dependent deformations in illustration. In Non-Photorealistic Animation and Rendering 2000 (NPAR 00), Annecy, France, June 5-7, 2000.
[8] Paul Rademacher. View-dependent geometry. Proceedings of SIGGRAPH 99, pages 439-446, August 1999. Held in Los Angeles, California.
[9] Mario Costa Sousa and Przemyslaw Prusinkiewicz. A few good lines: Suggestive drawing of 3D models. Computer Graphics Forum, 22(3), September 2003.
[10] Thomas Strothotte and Stefan Schlechtweg. Non-photorealistic computer graphics: modeling, rendering, and animation. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2002.
[11] G. Winkenbach and D. H. Salesin. Rendering parametric surfaces in pen and ink. Proceedings of SIGGRAPH 96, pages 469-476, August 1996.
[12] Daniel N. Wood, Adam Finkelstein, John F. Hughes, Craig E. Thayer, and David H. Salesin. Multiperspective panoramas for cel animation. Proceedings of SIGGRAPH 97, pages 243-250, August 1997. Held in Los Angeles, California.