Enhancing Information on Large Scenes by Mixing Renderings

Vincent Boyer & Dominique Sobczyk
[boyer,dom]@ai.univ-paris8.fr
L.I.A.S.D. - Université Paris 8
2 rue de la liberté 93526 Saint-Denis Cedex - FRANCE

Abstract. We propose a new model for the visualization of large-scale scenes. It is designed to enhance pertinent information so that it becomes quickly viewable in a large scene. It consists in mixing different kinds of rendering techniques in the same frame. This is achieved in real time during the rendering process using GPU programming. Moreover, the rendering techniques used and the key points defined by the user can be changed interactively. We present our model and a new non-photorealistic rendering technique. The images produced look better and provide more information than those obtained with traditional rendering techniques.

1 Introduction

This paper presents a new method to render large-scale 3D models. The aims of this model are both a better-looking and a more informative image. Rendering is the process of generating an image from a given model. This can be achieved with software programs (generally through a graphics programming library) and/or GPU programs. Renderings are generally classified into two main categories: photo-realistic and, by opposition, non-photorealistic. Photo-realistic renderings of a large-scale scene do not allow the produced image to be enriched with information. By opposition, non-photorealistic renderings try to enhance the transmission of information in a picture [1], [2].

Based on the idea that a new rendering will help the user to visualize 3D models, many techniques have been developed. We present some of them, organized by topic:

Deformation of the 3D mesh according to the viewpoint: Rademacher [3] proposed to create a view-dependent model, consisting of a base model and a complete description of the model shape from key viewpoints. This can be used especially by cartoonists. A generalization of view-dependent deformations was presented by Martin et al. [4], using a control function to relate position and transformation. Other works exist; for example, Wood et al. [5] proposed the generation of panoramas for a given 3D scene and a camera path.

Edge extraction: A lot of well-known work has been done to extract edge features such as silhouettes, boundaries and creases from 3D models [6], [7], [8], [9]. Front-facing/back-facing tests can be used to detect silhouettes, and normal maps are one of the real-time methods for finding creases and boundaries. Generally, a preprocessing step converts the 3D model into another data structure from which edges are obtained more easily (for example, the half-edge data structure maintains edge adjacency). The main applications of these techniques are architectural and technical illustration.

Artistic shading: Hatching [10], [11], cartoon shading [12] and cel shading are examples of non-photorealistic shading. They replace Gouraud or Phong shading to convey the three-dimensional structure of objects in an image. Hatching is based on a curvature model, while cartoon and cel shading pick the fragment color from a one-dimensional texture using the dot product between the light direction and the normal. Note that cel shading often includes outline rendering.

As one can see, there is no single solution for enhancing the transmission of information, and each technique has its preferred applications, advantages and drawbacks. The motivation of this paper can be summarized in one question: why should one and only one rendering be used to produce an image? This paper presents a model to combine different renderings in the same frame. Each rendering and its influence on the scene is chosen by the user and can be changed interactively. In the following, we present the general algorithm of our model and detail our solutions to each subproblem. Then images produced by our system are discussed, and in conclusion the limitations and future work are presented.

2 The model

Our model mixes different renderings at the same time. The user specifies which renderings are used and where. The GPU programming can be done in vertex and/or fragment shaders. The vertex shader is used to precompute values needed in the fragment shader; the main process of our model operates on fragments. For a given fragment, several renderings can apply simultaneously and must be blended. Due to the real-time constraint, the computation is done on the GPU. The specifications given by the user are (a sketch of a possible shader interface follows the list):

1. key points, which are points of 3D space. In the following, n is the number of key points given by the user;
2. the renderings used and, for each of them, four distances dr_0, dr_1, dr_2 and dr_3, where [dr_0; dr_3] is the range within which the rendering is used:
   - dr_0: the minimal distance at which this rendering is used;
   - dr_1: the minimal distance at which the use of this rendering is maximal; between dr_0 and dr_1 the rendering is shaded;
   - dr_2: the maximal distance at which the use of this rendering is maximal;
   - dr_3: the maximal distance at which this rendering is used; between dr_2 and dr_3 the rendering is shaded.
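As an illustration only (the paper does not give its shader interface), these specifications might map to GLSL uniforms as follows; all names and the fixed array sizes are our assumptions:

    // Hypothetical uniform layout for the user's specifications (GLSL sketch).
    // Sizes are fixed because loops are unrolled on this generation of GPUs.
    const int MAX_KEY_POINTS = 8;
    const int MAX_RENDERINGS = 4;

    uniform int  numKeyPoints;                  // n, changed interactively
    uniform vec3 keyPoints[MAX_KEY_POINTS];     // key points in 3D space
    uniform int  numRenderings;                 // number of renderings in use
    uniform vec4 renderDist[MAX_RENDERINGS];    // (dr0, dr1, dr2, dr3) per rendering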

Fig. 1. Example of key points and distances given by the user

The left part of figure 1 presents four key points given by the user and their visualization in the scene; the right part shows the distances entered in the interface for a texture rendering and the result in the scene. The distances dr_0 and dr_1 are equal to 0.0, dr_2 is equal to 0.5 and dr_3 is equal to 0.8. Thus the texture is fully applied within a distance of 0.0 to 0.5 from the key points, and in the range from 0.5 to 0.8 it is shaded with black. In this example we gave four key points, one rendering and its associated distances. Note that the number of key points, their coordinates, the number of renderings used, the renderings chosen and the distances can all be modified dynamically by the user.

The general form of our algorithm is (a GLSL sketch follows the pseudocode):

    For each fragment F:
        Compute the closest distance d to the key points                    (1)
        For each rendering:
            Compare the user-given distances with d and compute a weight v  (2)
            Compute the rendering color and weight it by v                  (3)
        Sum the weighted colors and clamp each component between 0 and 1
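As a minimal GLSL sketch of this loop (the helper names distToKeyPoints, renderWeight and renderColor are ours, corresponding to sections 2.1, 2.2 and 2.3 respectively; on the hardware targeted by the paper the loop would be unrolled):

    float distToKeyPoints();              // distance d, section 2.1
    float renderWeight(float d, vec4 dr); // weight v, section 2.2
    vec4  renderColor(int r);             // color of rendering r, section 2.3

    void main()
    {
        float d = distToKeyPoints();                   // step (1)
        vec4 color = vec4(0.0);
        for (int r = 0; r < numRenderings; ++r) {      // unrolled in practice
            float v = renderWeight(d, renderDist[r]);  // step (2)
            color += v * renderColor(r);               // step (3)
        }
        gl_FragColor = clamp(color, 0.0, 1.0);         // per-component clamp
    }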

In the following we detail the method used to compute the distance to the key points (1), the weight used for each rendering (2) and the computation of the fragment color for a given rendering (3). We also present a new rendering. These computations are performed entirely on the GPU, and we obtain real-time renderings. Remark that, at the present time, fragment shaders do not permit loops over data that do not reference a texture; so even though we present the algorithms with loops for the reader, we unroll them. This limit should disappear with future versions of graphics cards and shaders.

2.1 Distance to key points

The user defines the key point(s). This section explains how the distance d is computed. We propose three modes for interpreting the key points:

Single points: d is the minimal Euclidean distance between the fragment and the key points.

Points of a polyline: the key points form a polyline and d is the minimal distance between the fragment and this polyline. The algorithm used to compute d for a fragment F is:

    For each segment P_i P_{i+1} (i in [0; n-2]) of the polyline:
        If P_i P_{i+1} · P_i F < 0 then
            d_i = Euclidean distance between P_i and F
        Else if P_i P_{i+1} · P_i F > ||P_i P_{i+1}||^2 then
            d_i = Euclidean distance between P_{i+1} and F
        Else
            d_i = ||P_i P_{i+1} × P_i F|| / ||P_i P_{i+1}||
    d = min_i d_i

where · denotes the dot product of two vectors, × the cross product of two vectors and ||P_i P_{i+1}|| the norm of the vector P_i P_{i+1}. Note that ||P_i P_{i+1}||^2 is equal to P_i P_{i+1} · P_i P_{i+1} and is computed using the dot function in the fragment shader.

Points of a polyline loop: the n key points are treated as the points of a polyline, but P_{n-1} P_0 is also considered a polyline segment. The distance d is computed with the algorithm described for the polyline mode, including this one additional segment.

Figure 2 shows the influence of the chosen mode on the computation of the distance d: the polyline mode is presented on the left and the polyline loop mode on the right, while the single points mode is shown on the left of figure 1. A GLSL sketch of the per-segment distance is given below.

Fig. 2. Example of polyline and polyline loop use
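Transcribed into GLSL, the per-segment test might look like this (a sketch; the function name segmentDistance is ours):

    // Distance from fragment position F to segment [Pi, Pj] (GLSL sketch).
    // Follows the paper's three cases; dot, cross, distance, length are built-ins.
    float segmentDistance(vec3 Pi, vec3 Pj, vec3 F)
    {
        vec3 seg = Pj - Pi;                 // vector Pi->Pj
        vec3 toF = F - Pi;                  // vector Pi->F
        float t = dot(seg, toF);
        if (t < 0.0)
            return distance(Pi, F);         // closest to endpoint Pi
        if (t > dot(seg, seg))              // ||seg||^2 computed via dot
            return distance(Pj, F);         // closest to endpoint Pj
        return length(cross(seg, toF)) / length(seg);  // perpendicular distance
    }

    // d is the minimum of segmentDistance over all segments;
    // the polyline loop mode adds the extra segment P(n-1)P0.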

2.2 Compute the weighted value for each rendering

The user enters his choice of renderings and, for each of them, the four associated distances (dr_0, dr_1, dr_2, dr_3). We have previously computed the distance d between the fragment and the key points. For each rendering used, we compute a coefficient v, which is then applied to weight the rendering. The sum of the weighted renderings on a fragment allows us to mix renderings on it. The algorithm is:

    For each rendering:
        If d in [dr_0; dr_3] then
            If d < dr_1 then
                v = (d - dr_0) / (dr_1 - dr_0)
            Else if d <= dr_2 then
                v = 1.0
            Else
                v = (dr_3 - d) / (dr_3 - dr_2)
        Else
            v = 0

Remark that for a given fragment it is possible to have zero, one, two or more renderings. Each rendering is weighted independently, and v can be viewed as the alpha channel of a rendering on a fragment. Finally the fragment color is clamped between 0 and 1. We also include the ability to blend the image with the background. Figure 3 illustrates this part: the top right screenshot shows the use of blending; in the bottom image, the distance dr_3 of the texture rendering has been changed and fragments are rendered with texture, wb_dist (see next section) and material when d = 0.55.

2.3 Rendering

Texture rendering, material rendering, toon shading and cel shading (using a texture and outlines) have been implemented in the GPU fragment shader. The vertex shader is used to transform vertex coordinates and to compute data needed by the fragment shader. These shaders are well known and described in many books, see for example [13]. We implement a new rendering, named wb_dist for white-to-black distance. This rendering is very simple and is based on the coefficient previously computed: the fragment color components receive the coefficient v. Thus, if the fragment is in [dr_1; dr_2], the color is white and alpha is 1; otherwise the fragment color is an achromatic color and alpha lies in [0; 1[, depending on the distance d. For example, this allows us to create a halo effect when dr_1 is close to dr_2, which can be used to mark a key point or a path as shown in figure 4. Other effects, such as a NASA aerogel look, can also easily be produced. A sketch of the weight computation and of wb_dist is given below.
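A GLSL sketch of the weight computation and of the wb_dist color, assuming the quadruple (dr_0, dr_1, dr_2, dr_3) is packed into a vec4 as in our earlier uniform sketch:

    // Weight v of one rendering for distance d; dr = (dr0, dr1, dr2, dr3).
    float renderWeight(float d, vec4 dr)
    {
        if (d < dr.x || d > dr.w) return 0.0;             // outside [dr0; dr3]
        if (d < dr.y) return (d - dr.x) / (dr.y - dr.x);  // shaded between dr0 and dr1
        if (d <= dr.z) return 1.0;                        // full use between dr1 and dr2
        return (dr.w - d) / (dr.w - dr.z);                // shaded between dr2 and dr3
    }

    // wb_dist: reuse v as an achromatic color and as alpha.
    // v = 1 (white, opaque) inside [dr1; dr2]; gray and translucent elsewhere.
    vec4 wbDistColor(float v)
    {
        return vec4(vec3(v), v);
    }

With dr_1 close to dr_2, renderWeight forms a narrow peak around the key points and wbDistColor then produces the halo effect of figure 4.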

Fig. 3. Compute the coefficients and render the scene

Fig. 4. Halo and NASA aerogel rendering

3 Images and Results

This section presents images produced with our model. They have been produced on a Pentium IV 3.6 GHz PC with 1 GB of memory and an NVIDIA Quadro FX 1400 graphics card. The top of figure 5 presents the first part of the path described with key points. As one can see, the halo rendering and the mix of renderings help the user to obtain the essential information needed. The bottom of the figure presents two other views where the renderings used have been changed: in the first, textured rendering then material rendering are used according to the distance to the polyline, and in the second it is the opposite (i.e. material then textured rendering). The minimal frame rate obtained for this scene, composed of 20 000 triangles, is 27 frames per second with 4 renderings (textured, material, wb_dist and cel shading) used simultaneously and 4 key points. In fact, the number of key points has no influence on the frame rate (i.e. the computation time needed for the distance computation is negligible).

Fig. 5. Other views of San Diego

4 Conclusion

We have proposed the first model to mix renderings. Based on GPU programming, this model achieves real-time visualization of large-scale scenes. A collection of rendering shaders has been implemented, one of them new, and others can easily be added. The user can apply particular renderings to each part of the scene. This makes it possible to benefit from the advantages of each rendering while avoiding its drawbacks. Moreover, every parameter can be modified dynamically by the user, and this is very intuitive. This model allows us to focus on a point or a path in a large scene; as shown in the examples, it applies immediately to a car journey. Other applications can also be realized, for example showing the influence of a construction project on its environment, visualizing the same object throughout a place (fire extinguishers in a building, electric cables on an oil rig, ...), or educational courses. Future work will propose other interpolations for computing the coefficients (non-linear interpolations) and the automatic import of other renderings.

References

1. Strothotte, T., Schlechtweg, S.: Non-Photorealistic Computer Graphics: Modeling, Rendering, and Animation. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (2002)
2. Gooch, B., Gooch, A.: Non-Photorealistic Rendering. A K Peters (2001)
3. Rademacher, P.: View-dependent geometry. In: Proceedings of SIGGRAPH 99, Los Angeles, California (August 1999) 439-446
4. Martin, D., Garcia, S., Torres, J.C.: Observer dependent deformations in illustration. In: Non-Photorealistic Animation and Rendering 2000 (NPAR 00), Annecy, France (June 5-7, 2000)
5. Wood, D.N., Finkelstein, A., Hughes, J.F., Thayer, C.E., Salesin, D.H.: Multiperspective panoramas for cel animation. In: Proceedings of SIGGRAPH 97, Los Angeles, California (August 1997) 243-250
6. Markosian, L., Kowalski, M.A., Trychin, S.J., Bourdev, L.D., Goldstein, D., Hughes, J.F.: Real-time nonphotorealistic rendering. In: Proceedings of SIGGRAPH 97, Los Angeles, California (August 1997) 415-420
7. Gooch, A., Gooch, B., Shirley, P., Cohen, E.: A non-photorealistic lighting model for automatic technical illustration. In: Proceedings of SIGGRAPH 98 (1998)
8. Interrante, V., Fuchs, H., Pizer, S.: Illustrating transparent surfaces with curvature-directed strokes. In: IEEE Visualization 96 (1996) 211-218
9. Sousa, M.C., Prusinkiewicz, P.: A few good lines: Suggestive drawing of 3D models. Computer Graphics Forum 22 (2003)
10. Winkenbach, G., Salesin, D.: Rendering parametric surfaces in pen and ink. In: Proceedings of SIGGRAPH 96 (August 1996) 469-476
11. Hertzmann, A., Zorin, D.: Illustrating smooth surfaces. In: Proceedings of SIGGRAPH 2000, New Orleans, Louisiana (July 2000)
12. Lake, A., Marshall, C., Harris, M., Blackstein, M.: Stylized rendering techniques for scalable real-time 3D animation. In: Non-Photorealistic Animation and Rendering 2000 (NPAR 00), Annecy, France (June 5-7, 2000)
13. Rost, R.: OpenGL Shading Language. Pearson (2004)