Stroke-based Real-time Non-Photorealistic Rendering
July 20th, 2011
Ben Ells
Introduction

Research into non-photorealistic rendering (NPR) has often been very narrowly targeted at simulating particular artistic styles. More recently, generalized systems have emerged with much greater flexibility [1], even at interactive frame rates [2]. Commercial games are making increasing use of NPR, but only in very limited ways: minor tweaks to shading models and texture style, with occasional delineation of silhouette edges, is about as far as anyone goes. More extreme examples exist, but are few and far between. Much like early academic research in NPR, these use very specific, inflexible techniques. Implementing these techniques on a standard engine would require radical graphical modifications, if it were practical at all.

Left: Borderlands [3], a recent game employing typical cel-shading. Right: Okami [4], a game well known for its stylized, sumi-e visuals.

A stroke-based rendering system efficient enough to be used as part of a game engine but flexible enough to be easily adapted to different art styles could massively boost graphical innovation in games. I present a system attempting to meet this pressing need by enabling fast, flexible, coherent stroke-based rendering of standard 3D models. Each frame, my system identifies interesting edges (silhouette edges, ridge edges, and valley edges) which delineate the form of the model well. Note that by the term "edge" I mean simply the geometric connection between two vertices. Contiguous interesting edges (edges sharing a vertex) are daisy-chained together into strokes in three-dimensional space. Strokes are correlated across frames by their component vertices, simplified to remove switchbacks (also called "swallowtails") and superfluous vertices, then split at hard angles into the straightest possible brush-paths. Strips of textured panels are laid down along these paths for rendering.
Related Work

The problems of identifying silhouette edges and connecting them into clean strokes have been addressed before. Northrup and Markosian's approach [5] solves many of the same problems as my system. Edges are identified, simplified, and linked into paths, and geometry is generated and textured; broadly speaking, the same things are done. However, our specific approaches differ greatly. Unlike my system, they reject a brute-force edge-finding solution in favour of a randomized method exploiting temporal correspondences of silhouette edges between frames and spatial correspondences of silhouette edges within a mesh, presented by Markosian et al. [6]. This is very efficient, but cannot identify ridges and valleys. Linking and simplification is where our methods really diverge. Northrup and Markosian render an ID map, with each edge and face in a unique color. They analyze this image and construct strokes in two dimensions. This approach neatly bypasses the problems of simplification and visibility by only discovering visible edges, and linking based on pixel proximity in the final view. By contrast, my system links edges in world-space via shared vertices. Addressing problems of visibility and simplification at the vertices trades cheap but numerous pixel checks for a much smaller number of more expensive trigonometric tests.

The WYSIWYG NPR system, produced by Kalnins et al. [2], expands on the earlier silhouette-stroke system by enabling non-silhouette strokes to be created. These can be created along existing edges or arbitrarily across the surface, without existing edges at all. These strokes must be defined by an artist, and cannot be generated on the fly. The system does not appear to support dynamic meshes. WYSIWYG uses vertices as control points of a Catmull-Rom spline when generating stroke geometry, rather than simply extruding directly into strips of panels. The system can synthesize new random stroke textures from a set of examples.
Furthermore, strokes can be parameterized to control things like thickness, color, or alpha. Jot [7], the expansion of WYSIWYG, addresses the issue of temporal coherence by placing sampling points every four pixels and propagating them to the nearest corresponding stroke in the next frame. These techniques produce exceptional visual results. Ideally, many would eventually be incorporated into my system (performance allowing, of course).
Algorithms

Pre-processing

My system works with conventional 3D models, but they require some pre-processing. The result is stored in a .geo file, a simple extension of the .obj format. Geo files hold four main types of data: vertices, triangles, edges, and vertex-edges. As in .obj files, a vertex is a triplet of floats and a triangle is a triplet of vertex indices. An edge is two vertex indices and two triangle indices. A vertex-edge is a list of indices relating edges to that vertex. The added edge and vertex-edge data is needed to rapidly string connected edges together.

Elevator Pitch

At the highest level, my system iterates over the edges of the model and builds a list of the useful ones. Then it crawls the model from edge to contiguous edge, connecting them up into strokes. The strokes are extruded into strips of panels, textured, and rendered.

Preliminaries

Before anything can happen, the scene is rendered into a depth-map. This is a nontrivial fixed cost, and serves to cull hidden edges. Each vertex is transformed into screen-space and tested against the depth-map. Both results are stored for later. With the screen-space vertices, each triangle's normal in screen-space is calculated to determine whether the triangle is front-facing or back-facing. If the triangle is front-facing, a normal is calculated in the model's local coordinates as well, and stored for later.

// Geometry::buildVertexScreenData
for each vertex in model:
    calculate and store screen coordinates
    store visibility

// Geometry::buildFaceNormalData
for each triangle in model:
    calculate and store face direction
    if faces forwards:
        calculate and store normal

Edge Assembly

Now that we've gathered what we need, we can build a list of useful edges. Edges are immediately discarded if they're between occluded vertices, or between two back-facing triangles. Edges between two front-facing triangles that are at a shallow angle to each other are discarded as well.
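As a concrete illustration of the facing and edge-classification tests, here is a minimal Python sketch. This is not the report's actual C++ code: the function names and the SHALLOW_DOT threshold are assumptions, chosen only to make the logic explicit.

```python
# Sketch (not the report's code) of front-facing detection and the
# edge-usefulness test. SHALLOW_DOT is an assumed threshold.

def is_front_facing(a, b, c):
    """Screen-space winding test: positive signed area means front-facing,
    assuming counter-clockwise winding for front faces."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]) > 0.0

def dot(n1, n2):
    return sum(x * y for x, y in zip(n1, n2))

SHALLOW_DOT = 0.8  # assumed: face normals this closely aligned are "shallow"

def is_useful_edge(visible_a, visible_b, front1, front2, n1, n2):
    """An edge is useful if its vertices are visible and it is either a
    silhouette (front/back pair) or a sharp crease (ridge or valley)."""
    if not (visible_a and visible_b):
        return False                     # between occluded vertices
    if not front1 and not front2:
        return False                     # between two back-faces
    if front1 != front2:
        return True                      # silhouette edge
    return dot(n1, n2) < SHALLOW_DOT     # front/front: keep only sharp creases
```

The same three rejection rules from the text appear in order; everything that falls through is a silhouette or crease edge.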
Remaining edges, between a front-face and a back-face or between sharply-angled front-faces, are considered useful.

// Geometry::buildEdgeList
for each edge in model:
    if edge is hidden:
        continue
    if edge is between two front-faces:
        if angle between faces is shallow:
            continue
    add edge to usefuledges

Each useful edge is examined one vertex at a time. The vertex in question is added to the current stroke. Then the vertex-edge data is consulted for a list of edges connecting to the vertex; if a useful edge is connected, the stroke-growing loop crawls into it and pushes its next vertex. This continues until there are no more interesting edges in that direction, then the process repeats with the other vertex of the initial edge. Useful edges are only examined once: a crawled edge is marked as used so it can be skipped when it comes up in the rotation.

// Geometry::Strokify
for each edge in usefuledges:
    if edge has been visited:
        continue
    mark edge as visited
    // crawl the model in one direction
    leadingedge = edge
    while leadingedge.vertexa has a usefuledge:
        push vertexa into the front of the StrokeBuilder
        mark usefuledge as visited
        leadingedge = usefuledge
    // then crawl the other direction
    leadingedge = edge
    while leadingedge.vertexb has a usefuledge:
        push vertexb into the back of the StrokeBuilder
        mark usefuledge as visited
        leadingedge = usefuledge
    for each stroke in StrokeBuilder:
        add stroke to strokelist
    reset StrokeBuilder

The edge-assembly process produces a double-ended stream of vertices. These are
pushed into the StrokeBuilder, which oversees simplification and matching. Vertices can be pushed to either the front or the back; the process is the same either way. When a vertex is pushed, the model-space and screen-space versions are both sent. The model-space version is there to root the actual geometry, while the screen-space version is used to calculate the lengths and angles of the segments at various points. Note that the two ends of the StrokeBuilder are distinct from each other. Because strokes can be split, each end may well serve a different stroke. In the following discussion, references to previously pushed vertices mean specifically vertices that were previously pushed into the same end.

Stroke Simplification

Simplification is fairly straightforward. Whenever a new vertex is pushed, the last four vertices of the stroke are examined. Several tests are applied to the three segments between them. If the newest segment is uselessly short or uselessly shallow, the new vertex simply overwrites the previous one. If both of the inside angles are sharp, it's a switchback, and the second-last and third-last vertices are averaged together to collapse the switch. If only the angle at the third-last vertex is sharp, the stroke is ended and a new one begun.

// StrokeBuilder::simplifyFront / simplifyBack
// note: all operations deal with screen-space vertices
length = distance between last and second-last vertices
angle1 = angle at second-last vertex
angle2 = angle at third-last vertex
if length is too short OR angle1 is too shallow:
    remove second-last vertex from stroke
else if angle1 and angle2 are both too sharp:
    average second-last and third-last vertices together
else if angle2 is too sharp:
    split the stroke at the third-last vertex and carry on

Stroke Matching

Matching occurs after simplification. Every frame, the StrokeBuilder remembers which Stroke each vertex was part of. Then, if that vertex appears again next frame, it can match the Stroke.
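The simplification tests above can be sketched concretely. This is a hedged illustration, not the report's implementation: the stroke is assumed to be a mutable list of 2D screen-space points, and the three thresholds are invented for the example.

```python
import math

# Sketch of the StrokeBuilder simplification pass (not the report's code);
# the three threshold values are assumptions.
MIN_LENGTH = 2.0      # pixels: shorter new segments are "uselessly short"
SHALLOW_ANGLE = 0.1   # radians of deviation: flatter turns are superfluous
SHARP_ANGLE = 2.5     # radians of deviation: sharper turns are switchbacks

def turn_angle(p, q, r):
    """Absolute change of direction at q along the path p -> q -> r."""
    a1 = math.atan2(q[1] - p[1], q[0] - p[0])
    a2 = math.atan2(r[1] - q[1], r[0] - q[0])
    d = abs(a2 - a1)
    return min(d, 2 * math.pi - d)

def simplify_tail(stroke):
    """Apply the per-push tests to the last four screen-space vertices.
    Mutates `stroke`; returns ('split', i) if it must be split at index i."""
    if len(stroke) < 4:
        return ('ok', None)
    v4, v3, v2, v1 = stroke[-4:]
    length = math.dist(v1, v2)          # newest segment
    angle1 = turn_angle(v3, v2, v1)     # at second-last vertex
    angle2 = turn_angle(v4, v3, v2)     # at third-last vertex
    if length < MIN_LENGTH or angle1 < SHALLOW_ANGLE:
        del stroke[-2]                  # new vertex overwrites the previous one
    elif angle1 > SHARP_ANGLE and angle2 > SHARP_ANGLE:
        mid = ((v2[0] + v3[0]) / 2, (v2[1] + v3[1]) / 2)
        stroke[-3:-1] = [mid]           # collapse the switchback
    elif angle2 > SHARP_ANGLE:
        return ('split', len(stroke) - 3)  # end the stroke at the third-last vertex
    return ('ok', None)
```

Note that the case where only angle1 is sharp does nothing yet; on the next push that vertex becomes the third-last, and the split rule catches it then.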
The trick here is that if a new vertex appears that wasn't part of any Stroke last frame, my system needs to be able to indicate which Stroke it was part of in this frame, without revisiting it once a match is made. To accomplish this, Stroke IDs are reassigned every frame. The StrokeBuilder has two vectors for strokes: one for last frame, and one for this frame. When a match is made, it copies the stroke's data from last frame's vector, where it was stored under its old ID, across to this
frame's vector, where it's stored under its new ID. This way, every vertex knows which stroke it's part of, even if the stroke hasn't been matched yet. This saves the StrokeBuilder from having to backtrack to relabel previously unknown vertices when a match is finally made. If the stroke hasn't matched by the time it's extracted, a new stroke is created and stored in its place, ready to be matched next frame.

When a new vertex is pushed, any of a number of things could happen, depending on whether the current stroke has been identified yet or not, and whether the last frame's stroke ID is a special value or an actual stroke identifier. There are three special values the stroke ID might take: unvisited, overloaded, and split-point.

Unvisited is the null case. If there's no record of a vertex last frame, it's added to the end of the current stroke and marked as part of the current stroke for next frame.

Overloaded is pretty simple as well. It signifies a vertex that was part of more than one stroke last frame. If a vertex was overloaded last frame, it is marked overloaded this frame too. If a vertex is already marked as part of a stroke for the current frame, it's been visited before, and is overloaded. The existing stroke ID at the vertex is overwritten with overloaded to reflect this.

Split-point is the fiddliest of the special values. If a vertex was a split-point last frame, it'll likely be a split-point again this frame. However, there cannot be two consecutive split-points in a stroke, because the segment between them would be effectively orphaned: unidentifiable, because both vertices' records have been overwritten with special values. Therefore, split-point vertices are only honoured on the next vertex push. If the next vertex is also a split-point, the previous split-point is overwritten with the current stroke ID for next frame. Otherwise, the stroke is split and things proceed as usual.

In general, the last frame's stroke ID will be an actual stroke ID.
If the current stroke hasn't been associated yet, the stroke data is copied across from last frame's list into this frame's list (at the newly established index). If the current stroke has already been matched and the new vertex corroborates the match, everything proceeds as usual. If the new vertex conflicts with the existing match, the stroke is split.

// StrokeBuilder::matchStroke
if previous vertex is a split-point:
    if new vertex is NOT a split-point:
        split the stroke at the previous vertex
        mark new vertex as part of the newly split-off stroke
    else if new vertex IS a split-point:
        overwrite prev vert's split-point entry with current stroke
if new vertex is overloaded:
    overwrite existing record with overloaded
    return
if new vertex is unvisited:
    mark new vertex's entry with current stroke
    return
if new vertex has a valid stroke ID from last frame:
    if current stroke hasn't been matched yet:
        copy strokedata from previous frame into new frame
    else if current stroke is already matched:
        if current stroke does NOT match new vertex's stroke ID:
            split stroke
            return
mark new vertex's entry with current stroke

Geometry Generation

Stroke geometry generation is exactly as you'd expect. A strip of panels is extruded perpendicular to both the stroke segment and the camera. The only trick involved is to scale the texture coordinates along the length to compensate for panels tilted away from the camera: the screen-space vertices are used to calculate the length of the panel on the screen and allocate a suitable length of texture.
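The extrusion and texture-length idea can be sketched in screen-space alone. This is an illustration under assumptions, not the report's generator: `half_width` and `texels_per_pixel` are invented parameters, and the stroke is assumed to have at least two distinct points.

```python
import math

# Sketch (not the report's code): offset each stroke vertex perpendicular
# to its segment in screen-space, and advance the texture u-coordinate by
# screen-space length so foreshortened panels don't stretch the texture.

def extrude_stroke(points, half_width=3.0, texels_per_pixel=0.02):
    """Return a (left, right, u) triple per vertex for a quad strip.
    `points` are 2D screen-space vertices of one stroke."""
    strip = []
    u = 0.0
    for i, p in enumerate(points):
        # Direction of the adjacent segment (previous one for the last vertex).
        a, b = (points[i - 1], p) if i == len(points) - 1 else (p, points[i + 1])
        dx, dy = b[0] - a[0], b[1] - a[1]
        seg = math.hypot(dx, dy)
        nx, ny = -dy / seg, dx / seg          # unit perpendicular to the segment
        if i > 0:
            # allocate texture in proportion to on-screen length
            u += math.dist(points[i - 1], p) * texels_per_pixel
        strip.append(((p[0] + nx * half_width, p[1] + ny * half_width),
                      (p[0] - nx * half_width, p[1] - ny * half_width),
                      u))
    return strip
```

Because u advances with on-screen distance rather than a fixed step per panel, a panel tilted away from the camera receives a proportionally shorter slice of the brush texture, as described above.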
Results

Performance

All testing was done on a 2 GHz Intel Core 2 Duo, with a GeForce 9500 GT and 2 GB RAM. All speeds are in milliseconds, and generally waver +/- a tenth of a millisecond in either direction.

Render Process                           Approx. Time
retrieve depth-map, 900 x 900 window     7.5 ms, fixed cost
prep camera transformations [1]          0-9 ms, fixed cost
complete render of model                 3.6 ms each
vertex depth tests                       0.4 ms
build triangle normals                   0.5 ms
select useful edges                      0.5 ms
assemble strokes                         1.3 ms

To render at 60 fps, each frame has to finish in 17 ms. Additional models have very predictable performance costs on any given run, though variance between runs is fairly high. The demo model is about 4000 triangles, 6000 edges, 2000 vertices. Depending on viewing angle, it'll produce between 130 and 220 strokes.

[Table: models, strokes, ms to render, frames / second - data not preserved in transcription.]

The core functionality is very solid. Edge selection is accurate, reliable, and efficient. Assembling contiguous edges into strokes is likewise very dependable.

[1] This is exactly as nonsensical as it seems. The cost appears and disappears erratically. It could very plausibly be a bug in the profiling code, but the frame-rates are consistent with the reports.
Visual Results

Using textures to lay down simulated brush strokes works quite well. Here we have three samples: a rendering in sharpie, in pencil, and in colorful stars.
This is a piece of lineart [8] that I tried to replicate. I thought the simple lines would work well, but lineweight was very important to the effectiveness of the drawing. Without parameterized strokes to give a measure of width control, the recreation was largely unsuccessful.
Outstanding Issues

The stroke matching system is very flaky. Larger strokes are stable, but short, 2-4 vertex strokes can often be seen flickering. Using vertices to carry match data was convenient, but since each can only carry one match, data is lost from vertices that were used by more than one stroke. Longer strokes survive this, but shorter ones are very vulnerable. The system mostly stabilizes, but it fails often enough that at any given time there's usually at least one unstable stroke.

Complex models' strokes are also very prone to fragmentation as the camera pans around. Once a split-point is established, it'll survive until the vertex passes out of view; this accumulation causes strokes to get shorter with age. Hardy split-points are a boon to temporal stability: without them, strokes will fight for intersection vertices, growing and shrinking between frames. The shorter strokes that result can be visually unappealing, though. Transitions between strokes are sharp, and as they become more numerous the result is visually displeasing.

Stroke stability is exacerbated by rasterization problems in the depth map. If a triangle is at a sharp enough angle to the screen, it can occlude its own vertices and cause false negatives. False negatives on vertex depth-tests cause dropped edges. I mitigate the problem by checking the four adjacent pixels as well. This is completely effective at the silhouette and on ridges, but fails completely in valleys.
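The adjacent-pixel mitigation amounts to a conservative visibility test: a vertex passes if its own depth-map pixel or any of the four neighbours would accept it. A minimal Python sketch, assuming a row-major depth map and an invented EPSILON bias (neither is from the report):

```python
# Sketch of the depth-test mitigation (not the report's code): a vertex
# also passes if any of the four adjacent depth-map pixels would accept it,
# rescuing vertices occluded by their own steeply-angled triangle.
# EPSILON and the depth-map layout are assumptions.

EPSILON = 1e-3

def vertex_visible(depth_map, x, y, depth):
    """True if the vertex at screen pixel (x, y) with the given depth passes
    the test at its own pixel or any of the four adjacent pixels."""
    h, w = len(depth_map), len(depth_map[0])
    for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
        px, py = x + dx, y + dy
        if 0 <= px < w and 0 <= py < h and depth <= depth_map[py][px] + EPSILON:
            return True
    return False
```

This is why the fix fails in valleys: there, every neighbouring pixel is legitimately nearer than the vertex, so no adjacent sample rescues it.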
Conclusions

It's clear that the system is not fast enough to render large-scale scenes while leaving enough resources for a game engine. Roughly half the cost of the model render is in stroke assembly; stroke matching and geometry generation in particular are very immature. I expect that their performance could easily be improved a great deal. The entire stroke-assembly procedure calculates a great many lengths and angles that could likely be cached. It also seems likely that stroke geometry could be generated with a vertex shader. Even if the cost of stroke assembly were driven to zero, however, I have my doubts that the system as it stands would run well alongside the other components of a game engine.

In hindsight, it seems that while I cart both three-dimensional and two-dimensional vertices through the entire stroke-assembly process, the three-dimensional vertices play very little role in the construction. It's possible that I could do away with the three-dimensional vertices entirely and construct everything in screen-space.

Future Work

The system as a whole is far from complete. Possibly the most important missing feature is stroke parameterization. Weighted vertices bound to stroke width or alpha would go a long way towards achieving impressive visual results. Randomized texture synthesis from style samples, as demonstrated by WYSIWYG [2], would be very nice, but is hardly required. Each of the textures here uses only a single sample, and there's no obvious repetition.
References

1. GRABLI S., TURQUIN E., DURAND F., SILLION F.: Programmable style for NPR line drawing. In Proc. EGSR (2004).
2. KALNINS R. D., MARKOSIAN L., MEIER B. J., KOWALSKI M. A., LEE J. C., DAVIDSON P. L., WEBB M., HUGHES J. F., FINKELSTEIN A.: WYSIWYG NPR: drawing strokes directly on 3D models. In Proc. SIGGRAPH, ACM TOG (2002).
3. Borderlands. Computer software. 2K Games, 20 Oct. 2009.
4. Ōkami. Computer software. Capcom, 20 Apr. 2006.
5. NORTHRUP J. D., MARKOSIAN L.: Artistic silhouettes: a hybrid approach. In Proc. NPAR (2000).
6. MARKOSIAN L., KOWALSKI M. A., TRYCHIN S. J., BOURDEV L. D., GOLDSTEIN D., HUGHES J. F.: Real-time nonphotorealistic rendering. In Proc. SIGGRAPH 97 (August 1997).
7. KALNINS R. D.: "WYSIWYG NPR: Interactive Stylization for Stroke-Based Rendering of 3D Animation." Thesis. Princeton University.
8. Delaney, Luka. Dyke-o-licious. Digital image. DeviantArt. 31 May. Web. 16 July.
More information3/1/2010. Acceleration Techniques V1.2. Goals. Overview. Based on slides from Celine Loscos (v1.0)
Acceleration Techniques V1.2 Anthony Steed Based on slides from Celine Loscos (v1.0) Goals Although processor can now deal with many polygons (millions), the size of the models for application keeps on
More informationRasterization and Graphics Hardware. Not just about fancy 3D! Rendering/Rasterization. The simplest case: Points. When do we care?
Where does a picture come from? Rasterization and Graphics Hardware CS559 Course Notes Not for Projection November 2007, Mike Gleicher Result: image (raster) Input 2D/3D model of the world Rendering term
More informationPOWERVR MBX. Technology Overview
POWERVR MBX Technology Overview Copyright 2009, Imagination Technologies Ltd. All Rights Reserved. This publication contains proprietary information which is subject to change without notice and is supplied
More informationMath Dr. Miller - Constructing in Sketchpad (tm) - Due via by Friday, Mar. 18, 2016
Math 304 - Dr. Miller - Constructing in Sketchpad (tm) - Due via email by Friday, Mar. 18, 2016 As with our second GSP activity for this course, you will email the assignment at the end of this tutorial
More informationHello, Thanks for the introduction
Hello, Thanks for the introduction 1 In this paper we suggest an efficient data-structure for precomputed shadows from point light or directional light-sources. Because, in fact, after more than four decades
More informationView-Dependent Particles for Interactive Non-Photorealistic Rendering
View-Dependent Particles for Interactive Non-Photorealistic Rendering Derek Cornish 1, Andrea Rowan 2, David Luebke 2 1 2 Intrinsic Graphics University of Virginia Abstract We present a novel framework
More informationA GPU-Based Approach to Non-Photorealistic Rendering in the Graphic Style of Mike Mignola
A GPU-Based Approach to Non-Photorealistic Rendering in the Graphic Style of Mike Mignola Abstract The subject of Non-Photorealistic Rendering (NPR) is one which tends towards a certain, small set of targeted
More informationSynthesis of Textures with Intricate Geometries using BTF and Large Number of Textured Micropolygons. Abstract. 2. Related studies. 1.
Synthesis of Textures with Intricate Geometries using BTF and Large Number of Textured Micropolygons sub047 Abstract BTF has been studied extensively and much progress has been done for measurements, compression
More informationCS 465 Program 4: Modeller
CS 465 Program 4: Modeller out: 30 October 2004 due: 16 November 2004 1 Introduction In this assignment you will work on a simple 3D modelling system that uses simple primitives and curved surfaces organized
More informationNon-Photorealistic Rendering (NPR) Christian Richardt, Rainbow Group
Non-Photorealistic Rendering (NPR) Christian Richardt, Rainbow Group Structure in six parts 1. Definition of non-photorealistic rendering (NPR) 2. History of computer graphics: from 1970s to 1995 3. Overview
More informationDrawing Fast The Graphics Pipeline
Drawing Fast The Graphics Pipeline CS559 Spring 2016 Lecture 10 February 25, 2016 1. Put a 3D primitive in the World Modeling Get triangles 2. Figure out what color it should be Do ligh/ng 3. Position
More informationCMSC 491A/691A Artistic Rendering. Artistic Rendering
CMSC 491A/691A Artistic Rendering Penny Rheingans UMBC Artistic Rendering Computer-generated images in a style similar to some artistic media or style Also called non-photorealistic rendering (NPR) Different
More informationPipeline Operations. CS 4620 Lecture 10
Pipeline Operations CS 4620 Lecture 10 2008 Steve Marschner 1 Hidden surface elimination Goal is to figure out which color to make the pixels based on what s in front of what. Hidden surface elimination
More informationHere s the general problem we want to solve efficiently: Given a light and a set of pixels in view space, resolve occlusion between each pixel and
1 Here s the general problem we want to solve efficiently: Given a light and a set of pixels in view space, resolve occlusion between each pixel and the light. 2 To visualize this problem, consider the
More informationICS RESEARCH TECHNICAL TALK DRAKE TETREAULT, ICS H197 FALL 2013
ICS RESEARCH TECHNICAL TALK DRAKE TETREAULT, ICS H197 FALL 2013 TOPIC: RESEARCH PAPER Title: Data Management for SSDs for Large-Scale Interactive Graphics Applications Authors: M. Gopi, Behzad Sajadi,
More informationDrawing Fast The Graphics Pipeline
Drawing Fast The Graphics Pipeline CS559 Fall 2016 Lectures 10 & 11 October 10th & 12th, 2016 1. Put a 3D primitive in the World Modeling 2. Figure out what color it should be 3. Position relative to the
More informationCMSC427 Advanced shading getting global illumination by local methods. Credit: slides Prof. Zwicker
CMSC427 Advanced shading getting global illumination by local methods Credit: slides Prof. Zwicker Topics Shadows Environment maps Reflection mapping Irradiance environment maps Ambient occlusion Reflection
More informationUniversiteit Leiden Computer Science
Universiteit Leiden Computer Science Optimizing octree updates for visibility determination on dynamic scenes Name: Hans Wortel Student-no: 0607940 Date: 28/07/2011 1st supervisor: Dr. Michael Lew 2nd
More informationA Bandwidth Effective Rendering Scheme for 3D Texture-based Volume Visualization on GPU
for 3D Texture-based Volume Visualization on GPU Won-Jong Lee, Tack-Don Han Media System Laboratory (http://msl.yonsei.ac.k) Dept. of Computer Science, Yonsei University, Seoul, Korea Contents Background
More informationThere are two lights in the scene: one infinite (directional) light, and one spotlight casting from the lighthouse.
Sample Tweaker Ocean Fog Overview This paper will discuss how we successfully optimized an existing graphics demo, named Ocean Fog, for our latest processors with Intel Integrated Graphics. We achieved
More information2.5D Cartoon Models. Abstract. 1 Introduction. 2 Related Work. Takeo Igarashi The University of Tokyo. Frédo Durand MIT CSAIL. Alec Rivers MIT CSAIL
Alec Rivers MIT CSAIL 2.5D Cartoon Models Takeo Igarashi The University of Tokyo Frédo Durand MIT CSAIL (a) (b) (c) Figure 1: A 2.5D Cartoon: We take vector art drawings of a cartoon from different views
More informationComputergrafik. Matthias Zwicker. Herbst 2010
Computergrafik Matthias Zwicker Universität Bern Herbst 2010 Today Bump mapping Shadows Shadow mapping Shadow mapping in OpenGL Bump mapping Surface detail is often the result of small perturbations in
More informationWhite Paper. Solid Wireframe. February 2007 WP _v01
White Paper Solid Wireframe February 2007 WP-03014-001_v01 White Paper Document Change History Version Date Responsible Reason for Change _v01 SG, TS Initial release Go to sdkfeedback@nvidia.com to provide
More informationSoft Particles. Tristan Lorach
Soft Particles Tristan Lorach tlorach@nvidia.com January 2007 Document Change History Version Date Responsible Reason for Change 1 01/17/07 Tristan Lorach Initial release January 2007 ii Abstract Before:
More informationAdaptive Point Cloud Rendering
1 Adaptive Point Cloud Rendering Project Plan Final Group: May13-11 Christopher Jeffers Eric Jensen Joel Rausch Client: Siemens PLM Software Client Contact: Michael Carter Adviser: Simanta Mitra 4/29/13
More informationAbstract. Introduction. Kevin Todisco
- Kevin Todisco Figure 1: A large scale example of the simulation. The leftmost image shows the beginning of the test case, and shows how the fluid refracts the environment around it. The middle image
More informationDeferred Rendering Due: Wednesday November 15 at 10pm
CMSC 23700 Autumn 2017 Introduction to Computer Graphics Project 4 November 2, 2017 Deferred Rendering Due: Wednesday November 15 at 10pm 1 Summary This assignment uses the same application architecture
More informationRobust Stencil Shadow Volumes. CEDEC 2001 Tokyo, Japan
Robust Stencil Shadow Volumes CEDEC 2001 Tokyo, Japan Mark J. Kilgard Graphics Software Engineer NVIDIA Corporation 2 Games Begin to Embrace Robust Shadows 3 John Carmack s new Doom engine leads the way
More informationHermite Curves. Hermite curves. Interpolation versus approximation Hermite curve interpolates the control points Piecewise cubic polynomials
Hermite Curves Hermite curves Interpolation versus approximation Hermite curve interpolates the control points Piecewise cubic polynomials Focus on one segment T1 P0 Q(t) T0 Control points of Bezier curve
More informationCSE 167: Introduction to Computer Graphics Lecture #10: View Frustum Culling
CSE 167: Introduction to Computer Graphics Lecture #10: View Frustum Culling Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2015 Announcements Project 4 due tomorrow Project
More informationReading on the Accumulation Buffer: Motion Blur, Anti-Aliasing, and Depth of Field
Reading on the Accumulation Buffer: Motion Blur, Anti-Aliasing, and Depth of Field 1 The Accumulation Buffer There are a number of effects that can be achieved if you can draw a scene more than once. You
More informationSeamless Integration of Stylized Renditions in Computer-Generated Landscape Visualization
Seamless Integration of Stylized Renditions in Computer-Generated Landscape Visualization Liviu Coconu 1, Carsten Colditz 2, Hans-Christian Hege 1 and Oliver Deussen 2 Abstract We propose enhancements
More informationSoftware Occlusion Culling
Software Occlusion Culling Abstract This article details an algorithm and associated sample code for software occlusion culling which is available for download. The technique divides scene objects into
More informationAtmospheric Reentry Geometry Shader
Atmospheric Reentry Geometry Shader Robert Lindner Introduction In order to simulate the effect of an object be it an asteroid, UFO or spacecraft entering the atmosphere of a planet, I created a geometry
More information3D Programming. 3D Programming Concepts. Outline. 3D Concepts. 3D Concepts -- Coordinate Systems. 3D Concepts Displaying 3D Models
3D Programming Concepts Outline 3D Concepts Displaying 3D Models 3D Programming CS 4390 3D Computer 1 2 3D Concepts 3D Model is a 3D simulation of an object. Coordinate Systems 3D Models 3D Shapes 3D Concepts
More informationCOMPUTER GRAPHICS COURSE. Rendering Pipelines
COMPUTER GRAPHICS COURSE Rendering Pipelines Georgios Papaioannou - 2014 A Rendering Pipeline Rendering or Graphics Pipeline is the sequence of steps that we use to create the final image Many graphics/rendering
More informationApplications of Explicit Early-Z Z Culling. Jason Mitchell ATI Research
Applications of Explicit Early-Z Z Culling Jason Mitchell ATI Research Outline Architecture Hardware depth culling Applications Volume Ray Casting Skin Shading Fluid Flow Deferred Shading Early-Z In past
More informationGame Architecture. 2/19/16: Rasterization
Game Architecture 2/19/16: Rasterization Viewing To render a scene, need to know Where am I and What am I looking at The view transform is the matrix that does this Maps a standard view space into world
More informationOverview. A real-time shadow approach for an Augmented Reality application using shadow volumes. Augmented Reality.
Overview A real-time shadow approach for an Augmented Reality application using shadow volumes Introduction of Concepts Standard Stenciled Shadow Volumes Method Proposed Approach in AR Application Experimental
More informationTechnical Quake. 1 Introduction and Motivation. Abstract. Michael Batchelder Kacper Wysocki
Technical Quake Michael Batchelder mbatch@cs.mcgill.ca Kacper Wysocki kacper@cs.mcgill.ca creases and silhouettes with distance. These ideas have not yet been mentioned in literature to date that we are
More informationPhotorealistic 3D Rendering for VW in Mobile Devices
Abstract University of Arkansas CSCE Department Advanced Virtual Worlds Spring 2013 Photorealistic 3D Rendering for VW in Mobile Devices Rafael Aroxa In the past few years, the demand for high performance
More informationTable of Contents. Questions or problems?
1 Introduction Overview Setting Up Occluders Shadows and Occlusion LODs Creating LODs LOD Selection Optimization Basics Controlling the Hierarchy MultiThreading Multiple Active Culling Cameras Umbra Comparison
More informationOptimizing DirectX Graphics. Richard Huddy European Developer Relations Manager
Optimizing DirectX Graphics Richard Huddy European Developer Relations Manager Some early observations Bear in mind that graphics performance problems are both commoner and rarer than you d think The most
More informationBack to Flat Producing 2D Output from 3D Models
Back to Flat Producing 2D Output from 3D Models David Cohn Modeling in 3D is fine, but eventually, you need to produce 2D drawings. In this class, you ll learn about tools in AutoCAD that let you quickly
More informationOptimisation. CS7GV3 Real-time Rendering
Optimisation CS7GV3 Real-time Rendering Introduction Talk about lower-level optimization Higher-level optimization is better algorithms Example: not using a spatial data structure vs. using one After that
More informationgraphics pipeline computer graphics graphics pipeline 2009 fabio pellacini 1
graphics pipeline computer graphics graphics pipeline 2009 fabio pellacini 1 graphics pipeline sequence of operations to generate an image using object-order processing primitives processed one-at-a-time
More informationMention driver developers in the room. Because of time this will be fairly high level, feel free to come talk to us afterwards
1 Introduce Mark, Michael Poll: Who is a software developer or works for a software company? Who s in management? Who knows what the OpenGL ARB standards body is? Mention driver developers in the room.
More informationArt-based Rendering with Graftals
: Interactive Computer Graphics Art-based Rendering with Graftals 1 Introduction Due: 3/13/10, 11:59 PM Between Ray in 123 and Photon Mapping that you just did, you ve now had a good deal of exposure to
More information