Hardware Displacement Mapping


Matrox's revolutionary new surface generation technology, Hardware Displacement Mapping (HDM), marks a giant leap in the pursuit of 3D realism. Matrox is the first to develop a hardware implementation of displacement mapping and has contributed elements of this technology for inclusion as a standard feature in Microsoft's DirectX 9 API. Introduced in the Matrox Parhelia-512 GPU, HDM enables developers of 3D applications, such as games, to provide consumers with a more realistic and immersive 3D experience. "Matrox's HDM is a unique and innovative new technology that delivers increased realism for 3D games," said Kenny Mitchell, director of 3D graphics software engineering, Westwood Studios. "The fact that it only took me a few hours to get up and running with the technology to attain excellent results was really encouraging, and I look forward to integrating the technology more widely in future Westwood titles."

Introduction

The challenge of creating highly realistic 3D scenes is fuelled by our desire to experience compelling virtual environments, whether they form part of 3D-rendered movies or interactive 3D games. How well a 3D-rendered scene reflects reality depends on the accuracy of its shapes, or geometry, in addition to the fidelity of its colors and textures. Hardware Displacement Mapping (HDM), a powerful surface generation technology, combines two Matrox initiatives, Depth-Adaptive Tessellation and Vertex Texturing, to deliver extraordinary 3D realism through increased geometry detail, while providing a simple and compact representation for high-resolution geometric data. The geometry of objects in a 3D scene is defined by 3D meshes: sets of connected triangles that represent the surface of these objects. To create 3D scenes with greater detail and realism, the number of triangles needed to represent objects within the scene must grow dramatically.
Consequently, rendering scenes of such complexity requires a technology that not only enables the dynamic generation of complex geometry, but also allows the high-resolution detail of such geometry to be encoded and stored using a simple and compact data representation. Unlike many other tessellation or higher-order surface generation techniques, HDM does more than simply create smoother curves; it actually extrudes real geometric detail. HDM can be applied to static environments, such as terrains, as well as to dynamic skinned objects, such as realistic 3D characters. Additionally, HDM provides a unique and powerful method of generating geometry with dynamic level of detail (LOD), which maximizes overall scene detail by increasing it in some areas and reducing unnecessary detail in others.

Figure 1: HDM provides higher-detail 3D rendering (base mesh + displacement map, shown without and with HDM, and the final render).

What is displacement mapping?

Although the theoretical concept of displacement mapping is well known in the graphics industry, it is important to provide some background on this concept before explaining its first-ever hardware implementation. Theoretically, displacement mapping is a technique that uses displacement maps to displace the vertices of a mesh. The resulting displaced mesh takes the shape represented by the displacement map. Similar to texture maps, displacement maps are 2D bitmaps; however, the individual values of a displacement map represent height information instead of texture colors (texels). Collectively, these height values define the shape of the displaced mesh surface. When mapped onto the vertices of the initial mesh, the displacement map displaces the vertices to form the displaced mesh. The concept of displacement mapping is similar to the effect produced by a bed of nails. A face pressed against a bed of nails acts like a displacement map, providing height data for each nail. When the face is mapped onto the nails, the nails are displaced and mimic the shape of the face. (Figure: a flat bed of nails, the object, and the displaced bed of nails; likewise, source mesh, displacement map, and displaced mesh.)
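The displacement step described above can be sketched in a few lines. This is a hypothetical illustration in Python (the paper describes a GPU implementation); the nearest-texel fetch and the `scale` parameter are assumptions for the sketch:

```python
# Hypothetical sketch of per-vertex displacement: each vertex moves along
# its normal by the height value sampled from the displacement map.
def displace(vertices, normals, uvs, height_map, scale=1.0):
    """vertices/normals: lists of (x, y, z); uvs: (u, v) in [0, 1]."""
    h = len(height_map)
    w = len(height_map[0])
    out = []
    for (vx, vy, vz), (nx, ny, nz), (u, v) in zip(vertices, normals, uvs):
        # Nearest-texel fetch; real hardware would filter (see later sections).
        tx = min(int(u * w), w - 1)
        ty = min(int(v * h), h - 1)
        d = height_map[ty][tx] * scale
        out.append((vx + nx * d, vy + ny * d, vz + nz * d))
    return out

# A flat quad of 4 vertices with +z normals and a tiny 2x2 height map:
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
norms = [(0, 0, 1)] * 4
uvs = [(0, 0), (0.99, 0), (0, 0.99), (0.99, 0.99)]
hmap = [[0.0, 0.5], [0.25, 1.0]]
displaced = displace(verts, norms, uvs, hmap)
```

The flat quad takes on the relief encoded in the map: the corner sampling the 1.0 texel rises to z = 1.0, exactly the bed-of-nails effect described above.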

Enhancing displacement mapping using tessellation

The concept of displacement mapping and its benefits can be extended by using displacement mapping in conjunction with tessellation, a process that converts a mesh with a low triangle count into a mesh with a higher triangle count. While tessellation does not significantly alter the underlying shape of the object, it sometimes helps to smooth the surface of the object, much the same way that N-Patch tessellation does (figure 2). Unlike tessellation, displacement mapping can alter the shape of the underlying mesh considerably, based on data in the displacement map. By mapping the values of a displacement map onto the vertices of a tessellated mesh, the displaced mesh has both the desired shape and a much smoother surface. Matrox has developed and defined the hardware implementation of displacement mapping, which combines unique tessellation technology with texture mapping on geometry, to raise the bar for 3D realism.

Figure 2: A mesh with a low triangle count (a) looks significantly smoother when tessellated (b).
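Tessellation itself is simple to illustrate. The midpoint-subdivision scheme below is a generic textbook example, not necessarily the scheme Matrox's hardware uses; it shows how each pass multiplies the triangle count by four without changing the surface shape:

```python
# Hypothetical sketch of one uniform tessellation step: each triangle is
# split into four by inserting edge midpoints (1 triangle -> 4 triangles).
def midpoint(a, b):
    return tuple((pa + pb) / 2.0 for pa, pb in zip(a, b))

def tessellate_once(triangles):
    out = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        # Three corner triangles plus the central one.
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

tri = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
once = tessellate_once(tri)    # 4 triangles
twice = tessellate_once(once)  # 16 triangles
```

All new vertices lie in the plane of the original triangle; only a subsequent displacement pass (or a curved scheme such as N-Patches) changes the shape.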

The process of HDM

As illustrated in figure 3, HDM uses a low-triangle mesh, or base mesh, in conjunction with a displacement map to produce a smooth and detailed mesh that can subsequently be textured or shaded using vertex and pixel shaders. This process can be divided into three stages.

Stage 1: Base mesh tessellation. A base mesh and its associated displacement map are sent to the GPU over the AGP bus. The GPU tessellates the base mesh to increase its triangle count (figure 3a). The base mesh can be of any level of detail and is compatible with any existing or legacy hardware. As part of its HDM technique, Matrox has implemented an innovative approach to tessellation called Depth-Adaptive Tessellation. This patent-pending tessellation scheme allows for varying levels of detail across a mesh and maintains smooth transitions between the levels of detail (LODs) without any geometry popping or cracking artifacts. Depth-Adaptive Tessellation is explained in detail on page 6.

Stage 2: Mapping textures onto geometry. Once the base mesh has been tessellated, the position of each vertex of the tessellated mesh is displaced (moved) in the direction of its vertex normal (figure 3b). The amount of displacement is determined as a function of the individual pixel values of the displacement map. The new position of each vertex is determined by mapping an associated value from the displacement map onto the vertex using a new concept by Matrox called Vertex Texturing. Vertex Texturing, explained on page 8, is similar to texture mapping, where a texture value, or texel, is mapped from a texture map onto each pixel of a triangle to determine the colors of its interior pixels. Displacement maps can be applied to both static base meshes and dynamic, skinned base meshes.

Stage 3: Rendering. Once the displaced mesh has been created, it is treated and rendered like any other vertex stream. Therefore, vertex shaders, textures and pixel shaders can be applied to the mesh to form the final rendered image (figure 3c).

Figure 3: The process of HDM. a) Tessellation increases the triangle count of the base mesh. b) The displacement map (here 4 KB, 64 x 64 x 8 bit) displaces the surface of the tessellated mesh into the desired shape. c) Vertex shaders, texture maps and/or pixel shaders are applied to the displaced mesh to create the final rendered image.
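The three stages above can be chained in a toy end-to-end sketch. This is a hypothetical Python illustration with a constant-height displacement map standing in for a real one; the helper names are invented for clarity:

```python
# Hypothetical end-to-end sketch of the three HDM stages for one triangle
# with a constant-height displacement map (every sampled height equals h).
def midpoint(a, b):
    return tuple((p + q) / 2.0 for p, q in zip(a, b))

def stage1_tessellate(tri):
    """Split one triangle into four via edge midpoints."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def stage2_displace(tris, normal, h):
    """Move every vertex along the (shared) normal by height h."""
    move = lambda v: tuple(p + n * h for p, n in zip(v, normal))
    return [tuple(move(v) for v in t) for t in tris]

def stage3_vertex_stream(tris):
    """Flatten into the vertex stream handed on to shading/rasterization."""
    return [v for t in tris for v in t]

base = ((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0))
stream = stage3_vertex_stream(
    stage2_displace(stage1_tessellate(base), (0.0, 0.0, 1.0), 0.5))
```

After stage 3 the displaced mesh is an ordinary vertex stream, which is exactly why existing vertex and pixel shaders apply to it unchanged.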

Depth-Adaptive Tessellation

Depth-Adaptive Tessellation is an advanced tessellation scheme that tessellates meshes using multiple levels of detail (LODs) to maximize the geometry detail of a 3D scene while maintaining high levels of performance (figure 4). By using LOD-based tessellation, the graphics processor avoids unnecessarily processing triangles that would otherwise not contribute significantly to the visual quality of the final rendered image. Lower levels of detail are acceptable when the object being rendered is further back in the scene: because it appears smaller, it is rendered using fewer screen pixels. In fact, depending on the distance, increasing the number of triangles beyond a certain point may have little or no effect on an object's appearance (figure 4c). The ability to reduce the LOD for distant objects provides considerable savings on transformation, lighting, setup and rasterization, leaving a higher triangle budget for objects that are up close. LODs can also be applied to an object whose mesh spans a significant portion of the scene's depth on the display. A good example of such an object is a terrain. For such objects, Depth-Adaptive Tessellation ensures that no cracks are seen on the seams where changes in LODs occur.

Figure 4: a) Top view (wireframe) of the entire terrain with varying LODs with respect to the camera; the scene with Depth-Adaptive Tessellation uses 17,794 triangles, versus 165,150 without it. b) Camera view (wireframe): the portion of the terrain in the distance is rendered using a lower level of detail. Depth-Adaptive Tessellation dramatically reduces the number of triangles required in the scene while maintaining high geometry detail during rendering. c) Camera view (rendered): the difference in rendering between the two scenes is not noticeable.

Depth-Adaptive Tessellation also allows for the dynamic variation of an object's level of detail (LOD) as it moves closer to, or further away from, the foreground. The LOD determines the tessellation rate, or amount of tessellation, for each triangle in the mesh, based on one or a combination of several factors, such as distance. Each triangle is tessellated independently into the appropriate number of sub-triangles using its tessellation rate. As the position of the object changes onscreen, the tessellation rate of each triangle, or group of triangles, may change frequently. By supporting floating-point (non-integer) tessellation rates (1.2, 2.5, etc.), Depth-Adaptive Tessellation ensures that each triangle's tessellation rate changes smoothly (figure 4a). Without support for floating-point tessellation rates, the appearance of the object can change abruptly as portions of the object move from one level of detail to another; this results in artifacts referred to as popping. By providing smooth transitions between LODs, Depth-Adaptive Tessellation ensures that adjacent triangles with different tessellation rates do not have cracks or exhibit undesirable popping artifacts during dynamic tessellation (figure 4b). Highly flexible and scalable, Depth-Adaptive Tessellation is compatible with most higher-order surface types, including N-Patches. Moreover, Depth-Adaptive Tessellation has the added advantage of providing the best tessellation environment for displacement mapping, as explained in the next section. A patent-pending technology, Depth-Adaptive Tessellation offers the highest quality while providing maximum scalability, efficiency and performance.
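One way to picture floating-point tessellation rates is to derive the rate from distance with a continuous function, so that the rate never jumps between integer steps. The linear falloff and the parameter values below are assumptions for illustration, not Matrox's actual formula:

```python
# Hypothetical sketch of choosing a floating-point tessellation rate from
# camera distance: the rate falls off smoothly, so triangles whose distances
# differ slightly get nearly identical rates (no popping between LODs).
def tessellation_rate(distance, near=1.0, far=100.0,
                      max_rate=16.0, min_rate=1.0):
    """Linear falloff between near and far; clamped, non-integer result."""
    t = (distance - near) / (far - near)
    t = min(max(t, 0.0), 1.0)
    return max_rate + t * (min_rate - max_rate)

rates = [tessellation_rate(d) for d in (1.0, 50.0, 100.0, 500.0)]
```

Because the function is continuous, a triangle sliding from 50.0 to 50.1 units away changes its rate by only a tiny fraction; with integer-only rates it would hold steady and then jump a whole level at once, which is exactly the popping artifact described above.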

Mapping textures to geometry using Vertex Texturing

Vertex Texturing is a key aspect of displacement mapping. While the tessellation of a base mesh generates new vertices and their normals, the position of these new vertices can be calculated only by interpolating between the positions of the base vertices. Therefore, tessellation on its own cannot alter the underlying shape of the base mesh (figure 5b). Some higher-order surface descriptors, such as N-Patches (also known as PN triangles), can add curvature to a tessellated mesh to smooth the surface of an object. However, these descriptors cannot add any arbitrary detail to the mesh (figure 5c). In contrast, Vertex Texturing has no such limitations. By mapping textures onto geometry, the positions of the new vertices can be defined by the values, or different functions of the values, in the texture (figure 5d). In fact, these values can even be mapped onto the vertices after an N-Patch has been applied. HDM is one of the most compelling features enabled by Vertex Texturing. In the case of Hardware Displacement Mapping, the texture is referred to as a displacement map. Similar to texture mapping, each vertex (pixel) is associated with the displacement map (texture map) by means of a displacement texture coordinate (texture coordinate). Displacement texture coordinates, like texture coordinates, are assigned using a "u,v" coordinate system.

Figure 5: Vertex Texturing generates real geometric extrusions: a) base mesh, b) linearly tessellated mesh, c) N-Patch tessellated mesh, d) Vertex Textured mesh.

Displacement maps and filtering

As with texture mapping, a one-to-one correspondence between a vertex (screen pixel) and a displacement value (texel) is uncommon. Therefore, the displacement map needs to be up-sampled (up-scaled) or down-sampled (down-scaled) when it is mapped onto its associated vertices.
When scaling textures for texture mapping, the use of various filtering techniques, such as bilinear and trilinear filtering, is common. Similarly, these filtering methods are used with displacement maps to calculate the most appropriate displacement value for each vertex.
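Bilinear filtering of a displacement map works exactly as it does for textures: blend the four surrounding texels by the fractional part of the sampling coordinate. A minimal sketch (the edge-clamping behavior is an assumption of this example):

```python
# Hypothetical sketch of bilinear filtering on a displacement map: the four
# surrounding texels are blended by the fractional part of the coordinate.
def sample_bilinear(height_map, u, v):
    """u, v in [0, 1]; height_map is a 2D list of heights."""
    h, w = len(height_map), len(height_map[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)  # clamp at the edges
    fx, fy = x - x0, y - y0
    top = height_map[y0][x0] * (1 - fx) + height_map[y0][x1] * fx
    bot = height_map[y1][x0] * (1 - fx) + height_map[y1][x1] * fx
    return top * (1 - fy) + bot * fy

hmap = [[0.0, 1.0], [1.0, 2.0]]
center = sample_bilinear(hmap, 0.5, 0.5)  # average of all four texels
```

Sampling between texels yields a smoothly varying height, so displaced vertices that fall between map entries do not produce stair-step artifacts on the surface.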

Displacement maps and mip-mapping

When mapping the same texture onto triangles with different resolutions (numbers of pixels), it is common to have that texture available in different resolutions, or mip-maps. Mip-maps are pre-filtered, downscaled versions of the original texture. During texture mapping, texels are fetched from the mip-map with the nearest resolution, so that very little up-sampling or down-sampling of the texture is required. This process is referred to as mip-mapping. Similarly, as shown in figure 6, displacement mapping accommodates dynamic level of detail (LOD) tessellation of meshes by mip-mapping the displacement maps: a mesh, or the portion of a mesh, with a low level of detail will use a lower mip-map of the displacement map, and vice versa. Mip-mapping of displacement maps is especially useful for Depth-Adaptive Tessellation, where the level of detail of a mesh can vary across different portions of that mesh. When this is the case, trilinear filtering can be applied on the two adjacent mip-maps of the displacement map when mapping the displacements. This ensures that no surface jittering occurs on the mesh where changes in mip-map levels occur. The ability to use mip-mapping and advanced filtering methods with displacement maps makes HDM one of the most powerful and scalable surface generation techniques available. Moreover, this technique provides a convenient way of generating procedural geometry by simply modifying a displacement map at runtime (procedural displacement mapping).

Figure 6: Mip-mapping of displacement maps is similar to mip-mapping of texture maps: lower LODs use lower mips of the displacement map, just as objects in the distance use lower mips of the texture map.
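Building the mip chain for a displacement map can be sketched by repeatedly averaging 2x2 texel blocks. This box-filter downsampling is the common approach for mip generation, though the paper does not specify which filter Matrox uses:

```python
# Hypothetical sketch of mip-mapping a displacement map: each mip level
# halves the resolution by averaging 2x2 texel blocks. A low-LOD mesh
# region would then fetch heights from a lower (smaller) mip level.
def build_mips(height_map):
    """Assumes a square map whose side is a power of two."""
    mips = [height_map]
    while len(mips[-1]) > 1:
        prev = mips[-1]
        n = len(prev) // 2
        mips.append([[(prev[2*y][2*x] + prev[2*y][2*x+1] +
                       prev[2*y+1][2*x] + prev[2*y+1][2*x+1]) / 4.0
                      for x in range(n)] for y in range(n)])
    return mips

base = [[float(x + y) for x in range(4)] for y in range(4)]  # 4x4 map
mips = build_mips(base)  # resolutions: 4x4, 2x2, 1x1
```

Because each level is pre-filtered, a sparsely tessellated region reads a mip close to its own vertex density instead of aliasing against the full-resolution map; trilinear filtering then blends the two nearest levels where the LOD changes.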

Under NDA until May 14, 2002

Versatility of HDM

HDM works with both static and skinned meshes. Figure 7 shows how displacement mapping can be used with skinned characters to provide high levels of detail on characters. Moreover, different displacement maps can be used on the same base mesh to render distinctly different objects. HDM fits seamlessly into the 3D graphics pipeline and is fully compatible with vertex shaders and pixel shaders. Effects using vertex and pixel shaders can easily be layered onto displacement-mapped geometry.

Figure 7: Displacement mapping is also suitable for objects with dynamic skinned meshes, such as characters (base mesh + displacement map = displaced mesh, then final render).

Because displacement mapping uses 2D bitmaps to specify the shapes of objects, a large amount of readily available data can be used to generate new 3D objects and scenes in applications, especially games. As an example, U.S. Government geological survey data is stored as elevation information that can be converted directly to displacement maps. This geological survey data represents the topography of regions and terrains around the world and can be used to create virtually identical 3D terrains. Figure 8 shows an actual geological map of the Grand Canyon and its 3D version generated using HDM.

Figure 8: The Grand Canyon rendered using HDM, together with its displacement map.

Compact representation of high detail geometry with HDM

HDM is one of the best surface descriptors for high-resolution geometry. It provides the most compact and simple way of representing data for the dynamic generation of high detail geometry. With HDM, high-resolution geometrical data is encoded into a compact displacement map. Unlike raw vertex data, this encoded data does not need to include the direction of displacement, connectivity information, colors or texture coordinates; these can be interpolated from the base mesh. Encoding geometry data into displacement maps also reduces the storage requirements on media that hold the models and lessens the burden on system memory and local video memory during runtime. Furthermore, developers can create multiple displacement maps for the same base mesh to produce models that differ significantly from one another (see figure 7). As seen in figure 3, a 4 KB displacement map (64 x 64 x 8 bit) can produce a detailed terrain from a flat base mesh of just 32 triangles. In fact, high levels of detail can also be obtained from a base mesh of as little as two triangles (see figure 5). As 3D scenes become more complex, the amount of data required to represent the increased geometry can be quite significant.
This data needs to be transferred from system memory to the graphics hardware. Additionally, geometry data competes with other data, such as textures and instructions, for storage and bandwidth. Both system memory bandwidth and AGP bandwidth are limited and can often be a bottleneck for performance. By compressing 3D models into a low detail base mesh and a displacement map, HDM also helps reduce the effect of such bandwidth bottlenecks while actually increasing the amount of detail when an object is rendered.
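The storage savings are easy to check with a little arithmetic. The 64 x 64 x 8-bit map from figure 3 is exactly 4 KB; the raw-vertex layout below (position, normal and uv as 32-bit floats) is an assumed typical format used only for comparison, not one specified by the paper:

```python
# Checking the paper's numbers: a 64 x 64 x 8-bit displacement map occupies
# 4 KB, while storing the same 64 x 64 grid as raw vertices would take far
# more under a common (assumed) vertex layout.
map_bytes = 64 * 64 * 1             # one byte (8 bits) per height value
raw_vertex_bytes = (3 + 3 + 2) * 4  # pos + normal + uv, float32 each
raw_mesh_bytes = 64 * 64 * raw_vertex_bytes
ratio = raw_mesh_bytes // map_bytes
```

Under these assumptions the displacement map is 32 times smaller than the equivalent raw vertex grid, before even counting the connectivity (index) data that the raw mesh would also need.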

HDM versus bump mapping

Displacement mapping, while conceptually sharing certain similarities with bump mapping, is a far superior technique and produces results that cannot be obtained using bump mapping. First implemented in graphics hardware by Matrox in 1999, bump mapping uses a bump map on textures to produce an illusion of depth and detail on the surface of objects. The values of the bump map are used to perturb the fetches into texture and environment maps during texture mapping and environment mapping. Even though bump mapping can produce certain compelling 3D effects, it merely creates an illusion and does not actually generate new geometry (see figure 9). Other bump mapping techniques produce similar results and are, therefore, not comparable to HDM.

Figure 9: While bump mapping creates the illusion of depth, HDM generates real new geometry (triangle mesh, bump mapping, displacement mapping).

HDM as an industry standard

As mentioned previously, Microsoft has included Hardware Displacement Mapping (HDM) as an industry standard in its next-generation DirectX API. This helps define a common HDM interface for all developers and facilitates the adoption of this technology. Additionally, Matrox has developed tools which enable developers to author their content for HDM. A thorough software development kit (SDK) is also available for qualified software developers to help them implement displacement mapping into their engines. To obtain a copy of the SDK, contact devrel@matrox.com.
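The bump-versus-displacement distinction above can be made concrete in a small sketch: bump mapping perturbs only the normal used for lighting, so the geometry (and silhouette) never changes, while displacement mapping moves the vertex itself. The vectors and perturbation values here are arbitrary illustrations:

```python
# Hypothetical sketch contrasting the two techniques on a single vertex:
# bump mapping only perturbs the normal used for lighting, while
# displacement mapping moves the vertex itself (changing the silhouette).
def bump_shade(normal, perturbation, light_dir):
    """Lambertian shading with a perturbed normal; position never changes."""
    n = tuple(c + p for c, p in zip(normal, perturbation))
    length = sum(c * c for c in n) ** 0.5
    n = tuple(c / length for c in n)
    return max(0.0, sum(a * b for a, b in zip(n, light_dir)))

def displace_vertex(position, normal, height):
    """Displacement actually moves the vertex along its normal."""
    return tuple(p + n * height for p, n in zip(position, normal))

pos, n, light = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)
lit = bump_shade(n, (0.2, 0.0, 0.0), light)  # shading changes...
moved = displace_vertex(pos, n, 0.5)         # ...but only this moves geometry
```

The shaded intensity changes as if the surface were tilted, yet `pos` itself is untouched; only `displace_vertex` produces a genuinely different surface.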

Conclusion

Hardware Displacement Mapping (HDM) is an extremely powerful technique that can add an enormous amount of detail to any type of mesh by generating additional geometry. HDM encodes high-resolution data into compact 2D displacement textures, thereby providing a simple and compact representation for high detail geometrical data. The combination of Depth-Adaptive Tessellation and Vertex Texturing provides the most flexible technique for implementing level of detail tessellation in a 3D scene, while maximizing quality, efficiency and performance. Many developers are excited about using this technology to provide the most realistic 3D experience to end users.

Benefits of HDM include:
- Unprecedented detail and realism in 3D scenes with real geometric extrusion
- Simple and compact representation of high-resolution geometry data
- Maximum scalability and efficiency through Depth-Adaptive Tessellation and mip-mapping of displacement maps
- Suitability for a wide variety of 3D meshes, including static and dynamic meshes
- Seamless integration into the 3D pipeline along with vertex and pixel shaders
- Definition as an industry standard through future versions of Microsoft's DirectX API