Parallax Occlusion in Direct3D 11

Frank Luna
February 11, 2012
www.d3dcoder.net

Introduction

Introduction to 3D Game Programming with DirectX 11 has a chapter explaining normal mapping and displacement mapping (implemented with hardware tessellation). We saw that we could improve upon normal mapping by tessellating the geometry and displacing it with a heightmap to geometrically model the bumps and crevices of the surface. However, an application may be required to support DirectX 10 for systems that do not have DirectX 11 capable hardware; therefore, we need a fallback technique for when we do not have access to hardware tessellation. Parallax occlusion mapping is a technique that gives results similar to tessellation-based displacement mapping, and it can be implemented in DirectX 10; see Figure 1 for a comparison of the techniques.

Figure 1: Parallax occlusion mapping is a per-pixel technique that does not require a high triangle count, but gives results comparable to tessellation-based displacement mapping.

1 Conceptual Overview

The basic idea is illustrated in Figure 2. If the surface we are modeling is not truly planar, but instead defined by a heightmap, then the point the eye sees is not the point p on the polygon plane, but the point q where the view ray intersects the heightmap. Therefore, when lighting and texturing the polygon point we should not use the texture coordinates and normal at p, but instead the texture coordinates and normal associated with the intersection point q. If we are using normal mapping, then we just need the texture coordinates at q, from which we can sample the normal map to get the normal vector at q. Thus, the crux of the algorithm is to find the parallax offset vector v that tells us how much to offset the texture coordinates at p to obtain the texture coordinates at q. As we will see in later sections, this is accomplished by tracing the view ray through the heightmap.

Figure 2: The point the eye sees is not the point p on the polygon plane, but the point q where the view ray intersects the heightmap. The parallax offset vector v tells us how much to offset the texture coordinates at p to obtain the texture coordinates at q.

2 Computing the Max Parallax Offset Vector

We need an upper limit on how large the parallax offset vector could possibly be. As with our tessellation displacement mapping demo, we use the convention that a height value of z = 0 coincides with the polygon surface, and z = -1 represents the lowest height (most inward displacement). (Recall that the heightmap values are given in a normalized range [0, 1], which are then scaled by a height scale b to world coordinate values.) Figure 3 shows that for a view ray with starting position p at the polygon surface with z = 0, it must intersect the heightmap by the time the ray reaches the lowest height. Assuming the view vector v (pointing from the eye toward the surface) is normalized, there exists a t_max such that the point

    p + t_max * v

has z-coordinate -b.

In normalized height coordinates, the difference between the polygon surface (z = 0) and the lowest height (z = -1) is 1; scaled to world coordinates, the difference is b. Therefore, t_max satisfies t_max * v_z = -b, so

    t_max = -b / v_z

So, the view ray reaching the height z = -b yields the maximum parallax offset:

    v_max = t_max * (v_x, v_y) = -(b / v_z) * (v_x, v_y)

Figure 3: The maximum parallax offset vector v_max corresponds to the point where the view ray pierces the lowest possible height.

3 Tex-Coord per World Unit

So far we have made the assumption that we can use v_max as a texture coordinate offset. However, if we rotate the normalized view ray from world space to tangent space, the view ray is still at the world scale. Moreover, in Section 2 we computed the intersection of the ray with the world scaled height -b. Therefore, t_max is also in the world scale, and so is v_max. But to use v_max as a texture coordinate offset, we need it in the texture scale. So we need to convert units:

    v_max,tex = v_max * (1 texture space unit / s world space units)

That is to say, we need to know how many world space units (s) equal one texture space unit. Suppose we have a 20x20 grid in world units, and we map a texture onto the grid with no tiling. Then 1 texture space unit equals 20 world space units, and

    v_max,tex = v_max / 20

As a second example, suppose we have a 20x20 grid in world units, and we map a texture onto the grid with 5x5 tiling. Then 5 texture space units equal 20 world space units, or 1 texture space unit equals 4 world space units, and

    v_max,tex = v_max / 4

So to get everything in texture space coordinates, so that we can use v_max,tex as a texture coordinate offset, we need to revise our equation:

    v_max,tex = -(b / s) * (v_x, v_y) / v_z

In code, this calculation needs to be computed per-pixel. It is unnecessary, however, to perform the division b / s per-pixel. So we can define

    b' = b / s

compute b' in the application code, and set it as a constant buffer property. Then our pixel-shader code can continue to use the original formula:

    v_max,tex = -(b' / v_z) * (v_x, v_y)

In practice, it might not be easy to determine s. An easier way would be to just expose a slider to an artist and let them tweak b' interactively.

4 Intersecting the Heightmap

To determine the point at which the ray intersects the heightmap, we linearly march along the ray with a uniform step size, and sample the heightmap along the ray (see Figure 4). When the sampled heightmap value is greater than the ray's height value (in tangent space, the z-axis is "up"), it means we have crossed the heightmap.
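As a quick sanity check of the unit conversion above, here is a small CPU-side sketch in Python (illustrative only; the helper names are hypothetical and not part of the demo code):

```python
def world_to_texture_scale(grid_world_size, tiles):
    # s = world space units per texture space unit (hypothetical helper).
    # E.g., a 20x20 world-unit grid with 5x5 tiling gives s = 4.
    return grid_world_size / tiles

def max_parallax_offset(view_dir_ts, height_scale_world, s):
    # v_max,tex = -(b / s) * (v_x, v_y) / v_z, from Sections 2-3.
    # view_dir_ts: normalized tangent-space view direction, v_z < 0.
    vx, vy, vz = view_dir_ts
    b_prime = height_scale_world / s  # fold the unit conversion into b'
    return (-vx * b_prime / vz, -vy * b_prime / vz)
```

For the two grid examples above, world_to_texture_scale(20, 1) gives s = 20 and world_to_texture_scale(20, 5) gives s = 4; b' would then be computed once in application code, exactly as described for the constant buffer.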

Figure 4: Ray marching with uniform step size. We cross the heightmap the first time the sampled height is greater than the ray's height.

Note that we ray march in the normalized heightmap space [0, 1]. So if we are taking N samples, we must step a vertical distance of 1/N with each step so that the ray travels a vertical distance of 1 by the Nth sample. Once we find the interval where the ray crosses the heightmap, we approximate the heightmap as a piecewise linear function and do a ray/line segment intersection test to find the intersection point. Note that this is a 2D problem, because we are looking at the cross plane that is orthogonal to the polygon plane and contains the view ray. Figure 5 shows that we can represent the view ray and the heightmap segment over the crossing interval by the parametric equations:

    ray(t)    = (x_{i-1} + t * Δx, z_{i-1} + t * (z_i - z_{i-1}))
    height(t) = (x_{i-1} + t * Δx, h_{i-1} + t * (h_i - h_{i-1}))

Figure 5: The rays intersect at parameter t. Observe that in this particular construction, the two segments intersect at the same parameter t, because they share the same first coordinate for every t (in general this is not true). Therefore, we can set the equations equal to each other and solve for t:

In particular, the equation of the second coordinate implies:

    z_{i-1} + t * (z_i - z_{i-1}) = h_{i-1} + t * (h_i - h_{i-1})

    t = (h_{i-1} - z_{i-1}) / (h_{i-1} - h_i + z_i - z_{i-1})

The parameter t gives us the percentage through the interval where the intersection occurs. If the texture-space direction and length of the interval are described by the step vector Δv (texStep in the code), then the parallax offset vector is given by:

    v = v_prev + t * Δv

where v_prev (prevTexOffset in the code) is the accumulated texture coordinate offset at the start of the crossing interval.

5 Adaptive Iteration Count

Figure 6 shows that glancing view rays require more samples than head-on view rays, since the distance traveled is longer. We estimate the sample count by interpolating between minimum and maximum sample count constants, based on the angle between the vector from the surface to the eye and the surface normal vector:

    int gMinSampleCount = 8;
    int gMaxSampleCount = 64;

    int sampleCount = (int)lerp(gMaxSampleCount, gMinSampleCount,
                                dot(toEye, pin.NormalW));

For head-on angles, the dot product will be 1, and the number of samples will be gMinSampleCount. For glancing angles (view ray nearly parallel to the surface), the dot product will be 0, and the number of samples will be gMaxSampleCount.

Figure 6: Glancing view rays travel a farther distance than head-on view rays. Therefore, we require more samples at glancing angles to get an accurate intersection test.
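The intersection search of Sections 4 and 5 can also be mirrored on the CPU. The following Python sketch (illustrative only, not part of the effect file) closely follows the shader loop; sample_height is a stand-in for the heightmap texture fetch, and the tangent-space view direction plays the role of viewDirTS:

```python
def parallax_offset(view_dir_ts, height_scale, sample_height,
                    min_samples=8, max_samples=64):
    """CPU sketch of the parallax occlusion ray march.

    view_dir_ts: normalized tangent-space view direction (x, y, z), z < 0,
                 pointing from the eye toward the surface.
    sample_height: function (du, dv) -> height in [0, 1], standing in
                   for the shader's heightmap fetch.
    Returns the (du, dv) texture coordinate offset.
    """
    vx, vy, vz = view_dir_ts

    # Sections 2-3: maximum parallax offset.  height_scale plays the role
    # of b', i.e. the world-to-texture conversion is already folded in.
    max_dx = -vx * height_scale / vz
    max_dy = -vy * height_scale / vz

    # Section 5: adaptive sample count; dot(toEye, N) = -v_z in tangent space.
    cos_angle = -vz
    sample_count = int(max_samples + cos_angle * (min_samples - max_samples))

    z_step = 1.0 / sample_count
    tex_step = (max_dx * z_step, max_dy * z_step)

    prev_tex = (0.0, 0.0)
    curr_tex = (0.0, 0.0)
    prev_ray_z, curr_ray_z = 1.0, 1.0 - z_step
    prev_h = 0.0

    for _ in range(sample_count + 1):
        curr_h = sample_height(*curr_tex)
        if curr_h > curr_ray_z:
            # Crossed the height profile: piecewise-linear refinement
            # (Section 4), t = (h_prev - z_prev) / (h_prev - h + z - z_prev).
            t = (prev_h - prev_ray_z) / (prev_h - curr_h +
                                         curr_ray_z - prev_ray_z)
            return (prev_tex[0] + t * tex_step[0],
                    prev_tex[1] + t * tex_step[1])
        prev_tex, prev_h, prev_ray_z = curr_tex, curr_h, curr_ray_z
        curr_tex = (curr_tex[0] + tex_step[0], curr_tex[1] + tex_step[1])
        curr_ray_z -= z_step

    return (0.0, 0.0)  # no crossing found; keep the original coordinates
```

With a flat height field of 0.5 and a 45-degree view, the returned offset comes out to roughly half the maximum parallax offset, as the geometry predicts; a straight-down view returns a zero offset.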

6 Code

Our parallax occlusion effect file is given below:

#include "LightHelper.fx"

cbuffer cbPerFrame
{
    DirectionalLight gDirLights[3];
    float3 gEyePosW;

    float gFogStart;
    float gFogRange;
    float4 gFogColor;
};

cbuffer cbPerObject
{
    float4x4 gWorld;
    float4x4 gWorldInvTranspose;
    float4x4 gWorldViewProj;
    float4x4 gTexTransform;
    Material gMaterial;

    float gHeightScale;
    int gMinSampleCount;
    int gMaxSampleCount;
};

// Nonnumeric values cannot be added to a cbuffer.
Texture2D gDiffuseMap;
Texture2D gNormalMap;
TextureCube gCubeMap;

SamplerState samLinear
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = WRAP;
    AddressV = WRAP;
};

struct VertexIn
{
    float3 PosL     : POSITION;
    float3 NormalL  : NORMAL;
    float2 Tex      : TEXCOORD;
    float3 TangentL : TANGENT;
};

struct VertexOut
{
    float4 PosH     : SV_POSITION;
    float3 PosW     : POSITION;
    float3 NormalW  : NORMAL;
    float3 TangentW : TANGENT;
    float2 Tex      : TEXCOORD;
};

VertexOut VS(VertexIn vin)
{

    VertexOut vout;

    // Transform to world space.
    vout.PosW     = mul(float4(vin.PosL, 1.0f), gWorld).xyz;
    vout.NormalW  = mul(vin.NormalL, (float3x3)gWorldInvTranspose);
    vout.TangentW = mul(vin.TangentL, (float3x3)gWorld);

    // Transform to homogeneous clip space.
    vout.PosH = mul(float4(vin.PosL, 1.0f), gWorldViewProj);

    // Output vertex attributes for interpolation across triangle.
    vout.Tex = mul(float4(vin.Tex, 0.0f, 1.0f), gTexTransform).xy;

    return vout;
}

float4 PS(VertexOut pin,
          uniform int gLightCount,
          uniform bool gUseTexure,
          uniform bool gAlphaClip,
          uniform bool gFogEnabled,
          uniform bool gReflectionEnabled) : SV_Target
{
    // Interpolating normal can unnormalize it, so normalize it.
    pin.NormalW = normalize(pin.NormalW);

    // The toEye vector is used in lighting.
    float3 toEye = gEyePosW - pin.PosW;

    // Cache the distance to the eye from this surface point.
    float distToEye = length(toEye);

    // Normalize.
    toEye /= distToEye;

    //
    // Parallax Occlusion calculations to find the texture coords to use.
    //

    float3 viewDirW = -toEye;

    // Build orthonormal basis.
    float3 N = pin.NormalW;
    float3 T = normalize(pin.TangentW - dot(pin.TangentW, N)*N);
    float3 B = cross(N, T);

    float3x3 toTangent = transpose( float3x3(T, B, N) );

    float3 viewDirTS = mul(viewDirW, toTangent);

    float2 maxParallaxOffset = -viewDirTS.xy*gHeightScale/viewDirTS.z;

    // Vary number of samples based on view angle between the eye and
    // the surface normal.  (Head-on angles require fewer samples than
    // glancing angles.)
    int sampleCount = (int)lerp(gMaxSampleCount, gMinSampleCount,
                                dot(toEye, pin.NormalW));

    float zStep = 1.0f / (float)sampleCount;

    float2 texStep = maxParallaxOffset * zStep;

    // Precompute texture gradients since we cannot compute texture
    // gradients in a loop.  Texture gradients are used to select the right
    // mipmap level when sampling textures.  Then we use
    // Texture2D.SampleGrad() instead of Texture2D.Sample().
    float2 dx = ddx( pin.Tex );
    float2 dy = ddy( pin.Tex );

    int sampleIndex = 0;
    float2 currTexOffset = 0;
    float2 prevTexOffset = 0;
    float2 finalTexOffset = 0;
    float currRayZ = 1.0f - zStep;
    float prevRayZ = 1.0f;
    float currHeight = 0.0f;
    float prevHeight = 0.0f;

    // Ray trace the heightfield.
    while( sampleIndex < sampleCount + 1 )
    {
        currHeight = gNormalMap.SampleGrad(samLinear,
            pin.Tex + currTexOffset, dx, dy).a;

        // Did we cross the height profile?
        if(currHeight > currRayZ)
        {
            // Do ray/line segment intersection and compute final tex offset.
            float t = (prevHeight - prevRayZ) /
                      (prevHeight - currHeight + currRayZ - prevRayZ);

            finalTexOffset = prevTexOffset + t * texStep;

            // Exit loop.
            sampleIndex = sampleCount + 1;
        }
        else
        {
            ++sampleIndex;

            prevTexOffset = currTexOffset;
            prevRayZ = currRayZ;
            prevHeight = currHeight;

            currTexOffset += texStep;

            // Negative because we are going "deeper" into the surface.
            currRayZ -= zStep;
        }
    }

    // Use these texture coordinates for subsequent texture fetches
    // (color map, normal map, etc.).
    float2 parallaxTex = pin.Tex + finalTexOffset;

    //
    // Texturing
    //

    // Default to multiplicative identity.
    float4 texColor = float4(1, 1, 1, 1);
    if(gUseTexure)
    {
        // Sample texture.
        texColor = gDiffuseMap.Sample( samLinear, parallaxTex );

        if(gAlphaClip)
        {
            // Discard pixel if texture alpha < 0.1.  Note that we do this
            // test as soon as possible so that we can potentially exit the
            // shader early, thereby skipping the rest of the shader code.
            clip(texColor.a - 0.1f);
        }
    }

    //
    // Normal mapping
    //

    float3 normalMapSample = gNormalMap.Sample(samLinear, parallaxTex).rgb;
    float3 bumpedNormalW = NormalSampleToWorldSpace(normalMapSample,
        pin.NormalW, pin.TangentW);

    //
    // Lighting.
    //

    float4 litColor = texColor;
    if( gLightCount > 0 )
    {
        // Start with a sum of zero.
        float4 ambient = float4(0.0f, 0.0f, 0.0f, 0.0f);
        float4 diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
        float4 spec    = float4(0.0f, 0.0f, 0.0f, 0.0f);

        // Sum the light contribution from each light source.
        [unroll]
        for(int i = 0; i < gLightCount; ++i)
        {
            float4 A, D, S;
            ComputeDirectionalLight(gMaterial, gDirLights[i],
                bumpedNormalW, toEye, A, D, S);

            ambient += A;
            diffuse += D;
            spec    += S;
        }

        litColor = texColor*(ambient + diffuse) + spec;

        if( gReflectionEnabled )
        {
            float3 incident = -toEye;
            float3 reflectionVector = reflect(incident, bumpedNormalW);
            float4 reflectionColor = gCubeMap.Sample(
                samLinear, reflectionVector);

            litColor += gMaterial.Reflect*reflectionColor;
        }
    }

    //
    // Fogging
    //

    if( gFogEnabled )
    {
        float fogLerp = saturate( (distToEye - gFogStart) / gFogRange );

        // Blend the fog color and the lit color.
        litColor = lerp(litColor, gFogColor, fogLerp);
    }

    // Common to take alpha from diffuse material and texture.
    litColor.a = gMaterial.Diffuse.a * texColor.a;

    return litColor;
}

7 Aliasing Issues

Parallax occlusion mapping has aliasing problems, as Figure 7 shows. Due to the finite step size, the ray sometimes misses peaks of the heightmap. Increasing the sampling rate (i.e., decreasing the step size) helps, but is prohibitively expensive. We note that tessellation-based displacement mapping does not have this problem.

Figure 7: Observe the sampling artifacts.

8 Extensions and References

[Tatarchuk06] has a detailed description of the algorithm, extends it to support soft shadows and an LOD system, and gives advice for artists authoring heightmaps. [Watt05] devotes a chapter to the algorithm. [Drobot09] describes generating a quadtree structure of the heightmap, stored in a texture, to improve the performance of the ray/heightmap intersection test.

[Drobot09] Drobot, Michal. "Quadtree Displacement Mapping with Height Blending," GDC Europe 2009. (http://www.drobot.org/pub/m_drobot_programming_quadtree%20displacement%20mapping.pdf)

[Tatarchuk06] Tatarchuk, Natalya. "Practical Parallax Occlusion Mapping with Approximate Soft Shadows for Detailed Surface Rendering," ShaderX5: Advanced Rendering Techniques. Charles River Media, Inc., 2006.

[Watt05] Watt, Alan, and Fabio Policarpo. Advanced Game Development with Programmable Graphics Hardware. A K Peters, Ltd., 2005.