Terrain rendering (part 1) Due: Monday, March 10, 10pm


CMSC 23700 Winter 2014
Introduction to Computer Graphics

Project 4
February 25

1 Summary

The final two projects involve rendering large-scale outdoor scenes. The work is split into two parts. For the first part (Project 4), you will implement basic terrain rendering using the so-called chunked level-of-detail algorithm. For the second part (Project 5), you will be responsible for enhancing your implementation with various special effects of your choosing.

You may work in a group of two or three students for this project (note that CMSC 33700 students must work individually). There is a web interface for creating groups at

    https://work-groups.cs.uchicago.edu/pick_assignment/15334

It works by one person logging in and then sending invites to the other group members. After login, the user will see a page where they can choose the assignment (there is only one) and then their partner(s). All invitations to a group must be acknowledged on the site before the group's repository is created. Any issues should be reported to techstaff@cs.uchicago.edu.

2 Heightfields

Heightfields are a special case of mesh data structure in which only one number, the height, is stored per vertex (heightfield data is often stored as a greyscale image); the other two coordinates are implicit in the grid position. If s_h is the horizontal scale, s_v the vertical scale, and H a heightfield, then the 3D coordinate of the vertex in row i and column j is

    ( s_h * j,  s_v * H[i,j],  s_h * i )

(assuming that the upper-left corner of the heightfield has X and Z coordinates of 0). By convention, the top of the heightfield is north; thus, assuming a right-handed coordinate system, the positive X-axis points east, the positive Y-axis points up, and the positive Z-axis points south. Because of their regular structure, heightfields are trivial to triangulate; for example, Figure 1 gives two possible triangulations of a 5 x 5 grid.
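The grid-to-world mapping just described can be sketched in C++ (a minimal illustration; the struct and function names are ours, not part of the project's sample code):

```cpp
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };

// Map the height sample at (row i, column j) to a 3D position.
// H is stored row-major with the upper-left (north-west) corner at X = Z = 0;
// hScale and vScale are the horizontal and vertical scales.
Vec3 heightfieldVertex (float hScale, float vScale,
                        const std::vector<std::vector<float>> &H,
                        int i, int j)
{
    return Vec3{
        hScale * j,        // X grows to the east (columns)
        vScale * H[i][j],  // Y is up
        hScale * i         // Z grows to the south (rows)
    };
}
```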
Since naïve rendering of a heightfield mesh requires excessive resources (even a relatively small 2049 x 2049 grid has over eight million triangles), many algorithms have been developed to reduce the number of triangles rendered in any given frame. The goal is to pick a set of triangles that both minimizes rendering requirements and minimizes the screen-space error. Screen-space error is a metric that approximates the difference between the screen-space projection of the higher-detail geometry and the screen-space projection of the lower-detail geometry. For this project, you will implement the rendering part of a position-dependent algorithm called Chunked LOD.
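As a concrete illustration of the metric, here is a minimal C++ sketch of the conservative screen-space error bound used by Chunked LOD (the bound itself is given later in this handout; the function name and parameter choices here are ours):

```cpp
#include <cassert>
#include <cmath>

// Conservative screen-space error (in pixels) for a chunk with maximum
// geometric error delta (world units), viewed from distance D, through a
// horizontal field of view fov (radians) on a viewport that is
// viewportWidth pixels wide:
//     rho = (viewportWidth / (2 tan(fov/2))) * (delta / D)
float screenSpaceError (float delta, float D,
                        float viewportWidth, float fov)
{
    float pixelsPerUnit = viewportWidth / (2.0f * std::tan(fov * 0.5f));
    return pixelsPerUnit * (delta / D);
}
```

A chunk's level of detail is acceptable when this value falls below the user's error tolerance; otherwise the renderer refines to the chunk's children.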

Figure 1: Possible heightfield triangulations

3 Chunked LOD

Figure 2: Chunking of a heightfield (levels 2, 1, and 0)

The Chunked Level-of-Detail (or Chunked LOD) algorithm was developed by Thatcher Ulrich. This algorithm uses precomputed meshes (called chunks) at varying levels of detail. For example, Figure 2 shows a 33 x 33 heightfield with three levels of detail (note that level 0 is the least detailed level). The chunks of a heightmap form a complete quadtree. Each chunk has an associated geometric error, which is used to determine the screen-space error. Figure 3 shows how the chunks at different levels of detail are combined to render the heightfield. Because chunks at different levels of detail are not guaranteed to match where they meet, the resulting rendering is likely to suffer from T-junctions (these are circled in red in Figure 3) that can cause visible cracks. To avoid rendering artifacts that might be caused by T-junctions, each chunk has a skirt, which is a strip of triangles around the edge of the chunk extending down below the surface. The skirts are part of the precomputed triangle meshes.
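The chunk-selection logic implied above (draw a chunk when its screen-space error is within tolerance, otherwise descend to its children) can be sketched as a top-down traversal of the chunk quadtree. This is only an illustration; the Chunk layout and interface below are ours, not the sample code's actual data structures:

```cpp
#include <cassert>
#include <vector>

struct Chunk {
    float geomError;  // precomputed maximum geometric error (delta)
    Chunk *kids[4];   // four children, or all nullptr at the finest level
};

// pixelsPerUnit = viewportWidth / (2 tan(fov/2)).  In a real renderer,
// dist would be recomputed per chunk from the viewer to the chunk's
// bounding box; here one value is passed in to keep the sketch small.
void selectChunks (Chunk *c, float dist, float tolerance,
                   float pixelsPerUnit, std::vector<Chunk*> &toRender)
{
    float rho = pixelsPerUnit * c->geomError / dist;
    if (rho <= tolerance || c->kids[0] == nullptr) {
        toRender.push_back(c);    // error small enough (or leaf): draw it
    } else {
        for (Chunk *k : c->kids)  // otherwise refine into the children
            selectChunks(k, dist, tolerance, pixelsPerUnit, toRender);
    }
}
```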

Figure 3: Rendering the heightfield from Figure 2

As mentioned above, which level of detail to render for a chunk is determined by the screen-space error of the chunk. For each chunk, we have precomputed the maximum geometric error (call it δ). Then, the screen-space error ρ can be conservatively approximated by the equation

    ρ = (viewport-width / (2 tan(fov / 2))) * (δ / D)

where D is the distance from the viewer to the bounding box of the chunk and fov is the horizontal field of view. Better approximations can be had by taking into account the viewing angle (for example, if you are looking down on the terrain from above, then the projection of the vertical error will be smaller).

4 Map format

The maps for this project are not restricted to being square. A map covers a w*2^m by h*2^n unit rectangular area, which is represented by a (w*2^m + 1) by (h*2^n + 1) array of height samples. This array is divided into a grid of square cells, each of which covers (2^k + 1) by (2^k + 1) height samples. For example, Figure 4 shows a map that is 3*2^11 by 2^12 units in area and is divided into 3 x 2 square cells, each of which is 2^11 units wide. The cells of the grid are indexed by (row, column) pairs, with the north-west cell having index (0, 0).

Figure 4: Map grid (a 6144 x 4096 map divided into cells (0,0) through (1,2), each 2048 units wide)

A map is represented in the file system as a directory. In the directory is a JSON file map.json that documents various properties of the map. For example, here is the map.json file from a small example map:

    {
      "name" : "Grand Canyon",
      "h-scale" : 60,
      "v-scale" : 10.004,
      "base-elev" : 84,
      "min-elev" : 414.05,
      "max-elev" : 2154.75,
      "width" : 4096,
      "height" : 2048,
      "cell-size" : 2048,
      "color-map" : true,
      "normal-map" : true,
      "water-map" : false,
      "grid" : [ "00_00", "00_01" ]
    }

The map information includes the horizontal and vertical scales (i.e., meters per unit); the base, minimum, and maximum elevation in meters; the total width and height of the map in heightfield samples;(1) the width of a cell; information about what additional data is available (color-map texture, normal-map texture, and water mask); and an array of the cell names in row-major order.

Each cell has its own subdirectory, which contains the data files for that cell. These include:

    hf.png      the cell's raw heightfield data represented as a 16-bit grayscale PNG file.
    hf.cell     the cell's heightfield organized as level-of-detail chunks.
    color.tqt   a texture quadtree for the cell's color-map texture.
    norm.tqt    a texture quadtree for the cell's normal-map texture.
    water.png   an 8-bit black-and-white PNG file that marks which of the cell's vertices
                are water (black) and which are land (white).

(1) Note that the width and height are one less than the number of vertices; i.e., in this example, the number of vertices is 4097 x 2049.

Of these files, only the first two are guaranteed to be present. The sample code includes support for reading the map.json file, as well as the other data formats (.tqt and .cell files). Because the map datasets are quite large (in the 100s of megabytes), we have arranged for them to be available on the CSIL Macs in the /data directory. You can also download the datasets from the course webpage to use on your own machine.

4.1 Map cell files

The .cell files provide the key geometric information about the terrain being rendered. Each file consists of a complete quadtree of level-of-detail chunks, where each chunk contains a triangle mesh for that part of the cell's terrain, represented as a vertex array and an array of indices. The triangle meshes have been organized so that they can be rendered as triangle strips (GL_TRIANGLE_STRIP). The mesh is actually five separate meshes, one for the terrain and four skirts, and we use OpenGL's primitive-restart mechanism to split them (the primitive restart value is 0xffff). Vertices in the mesh are represented as

    struct Vertex {
        int16_t _x;           // x coordinate relative to cell's NW corner (in hscale units)
        int16_t _y;           // y coordinate relative to cell's base elevation (in vscale units)
        int16_t _z;           // z coordinate relative to cell's NW corner (in hscale units)
        int16_t _morphdelta;  // y morph target relative to _y (in vscale units)
    };

Note that the vertices' x and z coordinates are relative to the cell's coordinate system, not the world coordinate system. The fourth component of the vertex is used to compute the vertex's morph target, which is the position of the vertex in the next-lower level of detail. Morphing is discussed in more detail in the next section. The chunk data structure also contains the chunk's geometric error in the Y direction, which is used to compute screen-space error, and the chunk's minimum and maximum Y values, which can be used to determine the chunk's AABB.
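For illustration, decoding one quantized vertex into cell-local floating-point coordinates, with the LOD morph blended in, might look like the following sketch. In the actual project this computation would typically be done in the vertex shader; the function name and the morph parameter t are ours:

```cpp
#include <cassert>
#include <cstdint>

struct Vertex {               // as in the .cell format
    int16_t _x, _y, _z;       // quantized position (hscale/vscale units)
    int16_t _morphdelta;      // y morph target relative to _y (vscale units)
};

struct Vec3 { float x, y, z; };

// t = 0 places the vertex at its own LOD's position; t = 1 places it at
// its morph target (its position in the next-lower level of detail).
Vec3 decodeVertex (const Vertex &v, float hScale, float vScale, float t)
{
    float y = v._y + t * v._morphdelta;  // blend toward the morph target
    return Vec3{ hScale * v._x, vScale * y, hScale * v._z };
}
```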
5 Rendering

The first part of this project is to implement a basic renderer on top of the Chunked-LOD implementation. Your solution should address the remainder of this section, as well as Section 6. Your fragment shader should use the color and normal textures when they are available.

For large terrains, using single-precision floating-point values for world coordinates can cause loss of precision when the viewer is far from the world origin. One can avoid these problems by splitting the translation from model (i.e., cell) coordinates to eye coordinates out of the rest of the model-view-projection transform.
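The precision-splitting idea can be illustrated as follows: compute each cell's origin relative to the camera in double precision on the CPU, so the GPU only ever sees small single-precision offsets added to the (already small) cell-local vertex positions. This is a sketch under the assumption that cell origins and the camera position are tracked in doubles; the names are hypothetical:

```cpp
#include <cassert>

struct DVec3 { double x, y, z; };
struct FVec3 { float x, y, z; };

// Subtract in double precision *before* truncating to float.  The result
// is small when the cell is near the viewer, so it survives the narrowing
// conversion; the huge world coordinates never reach the GPU.
FVec3 cellOriginEyeRelative (DVec3 cellOriginWorld, DVec3 cameraWorld)
{
    return FVec3{
        float(cellOriginWorld.x - cameraWorld.x),
        float(cellOriginWorld.y - cameraWorld.y),
        float(cellOriginWorld.z - cameraWorld.z)
    };
}
```

Converting the world coordinates to float first and subtracting on the GPU would lose the sub-unit offset entirely at large distances from the origin, which shows up as vertex jitter.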

5.1 Basic rendering

Your implementation should support basic rendering of the heightfield in both wireframe and shaded modes. You will need to compute the screen-space error of chunks and choose which chunks to render based on that error.

5.2 View-frustum culling

Your implementation should support view-frustum culling. You can implement this feature by testing for intersection between the view frustum and the chunk's axis-aligned bounding box.

5.3 Morphing between levels of detail

To avoid popping when transitioning between levels of detail, your implementation should morph vertex positions over several frames (the morphing can be done in the vertex shader). Each vertex in a chunk is represented by four 16-bit integers. The first three represent the coordinates of the vertex (before scaling), and the fourth holds the difference between the vertex's Y position and its morph target. If the vertex is present in the next-lower level of detail, then the morph target is just its Y value; but if the vertex is omitted, then it is the Y value of the projection of the vertex onto the edge in the next LOD, as shown in the following diagram:

    [Diagram omitted: the morph delta is the vertical offset from the vertex to its morph target on the coarser level's edge.]

5.4 Lighting

You should support a single directional light that represents the sun, and diffuse shading of the heightfield. The map file format may optionally include a specification of the direction of the light.

5.5 Fog

Fog adds realism to outdoor scenes. It can also provide a way to reduce the amount of rendering by allowing the far plane to be set closer to the viewer. The map file format may optionally include a specification of the fog density and color.

6 User interface

We leave the details of your camera control unspecified. The main requirement is that it be possible, using either the keyboard or mouse, to change the direction and position of the viewpoint.
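One standard way to implement the frustum/AABB test described above is the "positive vertex" trick: for each frustum plane, test only the box corner farthest along the plane's normal; if even that corner is outside, the whole box is. The plane representation and names below are ours, a sketch rather than the project's API:

```cpp
#include <cassert>

struct Plane { float nx, ny, nz, d; };  // points inside satisfy n.p + d >= 0
struct AABB  { float minX, minY, minZ, maxX, maxY, maxZ; };

// Returns true when the box lies entirely outside some frustum plane
// (plane normals are assumed to point into the frustum).  A false result
// means the box intersects or is inside the frustum, so the chunk is drawn.
bool outsideFrustum (const AABB &b, const Plane planes[6])
{
    for (int i = 0; i < 6; i++) {
        const Plane &p = planes[i];
        float px = (p.nx >= 0.0f) ? b.maxX : b.minX;  // positive vertex:
        float py = (p.ny >= 0.0f) ? b.maxY : b.minY;  // farthest corner
        float pz = (p.nz >= 0.0f) ? b.maxZ : b.minZ;  // along the normal
        if (p.nx*px + p.ny*py + p.nz*pz + p.d < 0.0f)
            return true;  // fully outside this plane: cull the chunk
    }
    return false;
}
```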

Furthermore, you should support the following keyboard commands:

    f  toggle fog
    l  toggle lighting
    w  toggle wireframe mode
    +  increase screen-space error tolerance (by 2)
    -  decrease screen-space error tolerance (by 2)
    q  quit the viewer

6.1 Submission

Project 4 is due on Monday, March 10. As part of your submission, you should include a text file named PROJECT-5 that describes your plans for Project 5 (more information about Project 5 will be handed out soon).

History

2014-03-04  Added more details about the .cell file format and corrected how morph targets are represented.
2014-02-25  Original version.