GPU-based visualisation of viewshed from roads or areas in a 3D environment


Master of Science Thesis in Electrical Engineering
Department of Electrical Engineering, Linköping University, 2016

GPU-based visualisation of viewshed from roads or areas in a 3D environment

Christoph Heilmair

Master of Science Thesis in Electrical Engineering

GPU-based visualisation of viewshed from roads or areas in a 3D environment

Christoph Heilmair
LiTH-ISY-EX--16/4951--SE

Supervisors: Harald Nautsch, ISY, Linköping University
             Daniel Mellergård, Vricon
             Filip Thordarson, Vricon
Examiner: Ingemar Ragnemalm, ISY, Linköping University

Information Coding
Department of Electrical Engineering
Linköping University
Linköping, Sweden

Copyright 2016 Christoph Heilmair


Abstract

Viewshed refers to the calculation and visualisation of which parts of a terrain are visible from a given observer point. It is used within many fields, such as military planning or telecommunication tower placement. So far, no general fast methods exist for calculating the viewshed for multiple observers that may, for instance, represent a road within the terrain. Additionally, if the terrain contains overlapping structures, such as man-made constructions like bridges, most current viewshed algorithms fail. This report describes two novel methods for viewshed calculation using multiple observers for terrain that may contain overlapping structures. The methods have been developed at Vricon in Linköping as a Master's Thesis project. Both methods are implemented on the graphics processing unit using the OpenGL graphics library, following a computer graphics approach. Results are presented in the form of figures and images, as well as running time tables using two different test setups. Lastly, possible future improvements are discussed. The results show that the first method is a viable real-time solution and that the second method requires some additional work.


Acknowledgments

I would like to thank my girlfriend Emma for helping and supporting me with this thesis. I would also like to especially thank my supervisors at Vricon, Filip Thordarson and Daniel Mellergård, for always taking the time to help me when I needed it. Lastly, Ingemar Ragnemalm at Linköping University has been a great help when discussing new ideas and methods.

Linköping, May 2016
Christoph Heilmair


Contents

Notation
List of Figures
List of Tables

1 Introduction
  1.1 Motivation
  1.2 Goal
  1.3 Problem formulation
  1.4 Method

2 Theory
  2.1 Brief overview of 3D rendering
    2.1.1 The OpenGL rendering pipeline
    2.1.2 Projection matrices
    2.1.3 Depth buffers
    2.1.4 Framebuffers
  2.2 Digital elevation models
    2.2.1 Heightmap representation
    2.2.2 Triangulated irregular networks
    2.2.3 Summary
  2.3 Visibility

3 Related work
  3.1 Direct LOS based viewshed algorithms
    3.1.1 R3 algorithm
    3.1.2 R2 algorithm
    3.1.3 Xdraw algorithm
    3.1.4 Summary
  3.2 Computer graphics based algorithms
    3.2.1 Shadowmap approach
    3.2.2 Occlusive volumes
    3.2.3 Summary

4 Implementation
  4.1 Test environment
    4.1.1 Terrain generation
    4.1.2 Observer placement system
  4.2 Method 1: Arrays of shadowmaps
    4.2.1 Framebuffer depth attachment modification
    4.2.2 Choice of projection matrix
    4.2.3 Optimizations
    4.2.4 Final overview
  4.3 Method 2: Scene voxelization and ray traversal
    4.3.1 Idea
    4.3.2 Scene voxelization
    4.3.3 Ray traversal
    4.3.4 Sparse voxel octree representation
    4.3.5 Final overview

5 Results
  5.1 Method 1: Arrays of shadowmaps
    5.1.1 Timings and memory consumption
    5.1.2 Artifacts
  5.2 Method 2: Scene voxelization and ray traversal
    5.2.1 Voxelization
    5.2.2 Timings and memory consumption
    5.2.3 Artifacts

6 Discussion
  6.1 Result
    6.1.1 Array of shadowmap method
    6.1.2 Scene voxelization and ray traversal method
    6.1.3 Comparison of the methods
  6.2 Future work

7 Conclusion

Bibliography
Index

Notation

Abbreviations

Abbreviation  Meaning
GPU           Graphics processing unit
LOS           Line of sight
DEM           Digital elevation model
TIN           Triangulated irregular network
API           Application programming interface
GLSL          OpenGL Shading Language
NDC           Normalized device coordinates
CPU           Central processing unit
GIS           Geographic information system
FBM           Fractional Brownian motion
SVO           Sparse voxel octree


List of Figures

1.1 An example of a viewshed in a triangulated irregular network. The observer is represented by the white quad close to the center of the image. The red areas correspond to the terrain that is not visible from the observer's point of view (in every direction).
2.1 A terrain scene rendered from a camera's perspective, using a perspective projection matrix. Note how this looks similar to a picture taken by a real-life camera, i.e. an intuitive sense of distance is present.
2.2 The same terrain as in figure 2.1 rendered with an orthographic projection matrix. Note that it is harder to get a sense of distance in this figure, compared to figure 2.1.
2.3 The depth buffer of a rendered scene. The brightness of each pixel corresponds to how far away that pixel is from the camera. Completely black represents pixels exactly at the camera and completely white is at the far plane of the camera.
2.4 Subfigure 2.4a shows how a terrain heightmap might look. Subfigure 2.4b represents a triangulation of this heightmap. Each pixel in the heightmap has been converted to the 3D coordinates of a vertex of a triangle. These triangles then form a complete mesh and are rendered, here in wireframe.
3.1 A visualisation of how the R2 algorithm can determine visibility using accumulated slope, here simplified to a 2D case. If the point to be determined visible or not is u, then the maximum accumulated slope along its LOS is a. Since the slope at u is less than a, the maximum accumulated slope so far, we know that u must be invisible to the observer.
3.2 A theoretical cross section of a terrain surface consisting of a flat ground plane and some overlapping structure above it, for instance a bridge. The R3 algorithm would deem the point u invisible although it clearly is not.
3.3 Mapping of terrain points to a logical sphere, by normalizing direction vectors.
3.4 Stereographic projection, from 3D coordinates to 2D. The point P is projected from the north pole N to the point P' in the 2D plane.
4.1 Terrain generated from a heightmap using FBM, with increasing detail due to an increasing number of octaves. 4.1a: 0 octaves, 4.1b: 1 octave, 4.1c: 2 octaves, 4.1d: 5 octaves.
4.2 A colour attachment texture of the framebuffer used in the observer placement system. Each fragment's world coordinate has been mapped to a colour by the fragment shader.
4.3 Issue with the stereographic projection from a 3D sphere to the 2D plane beneath it. The point P is projected through the north pole N. The closer the point P is to N, the further away the point will be projected.
4.4 An observer on terrain, with its shadowmap in the upper right corner of the image. The shadowmap is generated by calculating all LOS directions to all terrain points, mapping them to a unit sphere around the observer, and lastly mapping that unit sphere to the 2D texture using a simple projection that uses the polar coordinates for each LOS.
4.5 Viewshed calculated for the observer in the middle of the image, with the associated shadowmap in the upper right corner. Artifacts are introduced by triangles that cover the entire shadowmap.
4.6 The double stereographic projection used in the method. If a LOS on the unit sphere has its coordinates in the southern hemisphere, the left projection is used. If its coordinates lie in the northern hemisphere, the right projection is used. This ensures that the resulting texture sizes are deterministic.
4.7 A case where the simple ray traversal algorithm described in algorithm 4 fails. Empty voxels are represented by the white squares, and filled voxels by red squares. If the fixed increment used to advance the ray falls at, for instance, the two points indicated by circles, the filled voxel in between them may be missed. This results in the terrain point falsely being classified as visible although it is not.
4.8 A quadtree that demonstrates the principle of how the sparse voxel octree is constructed. Filled voxels are illustrated as red squares and empty voxels are painted white. The main quadrants A, B, C and D, counted in a counter-clockwise fashion starting from the upper left, are each sub-divided into four additional child nodes since at least one of them contains a voxel. This process is repeated until no child node contains any filled voxels. The backslashes represent leaf nodes that are empty, whereas the red leaf nodes represent a filled voxel.
4.9 On the right, an illustration of how the SVO is represented in GPU memory using a 3D texture, using the quadtree on the left as its basis. The x and y texture indices each represent a tree node. Each z layer then corresponds to information about every child in that node. The alpha channel of each child indicates whether the node is empty (alpha = 1), contains a filled voxel (alpha = 2), or has children of its own (alpha = 0). The red and green channels hold information about the texture index at which a child node resides.
5.1 Result of the array of shadowmaps method. 5.1a: terrain before any viewshed calculations. 5.1b: viewshed calculated and simultaneously visualized for the 7 observers visible on the mountain top.
5.2 Single triangle-shaped artifact in the viewshed, with the northern stereographic projection shadowmap.
5.3 Several triangular artifacts on a mountain side, with the northern stereographic projection shadowmap.
5.4 Result of the voxelization and ray traversal method. 5.4a: terrain before any viewshed calculations. 5.4b: viewshed calculated and simultaneously visualized for the 3 observers visible on the mountain top.
5.5 SVO structure after voxelization of terrain.
5.6 Result of the voxelization. 5.6a: terrain before voxelization. 5.6b: terrain after voxelization.
5.7 Holes in voxelized terrain.
5.8 Artifacts induced by voxel size (red) and the voxelization itself (green).

List of Tables

5.1 Setup rig used for timing measurements.
5.2 Timings for the shadowmap method. S.m. = shadowmap construction, V.s. = viewshed calculation.
5.3 Timings for the voxelization and ray traversal method.


1 Introduction

This report serves as the documentation of a Master's Thesis project conducted at Linköping University and Vricon. The work was done at Vricon's offices in Linköping. This chapter begins by describing some related background and motivation for the project, and ends by presenting the investigated problem formulation and associated delimitations. All algorithms mentioned in this chapter will be described more in-depth in chapter 3.

1.1 Motivation

Viewshed analysis refers to the computation of the visibility of (usually) terrain from a given observer point, i.e. which parts of the terrain are visible to an observer situated at a particular point on this terrain. These calculations are done within the context of some digital representation of the terrain, for instance a set of vertices connected together to form triangles. These types of models are referred to as triangulated irregular networks (TIN). See figure 1.1 for an example of this.

Viewshed analysis is an interesting topic in many fields, for instance siting problems to ensure that telecommunication towers get the maximum coverage [6], [11], or the impact of new buildings on tourism [13]. Another proposed application is within military operations. In this field, given that an accurate TIN or other terrain model exists, accurate viewshed calculations could potentially help in planning the approach to a hostage situation to minimize exposure, or the setup of an ambush. The reverse viewshed could also aid in various military scenarios. For instance, if the position of an enemy sniper is known, the viewshed could be used to remain out of sight.

Figure 1.1: An example of a viewshed in a triangulated irregular network. The observer is represented by the white quad close to the center of the image. The red areas correspond to the terrain that is not visible from the observer's point of view (in every direction).

Traditionally, viewshed calculations are done in an offline manner, usually on central processing units (CPU). However, with the increasing parallel processing power of graphics processing units (GPU), it is becoming possible to do these calculations in real time [3], [5]. This opens up new possibilities for geographic information systems (GIS) such as the Vricon Explorer. There are, however, still challenges to overcome when it comes to calculating the viewshed for overlapping terrain structures, or for novel applications such as calculating the viewshed along a road within some terrain representation.

1.2 Goal

This thesis aims to fill in the most important gaps presented above and propose a method for viewshed analysis and visualisation that works in real time in 3D terrain models. In addition to this, the observer will not be a single point, but a segment of the terrain such as a road or a building rooftop.

1.3 Problem formulation

Given the background in section 1.1, an algorithm that solves these issues must meet the following criteria:

- Work in real time
- Be able to handle overlapping structures in the terrain
- Calculate the viewshed from a road (i.e. a line segment) or an area, and not just a single observer point

In addition to this, an implementation of this algorithm also needs to be able to visualize the results in a meaningful way, i.e. from how much of the road a particular terrain point is visible. This thesis aims to implement an algorithm with the above mentioned properties.

1.4 Method

In order to solve the presented issues, a thorough literature study on the subject of viewshed analysis will be conducted. Several databases will be used, such as Google Scholar and UniSearch, provided by Linköping University's library. Algorithms and methods found during this study will be evaluated based on the requirements presented in section 1.3. If an algorithm exists that fulfils every requirement, it will be implemented and further improved upon. If no suitable method exists to solve the issue at hand, a suitable starting point will be found and built upon so that it does solve the issue.


2 Theory

This chapter provides some underlying knowledge needed in order to understand the problem at hand.

2.1 Brief overview of 3D rendering

In order to fully understand the computer graphics based algorithms presented in chapter 3, as well as the implemented novel algorithms that this thesis presents, a brief overview of modern 3D rendering is required.

2.1.1 The OpenGL rendering pipeline

This thesis utilizes the OpenGL graphics application programming interface (API) as its 3D renderer and for algorithm execution. When rendering meshes, which a terrain TIN can be seen as, the raw vertex data goes through a series of stages until it forms triangles and, lastly, rasterized pixels on the screen. Since OpenGL version 2.0 these stages are programmable via shaders. Shaders are small programmes written in the OpenGL Shading Language (GLSL) that are compiled and executed on the GPU. These programmes act on either a per-vertex, per-primitive or per-fragment level, and can do so in massive parallelism.

It is also important to note that there exists no concept of a world coordinate system in OpenGL itself. The only coordinate system that OpenGL knows about and uses is normalized device coordinates (NDC). These range from -1 to 1 along all three axes, with the origin at the center of the screen. The positive z-axis is oriented towards the viewer, meaning that the negative z-axis points into the screen. This means that all input data needs to be transformed from whatever coordinate system it is defined in to the NDC system. This is particularly important to consider when defining custom projections.

Vertex shader stage

The vertex shader acts on a per-vertex level. In a typical OpenGL application, the vertex shader takes care of transforming the input vertices, such as the terrain vertices that will later form triangles, from their own coordinate system to NDC. It does this using a transformation matrix, typically built from a model, view and projection matrix. The model matrix takes the vertex coordinates from whatever model space they were defined in to world coordinates. The view matrix transforms the world coordinates to camera, or view, coordinates. The projection matrix lastly transforms the view coordinates to NDC. Note that if the model is defined in world coordinates, no model matrix is needed. Since the shaders allow full programmability, any operations on the vertex positions can be conducted, and any other values that might be interesting for the next stage can also be calculated.

Geometry shader stage

This stage is invoked per primitive, after the vertex shader; that is, usually per triangle. Any calculations that require the entire triangle can be done here. The output of the geometry shader is zero or more primitives, which can be specified. Note that the use of a geometry shader is not required, as the other two stages are.

Fragment shader stage

This is the last shader stage and acts per fragment. A fragment is a part of a primitive that might become a pixel. The programmer can decide whether to render a particular fragment to the output raster, or to discard it. Typical usage includes per-pixel lighting calculations and texturing. It is, however, as with the other shader stages, up to the programmer to decide what happens in the shader.

As mentioned, it is possible to send any information calculated in a previous shader stage to the next one. For instance, it is possible to send some per-vertex attribute calculated in the vertex shader stage to the fragment shader. Since the fragment shader operates on fragments and not vertices, some interpolation is in this case required between the stages. OpenGL supports hardware interpolation, but the programmer may also choose to not interpolate the data at all. If so, the value passed to the fragment shader will be the value calculated in whatever vertex invoked that particular fragment. Additionally, information from the central processing unit (CPU) that is not vertex data can be uploaded to the shaders by using uniform variables. These are declared in whatever shader stage they are needed, and set by the CPU prior to rendering.

Texturing is handled by samplers in OpenGL. These are declared as uniforms in the shader that wants to use them, usually the fragment shader. The programmer then sets these texture samplers as active, and specifies a certain texture unit from which they will actually sample data. Textures in OpenGL are not limited to 2D textures, but may be 1D textures, arrays of 2D textures, 3D textures or cube maps, amongst others. The texture data that is sampled in the shaders is of course also fully controllable by the programmer.
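To make the uniform and sampler plumbing concrete, the following C++ sketch shows how a host application might set a matrix uniform and point a sampler at a texture unit. The names (shaderProgram, mvp, terrain) are illustrative assumptions, not taken from the thesis implementation.

```cpp
#include <glad/glad.h>            // or any other OpenGL function loader
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Upload a model-view-projection matrix and bind a 2D texture so that a
// GLSL "uniform mat4 mvp;" and "uniform sampler2D terrain;" can use them.
void bindShaderInputs(GLuint shaderProgram, GLuint terrainTexture,
                      const glm::mat4& mvp)
{
    glUseProgram(shaderProgram);

    // Uniform variables are set by location, looked up by name.
    GLint mvpLoc = glGetUniformLocation(shaderProgram, "mvp");
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, glm::value_ptr(mvp));

    // Samplers do not reference textures directly; they reference a
    // texture unit, to which the actual texture object is bound.
    GLint samplerLoc = glGetUniformLocation(shaderProgram, "terrain");
    glUniform1i(samplerLoc, 0);        // the sampler reads from unit 0
    glActiveTexture(GL_TEXTURE0);      // make unit 0 the active unit
    glBindTexture(GL_TEXTURE_2D, terrainTexture);
}
```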

2.1.2 Projection matrices

The projection matrix, generally applied in the vertex shader, may use different types of transformations depending on what is rendered. For instance, if the rendered scene is to look as if a camera took it, a perspective projection is used. This type of projection transformation gives a sense of distance, i.e. objects further away from the camera appear smaller. Another type of projection is an orthographic projection, which does not give a sense of distance: objects keep their apparent size regardless of how far from the camera they are. For the difference between these types of projections, see figures 2.1 and 2.2.

Figure 2.1: A terrain scene rendered from a camera's perspective, using a perspective projection matrix. Note how this looks similar to a picture taken by a real-life camera, i.e. an intuitive sense of distance is present.

It is of course again completely up to the programmer what type of projection is used, or if any at all is used.
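As a rough illustration of the two projection types, here is how they might be constructed with the GLM mathematics library; the parameter values are arbitrary, and the thesis does not specify its own matrix setup:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Perspective projection: 60-degree vertical field of view, 16:9 aspect
// ratio, and near/far clip planes. Distant objects shrink on screen.
glm::mat4 perspectiveProj =
    glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 1000.0f);

// Orthographic projection: a box-shaped view volume. Objects keep their
// apparent size no matter how far from the camera they are.
glm::mat4 orthographicProj =
    glm::ortho(-100.0f, 100.0f,    // left, right
               -100.0f, 100.0f,    // bottom, top
                  0.1f, 1000.0f);  // near, far
```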

Figure 2.2: The same terrain as in figure 2.1 rendered with an orthographic projection matrix. Note that it is harder to get a sense of distance in this figure, compared to figure 2.1.

2.1.3 Depth buffers

When primitives are rendered to the screen in the pipeline described in subsection 2.1.1, the order in which the primitives are rendered is important in order to get a correct-looking result. If, for instance, a cube that is defined by 6 quads, each of which is defined by 2 triangles, is rendered, it is important that the triangles at the back are rendered first. If the triangles representing the side that faces the camera are rendered first, and after that the ones at the back, the result would look unnatural and wrong.

A way to solve this problem is to use a depth buffer. This stores the distance from the viewing plane to each rendered pixel as normalized values on the interval [0, 1]. A normalized value of 0 means that the pixel is directly in front of the camera and 1 means that the pixel is at the camera's far plane, i.e. only just visible. OpenGL supports this type of depth buffer and the accompanying depth test as an option. The depth test, if turned on, is conducted after the fragment shader stage. Each fragment's depth is compared to the value stored at the same screen position, and if the stored value is less than that of the current fragment, the fragment is discarded. This efficiently solves the problem of ordering when rendering. See figure 2.3 for an example of how such a depth buffer might look.
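A minimal sketch of how the depth test is switched on through the OpenGL API (standard calls, independent of the thesis code):

```cpp
#include <glad/glad.h>

// Turn on depth testing; fragments farther from the camera than the value
// already stored at their screen position are discarded.
void enableDepthTesting()
{
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);   // keep the fragment with the smaller depth
}

// The depth buffer must be cleared together with the colour buffer at the
// start of every frame, otherwise stale depths leak into the next frame.
void beginFrame()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}
```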

Figure 2.3: The depth buffer of a rendered scene. The brightness of each pixel corresponds to how far away that pixel is from the camera. Completely black represents pixels exactly at the camera and completely white is at the far plane of the camera.

2.1.4 Framebuffers

An important concept in 3D rendering is framebuffers. In normal rendering, the pipeline processes the vertex data and any other data, as described in subsection 2.1.1, and ultimately renders the pixels directly to the screen. It is however possible to do the exact same data processing and calculations, and render to an offscreen raster, which is what framebuffers are used for. A framebuffer may have a depth buffer attachment. This attachment works much like a regular texture and simply contains the depth buffer of the scene that is rendered to the framebuffer. In addition, a framebuffer may or may not have colour attachments. These also work like textures and, if used, contain the actual rendered scene as it would have looked had it been rendered to the screen instead of the framebuffer. It is also possible to only use a depth attachment without any colour attachments.
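The following hedged sketch shows one common way to create such a depth-only framebuffer, of the kind used for shadowmapping later in this report; the resolution and function name are illustrative assumptions:

```cpp
#include <glad/glad.h>

// Create a framebuffer whose only attachment is a size x size depth
// texture, suitable as an offscreen shadowmap render target.
GLuint createDepthOnlyFramebuffer(int size, GLuint* depthTexOut)
{
    GLuint depthTex;
    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, size, size, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depthTex, 0);

    // No colour attachments: disable colour draws/reads for this FBO.
    glDrawBuffer(GL_NONE);
    glReadBuffer(GL_NONE);

    // A framebuffer with only a depth attachment can still be complete.
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0;                              // creation failed

    glBindFramebuffer(GL_FRAMEBUFFER, 0);      // back to the default target
    *depthTexOut = depthTex;
    return fbo;
}
```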

2.2 Digital elevation models

Since viewshed calculations need some form of terrain data in order to produce any results or be meaningful, and may produce different results depending on how the data is represented, it is important to define such terrain models. A digital elevation model (DEM) is such a digital representation of terrain. The two main categories that are relevant in the case of viewshed analysis are vector-based meshes, such as triangulated irregular networks, and raster-based models, such as heightmaps. Both of these have their own advantages and disadvantages when it comes to performing viewshed algorithms on them.

2.2.1 Heightmap representation

A heightmap terrain model relies on elevation measurements performed on the real terrain at some interval. This data can then, for instance, be mapped to a grayscale image for visualisation or used directly for various calculations. A drawback to this type of DEM is that height information only exists at the grid points at which the height was sampled. This may be enough in some cases, but in others the resolution might simply not be good enough. One way to alleviate this problem is to use triangulated irregular networks.

2.2.2 Triangulated irregular networks

This representation uses vertices connected together to form triangles. These triangles in turn form a complete terrain model, much like regular meshes in video games and other types of digital visualisation. A TIN can of course be created from a heightmap, by simply setting the vertices of the triangles to nearby sampled elevation points. Likewise, by sampling the TIN at the vertices again, a heightmap can be generated back. See figure 2.4 for an example of this.

2.2.3 Summary

While these types of terrain models are commonly used, where TINs might be generated from some heightmap representation, they will prove to not be quite enough for the task to be solved by this thesis. The heightmap representation (and inevitably the TIN representation of that heightmap) inherently lacks the ability to represent overlapping terrain or structures, such as bridges or cliff overhangs. This is because the heightmap representation is mathematically a parametric surface with a bijective mapping between the planar parametric space and the surface, i.e. for each grid point in the heightmap there exists exactly one 3D point in the resulting surface. This can technically be solved by using multiple heightmaps for layering terrain; however, typically only a single heightmap is used. For this thesis, the assumption is made that a mesh representation of the terrain exists that is proper 3D. That is, any viewshed algorithm needs to be able to handle overlapping structures. This assumption is made since that is the type of data that Vricon is using.

(a) A 2D heightmap where each grayscale value represents height. The brighter a pixel, the higher the corresponding height value. (b) A triangulated irregular network rendered in wireframe.

Figure 2.4: Subfigure 2.4a shows how a terrain heightmap might look. Subfigure 2.4b represents a triangulation of this heightmap. Each pixel in the heightmap has been converted to the 3D coordinates of a vertex of a triangle. These triangles then form a complete mesh and are rendered, here in wireframe.
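To make the heightmap-to-TIN conversion concrete, here is an illustrative C++ sketch (not the thesis code) that triangulates a regular grid of height samples into a vertex and index buffer, two triangles per grid cell:

```cpp
#include <cstdint>
#include <vector>

struct Vertex { float x, y, z; };

// Convert a w x h grid of height samples into a triangle mesh:
// one vertex per grid point, two triangles per grid cell.
void heightmapToTin(const std::vector<float>& heights, int w, int h,
                    float cellSize,
                    std::vector<Vertex>& vertices,
                    std::vector<uint32_t>& indices)
{
    for (int row = 0; row < h; ++row)
        for (int col = 0; col < w; ++col)
            vertices.push_back({col * cellSize,
                                heights[row * w + col],   // y is up
                                row * cellSize});

    for (int row = 0; row + 1 < h; ++row) {
        for (int col = 0; col + 1 < w; ++col) {
            uint32_t i = uint32_t(row) * w + col;   // top-left of the cell
            // First triangle of the cell...
            indices.insert(indices.end(), {i, i + w, i + 1});
            // ...and the second one.
            indices.insert(indices.end(), {i + 1, i + w, i + w + 1});
        }
    }
}
```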

2.3 Visibility

In order to have some common ground for all these algorithms, we need to define what is actually meant by visibility. This thesis chooses to use a similar definition as is used in [8].

Definition 2.1. A terrain point u, defined by the three coordinates x_u, y_u and z_u, is said to be visible from the observer v, defined by the three coordinates x_v, y_v and z_v, iff the line of sight between the points is not intersected by any other terrain point.

3 Related work

Some previous work done in the field of viewshed calculations is presented in this chapter. The algorithms are split up into two main categories: direct LOS based algorithms and computer graphics based algorithms.

3.1 Direct LOS based viewshed algorithms

This section provides an overview of the more traditional viewshed algorithms found in the literature. They are referred to as direct LOS based since they all use a more mathematical approach to utilizing the LOS in the calculations. In contrast, the GPU-based algorithms described later use more of a computer graphics approach.

3.1.1 R3 algorithm

This algorithm is a brute-force approach to solving the viewshed problem, and is described in [7]. It works by going through each terrain point and checking whether the LOS to the observer is clear, by comparing each elevation along the LOS to that of the current terrain point. If all points along this LOS have an elevation lower than that of the current terrain point, that point is deemed visible. The algorithm is seen as a reference for other algorithms and is in general considered to produce the exact viewshed [8].
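As an illustration of the R3 principle, consider the following C++ sketch. It assumes a height(x, y) helper that returns interpolated terrain elevation; both the helper and the sampling density are assumptions for the sketch, not details from [7]:

```cpp
#include <cmath>

// Assumed helper: interpolated terrain elevation at a (possibly
// fractional) grid position.
float height(float x, float y);

// R3 visibility test for one terrain point (tx, ty) against an observer
// at grid position (ox, oy) with eye elevation oz: sample the straight
// line between them and compare the line's elevation with the terrain's.
bool visibleR3(float ox, float oy, float oz, float tx, float ty)
{
    float tz = height(tx, ty);
    // Roughly one sample per grid cell along the LOS.
    int steps = (int)std::ceil(std::hypot(tx - ox, ty - oy));
    for (int i = 1; i < steps; ++i) {
        float t = (float)i / steps;
        float x = ox + t * (tx - ox);
        float y = oy + t * (ty - oy);
        float losZ = oz + t * (tz - oz);    // elevation of the LOS here
        if (height(x, y) > losZ)            // terrain blocks the LOS
            return false;
    }
    return true;
}
```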

3.1.2 R2 algorithm

A more refined version of the R3 algorithm is the R2 algorithm, described in more detail in [8], [7]. It uses a similar approach as its predecessor. However, instead of calculating the elevation of every point along every LOS, the R2 algorithm calculates what is referred to as the slope of the LOS. The slope of a LOS is defined as the angle between some ground plane and the LOS itself. The algorithm works in two passes. First, the maximum slope along the LOS to each boundary point of the terrain grid is calculated, by simply accumulating the value while moving away from the observer. Next, the algorithm iterates through all terrain points as before, but this time simply compares the current point's slope to the maximum along that LOS. If the exact LOS to that particular point does not exist, it approximates the maximum slope by that of the closest LOS. See figure 3.1 for an illustration. The R2 algorithm is therefore considerably faster than the R3 algorithm, at the cost of some accuracy due to the approximation involved.

Figure 3.1: A visualisation of how the R2 algorithm can determine visibility using accumulated slope, here simplified to a 2D case. If the point to be determined visible or not is u, then the maximum accumulated slope along its LOS is a. Since the slope at u is less than a, the maximum accumulated slope so far, we know that u must be invisible to the observer.

3.1.3 Xdraw algorithm

A variant of the same principle that R2 works on is the Xdraw algorithm, often found in the viewshed literature [12], [8], [2]. It works in layers of grid points. The first layer corresponds to the 8 grid points closest to the observer, the second layer to the points closest to the first layer, and so on. It then works in a similar way as R2: the visibility of a grid point depends on whether its slope is greater than the maximum slope along the current LOS. Instead of precalculating the maximum slopes for each LOS, the maximum value is approximated from the two closest neighbours in the previous layer. This gives an even better running time performance than the R2 algorithm, but at an even greater cost in accuracy.

3.1.4 Summary

A clear pattern of LOS traversal with varying degrees of approximation is present in these methods. Many methods that are used in geographic information sys-

tems (GIS) use some variant of this. Efforts have been made to improve the approximations or running time of these algorithms, for instance by developing GPU parallelizations [10], [15], by optimizing LOS traversal [14], [9], or by a combination of these improvements [2]. The key point is that these types of algorithms do not work with a terrain representation that allows overlapping structures, which the data that Vricon uses does. In fact, it is trivial to show that the reference algorithm R3, as described in 3.1.1, fails on these types of terrains. Consider figure 3.2. The R3 algorithm would simply traverse the line of sight between the observer and point u and compare elevation values. This would falsely classify u as invisible, since its elevation is lower than that of points that lie on the overlapping terrain (illustrated in the figure as an ellipse).

Figure 3.2: A theoretical cross section of a terrain surface consisting of a flat ground plane and some overlapping structure above it, for instance a bridge. The R3 algorithm would deem the point u invisible although it clearly is not.

It is safe to say that all algorithms that work on the same principle as R3 would fail in this case. It should also be noted that most existing viewshed algorithms found in the literature are in fact of this type. The next section instead describes computer graphics based algorithms, where the overlapping terrain case can be handled.

3.2 Computer graphics based algorithms

In contrast to the direct LOS based algorithms, which primarily work from a mathematical and data driven viewpoint, the computer graphics based approaches in general recognize that the viewshed problem is very closely related to shadow effects. In fact, if one considers the observer to be a point light source, and what is invisible to the observer to be in shadow, the problems are identical. This opens up a different category of algorithms, many of which are inspired by computer graphics techniques for shadowing. The following subsections describe two of these.

3.2.1 Shadowmap approach

This approach, presented in [3], is based on the traditional shadowmapping used extensively in video games to create shadows. Its implementation is somewhat more complex than the LOS based approaches described earlier.

The general shadowmapping technique works in two stages. In the first stage, a framebuffer with only a depth attachment is set up and prepared for use. The scene is then rendered to this framebuffer. However, the scene is not rendered from the camera's perspective as usual, but rather from the perspective of whatever light source one wishes to cast the shadows. In the case of directional light sources, such as the sun, this means using an orthographic projection matrix, since that is how the sun sees the world. After this render pass, the depth buffer of the framebuffer contains the normalized distance from every visible fragment to the light source.

The second stage is a normal render pass, where the scene is rendered as usual with the usual projection matrix. However, the texture that contains the depth information from the previous stage is uploaded to the fragment shader. Every fragment is then transformed to the coordinate system that was used in the first render pass, using whatever projection matrix was used then. With the fragment in the light's coordinate system, its distance to the sun is calculated and normalized to a value on the interval [0, 1]. This distance value is then compared to the distance value stored in the depth buffer from the first render pass, at whatever position this particular fragment ends up in. If the current fragment's distance is greater than what is stored at its position in the depth buffer texture, the fragment is not visible to the sun and therefore in shadow. If it is less than or equal to the depth buffer value, it is visible to the sun and should be lit.

The authors of [3] propose a similar approach for viewshed calculations. The observer is seen as the light source, and what would be in shadow is simply not visible to the observer. However, a simple orthographic projection cannot be used, since an observer needs 360 degree visibility around it. The authors instead propose a somewhat alternate approach. Each fragment is mapped to a virtual sphere by drawing a vector from the observer to the current fragment position, and normalizing it. Figure 3.3 shows an illustration of this. This gives Cartesian sphere coordinates for each fragment. Since the depth buffer cannot operate on 3D coordinates, the sphere coordinates are mapped to 2D using a stereographic projection. See figure 3.4 for an illustration of how the stereographic projection works. The resulting 2D mapping then becomes the shadowmap. In principle, this is normal shadowmapping with a stereographic projection matrix.

Figure 3.3: Mapping of terrain points to a logical sphere, by normalizing direction vectors.
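The stereographic projection itself is compact. As a hedged C++/GLM sketch (the cited method implements the equivalent logic in shaders), projecting a unit-sphere direction from the north pole N = (0, 0, 1) onto the plane z = 0 looks as follows:

```cpp
#include <glm/glm.hpp>

// Stereographic projection of a point on the unit sphere onto the plane
// z = 0, projecting from the north pole N = (0, 0, 1). Undefined at the
// pole itself, where the denominator goes to zero.
glm::vec2 stereographic(const glm::vec3& losOnUnitSphere)
{
    float denom = 1.0f - losOnUnitSphere.z;
    return glm::vec2(losOnUnitSphere.x / denom,
                     losOnUnitSphere.y / denom);
}
```

Note how the denominator vanishes as a point approaches the pole; this anticipates the projection issue discussed in section 4.2.2.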

Figure 3.4: Stereographic projection, from 3D coordinates to 2D. The point P is projected from the north pole N to the point P' in the 2D plane.

3.2.2 Occlusive volumes

In [5] a somewhat different algorithm is presented. Occlusive volumes are generated from geometric meshes close to the observer, by extending their back faces a large distance in a direction pointing away from the observer. A check is then made to see if any terrain lies within these volumes, which means that it is invisible from the observer. Since the method for generating the occlusive volumes may result in broken up volumes, i.e. when a triangle faces the observer and its neighbour faces away from the observer, special processing to prevent this must be done beforehand.

3.2.3 Summary

The occlusive volumes method is more complex than the shadowmap approach, since pre-processing of the vertex data is required. This includes fixing the broken up volumes that may be created as a result of the generation algorithm. The shadowmap approach requires some finite resolution texture to be used as the shadowmap, which may result in artifacts in the resulting viewshed. This is something that the occlusive volumes method avoids. The shadowmap algorithm has other advantages, one of which is that it is in essence a widely used technique, albeit in a different field than GIS. The only real difference is the projection used to map terrain point depths to the shadowmap. It is therefore easier to implement in an existing visualization system than the occlusive volumes approach. In terms of scalability, that is, how well the algorithms handle multiple observers, the shadowmap method performs better: at least without modifications, each observer would potentially require going through the vertex preprocessing and fixing in the occlusive volumes approach, whereas with the shadowmap approach it suffices to create shadowmaps for each observer, which can be done at a relatively low cost using the existing pipeline. All in all, the shadowmap approach seems like a viable starting candidate to solve the issues that this thesis presents.


4 Implementation

This chapter describes the implementations of two proposed solutions to the problems presented in section 1.3. It starts with a description of the test environment used to implement and evaluate these methods.

4.1 Test environment

In order to be able to thoroughly test any methods, an environment to do this in is required. The one used in this thesis consists of a procedural terrain generator, terrain shaders, an observer placement system and a simple way to change the viewshed algorithm.

4.1.1 Terrain generation

The terrain is generated using a mathematical algorithm referred to as Fractional Brownian Motion (FBM). In essence, this generates a heightmap, as described in section 2.2.1, for a grid of a predefined size with all height values initially set to 0. The terrain generator then loops over all grid points and calculates a height value using FBM. FBM works by sampling a 2D noise function, for each grid point, octaves times at increasing frequency and decreasing amplitude in order to get a height value for the current grid point. See algorithm 1 for pseudocode demonstrating this principle for a single height value. The noise function may be any 2D noise function. In the case of the test environment, a coherent noise function is used. This means that for any 2D input, it always returns the same output, i.e. it is deterministic. This is of course necessary if any viewshed algorithm comparisons are to be done within the terrain.

Algorithm 1 Pseudocode for fractional Brownian motion

octaves = 5
total = 0
frequency = 1
amplitude = 1
for i = 1 : octaves do
    total ← total + amplitude * noise(x * frequency, y * frequency)
    frequency ← frequency * 2
    amplitude ← amplitude * 0.5
end for
map[x][y] = total

What makes this terrain generation very useful is the possibility to get arbitrary detail in the terrain, by simply increasing the number of octaves. Figure 4.1 demonstrates how the detail in the terrain increases with the number of octaves. For visualisation purposes, normals are calculated using adjacent vertices. The terrain shaders then calculate a basic colour and lighting using the normals.

4.1.2 Observer placement system

For testing and convenience purposes, an observer placement system was created within the test environment. This allows the viewshed observers to be placed on the terrain using the mouse. In order to do this, some way of mapping the 2D screen pixel coordinates, in which mouse callbacks are registered, to the world coordinates of the terrain is required. This issue is solved by using a framebuffer to which the terrain is rendered, using the usual projective transformations. The fragment shader used when rendering to this framebuffer maps the x, y and z components of the world coordinates to the r, g and b values, respectively, of the output colour. Since this is a framebuffer, it is not rendered to the screen but rather to a texture in memory. This is done for each frame, as often as the terrain is rendered to the screen. See figure 4.2 for an example of the framebuffer texture.

The texture now, via a simple mapping, contains information about world coordinates. In particular, there is a direct relation between screen space coordinates and world coordinates. Since the terrain was rendered to the framebuffer in exactly the same manner as when it is rendered to the screen, i.e. with the same transformations and projection matrix, a 2D point on this texture corresponds exactly to the same 2D point on the screen. This means that whenever a mouse click is registered on the screen, the world coordinates of that point are retrievable by sampling the framebuffer texture at the same point and inversely mapping the three colour components to coordinate components. Using this method, the test environment allows direct placement of observers on the terrain using intuitive mouse clicks. Furthermore, a system exists where multiple observers can be placed along, for instance, a road.
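A sketch of the readback step, under the assumption that the colour attachment is a floating point RGB texture holding world coordinates (function and variable names are illustrative, not from the thesis):

```cpp
#include <glad/glad.h>
#include <glm/glm.hpp>

// Read back the world coordinate under a mouse click from the picking
// framebuffer. Assumes the colour attachment is a float RGB texture in
// which the fragment shader wrote (x, y, z) world coordinates as (r, g, b).
glm::vec3 worldPosAtPixel(GLuint pickingFbo, int pixelX, int pixelY)
{
    float rgb[3] = {0.0f, 0.0f, 0.0f};
    glBindFramebuffer(GL_READ_FRAMEBUFFER, pickingFbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    // Note: OpenGL's pixel origin is the lower-left corner, so window
    // coordinates from a mouse callback usually need their y flipped.
    glReadPixels(pixelX, pixelY, 1, 1, GL_RGB, GL_FLOAT, rgb);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
    return glm::vec3(rgb[0], rgb[1], rgb[2]); // inverse of the colour mapping
}
```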

Figure 4.1: Terrain generated from a heightmap using FBM, with increasing detail due to an increasing number of octaves. 4.1a: 0 octaves, 4.1b: 1 octave, 4.1c: 2 octaves, 4.1d: 5 octaves.

The world coordinates of all observers are then stored in CPU memory and are accessible to the viewshed algorithms.

4.2 Method 1: Arrays of shadowmaps

The first method is an extension of the work presented in section 3.2.1. This section motivates the need to modify the original method and also describes the implementation of those modifications. The section ends with an overview of the complete method.

Figure 4.2: A colour attachment texture of the framebuffer used in the observer placement system. Each fragment's world coordinate has been mapped to a colour by the fragment shader.

4.2.1 Framebuffer depth attachment modification

The original method does not allow the viewshed to be calculated for multiple observers, which is a requirement. It is the shadowmap that enables the viewshed calculation for one particular observer, or light source, which means that several shadowmaps are needed to fulfil the requirement of multiple observers. To do this, the framebuffer that is used when calculating the shadowmaps is modified to hold an array of depth buffers. Each observer, with all necessary information about it stored in CPU memory, is looped through and has its own shadowmap calculated as one particular slice of the framebuffer's array of depth buffers.

In the second step of the shadowmap approach, the step that performs the actual computation of the viewshed, the shadowmap array is uploaded to the GPU shader program as a regular texture, along with an array that holds each observer's world coordinate. The observers are then looped through, where their loop index corresponds to a slice of the shadowmap array, and the usual shadowmap calculations are done as described in section 3.2.1.
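One plausible way to realize such an array of depth buffers in OpenGL is a 2D texture array with one layer per observer; the thesis does not spell out the exact texture type, so the following is a sketch under that assumption:

```cpp
#include <glad/glad.h>

// Create a depth texture array: each layer will hold one observer's
// shadowmap.
GLuint createDepthArray(int size, int numObservers)
{
    GLuint depthArray;
    glGenTextures(1, &depthArray);
    glBindTexture(GL_TEXTURE_2D_ARRAY, depthArray);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH_COMPONENT24,
                 size, size, numObservers, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    return depthArray;
}

// Before rendering observer i's shadowmap, attach layer i of the array
// as the framebuffer's depth attachment and clear it.
void attachObserverLayer(GLuint fbo, GLuint depthArray, int observerIndex)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              depthArray, 0, observerIndex);
    glClear(GL_DEPTH_BUFFER_BIT);
}
```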

This method in theory ultimately gives access to the viewshed of each individual observer. However, there are issues related to the projection matrix used, scalability and performance that require further modification. The implementations of these modifications are presented in the following sections.

4.2.2 Choice of projection matrix

The array of depth buffers allows the viewshed to be calculated for any number of observers, at least in theory. There are however still issues related to the original approach upon which this method builds, such as the projection matrix used. Consider once again figure 3.4, which depicts the stereographic projection that was originally used in the method described in section 3.2.1. This projection from a sphere to a 2D plane is a necessity, since the method yields 3D coordinates of LOS rays that must be mapped to a 2D texture.

There is an apparent, and in fact well-known, issue with the stereographic projection: it does not allow the 3D point situated exactly at the north pole to be projected onto any point on the 2D surface. This point is instead projected to a point at infinity. In the context of the viewshed calculations, this has the implication that no terrain point may ever be directly above the observer, which is unacceptable. Furthermore, the closer any LOS comes to the north pole, the further out in the resulting 2D shadowmap the point will be projected. See figure 4.3 for an illustration of this.

Figure 4.3: Issue with the stereographic projection from a 3D sphere to the 2D plane beneath it. The point P is projected through the north pole N. The closer the point P is to N, the further away the point will be projected.

An infinitely sized shadowmap would have to be allocated to facilitate every possible terrain point, which obviously is practically impossible. Instead, a different type of projection can be used. In essence, the shadowmaps hold information about the closest terrain point for every LOS in every direction around every observer. Since the method operates by mapping each LOS to a unit sphere, a projection from the sphere to a 2D plane is required, as described earlier. Theoretically, the only requirement for this projection is that it uniquely maps each 3D unit sphere coordinate to a 2D coordinate on the plane. One such unique mapping would be to simply convert the 3D sphere coordinates to polar coordinates, and use the φ and θ angles directly as the x and y coordinates in the depth buffer texture. This also solves the issues that the stereographic projection gave rise to. Algorithm 2 mathematically

describes the mapping.

Algorithm 2 Simple unique 3D to 2D mapping

for every LOS mapped to the 3D unit sphere do
    φ ← atan(LOS.y, LOS.x)
    θ ← acos(LOS.z)
end for

Note that using this type of projection removes the intuitive nature of the shadowmap. Compare for instance figure 2.3 with figure 4.4. In the first figure, the depth buffer contains a clear image of the terrain. Since the depth buffer shown in the second figure maps the polar coordinate angles to the x and y axes, it is more difficult to interpret.

Figure 4.4: An observer on terrain, with its shadowmap in the upper right corner of the image. The shadowmap is generated by calculating all LOS directions to all terrain points, mapping them to a unit sphere around the observer, and lastly mapping that unit sphere to the 2D texture using a simple projection that uses the polar coordinates for each LOS.

The choice of projection does solve the issues it was designed to solve, but brings new issues. Primarily, these are related to how the shadowmap is generated within OpenGL. Since the φ angle is calculated as the arctangent of the x and y components of the spherical LOS coordinates, and since the transformed vertices will be connected to form triangles, the possibility exists that triangles will be created that stretch over the entire width of the shadowmap. This specifically happens when the vertices of a triangle are transformed to opposite sides of the arctangent gap, e.g. when one vertex is transformed close to π and the others are transformed close to -π. Since this angle is mapped as-is to the 2D texture, the connected vertices form a triangle that possibly spans the entire texture. See figure 4.5 for an example of this issue as well as the induced artifacts in the viewshed.

Figure 4.5: Viewshed calculated for the observer in the middle of this image, with the associated shadowmap in the upper right corner. Artifacts are introduced by triangles that cover the entire shadowmap.

To solve both the issues of the originally used stereographic projection, and that of the simple polar coordinate mapping, a compromise of using two stereographic projections per observer is implemented. By splitting the unit sphere

into two hemispheres and using the appropriate stereographic projection depending on whether a LOS coordinate lies in the upper or lower hemisphere, the issue of having to allocate an infinite texture is solved, as shown in figure 4.6.

Figure 4.6: The double stereographic projection used in the method. If a LOS on the unit sphere has its coordinates in the southern hemisphere, the left projection is used. If its coordinates lie in the northern hemisphere, the right projection is used. This ensures that the resulting texture sizes are deterministic.

The issue of triangles being stretched along the entire texture is also nonexistent for this method. Since the hemisphere opposite a projection pole is always chosen, the issue of an undefined projection at the poles is also solved. The mathematics for this approach are shown in algorithm 3.

Algorithm 3 Hemisphere stereographic projection from unit sphere to 2D textures

for every observer do
    for every LOS mapped to the 3D unit sphere do
        (LOS.TerrainPoint gives the world coordinates of the terrain point that generated this LOS)
        if LOS.y ≥ 0 then
            X ← LOS.x / (1 + LOS.y)
            Y ← LOS.z / (1 + LOS.y)
            SouthernTexture[X, Y] ← dist(LOS.TerrainPoint, observer)
        else
            X ← LOS.x / (1 - LOS.y)
            Y ← LOS.z / (1 - LOS.y)
            NorthernTexture[X, Y] ← dist(LOS.TerrainPoint, observer)
        end if
    end for
end for

Note that these projections of course work within the shadowmapping pipeline; that is, they are executed within shader programs, massively in parallel on the GPU. This type of projection works well, with minor artifacts that will be discussed later.
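A direct C++/GLM transcription of the mapping in algorithm 3 for a single LOS might look as follows (a sketch; the actual method executes the equivalent logic in GLSL):

```cpp
#include <glm/glm.hpp>

struct HemiCoord {
    glm::vec2 uv;       // 2D coordinate in the chosen texture
    bool southern;      // which of the two textures to write to or read from
};

// Map a normalized LOS direction to one of the two stereographic planes,
// always projecting from the pole opposite the point's hemisphere.
HemiCoord dualStereographic(const glm::vec3& los)
{
    if (los.y >= 0.0f) {
        float d = 1.0f + los.y;
        return { glm::vec2(los.x / d, los.z / d), true };
    } else {
        float d = 1.0f - los.y;
        return { glm::vec2(los.x / d, los.z / d), false };
    }
}
```

Because the pole opposite the point's hemisphere is always used, the denominator stays at or above 1 and the projected coordinates remain bounded, which is exactly why the resulting texture sizes become deterministic.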

4.2.3 Optimizations

Since the viewshed algorithms presented in this thesis are meant to be used in a real-time application, it is important to consider certain optimizations. For this method, it is for instance sufficient to calculate the shadowmaps once each time new observers have been set, in contrast to usual shadowmapping in, for instance, video games, where shadowmaps may have to be recalculated due to dynamic objects changing positions. Since, however, the number of observers may be too large to calculate every shadowmap within one frame without noticeable hiccups in the application, a system to improve the user experience has been implemented; a sketch of the idea is given at the end of this subsection. This system works by having a set maximum frame time, e.g. 33 ms for a target framerate of 30 Hz, and a set approximation of how long one shadowmap generation will take. It then simply calculates how many shadowmaps may safely be generated while still staying within the maximum frame time, and stores the remaining number of calculations in CPU memory to be executed in the next frame. This is repeated until all shadowmaps have been calculated. Additionally, the shadowmaps that have already been calculated are used immediately in the viewshed calculations.

Furthermore, it would be sufficient to do the viewshed calculations themselves only once for each new set of observers. This implementation does however not contain such an optimization. This will be discussed later.
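A hedged sketch of such a frame-budgeting scheme; the structure, names and per-map cost estimate are illustrative rather than taken from the implementation:

```cpp
#include <queue>

// Spread shadowmap generation over several frames: each frame, generate
// only as many maps as the remaining frame-time budget allows.
struct ShadowmapScheduler {
    std::queue<int> pending;           // observer indices awaiting a map
    double budgetMs     = 33.0;        // target frame time (30 Hz)
    double costPerMapMs = 4.0;         // assumed cost of one generation

    void update(double frameTimeUsedMs) {
        double remaining = budgetMs - frameTimeUsedMs;
        int maxMaps = (int)(remaining / costPerMapMs);
        for (int i = 0; i < maxMaps && !pending.empty(); ++i) {
            generateShadowmap(pending.front());  // render one depth layer
            pending.pop();
        }
        // Maps generated so far are already usable by the viewshed pass;
        // the rest are picked up in subsequent frames.
    }

    void generateShadowmap(int observerIndex);   // renders to layer i
};
```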

4.2.4 Final overview

The complete method works in two major steps. The first step, the shadowmap generation, is done once, and the second, the viewshed pass, is done continuously. The following description aims to provide an overview of both steps.

Shadowmap pass: Generate a shadowmap for each observer by rendering the terrain to an offscreen framebuffer with an array of depth buffers as its depth attachment. The shaders for this render pass calculate the LOS to every vertex of the terrain, by normalizing the vector between the current observer and the current vertex. This maps each terrain point to a unit sphere around the observer. Map this unit sphere to two different depth buffers in the depth buffer array, using algorithm 3. Set the depth value of each entry in the depth buffer to the distance between the current observer and the current vertex. The depth testing of the framebuffer will ensure that only the closest terrain points get written to the depth buffer textures.

Viewshed pass: Render the terrain as usual, but also upload the array of depth buffers as well as each observer's position. In the fragment shader, loop over all observers and map each fragment's world coordinate to the unit sphere around each observer, as was done in the previous pass. Convert the unit sphere coordinates to 2D texture coordinates, again using algorithm 3. Now compare the current distance to the observer with the depth value stored at that position in the correct depth buffer texture. If the current distance is greater than the value stored in the depth texture, the current fragment is not visible. Lastly, colour the current fragment according to how many observers it is visible to.

4.3 Method 2: Scene voxelization and ray traversal

The second method takes a different approach to solving the problem. Its idea comes from the first method, but aims to eliminate the need for two textures for every observer. As with the previous method, its workings are first described, after which an overview of the complete method is provided.

4.3.1 Idea

The previous shadowmap method utilizes the knowledge of how close a (transformed) terrain point may be to the observer whilst still being visible. A simple comparison of distances, after transformations, is all that is needed to determine visibility. Since these comparisons and transformations are executed by shaders, they work in parallel at essentially no performance cost. This general approach is a tested and widely used one, but issues arise as more observers are added, since each observer needs two shadowmaps. Shader texture lookups are generally fast, but looping over and performing lookups in many different textures will have a performance cost.

The idea of this method is that all the shaders need, in order to calculate the viewshed, is some representation of the terrain and some way of accessing this representation using world coordinates. It is then possible for the shaders to ray-traverse through this representation and determine whether each terrain point is visible or not. Using this approach, only one representation of the terrain is required, and it can be reused continuously. This eliminates the need for several textures and may scale better with an increasing number of observers than the previous approach. One such representation would be a voxelized version of the terrain, which is also simple to traverse. The following sections describe how the terrain is voxelized and mapped to a texture that can be used for lookups in the shaders, how to traverse this representation, as well as several optimizations.

4.3.2 Scene voxelization

A voxel representation of a scene consists of cubes of some size that together make up a model, as opposed to having triangles make up the model. These cubes may have a certain colour and normal associated with them, much like vertices, but for our purposes it is sufficient to know whether a voxel is empty or not. A voxel at a certain world position is considered empty if it does not contain or intersect a triangle at the same world position, and considered filled if it does. By altering the sizes of the cubes, different resolutions of a scene can be obtained. If the cubes that represent the voxels are sufficiently small, there would be no difference between

a triangle representation and a voxel representation of a scene. The issue is that no voxel representation exists for the terrain used by Vricon and this thesis, which is why a voxelization algorithm is required. The issue of scene voxelization has been studied to some extent, most notably in [4]. The problem lies in converting a TIN mesh into a voxel representation, and in doing so efficiently. With the method described in [4], a voxel representation of an arbitrarily complex mesh can be generated on the GPU using shaders. The method works by utilizing the usual rasterization pipeline, with modifications added that ultimately lead to the rasterized pixels corresponding to voxels.

First, the viewport of the rendering window is set to correspond to the dimensions of the desired voxel grid. For instance, if a voxel grid resolution of NxNxN is wanted, the viewport is set to NxN pixels. Next, the terrain is rendered using a special voxelization shader program. The shaders orthographically project each triangle, using one of three projection matrices, so that each triangle's rasterized fragments are transformed to the voxel grid. Since the viewport was set to match the voxel grid dimensions, each fragment within the fragment shader now corresponds to a voxel. The following description provides an overview of the GPU shaders with more detailed explanations.

Vertex shader: A basic pass-through shader that simply sets the vertex positions to the world coordinates of the input data.

Geometry shader: In this shader stage, one of three orthographic projection matrices is chosen based on the input triangle's normal. The projection that yields the largest triangle surface after projection is chosen. The three possible projection matrices are aligned with each of the three coordinate axes, respectively. Information about which projection was used is sent to the fragment shader. The reason that this is done, as opposed to simply using a single orthographic projection, is to minimize holes in the voxelization caused by triangles whose faces are parallel to the orthographic projection's direction. These types of triangles might simply not be rasterized, and therefore not yield any voxels, because they are not visible from that orthographic projection's view.

Fragment shader: The fragment shader has a 3D texture bound, which has the same dimensions as the voxel grid, for instance 512x512x512 pixels. This texture has been initialized with 0s in every pixel. If the fragment shader is run, its fragment must come from a triangle that should be written to a voxel. It is then sufficient to simply write a 1 at the current fragment's pixel coordinates, accessible through the built-in variable gl_FragCoord. However, before this is done, the fragment shader uses the information passed from the geometry shader about how the current fragment was projected, to flip the axes so that they correspond correctly to the 3D texture. The actual writing of the voxel is done with the imageStore function available since OpenGL 4.2, which allows shaders to write to a bound texture image, as opposed to only being able to read from it.
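A minimal sketch of what the voxel-writing fragment shader could look like, shown as a GLSL source string embedded in C++. Only the case where no axis flipping is needed is shown, and the image name and format are assumptions:

```cpp
// GLSL fragment shader for the voxelization pass: marks the voxel that
// this fragment falls into. Requires OpenGL 4.2 for image load/store.
const char* voxelFragmentShader = R"(
#version 420

// 3D image with the same dimensions as the voxel grid, cleared to 0.
layout(binding = 0, r32ui) writeonly uniform uimage3D voxels;

flat in int projectionAxis;  // which of the three projections was used

void main()
{
    // gl_FragCoord.xy indexes the viewport (= one voxel grid slice) and
    // gl_FragCoord.z, in [0, 1], is scaled to the grid depth. In the
    // full method the components are swizzled according to
    // projectionAxis; here only the z-aligned case is shown.
    ivec3 voxel = ivec3(gl_FragCoord.xy,
                        gl_FragCoord.z * imageSize(voxels).z);
    imageStore(voxels, voxel, uvec4(1u));
}
)";
```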

After this fast voxelization, which requires only a single draw call to voxelize the entire scene, the 3D texture used in the fragment shader contains a 0 for every empty voxel and a 1 for every filled voxel. It is a complete, variable-resolution representation of the space occupied by the terrain. There are, however, issues with this approach, which are described in later sections.

4.3.3 Ray traversal

Given the voxel representation of the scene, visibility can be calculated by traversing a ray from a terrain point to an observer and checking along this ray whether or not a filled voxel is in its way. A simple approach is to add a fixed increment to the ray origin, in the direction of the observer, in each iteration of the traversal loop, as shown in algorithm 4.

Algorithm 4 Simple ray traversal algorithm
for every observer O do
    for every terrain point P do
        dir ← normalize(O − P)
        increment ← 1
        dist ← dist(O, P)
        minDist ← 0.5
        ray ← P
        while dist > minDist do
            if voxelFilled(ray) then
                return invisible
            end if
            ray ← ray + dir · increment
            dist ← dist(ray, O)
        end while
    end for
end for

This type of algorithm, of course also executed on the GPU, is fast. But it may also miss voxels due to the fixed increment of the ray traversal; see figure 4.7 for an example of this.
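Expressed in GLSL, the loop of algorithm 4 maps to a few lines per fragment. The sketch below assumes that the full 3D voxel texture from the previous section is bound and that world coordinates have been pre-scaled so one unit corresponds to one voxel; the small origin offset is a practical assumption to keep the terrain point's own surface voxel from occluding itself.

    // Assumed: an unsigned-integer 3D texture where a non-zero texel
    // marks a filled voxel.
    uniform usampler3D voxelGrid;

    bool voxelFilled(vec3 p)
    {
        return texelFetch(voxelGrid, ivec3(floor(p)), 0).r != 0u;
    }

    // Fixed-increment traversal from terrain point P towards observer
    // O, mirroring algorithm 4. Returns true if P is visible from O.
    bool visibleSimple(vec3 P, vec3 O)
    {
        vec3 dir = normalize(O - P);
        vec3 ray = P + 0.5 * dir;   // offset past P's own surface voxel
        while (distance(ray, O) > 0.5) {
            if (voxelFilled(ray))
                return false;   // a filled voxel blocks the line of sight
            ray += dir;         // fixed increment of one voxel per step
        }
        return true;
    }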

A better, yet slightly less efficient, algorithm is the Amanatides & Woo algorithm presented in [1]. This traversal algorithm is specifically designed not to miss any voxels. Its implementation is detailed in algorithm 5.

Algorithm 5 Amanatides & Woo ray traversal algorithm
function intBounds(s, ds)
    return (ds > 0 ? ceil(s) − s : s − floor(s)) / abs(ds)
end function

for every observer O do
    for every terrain point P do
        dir ← normalize(O − P)
        dist ← dist(O, P)
        minDist ← 0.5
        X ← floor(P.x)
        Y ← floor(P.y)
        Z ← floor(P.z)
        StepX ← sign(dir.x)
        StepY ← sign(dir.y)
        StepZ ← sign(dir.z)
        tMaxX ← intBounds(P.x, dir.x)
        tMaxY ← intBounds(P.y, dir.y)
        tMaxZ ← intBounds(P.z, dir.z)
        tDeltaX ← StepX / dir.x
        tDeltaY ← StepY / dir.y
        tDeltaZ ← StepZ / dir.z
        while dist > minDist do
            if tMaxX < tMaxY then
                if tMaxX < tMaxZ then
                    X ← X + StepX
                    tMaxX ← tMaxX + tDeltaX
                else
                    Z ← Z + StepZ
                    tMaxZ ← tMaxZ + tDeltaZ
                end if
            else
                if tMaxY < tMaxZ then
                    Y ← Y + StepY
                    tMaxY ← tMaxY + tDeltaY
                else
                    Z ← Z + StepZ
                    tMaxZ ← tMaxZ + tDeltaZ
                end if
            end if
            if voxelFilled(X, Y, Z) then
                return invisible
            end if
            dist ← dist((X, Y, Z), O)
        end while
    end for
end for

Using this algorithm on the GPU allows us to determine visibility for each terrain point, and for every observer, in parallel.
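The setup of the traversal variables is the subtle part of the algorithm. The GLSL sketch below, under the same one-unit-per-voxel assumption as before, shows intBounds and the per-axis initialization; the function and parameter names are illustrative.

    // intBounds returns the ray parameter t at which a ray starting at
    // coordinate s, moving with speed ds along that axis, first crosses
    // an integer voxel boundary.
    float intBounds(float s, float ds)
    {
        return (ds > 0.0 ? ceil(s) - s : s - floor(s)) / abs(ds);
    }

    // Per-axis initialization for the x axis; y and z are identical.
    // If dir.x is zero, the divisions yield infinity on typical GPUs,
    // so that axis simply never has the smallest tMax and is never
    // stepped.
    void setupX(vec3 P, vec3 dir,
                out int X, out float stepX,
                out float tMaxX, out float tDeltaX)
    {
        X = int(floor(P.x));
        stepX = sign(dir.x);
        tMaxX = intBounds(P.x, dir.x);
        tDeltaX = stepX / dir.x;   // t to cross one whole voxel along x
    }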

Figure 4.7: A case where the simple ray traversal algorithm described in algorithm 4 fails. Empty voxels are represented by the white squares, and filled voxels by red squares. If the fixed increments used to advance the ray fall at, for instance, the two points indicated by circles, the filled voxel in between them may be missed. This results in the terrain point falsely being classified as visible although it is not.

4.3.4 Sparse voxel octree representation

The most notable issue with the voxel representation presented in the previous section relates to memory usage. The memory complexity is O(N³), where N is the dimensionality of the terrain data that is to be voxelized. The vast majority of all voxels will be empty, given that terrain tiles usually are much wider than they are high, yet the same amount of memory is required to represent every empty voxel as every filled one. The sparse voxel octree (SVO) representation, detailed in [4], greatly reduces this memory consumption by grouping spatially close empty voxels together and using memory to represent the empty group instead.

The SVO is an octree (every node has eight or zero children) consisting of a root node, which represents the entire scene, with eight children that each represent one eighth of the scene. The eighths are split spatially along the middle of each of the three axes, giving each child of a node an equal size and yielding 2³ = 8 children. Each child node potentially has eight children of its own, provided that the space the child node represents contains voxels. Figure 4.8 shows this behaviour for a quadtree, from which the step to an octree is trivial. In figure 4.8 the entire A quadrant is represented by a single leaf node, as opposed to representing each individual empty voxel in A on its own. This significantly reduces the memory complexity, but makes it dependent on the terrain itself. Voxels are, however, only generated along the surface of the terrain and not inside it, since there is no need for that kind of filling. This makes the SVO a memory-efficient representation.

Figure 4.8: A quadtree that demonstrates the principle of how the sparse voxel octree is constructed. Filled voxels are illustrated as red squares and empty voxels are painted white. The main quadrants A, B, C and D, counted in a counter-clockwise fashion starting from the upper left, are each subdivided into four additional child nodes since at least one of them contains a voxel. This process is repeated until no child node contains any filled voxels. The backslashes represent leaf nodes that are empty, whereas the red leaf nodes each represent a filled voxel.

Construction of the sparse voxel octree

The implementation of the SVO is somewhat more involved than the previously described method using a 3D texture in the fragment shader. With the 3D texture, each voxel generated in the fragment shader was simply written directly to the texture and no post-processing was required. Due to the recursive nature of the SVO, where each child node potentially requires subdivision into further child nodes, such a simple implementation is not possible. Instead, some processing on the CPU is required to build the SVO and to make it usable in the shaders for the later traversal. Since the voxelization only needs to be run once for each terrain tile, the additional overhead introduced by the SVO construction is acceptable. The construction has the following main steps:

• Run the voxelization shaders once to calculate the number of filled voxels
• Run the voxelization again, with a buffer the size of the calculated number of voxels, and store the voxels' world positions in the buffer
• Copy the buffer contents to CPU memory
• Build a CPU-side SVO from the buffer contents
• Build a texture that represents the SVO, to be used in the shaders for traversal

Since there is no way to calculate the SVO directly in the shaders during the voxelization, information about all filled voxels is required on the CPU. An image buffer in OpenGL can be used for this, since it can be written to from the fragment shader. However, since the buffer's memory must be allocated in advance, a pre-pass is done before the actual voxels are stored. This pass uses an atomic_uint counter in OpenGL, which the shaders can increment atomically, to obtain the required size for the buffer. The shaders are then run again with the buffer attached, and the generated voxels' world positions are written to it using imageStore. The contents of the buffer are then copied to CPU memory with glGetBufferSubData, and the SVO is built recursively from this data.
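A minimal GLSL sketch of the two gathering passes is given below; the bindings, names and the compile-time switch for the second pass are assumptions about one possible way to structure this, not the exact shaders used here.

    #version 420

    // Pass 1 only counts the voxels. Pass 2 is the same shader
    // recompiled with WRITE_VOXELS defined, after the buffer has been
    // allocated with the size obtained from pass 1.
    layout(binding = 0, offset = 0) uniform atomic_uint voxelCount;
    #ifdef WRITE_VOXELS
    layout(binding = 1, rgba32f) uniform writeonly imageBuffer voxelPositions;
    #endif

    in vec3 voxelWorldPos;   // assumed: voxel position from earlier stages

    void main()
    {
        // The pre-increment value is a unique index into the buffer.
        uint index = atomicCounterIncrement(voxelCount);
    #ifdef WRITE_VOXELS
        imageStore(voxelPositions, int(index), vec4(voxelWorldPos, 1.0));
    #endif
    }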

The CPU-side SVO is trivial to construct in a recursive fashion using C++ classes. However, the main purpose of the SVO is to traverse the terrain in the shaders in order to calculate the viewshed, and for this a shader-compatible representation of the SVO is required. The shader-side SVO structure was implemented using a 3D texture with dimensions N × N × 8, where N is the number of nodes in the SVO. Each x, y pair in this texture corresponds to a node in the SVO tree, and each z-layer corresponds to that node's children. The alpha channel of each child can take the values 0, 1 or 2: a value of 0 indicates that the child has eight children of its own, 1 indicates an empty leaf node, and 2 represents a leaf node that contains a filled voxel. If the alpha channel value is 0, the red and green channels hold the x, y index in the texture at which the children of the current node reside. See figure 4.9 for an illustration using a quadtree instead of an octree.

Figure 4.9: On the right, an illustration of how the SVO is represented in GPU memory using a 3D texture, with the quadtree on the left as its basis. The x and y texture indices each represent a tree node. Each z-layer then corresponds to information about every child of that node. The alpha channel of each child indicates whether the node is empty (alpha = 1), contains a filled voxel (alpha = 2), or has children of its own (alpha = 0). The red and green channels hold the texture index at which a child node resides.

This texture is then uploaded to the terrain shaders and used for the ray traversal in order to calculate the viewshed. The ray traversal in essence works the same as for the full 3D texture. The retrieval of voxel information is, however, considerably more complex, since the SVO texture must be traversed top-down for each increment of the traversal algorithm. This was trivial with the memory-heavy 3D texture approach, since each world coordinate of the terrain corresponded directly to the same x, y and z coordinates in the texture.
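A sketch of such a top-down lookup is given below. It assumes an unnormalized floating-point texture, so that the channel values described above are stored exactly, as well as a particular child-ordering convention; both are illustrative assumptions.

    // Assumed layout: texel (node.x, node.y, child) has alpha = 0 for
    // an interior child (red/green hold that child node's x, y index),
    // alpha = 1 for an empty leaf and alpha = 2 for a filled leaf.
    uniform sampler3D svoTexture;
    uniform float rootSize;   // world-space extent of the root's cube

    bool voxelFilled(vec3 worldPos)   // relative to the root's corner
    {
        ivec2 node = ivec2(0, 0);   // root node's (x, y) texture index
        vec3 boxMin = vec3(0.0);    // lower corner of the current cube
        float size = rootSize;

        for (int depth = 0; depth < 32; ++depth) {
            size *= 0.5;
            // Pick the child octant containing worldPos (assumed
            // ordering: x + 2y + 4z over the halves of each axis).
            bvec3 upper = greaterThanEqual(worldPos, boxMin + vec3(size));
            int child = int(upper.x) + 2 * int(upper.y) + 4 * int(upper.z);
            boxMin += vec3(upper) * size;

            vec4 texel = texelFetch(svoTexture, ivec3(node, child), 0);
            if (texel.a > 1.5) return true;    // alpha = 2: filled leaf
            if (texel.a > 0.5) return false;   // alpha = 1: empty leaf
            node = ivec2(texel.rg + 0.5);      // alpha = 0: descend
        }
        return false;   // deeper than expected; treat as empty
    }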

4.3.5 Final overview

This method also works in two major steps: the terrain voxelization and the actual viewshed calculation. The following description provides an overview of the steps.

Terrain voxelization
The voxelization is required in order to obtain the SVO that is traversed in the calculation step. First, the voxel world positions are generated using a GPU shader program. This information is used to construct an SVO, wrapped in a 3D texture, that is usable in the next step. This step only needs to execute once for each terrain tile and is completely independent of the observers.

Viewshed calculation
The SVO texture calculated in the previous step is used continually to calculate the viewshed, at the same time as the terrain is rendered. The terrain is traversed for each observer, using the SVO and algorithm 5, and the visualization is coloured based on the number of observers to which each terrain point is visible.
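As an illustration of the colouring step, a minimal fragment shader along the following lines could be used; the fixed-size observer array and all names are assumptions, and visible() stands for either of the traversal routines sketched earlier, defined in a separately compiled shader object of the same stage.

    #version 420

    uniform vec3 observers[16];   // observer positions from the CPU
    uniform int observerCount;

    in vec3 worldPos;             // interpolated terrain world position
    out vec4 fragColour;

    bool visible(vec3 P, vec3 O); // traversal routine, linked in

    void main()
    {
        int count = 0;
        for (int i = 0; i < observerCount; ++i)
            if (visible(worldPos, observers[i]))
                ++count;

        // Grayscale: black = visible to no observer,
        // white = visible to all observers.
        float shade = observerCount > 0
                    ? float(count) / float(observerCount)
                    : 0.0;
        fragColour = vec4(vec3(shade), 1.0);
    }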


5 Results

This chapter shows the results of the two implemented methods in the form of images and algorithm timings. Two test rigs were used to obtain the timing values, as detailed in table 5.1. The most notable difference between them is the GPU, where the second rig's GPU is the more powerful one.

Rig no.   CPU               GPU                RAM
1         Intel i7 3.6 GHz  Nvidia GTX 660 Ti  16 GB
2         Intel i7 4.5 GHz  Nvidia GTX 970     16 GB

Table 5.1: Test rigs used for the timing measurements.

5.1 Method 1: Arrays of shadowmaps

This method yields results as shown in figure 5.1. The result is visualized in grayscale, where black corresponds to invisible to all observers and white corresponds to visible to all observers.

5.1.1 Timings and memory consumption

Table 5.2 shows timings for the shadowmap method. Both the one-time shadowmap calculations and the continuous viewshed calculation timings are shown. The memory complexity of the shadowmap method is O(N), where N is the number of observers.

Figure 5.1: Result of the array of shadowmaps method. 5.1a: terrain before any viewshed calculations. 5.1b: viewshed calculated and simultaneously visualized for the 7 observers visible on the mountain top.

No. of observers   S.m. rig 1   S.m. rig 2   V.s. rig 1   V.s. rig 2
1                  … ms         3 ms         1 ms         <1 ms
3                  12 ms        5 ms         1 ms         <1 ms
5                  13 ms        7 ms         1 ms         <1 ms
7                  16 ms        9 ms         2 ms         <1 ms
…                  … ms         13 ms        2 ms         <1 ms
…                  … ms         19 ms        13 ms        <1 ms
…                  … ms         31 ms        21 ms        15 ms
…                  … ms         36 ms        32 ms        23 ms

Table 5.2: Timings for the shadowmap method. S.m. = shadowmap construction, V.s. = viewshed calculation.

5.1.2 Artifacts

Triangular artifacts arise in certain situations, as seen in figures 5.2 and 5.3. The artifacts are also visible in the shadowmap.

5.2 Method 2: Scene voxelization and ray traversal

Results for this method are shown in figure 5.4.

5.2.1 Voxelization

Figure 5.5 shows an overview of the voxelization and the SVO structure. The different levels of subdivision have been rendered in wireframe mode, coloured darker the finer the subdivision. Figure 5.6 shows the non-empty voxels from the SVO, next to the terrain from which it was voxelized.

5.2.2 Timings and memory consumption

Table 5.3 shows timings for the voxelization and ray traversal method: the time it takes to voxelize the scene, build the SVO and calculate the viewsheds. Compared to using a full 3D texture, the SVO structure greatly reduces the number of voxel nodes required to represent the 512² test terrain.

5.2.3 Artifacts

Figure 5.7 shows holes in the voxelization that may lead to issues. Figure 5.8 illustrates artifacts related to the chosen voxel size in red, and artifacts related to the voxelization itself in green.

Figure 5.2: Single triangle-shaped artifact in the viewshed, with the northern stereographic projection shadowmap.

Type of timing                      Time rig 1   Time rig 2
Voxelize scene                      5 ms         2 ms
Build SVO                           488 ms       369 ms
Viewshed calculation, 1 observer    509 ms       47 ms
Viewshed calculation, 2 observers   839 ms       95 ms
Viewshed calculation, 3 observers   1017 ms      142 ms
Viewshed calculation, 4 observers   1368 ms      189 ms
Viewshed calculation, 5 observers   1659 ms      236 ms

Table 5.3: Timings for the voxelization and ray traversal method.

Figure 5.3: Several triangular artifacts on a mountain side, with the northern stereographic projection shadowmap.

Figure 5.4: Result of the voxelization and ray traversal method. 5.4a: terrain before any viewshed calculations. 5.4b: viewshed calculated and simultaneously visualized for the 3 observers visible on the mountain top.

Figure 5.5: SVO structure after voxelization of the terrain.

Figure 5.6: Result of the voxelization. 5.6a: terrain before voxelization. 5.6b: terrain after voxelization.

Figure 5.7: Holes in the voxelized terrain.

Figure 5.8: Artifacts induced by the voxel size (red) and by the voxelization itself (green).
