Parameter Space Footprints for Online Visibility


Daniel Cohen-Or, Tel Aviv University · Tommer Leyvand, Tel Aviv University · Olga Sorkine, Tel Aviv University

Figure 1: A view of a scene in primal space (a) and parameter space (b). In (a), the colored buildings are the potentially visible set (PVS) and the gray ones are hidden. (b) shows the footprints of the PVS in the parameter space. The tall red buildings are distant but their top floors are visible, and their red footprints are visible in the parameter space. (c) shows the buildings close to the viewcell; only the visible floors are colored.

Abstract

We present an online from-region visibility method. We start by introducing a parameterization of lines intersecting a rectangle, whose representation in the parameter space has no singularities. We show how 3D segments and vertical 3D polygons in the primal space are mapped to 3D polygonal footprints in the parameter space. Based on this mapping, a conservative from-region visibility problem can be solved by conventional visible-surface determination in the parameter space under orthographic projection. A conservative occlusion test allows discretization and overestimation to be integrated into a hierarchical (z-buffer) framework applied in the parameter space. The footprints of bounded vertical prisms associated with the objects of the scene are mapped during the hierarchical traversal to incrementally generate an occlusion z-map in the parameter space. Rendering the polygonal footprints with standard graphics hardware allows online computation of from-region visibility during rendering. Our results show that a conservative occlusion culling of large scenes from large regions takes only a fraction of a second. Since the visibility computation is valid for many frames, its cost is further amortized and imposes a negligible overhead on the rendering.
CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism: Visible line/surface algorithms

Keywords: Dual Space, Line Parameterization, Visibility, Occlusion Culling, PVS, Hardware Acceleration

1 Introduction

Rendering large scenes in real time remains a challenge as the complexity of models keeps growing. Visibility techniques such as occlusion culling can effectively reduce the rendering depth complexity. Methods that compute the visibility from a point [Greene et al. 1993; Zhang et al. 1997; Coorg and Teller 1996; Hudson et al. 1997; Bittner et al. 1998] are necessarily applied in each frame during rendering. Recently, building on earlier methods [Airey et al. 1990; Teller and Sequin 1991; Teller 1992; Funkhouser et al. 1992], more attention has been devoted to from-region methods [Cohen-Or et al. 1998; Saona-Vazquez et al. 1999; Gotsman et al. 1999; Schaufler et al. 2000; Durand et al. 2000; Wonka et al. 2000], where the computed visibility is valid for a region rather than a single point. The visibility problem from a region is considered significantly harder than from-point visibility. Deciding whether an object S is (partially) visible or occluded from a region C requires detecting whether there is at least a single ray that leaves C and intersects S before it intersects an occluder. This is a high-dimensional problem and its exact solution seems impractical. Conservative solutions are feasible [Durand et al. 2000; Schaufler et al. 2000], but too slow for online computation. To gain speed, recent methods have restricted themselves to 2.5D scenes [Koltun et al. 2001; Bittner et al. 2001; Wonka et al. 2001]. The assumption of 2.5D occluders is quite reasonable in practice, especially in walkthrough applications, where architectural models can be well approximated conservatively by 2.5D shapes. However, it is desirable to have a culling method that can correctly

deal with a larger domain of shapes. The occlusion culling method we present in this paper handles the occlusion of 3D vertical prisms [Downs et al. 2001]. Thus, a general occluder can be approximated by an inner set of, possibly overlapping, large parts that conservatively represent the occlusion cast by the original occluder [Andujar et al. 2000; Germs and Jansen 2001; Downs et al. 2001; Law and Tan 1999]. Note in Figure 1(c) that the buildings are not axis-aligned and that the potentially visible parts of the models (the colored ones) consist of 3D boxes.

Visibility problems, and in particular occlusion problems, are often expressed as problems in line spaces. For example, the following is a basic occlusion problem: given a segment C, an object A is occluded from any point on C by a set of objects B_i if all the lines intersecting C and A also intersect the union of the B_i. Using the line space, lines in the primal space are mapped to points. The mapping between 2D lines and points is commonly defined by the coefficients of the lines in the primal space. The line y = ax + b is mapped to the point (a, b) in the line parameter space. All the lines that intersect a segment A in the primal space are mapped to a double wedge in the parameter space, which we call the footprint of A. All the lines intersecting the union of segments B_i are mapped to the union of their footprints. All the lines passing through two given segments A and C are the intersection of their footprints. The above occlusion problem can thus be expressed as a simple Boolean set operation on the footprints. In 2D these footprints can be discretized and drawn as polygons, and their intersection can be computed in image space, taking advantage of fast per-pixel Boolean operations. However, the above-mentioned choice of mapping is not optimal, since vertical lines are mapped to infinity.
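To make the double-wedge footprint concrete, here is a minimal sketch (with a hypothetical helper name) of the dual-space membership test for the y = ax + b parameterization; note that it inherits exactly the singularity of that mapping, since vertical lines and vertical segments cannot be represented.

```python
def in_footprint(a, b, seg):
    """Does the dual point (a, b) lie in the footprint of seg?

    Equivalently: does the line y = a*x + b intersect the segment?
    The footprint of a segment in the (a, b) plane is the double wedge
    bounded by the dual lines of its two endpoints; the line crosses
    the segment exactly when the endpoints lie on opposite sides.
    (Hypothetical helper, for illustration only.)
    """
    (x0, y0), (x1, y1) = seg
    s0 = a * x0 + b - y0   # signed residual at the first endpoint
    s1 = a * x1 + b - y1   # signed residual at the second endpoint
    return s0 * s1 <= 0.0  # opposite signs: the line crosses the segment
```

For example, the horizontal line y = 0.5 crosses the near-vertical segment from (0, 0) to (0.2, 1), so `in_footprint(0.0, 0.5, ((0.0, 0.0), (0.2, 1.0)))` holds, while the line y = 2 misses it.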
This causes serious problems for all lines with large coefficients, because for practical reasons the dual space must be bounded. Moving to higher dimensions or using a projective space can alleviate this problem [Bittner et al. 2001], but this loses the simplicity and efficiency of operating in image space. A direct extension to the 3D case is not as easy as the 2D case, since the parameter space of 3D lines has high dimensionality. In general, 3D lines have four degrees of freedom. However, since we are usually interested in an asymmetric visibility problem, another dimension is required to specify the origin of the lines, which are thus regarded as rays. Unfortunately, visibility in 2D flatland is not of much interest. It is important to note that most of the objects may be detected as occluded in flatland while actually being visible in 2.5D due to their various heights. However, Koltun et al. [Koltun et al. 2001] successfully used 2D visibility to solve a 2.5D problem, and Bittner et al. [Bittner et al. 2001] used 2D visibility to quickly detect the set of potential occluders for a given 2.5D object.

In this paper we introduce a mapping that inherently treats the third dimension. We present a novel non-singular parameterization for mapping 3D lines. We start by parameterizing lines in 2D and defining the footprints of segments. Then, we extend the mapping to deal with 3D shapes. The key point in our parameterization is that its domain is not the entire line space, but is restricted to describe only lines that intersect a unit square region (the viewcell). This parameterization is a factorization of the dimensions of the rays into horizontal and vertical components. The vertical components are conservatively represented by a pair of scalars mapped to the parameter space of the horizontal components.
This reduces the high dimensionality of the from-region visibility problem and translates it into a hidden-surface-removal problem under orthographic projection. The latter is solved and accelerated by conventional graphics hardware. One of the main contributions of this paper is the introduction of a from-region culling method that works at a rate comparable to from-point methods. The framework of the method is structurally similar to hierarchical image-based occlusion culling methods [Greene et al. 1993; Zhang et al. 1997]. The scene is traversed top-down via a hierarchical data structure (a kd-tree). Each kd-cell is first tested against the occluders encountered so far (in image space) to decide whether the kd-cell can be culled together with its sub-hierarchy. However, here the objects and kd-cells are not projected and drawn onto the image space but onto the parameter-space image. We show that drawing the footprint of a kd-cell into the line space is as costly as drawing a small number of polygons. Thus, the visibility test in the parameter space is implemented with the common z-buffer, similarly to the visibility tests used in image-space occlusion techniques [Greene et al. 1993; Zhang et al. 1997]. An important conceptual feature of our method is that each visible object and kd-cell is accessed and rendered only once. This requires the hierarchical traversal of the scene to be in an approximate front-to-back order, ensuring that the drawn footprints of the visible objects can be used as valid occluders of the kd-cells.

The rest of the paper is organized as follows. In Section 2 we describe the parameterization in 2D and define the footprints of line segments. Then, in Section 3 we show how to incorporate the third dimension into the mapping to deal with 3D shapes. Section 4 describes the from-region visibility culling algorithm based on our parameterization. The implementation and results are presented in Section 5, and we conclude in Section 6.
2 Ray parameterization

The standard duality of R^2 is a correspondence between 2D lines y = ax + b in the primal space and points (a, b) in the dual space. This parameterization suffers from singularities, since the values of the parameters a and b are not bounded and vertical lines cannot be mapped. We present a different parameterization of 2D lines. We choose to parameterize only the oriented lines (rays) that emerge from a given bounded region in the 2D plane. In other words, the parameterization represents the visibility lines that leave a given viewcell. This parameterization has no singularities and the parameter space is bounded. In addition, as shown below, all the rays that leave the viewcell and intersect a given segment form a footprint in the parameter space that can be represented by a few polygons.

2.1 The parameter space

Given a square viewcell, a representation of visibility rays that originate from this viewcell is defined as follows. Each ray can be represented by its two intersections with two concentric squares: an inner square and an outer square (see Figure 2). Hereafter, the term rays will always refer to rays leaving the viewcell. Parameters s and t are associated with the inner and outer squares, respectively (see Figure 2). They are assigned an arbitrary range, for example, the unit square 0 ≤ s, t < 1. Any ray that starts inside the viewcell must intersect both the inner and outer squares. Thus, each such ray r is represented by a pair of parameters s_r and t_r that correspond to its intersections with the inner and outer squares, respectively. Since the parameter space is bounded and each ray has a mapping, there are no singularities. As shown in Figure 2, a ray emanating from the viewcell is a half-infinite line that intersects some point p_t1 on the outer square. The extended ray, that is, the infinite oriented line coincident with the ray, intersects the inner square twice, at p_s1 and p_s2.
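The two-squares construction can be sketched in a few lines of code. The perimeter ordering, helper names, and default square sizes below are assumptions of this sketch, not the paper's exact conventions:

```python
def _perim(p, half):
    # Map a point on the boundary of the axis-aligned square
    # [-half, half]^2 to a perimeter parameter in [0, 1).  The edge
    # order (bottom, right, top, left) is a convention of this sketch.
    x, y = p
    eps = 1e-7
    if abs(y + half) < eps:                  # bottom edge
        return (x + half) / (8.0 * half)
    if abs(x - half) < eps:                  # right edge
        return 0.25 + (y + half) / (8.0 * half)
    if abs(y - half) < eps:                  # top edge
        return 0.5 + (half - x) / (8.0 * half)
    return 0.75 + (half - y) / (8.0 * half)  # left edge

def _line_hits(o, d, half):
    # Intersections of the full line o + u*d with the square boundary,
    # as (u, point) pairs sorted along the line.
    (ox, oy), (dx, dy) = o, d
    hits = []
    if abs(dx) > 1e-12:
        for side in (-half, half):
            u = (side - ox) / dx
            y = oy + u * dy
            if abs(y) <= half + 1e-9:
                hits.append((u, (side, y)))
    if abs(dy) > 1e-12:
        for side in (-half, half):
            u = (side - oy) / dy
            x = ox + u * dx
            if abs(x) <= half + 1e-9:
                hits.append((u, (x, side)))
    return sorted(hits)

def ray_st(o, d, inner=0.5, outer=1.0):
    # For a ray starting at o inside the viewcell with direction d:
    # t comes from the single forward crossing of the outer square,
    # s from the two crossings of the *extended* line with the inner
    # square, so a ray generally yields two (s, t) candidates; the
    # paper then keeps only pairs lying on parallel edges.
    _, t_p = next(h for h in _line_hits(o, d, outer) if h[0] > 0)
    t = _perim(t_p, outer)
    return [(_perim(p, inner), t) for _, p in _line_hits(o, d, inner)]
```

For example, `ray_st((0.0, 0.0), (1.0, 0.0))` returns two pairs that share the same t value, one from each crossing of the inner square.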
Thus, the ray can be mapped to two points in the parameter space: (s_1, t_1) and (s_2, t_1). It should be noted that although we may map a ray to two points, it has a unique representation in the parameter space. The intersection point p_t of a ray with the outer square is either on a vertical edge or a horizontal edge. We choose to map the ray only to points (s, t) such that the edges of the inner and outer

squares, associated with s and t, respectively, are parallel. As we shall see later, this avoids dealing with non-linear footprint boundaries. This parameterization is still unique and a mapping for any ray always exists. We choose to map the ray to two points whenever possible to avoid the need to distinguish between them.

Figure 2: Two rays leaving the viewcell (in blue) and meeting the endpoints of segment A. The upper ray intersects the outer square at p_t1 and the inner square twice, at p_s1 and p_s2, which lie on parallel edges. Thus, the ray is mapped to two points in the parameter space: (s_1, t_1) and (s_2, t_1). The lower ray is mapped to one point (s_1, t_2).

2.2 The footprint of a segment

Let us define the footprint of a segment S as the collection of all points in the parameter space that refer to rays that intersect S. To understand the shape of the footprint of a segment, we first consider the footprint of a single point. Then, the footprint of a segment is described using the footprints of its endpoints. Let q be some point in the primal space. A ray that starts at a point p_s on the viewcell boundary and intersects q hits the outer square boundary at some point p_t. Denote by t_q(s) the value of the parameter t that corresponds to that ray. The shape of the footprint of q is defined by the relation between s and t, that is, by the function t = t_q(s). Since the inner and outer squares are aligned, t_q(s) is a first-order rational function:

    t_q(s) = (αs + β) / (γs + δ),

where α, β, γ, δ are some real numbers (for details, see Appendix A).

There are two cases: (i) p_s and p_t lie on perpendicular boundaries, and (ii) p_s and p_t lie on parallel boundaries. In the latter case, t_q(s) becomes a linear function (see Appendix A). Since dealing with linear functions is more efficient, we consider only the second case when generating footprints. This is why we choose the mapping of a ray in the parameter space to be the (s, t) pairs that correspond to parallel edges only. As explained above, there is no danger in discarding the perpendicular pairs, since each ray can always be associated with at least one parallel pair. It follows that the footprint of a point is a collection of segments in the parameter space. To compute the footprint we need to consider the eight pairs of parallel edges of the squares. Each pair of edges defines a line t_q(s) = αs + β in the parameter space. Since the range of both s and t is bounded, the footprint of q is a segment on the line t_q(s) bounded by the domain of s and t. For example, when considering the rightmost parallel edges, 1/4 ≤ s, t ≤ 1/2 (see Figure 3).

Figure 3: The footprint of a point is a line. The rays passing through a point in the primal space (a) are mapped to a line in the parameter space (b). The rays passing through a segment A in the primal space are mapped to the area bounded by the two footprints of the endpoints of A.

We are now ready to characterize the footprint of a segment A = q_0 q_1. We will show that it is a collection of polygons in the parameter space. Let us look at each pair of parallel edges of the parameter squares separately, and capture the rays that hit the parallel edges and the segment. For a fixed value of s on the inner edge, the range of corresponding t values defines a vertical segment {(s, t) : t ∈ (t_q0(s), t_q1(s))} in the parameter space (see Figure 3). Since t_q0(s) and t_q1(s) are linear functions of s, the collection of the vertical segments over all s defines a polygon (possibly non-simple) in the parameter space. Given an arbitrary segment, its footprint consists of up to six polygons out of the eight possible pairs of parallel square edges (see Figure 1(b)). However, in most cases no more than four are required.
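For a pair of parallel edges, the linear map t_q(s) follows from similar triangles. The sketch below works with raw edge y-coordinates rather than normalized perimeter parameters (a simplification), and the function name is hypothetical:

```python
def footprint_line(q, inner=0.5, outer=1.0):
    """Linear footprint of a point q for the right-hand pair of
    parallel edges (inner edge x = inner, outer edge x = outer), with
    q strictly to the right of the inner edge.  Returns (alpha, beta)
    such that y_t = alpha * y_s + beta, where y_s, y_t are the raw
    y-coordinates of the ray's crossings of the two edges.
    (Hypothetical helper; coordinates are not normalized.)"""
    x0, y0 = q
    # Similar triangles along the ray from (inner, y_s) through q:
    #   y_t = y_s + k * (y0 - y_s),  k = (outer - inner) / (x0 - inner)
    k = (outer - inner) / (x0 - inner)
    return 1.0 - k, k * y0   # (alpha, beta)
```

For example, for q = (2, 0) the map is y_t = (2/3) y_s, which one can confirm by intersecting the ray from (0.5, 0.3) through q with the edge x = 1.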
Note that only eight of the sixteen sub-squares in the parameter space are actually used, since the other eight correspond to perpendicular values of s and t.

3 The footprints of 3D shapes

So far we have shown the parameterization of rays leaving a viewcell and passing through a 2D segment. Now let us extend the definition of the parameter space to handle 3D rays by adding a third parameter v, which expresses the ray elevation angle. The parameters s and t define the horizontal orientation of rays. Let R be a vertical trapezoid whose two bases are vertical lines (see Figure 4). Its top and bottom edges (A_1 and A_2) have the same vertical projection A_0. All 3D rays passing through the segment

Figure 4: Vertical 3D trapezoid. The top segment A_1 and the bottom segment A_2 both have the same vertical projection A_0. A ray with fixed (s_0, t_0) that sees the top of the trapezoid (the point p_1) has a vertical elevation angle α = arctan(H/L).

A_1 at a point p_1, with fixed horizontal parameters (s_0, t_0), define a vertical plane in the 3D primal space. The intersection of this plane with the viewcell is a region denoted by K. All the rays emanating from K and passing through p_1 form a range of elevation angles [α_min(p_1), α_max(p_1)]. The values of α_min and α_max are attained at the boundary of the viewcell. Assuming the viewcell has a height ε and p_1 is higher than ε, α_max(p_1) is attained at the front bottom of the viewcell and α_min(p_1) at the back top. Now let us consider the intersection of the vertical plane with the trapezoid R. The intersection defines a vertical segment p_1 p_2. We define the umbra of the vertical segment as the occluded area with respect to the viewcell (see Figure 5). In other words, the two values α_min(p_1) and α_max(p_2) define the umbra of p_1 p_2. Recall that the values α_min(p_1) and α_max(p_2) are associated with the fixed parameters (s_0, t_0). Thus, for each given pair of horizontal parameters we maintain these two values on the v-axis, which expresses the occlusion domain. Given a point p, we compute its elevation value by α = arctan(H/L), where H is the height of p and L is the horizontal distance between the ray origin and p. Let's go back to the vertical trapezoid R (see Figure 4). All the rays leaving the viewcell and passing through R have a polygonal footprint in the (s, t) plane, which is the footprint of the segment A_0. The two v-values associated with each point (s, t) in the footprint of A_0 define a segment along the v-axis.
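The elevation-angle bounds can be sketched as follows; the function and parameter names (l_near, l_far for the span of horizontal distances within the viewcell slice) are hypothetical, and the choice of corners follows the argument above:

```python
import math

def elevation_range(H, l_near, l_far, eps):
    """Conservative elevation-angle range [alpha_min, alpha_max] for
    rays from a viewcell of height eps to a point at height H > eps,
    where the ray origins within the fixed (s, t) vertical plane lie
    at horizontal distances between l_near and l_far from the point.
    As argued in the text, alpha_max is attained at the front bottom
    of the viewcell (shortest distance, lowest origin) and alpha_min
    at the back top (longest distance, highest origin)."""
    alpha_max = math.atan2(H, l_near)       # arctan(H / l_near)
    alpha_min = math.atan2(H - eps, l_far)  # arctan((H - eps) / l_far)
    return alpha_min, alpha_max
```

Applying this to the tops and bottoms of a vertical segment yields the two v-values that bound its umbra.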
The collection of all such segments forms a prism in the (s, t, v) space, bounded by two rational surfaces, which can be conservatively approximated by planes (see Appendix B). This prism represents the occlusion domain of R; in other words, the prism is the footprint of R. Figure 6 visualizes three trapezoids in the primal space and the mappings of their top edges as 3D polygons in the parameter space. Given a set O of vertical trapezoids, the union of their footprints represents their aggregate occlusion domain. Given a vertical trapezoid U that is behind O with respect to the viewcell, O occludes U if the prism footprint of U is contained in the union of the prisms of O. The shape of this union can be non-trivial; in particular, for a pair of (s, t) parameters, the range of corresponding v-values can be non-contiguous. For practical reasons, we would like to express the v-range by just two values. Therefore, we maintain a subset of the union such that for each (s, t) the v-values are contiguous. The visibility test based on this subset is conservative, since if U is visible, then its footprint is not fully contained in the entire union and therefore cannot be contained in a subset of the union. As a result, the conservative footprint of O can be represented by two depth maps over the (s, t) plane. We call these two maps the occlusion z-maps, and the visibility tests are applied as two simple depth tests against the two depth maps (although the maps encode the v-values, we use the notation z due to their implementation with a z-buffer). The occlusion z-maps are built incrementally during the front-to-back traversal of the scene, where the footprint of each vertical trapezoid detected as potentially visible augments the occlusion z-map. An interesting property of the mapping is that the third dimension of the primal space is encoded as depth values in the parameter space, and the visibility in the primal space is solved by depth tests in the parameter space.
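Per pixel, the conservative containment test amounts to two depth comparisons. A minimal sketch over plain Python lists, with hypothetical names (the paper performs this in the z-buffer):

```python
def is_occluded(fp_lo, fp_hi, z_bottom, z_top):
    """Conservative containment test of an occludee's footprint
    against the two occlusion z-maps, over the pixels it covers.

    z_top holds the alpha_min (upper umbra bound) values and z_bottom
    the alpha_max (lower bound) values; the occludee, whose footprint
    spans [lo, hi] in v at each pixel, is reported hidden only if its
    whole v-range lies inside the stored umbra at every pixel."""
    return all(lo >= zb and hi <= zt
               for lo, hi, zb, zt in zip(fp_lo, fp_hi, z_bottom, z_top))
```

If any pixel fails either comparison, the occludee is potentially visible, mirroring the rule that a single visible pixel makes a trapezoid visible.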
Figure 5: The values α_min(p_1) and α_max(p_2) define the occlusion cast by R within a plane with fixed parameters (s, t). The box can be detected as occluded by the two values.

Figure 6: Example of an occlusion based on the combined umbrae of the top edges of two trapezoids. (a) Primal space, where the yellow trapezoid in the back is occluded. (b) Parameter space, where the yellow footprint polygon is occluded by the union of the two other footprints along the v-direction.

4 The visibility culling operation

The method described in the previous section deals with vertical trapezoids. To handle scenes consisting of general models, each object is associated with an inner approximation consisting of a union of bounded vertical prisms (BVPs), similarly to [Downs et al. 2001]. However, our prisms do not necessarily extend from the ground and thus can approximate floating parts, not just a height field (see Figure 8). The BVPs consist of vertical trapezoids whose footprints are projected and drawn into the z-buffer to form the occlusion z-map. The original objects are also conservatively approximated by axis-aligned bounding boxes and inserted into a kd-tree. The kd-tree has two functions. First, it serves as a means to traverse the objects in a front-to-back order. Second, it allows the culling of large portions of the model. The kd-tree is traversed top-down, so that early on, large kd-tree cells of the hierarchy can be detected as hidden and culled with all their sub-trees. If a leaf of the tree is still visible, then the bounding boxes associated with it are tested for visibility. If a bounding box is visible, the object bounded by it is defined as visible and its BVPs are drawn into the occlusion z-map. A key point in the design of this algorithm is that each visible kd-cell is accessed only once, and each object in a visible kd-leaf cell is

accessed only once. The occlusion z-map is incrementally updated as the footprints of the potentially visible objects are drawn (see Figure 1(b)). Typically, the close objects fill the occlusion z-map with values that form an efficient occlusion fusion in the parameter space. After traversing the close front objects, most of the larger kd-cells are detected as hidden. As discussed above, two depth maps represent a conservative footprint of a set of occluders. These two depth maps are conservatively discretized (as in [Koltun et al. 2001]) by two z-buffers. The top z-buffer represents the α_min values, and the bottom one represents the α_max values. Adding the footprint F of an occluder into the two z-maps is not a straightforward operation, since the augmented umbra needs to be represented by two values. The discrete representation allows the operation to be applied on a per-pixel basis, realized by standard frame-buffer operations. Thus, given F, the range of F(s, t) at the pixel level must overlap the corresponding current range Z(s, t) in the z-maps. If F(s, t) and Z(s, t) are disjoint, F(s, t) is ignored. Otherwise, F(s, t) can extend the range in Z(s, t) (see Figure 7).

Figure 7: In (a), the new footprint (in blue) augments the current ranges (in red) represented by the z-maps. (b) shows the result of applying the overlap test on a per-pixel basis, thus augmenting only overlapping ranges. The figure illustrates a slice for a fixed t.

Figure 8: A scene composed of T- and H-shaped buildings.

The overlap test is implemented using OpenGL's stencil buffer, by masking out the pixels where the corresponding vertical ranges are disjoint. This requires a 2-bit stencil buffer to store three possible states: empty, extendable, and non-extendable. The stencil pixels are initially set to empty.
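The net per-pixel update that the stencil passes implement can be sketched directly; the encoding of empty pixels as an inverted range and the function name are assumptions of this sketch:

```python
def augment(z_bot, z_top, f_bot, f_top):
    """Per-pixel umbra extension, simulated over parallel lists of
    per-pixel values (the paper realizes this with a 2-bit stencil
    mask: empty / extendable / non-extendable).  An empty pixel is
    encoded here as an inverted range (z_bot > z_top).  A footprint's
    v-range only extends a pixel whose stored range it overlaps;
    disjoint ranges are ignored, so the stored umbra stays contiguous
    and the test remains conservative."""
    new_bot, new_top = [], []
    for zb, zt, fb, ft in zip(z_bot, z_top, f_bot, f_top):
        if zb > zt:                    # empty: take the footprint range
            new_bot.append(fb)
            new_top.append(ft)
        elif fb <= zt and ft >= zb:    # overlapping: extend (extendable)
            new_bot.append(min(zb, fb))
            new_top.append(max(zt, ft))
        else:                          # disjoint: ignore (non-extendable)
            new_bot.append(zb)
            new_top.append(zt)
    return new_bot, new_top
```

Keeping only one contiguous range per pixel is what makes two z-buffers sufficient, at the price of under-approximating the true union of umbrae.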
To set up the mask for the top z-buffer, we draw the bottom footprint polygons to the stencil buffer only, replacing empty pixels with extendable, and setting pixels non-extendable where the depth test has succeeded. To augment the top ranges, we then draw the top footprint polygons, setting the stencil test to mask out the non-extendable pixels. The bottom z-buffer is handled similarly. Testing whether a given trapezoid R is visible is straightforward: it simply requires z-testing the top and bottom polygonal footprints of R against the top and bottom z-buffers. If any pixel is reported to be visible, then R is visible. This latter visibility test is essentially free if the hardware provides a feedback loop that indicates whether the drawn polygons would have made a change to the z-buffer [Klosowski and Silva 2001].

5 Implementation and results

We have implemented the mapping of trapezoids to their polygonal footprints and integrated it into a hierarchical occlusion culling mechanism based on a kd-tree data structure. The test scenes are randomly generated outdoor urban models, controlled by a large set of parameters with which we can define their size, density, distribution of heights, regularity, etc. It should be emphasized that such scenes are significantly more challenging than the typical urban models commonly used, where only the nearby buildings are visible. We have forced some buildings to be high enough to be seen from a distance (see the red buildings in Figure 1(a)). Moreover, the high buildings prevent the early culling of large cells in the hierarchy. This also implies that it is not easy to predefine the effective occluders. We have also generated non-simple models that have the shape of the letters H and T (see Figure 8). Each such building consists of three prisms, one of which does not extend from the ground. Unlike common reconstructed urban models, here the building consists of several floors, each being a 3D box.
Note that the buildings and the boxes are not axis-aligned.

Table 1: Culling times (in milliseconds) with respect to model size, for models ranging from 1.2K to 1,308.7K trapezoids. Total times are broken into the footprint transform times and the frame-buffer operation times. The latter is further broken into two columns: one showing the total time, and the other showing specifically the time spent on reading the frame buffer. This is due to the implementation of the occlusion test in software, which requires reading the frame buffer; hardware support would yield a five-fold speedup.

Tables 1 and 2 summarize the results of computing the from-region visibility of the scenes for various sizes of scenes and of viewcell regions. Results were gathered on a 1 GHz Pentium III machine with an NVIDIA Quadro2 MXR graphics card. The results show that the visibility of a large urban model consisting of thousands of buildings and millions of trapezoids is computed in less than a second. It should be noted that the number of polygons in the scene is irrelevant, since only the kd-cells and bounding boxes are tested. A large number of polygons can only increase the importance of applying occlusion culling compared to a naive rendering of the entire scene. The costly part of the algorithm is the occlusion test (see Table 1). We implemented this test by drawing the footprint polygons into the color buffer using a single color and scanning the buffer to test whether any portion of it was drawn, while the actual modification of the z-buffer is disabled. We use a 128×128 resolution for the top and bottom z-maps. However, since only eight square parts of each z-map are used (see Figure 1(b)), they are compactly encoded into a single 128×128 map. Using a 256×256 resolution yields

a less conservative PVS, but the buffer read is slower. Some vendors supply a hardware implementation of an occlusion flag that indicates whether, during rendering, any pixel is detected as visible [Klosowski and Silva 2001]. This can of course eliminate the visibility testing time altogether and further accelerate the from-region occlusion culling.

6 Conclusions

We have presented a novel parameterization of 3D rays with which all the rays leaving a viewcell and blocked by an object in the primal space are mapped to a polygonal footprint in the parameter space. This allows a conservative from-region visibility to be computed by applying simple depth tests in the parameter space under orthographic projection. In doing so, the 5D from-region visibility problem is transformed into, and solved as, a simpler visibility problem in 3D space. It is interesting to understand this reduction in the dimensionality of the problem. In from-point visibility the origin of the rays is fixed, while in from-region visibility the origin of a visibility ray has three degrees of freedom (x, y, z) (the other two degrees describe the ray directions). First, one dimension is removed by the observation that it is enough to consider rays leaving the faces of the viewcell. The four remaining dimensions are then factored into horizontal and vertical components. Two dimensions describe the horizontal direction and are mapped to the (s, t) plane. To express the vertical component exactly we would need two more parameters. Instead, the beam of rays leaving the viewcell with a fixed (s, t) towards a given occluder is represented by just two scalar values. For n occluders we would have needed two arrays of n such scalars, that is, a two-dimensional array. However, by exploiting the vertical coherence, we reduce these two dimensions to just two scalars. One scalar per (s, t) is sufficient to deal with a height field (2.5D). With two (or more) values per (s, t) we can deal with scenes that have a higher vertical complexity.
An interesting observation is that a conservative from-point visibility can be computed in the parameter space by fixing s. This requires treating only a single column of pixels, where the vertical information of the scene is encoded as per-pixel depth values. Clearly, the from-region problem has a higher dimensionality than the from-point problem. However, exploiting the per-pixel hardware operations reduces the actual cost of the visibility test. This is effective assuming a scene of low vertical complexity. While in practice this is a reasonable assumption, our method makes a weaker one: we require only that the BVPs be effective occluders. Since our z-map performs an occluder fusion operation, the aggregate occlusion cast by the collection of BVPs is effective. For example, the T-H scene (see Figure 8) consists of vertical BVPs of low vertical complexity and horizontal BVPs of higher vertical complexity. A notable advantage of our method is that it does not require pre-selecting the occluders; rather, they accumulate during the scene traversal. Pre-selection of occluders hampers occluder fusion, since small parts that could be fused into significant occluders are missing. It should be recalled that the cost of the from-region visibility is further amortized, since the visibility set computed is valid for many frames. As can be learned from Table 2, the computation time is not too sensitive to the viewcell size. The visibility from a larger viewcell requires more time simply because more objects are visible. With the amortization factor, the cost of the from-region visibility computation becomes negligible compared to the rendering. In client-server applications, where the PVS is computed on the server and transmitted to the client, a from-region visibility is imperative to avoid communication latency. An online from-region visibility avoids the need to store a huge amount of precalculated visibility data.
Cell Times in milliseconds Tested kd-tree cells Size Total Footprint Frame buf. Visible Occluded 44x x x x Table 2: The computation time (in milliseconds) as a function of the viewcell size in a scene consisting of 1,308,696 trapezoids. The table also shows the number of visible and occluded kd-cells tested. Since the computation time is primarily dependent on the total number of kd-tree cells tested, the visibility set grows slowly with respect to the viewcell size. Figure 9: A view over the buildings close to the viewcell; only the visible parts are colored. References AIREY, J. M., ROHLF, J. H., AND BROOKS, JR., F. P Towards image realism with interactive update rates in complex virtual building environments. Computer Graphics (1990 Symposium on Interactive 3D Graphics) 24, 2 (Mar.), ANDUJAR, C., SAONA-VAZQUEZ, C., NAVAZO, I., AND BRUNET, P Integrating occlusion culling and levels of details through hardly-visible sets. Computer Graphics Forum 19,3. BITTNER, J., HAVRAN, V., AND SLAVIK, P Hierarchical visibility culling with occlusion trees. In Proceedings of Computer Graphics International 98, BITTNER, J., WONKA, P., AND WIMMER, M Visibility preprocessing for urban scenes using line space subdivision. 9th Pacific Conference on Computer Graphics and Applications (October), ISBN COHEN-OR, D., FIBICH, G., HALPERIN, D., AND ZADICARIO, E Conservative visibility and strong occlusion for viewspace partitioning of densely occluded scenes. Computer Graphics Forum 17, 3, COORG, S., AND TELLER, S Temporally coherent conservative visibility. In Proc. 12th Annu. ACM Sympos. Comput. Geom., DOWNS, L., MÖLLER, T., AND SÉQUIN, C. H Occlusion horizons for driving through urban scenery ACM Symposium on Interactive 3D Graphics (March), ISBN DURAND, F., DRETTAKIS, G., THOLLOT, J., AND PUECH, C Conservative visibility preprocessing using extended projections. Proceedings of SIGGRAPH 2000 (July), FUNKHOUSER, T. A., SÉQUIN, C. H., AND TELLER, S. 

Appendix A

In the following, we describe the detailed steps in calculating the relation $t = t_q(s)$ (see Section 2), showing that the parameterization by two perpendicular lines is not linear. Let $q = (x_0, y_0)$ be some point in the primal space. The parametric equation of the ray $r$ that starts at a point $p_s$ on the viewcell boundary and intersects $q$ is $p_s + \lambda(q - p_s)$. This ray hits the outer square boundary at a point $p_t$. Denote by $t_q(s)$ the value of the parameter $t$ that corresponds to that ray.
The equation of the outer square boundary segment is $N_0 + t(N_1 - N_0)$, where $N_0$ and $N_1$ are the endpoints of the outer square edge intersected by the ray. We need to solve the following:
$$p_s + \lambda(q - p_s) = N_0 + t(N_1 - N_0).$$
By regrouping,
$$N_0 - p_s = \lambda(q - p_s) - t(N_1 - N_0).$$
Denote by $n$ the normal vector to $(q - p_s)$. Then, by taking a dot product with $n$ on both sides, we get
$$\langle N_0 - p_s,\, n\rangle = -t\,\langle N_1 - N_0,\, n\rangle.$$
Therefore,
$$t = -\frac{\langle N_0 - p_s,\, n\rangle}{\langle N_1 - N_0,\, n\rangle}.$$
We can write $p_s$ and $n$ as expressions depending on $s$. For example, when $p_s$ is on the top edge of the inner square, we can write it as $(-1 + 8s,\; 1)$. Substituting these explicit expressions in the last equation shows that $t_q(s)$ is a first-order rational function:
$$t_q(s) = \frac{\alpha s + \beta}{\gamma s + \delta},$$
where $\alpha, \beta, \gamma, \delta$ are real numbers that depend on the relative position of the inner and outer square edges hit by the ray $r$. When these edges are parallel, $\gamma = 0$ and the function $t_q(s)$ is linear, as expected.

Appendix B

The elevation angle $\alpha$ is a function of $H$ and $L$ (see Section 3). Given a 3D segment $u_0 u_1$, we would like to interpolate $v = H/L$ as a function of $s$ and $t$, in order to bound the volume $v(s,t)$ by linear planes from above and below.

Figure 10: The red line denotes the 3D segment, as seen from above. The height of endpoint $u_0$ is $h_0$ and the height of $u_1$ is $h_1$. A ray is shot from point $v_0$ on the viewcell boundary to the point $v_1$ on the outer square boundary. $L$ is the horizontal distance between the ray origin and the segment.

In Figure 10, $L = \lambda_1 \lVert v_1 - v_0 \rVert$. Define $n_1 \perp u_0 u_1$. As seen in Section 2, $\lambda_1$ can be calculated by
$$\lambda_1 = \frac{\langle u_0 - v_0,\, n_1\rangle}{\langle v_1 - v_0,\, n_1\rangle}.$$
Since $v_0 = M_0 + s(M_1 - M_0)$ and $v_1 = N_0 + t(N_1 - N_0)$, we obtain by substitution that
$$\lambda_1 = \frac{a_1 s + a_3}{b_1 s + b_2 t + b_3},$$
where
$$a_1 = -\langle M_1 - M_0,\, n_1\rangle,\quad a_3 = \langle u_0 - M_0,\, n_1\rangle,\quad b_1 = a_1,\quad b_2 = \langle N_1 - N_0,\, n_1\rangle,\quad b_3 = \langle N_0 - M_0,\, n_1\rangle.$$
Thus, $L$ is a first-order rational function of $s$ and $t$.
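The closed form for $\lambda_1$ can be checked numerically against a direct ray-segment intersection. The specific edges and segment below are arbitrary illustrative values, not taken from the paper:

```python
import numpy as np

# Arbitrary example geometry (top view): an inner-square (viewcell)
# edge M0M1, an outer-square edge N0N1, and an occluder segment u0u1.
M0, M1 = np.array([-1.0, 1.0]), np.array([1.0, 1.0])
N0, N1 = np.array([-2.0, 2.0]), np.array([2.0, 2.0])
u0, u1 = np.array([-0.5, 1.5]), np.array([1.0, 1.7])

d = u1 - u0
n1 = np.array([-d[1], d[0]])          # n1 perpendicular to u0u1

def lambda1_direct(s, t):
    # Intersect the ray v0 -> v1 with the line through u0u1 directly.
    v0 = M0 + s * (M1 - M0)
    v1 = N0 + t * (N1 - N0)
    return np.dot(u0 - v0, n1) / np.dot(v1 - v0, n1)

# Coefficients of the first-order rational form (a1*s + a3)/(b1*s + b2*t + b3).
a1 = -np.dot(M1 - M0, n1)
a3 = np.dot(u0 - M0, n1)
b1, b2, b3 = a1, np.dot(N1 - N0, n1), np.dot(N0 - M0, n1)

def lambda1_rational(s, t):
    return (a1 * s + a3) / (b1 * s + b2 * t + b3)

# The two computations agree over the parameter domain.
for s in (0.1, 0.5, 0.9):
    for t in (0.2, 0.7):
        assert abs(lambda1_direct(s, t) - lambda1_rational(s, t)) < 1e-12
```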
Now, let us proceed to compute the function $H$, which is the height of the segment in the given viewing direction. Let $h_0$ and $h_1$ be the heights of the endpoints $u_0$ and $u_1$, respectively. Then $H = h_0 + \lambda_2 (h_1 - h_0)$, so it suffices to calculate $\lambda_2$ as a function of $s$ and $t$. Denote $n_2 \perp v_0 v_1$. Then, similarly to $\lambda_1$, $\lambda_2$ can be written as
$$\lambda_2 = \frac{\langle v_0 - u_0,\, n_2\rangle}{\langle u_1 - u_0,\, n_2\rangle}.$$
Writing $M_i = (M_{ix}, M_{iy})$ and $N_i = (N_{ix}, N_{iy})$ for $i = 0, 1$, and denoting $M_x = M_{1x} - M_{0x}$, $M_y = M_{1y} - M_{0y}$, $N_x = N_{1x} - N_{0x}$, $N_y = N_{1y} - N_{0y}$, we can write $n_2$ explicitly as
$$n_2 = \big( N_y t - M_y s + N_{0y} - M_{0y},\;\; M_x s - N_x t + M_{0x} - N_{0x} \big).$$
After substituting this explicit form into the equation for $\lambda_2$, we get
$$\lambda_2 = \frac{a_1 s + a_2 t + a_3}{b_1 s + b_2 t + b_3},$$
where the numbers $a_j$ and $b_j$, $j = 1,2,3$, are constants (they depend only on $M_i$, $N_i$ and the segment endpoints). Thus, $H$ is also a first-order rational function of $s$ and $t$.

How do we linearly approximate $H/L$? First, either $H$ or $L$ is conservatively approximated by a constant; it is preferable to pick the function that changes less. That is, if the height does not change much ($h_0 \approx h_1$), then $H$ is set to a constant. Otherwise, $L$ is set to a constant, namely the maximal/minimal distance of a point in the viewcell from the segment, for an occluder/occludee respectively. The function that is not set to a constant is approximated by a plane. Suppose we have
$$F(s,t) = \frac{a_1 s + a_2 t + a_3}{b_1 s + b_2 t + b_3} = \frac{F_1(s,t)}{F_2(s,t)},$$

where $F = H$ or $F = L$. The point $(s,t)$ lies in a compact polygonal domain $P$. If the problem is well posed, the denominator $F_2(s,t)$ cannot vanish in this domain; therefore $F_2(s,t)$ is either everywhere positive or everywhere negative on $P$. Then $F_1(s,t)$ has the same sign as $F_2(s,t)$, because $F(s,t)$ is positive (recall that $F$ approximates $\lambda_2$, which is a positive value between 0 and 1). Therefore, all we need is to compute $m_1 = \max F_2(s,t)$ and $m_2 = \min F_2(s,t)$ over the polygonal domain $P$. Since $F_2$ is linear, this is done simply by testing the vertices of $P$. Then, supposing w.l.o.g. that $F_2$ is positive,
$$\frac{a_1 s + a_2 t + a_3}{m_1} \;\le\; F(s,t) \;\le\; \frac{a_1 s + a_2 t + a_3}{m_2}.$$
For an even tighter approximation, we triangulate $P$ and compute linear approximations of $F$ over each triangle.
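The vertex-based bounding scheme can be sketched as follows; the coefficients and the triangular domain are arbitrary illustrative values chosen so that both $F_1$ and $F_2$ are positive on $P$:

```python
import numpy as np

# Linear bounding of a first-order rational function F = F1/F2 over a
# polygonal domain P, by taking min/max of the linear denominator F2
# at the vertices of P. All numbers here are illustrative.
a = np.array([0.5, -0.3, 2.0])   # F1(s,t) = a[0]*s + a[1]*t + a[2]
b = np.array([0.2, 0.4, 1.0])    # F2(s,t) = b[0]*s + b[1]*t + b[2]

P = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]   # vertices of the domain

def lin(c, s, t):
    return c[0] * s + c[1] * t + c[2]

# F2 is linear, so its extrema over P are attained at the vertices.
vals = [lin(b, s, t) for (s, t) in P]
m1, m2 = max(vals), min(vals)
assert m2 > 0   # F2 must not vanish (here it is positive) on P

def bounds(s, t):
    # Conservative linear lower/upper bounds on F(s,t) = F1/F2.
    f1 = lin(a, s, t)
    return f1 / m1, f1 / m2

# At sample points inside the triangle, the two planes sandwich F.
for (s, t) in [(0.3, 0.2), (0.5, 0.5), (0.45, 0.1)]:
    F = lin(a, s, t) / lin(b, s, t)
    lo, hi = bounds(s, t)
    assert lo <= F <= hi
```

Dividing the single linear numerator by the constant extrema of the denominator is what makes both bounds planes in $(s,t)$, so they can be rendered directly as footprint geometry.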


More information

Lecture 21: Shading. put your trust in my shadow. Judges 9:15

Lecture 21: Shading. put your trust in my shadow. Judges 9:15 Lecture 21: Shading put your trust in my shadow. Judges 9:15 1. Polygonal Models Polygonal models are one of the most common representations for geometry in Computer Graphics. Polygonal models are popular

More information

Chapter 5. Projections and Rendering

Chapter 5. Projections and Rendering Chapter 5 Projections and Rendering Topics: Perspective Projections The rendering pipeline In order to view manipulate and view a graphics object we must find ways of storing it a computer-compatible way.

More information

Computer Graphics. - Rasterization - Philipp Slusallek

Computer Graphics. - Rasterization - Philipp Slusallek Computer Graphics - Rasterization - Philipp Slusallek Rasterization Definition Given some geometry (point, 2D line, circle, triangle, polygon, ), specify which pixels of a raster display each primitive

More information

Integrating Occlusion Culling with View-Dependent Rendering

Integrating Occlusion Culling with View-Dependent Rendering Integrating Occlusion Culling with View-Dependent Rendering Jihad El-Sana Neta Sokolovsky Cláudio T. Silva Ben-Gurion University Ben-Gurion University AT&T Abstract We present a novel approach that integrates

More information

Interactive Collision Detection for Engineering Plants based on Large-Scale Point-Clouds

Interactive Collision Detection for Engineering Plants based on Large-Scale Point-Clouds 1 Interactive Collision Detection for Engineering Plants based on Large-Scale Point-Clouds Takeru Niwa 1 and Hiroshi Masuda 2 1 The University of Electro-Communications, takeru.niwa@uec.ac.jp 2 The University

More information

Lecture 11: Ray tracing (cont.)

Lecture 11: Ray tracing (cont.) Interactive Computer Graphics Ray tracing - Summary Lecture 11: Ray tracing (cont.) Graphics Lecture 10: Slide 1 Some slides adopted from H. Pfister, Harvard Graphics Lecture 10: Slide 2 Ray tracing -

More information

SOUND SPATIALIZATION BASED ON FAST BEAM TRACING IN THE DUAL SPACE. Marco Foco, Pietro Polotti, Augusto Sarti, Stefano Tubaro

SOUND SPATIALIZATION BASED ON FAST BEAM TRACING IN THE DUAL SPACE. Marco Foco, Pietro Polotti, Augusto Sarti, Stefano Tubaro SOUND SPATIALIZATION BASED ON FAST BEAM TRACING IN THE DUAL SPACE Marco Foco, Pietro Polotti, Augusto Sarti, Stefano Tubaro Dipartimento di Elettronica e Informazione Politecnico di Milano Piazza L. Da

More information

Egemen Tanin, Tahsin M. Kurc, Cevdet Aykanat, Bulent Ozguc. Abstract. Direct Volume Rendering (DVR) is a powerful technique for

Egemen Tanin, Tahsin M. Kurc, Cevdet Aykanat, Bulent Ozguc. Abstract. Direct Volume Rendering (DVR) is a powerful technique for Comparison of Two Image-Space Subdivision Algorithms for Direct Volume Rendering on Distributed-Memory Multicomputers Egemen Tanin, Tahsin M. Kurc, Cevdet Aykanat, Bulent Ozguc Dept. of Computer Eng. and

More information

Sung-Eui Yoon ( 윤성의 )

Sung-Eui Yoon ( 윤성의 ) CS380: Computer Graphics Ray Tracing Sung-Eui Yoon ( 윤성의 ) Course URL: http://sglab.kaist.ac.kr/~sungeui/cg/ Class Objectives Understand overall algorithm of recursive ray tracing Ray generations Intersection

More information

Lecture 25 of 41. Spatial Sorting: Binary Space Partitioning Quadtrees & Octrees

Lecture 25 of 41. Spatial Sorting: Binary Space Partitioning Quadtrees & Octrees Spatial Sorting: Binary Space Partitioning Quadtrees & Octrees William H. Hsu Department of Computing and Information Sciences, KSU KSOL course pages: http://bit.ly/hgvxlh / http://bit.ly/evizre Public

More information

Computer Graphics. Bing-Yu Chen National Taiwan University The University of Tokyo

Computer Graphics. Bing-Yu Chen National Taiwan University The University of Tokyo Computer Graphics Bing-Yu Chen National Taiwan University The University of Tokyo Hidden-Surface Removal Back-Face Culling The Depth-Sort Algorithm Binary Space-Partitioning Trees The z-buffer Algorithm

More information

Visible-Surface Detection Methods. Chapter? Intro. to Computer Graphics Spring 2008, Y. G. Shin

Visible-Surface Detection Methods. Chapter? Intro. to Computer Graphics Spring 2008, Y. G. Shin Visible-Surface Detection Methods Chapter? Intro. to Computer Graphics Spring 2008, Y. G. Shin The Visibility Problem [Problem Statement] GIVEN: a set of 3-D surfaces, a projection from 3-D to 2-D screen,

More information

Speeding up your game

Speeding up your game Speeding up your game The scene graph Culling techniques Level-of-detail rendering (LODs) Collision detection Resources and pointers (adapted by Marc Levoy from a lecture by Tomas Möller, using material

More information

Scene Management. Video Game Technologies 11498: MSc in Computer Science and Engineering 11156: MSc in Game Design and Development

Scene Management. Video Game Technologies 11498: MSc in Computer Science and Engineering 11156: MSc in Game Design and Development Video Game Technologies 11498: MSc in Computer Science and Engineering 11156: MSc in Game Design and Development Chap. 5 Scene Management Overview Scene Management vs Rendering This chapter is about rendering

More information

Computing Visibility. Backface Culling for General Visibility. One More Trick with Planes. BSP Trees Ray Casting Depth Buffering Quiz

Computing Visibility. Backface Culling for General Visibility. One More Trick with Planes. BSP Trees Ray Casting Depth Buffering Quiz Computing Visibility BSP Trees Ray Casting Depth Buffering Quiz Power of Plane Equations We ve gotten a lot of mileage out of one simple equation. Basis for D outcode-clipping Basis for plane-at-a-time

More information

Overview. A real-time shadow approach for an Augmented Reality application using shadow volumes. Augmented Reality.

Overview. A real-time shadow approach for an Augmented Reality application using shadow volumes. Augmented Reality. Overview A real-time shadow approach for an Augmented Reality application using shadow volumes Introduction of Concepts Standard Stenciled Shadow Volumes Method Proposed Approach in AR Application Experimental

More information

Parallel and perspective projections such as used in representing 3d images.

Parallel and perspective projections such as used in representing 3d images. Chapter 5 Rotations and projections In this chapter we discuss Rotations Parallel and perspective projections such as used in representing 3d images. Using coordinates and matrices, parallel projections

More information

in order to apply the depth buffer operations.

in order to apply the depth buffer operations. into spans (we called span the intersection segment between a projected patch and a proxel array line). These spans are generated simultaneously for all the patches on the full processor array, as the

More information

Self-shadowing Bumpmap using 3D Texture Hardware

Self-shadowing Bumpmap using 3D Texture Hardware Self-shadowing Bumpmap using 3D Texture Hardware Tom Forsyth, Mucky Foot Productions Ltd. TomF@muckyfoot.com Abstract Self-shadowing bumpmaps add realism and depth to scenes and provide important visual

More information

4.5 VISIBLE SURFACE DETECTION METHODES

4.5 VISIBLE SURFACE DETECTION METHODES 4.5 VISIBLE SURFACE DETECTION METHODES A major consideration in the generation of realistic graphics displays is identifying those parts of a scene that are visible from a chosen viewing position. There

More information

CMSC427 Advanced shading getting global illumination by local methods. Credit: slides Prof. Zwicker

CMSC427 Advanced shading getting global illumination by local methods. Credit: slides Prof. Zwicker CMSC427 Advanced shading getting global illumination by local methods Credit: slides Prof. Zwicker Topics Shadows Environment maps Reflection mapping Irradiance environment maps Ambient occlusion Reflection

More information

3D Building Generalisation and Visualisation

3D Building Generalisation and Visualisation Photogrammetric Week '03 Dieter Fritsch (Ed.) Wichmann Verlag, Heidelberg, 2003 Kada 29 3D Building Generalisation and Visualisation MARTIN KADA, Institute for Photogrammetry (ifp), University of Stuttgart

More information