First Steps in Hardware Two-Level Volume Rendering

Markus Hadwiger, Helwig Hauser

Abstract

We describe first steps toward implementing two-level volume rendering (abbreviated as 2lVR) on consumer PC graphics hardware. Two-level volume rendering makes it possible to combine local per-object rendering modes, such as direct volume rendering (DVR) and maximum intensity projection (MIP), with a global compositing mode that combines the contributions of the different objects embedded in a segmented volume data set. In this way, the rendering mode most suitable to a specific kind of object can be used, e.g., MIP for the skin and DVR for the bones of a medical data set. The original implementation of 2lVR was done in the context of a fast and flexible software volume renderer, the RTVR library. In this paper, however, we present first steps toward achieving the flexibility of 2lVR in a volume renderer that exploits texture-mapping hardware, which will enable fast, high-quality two-level volume rendering at high resolutions in the future.

Keywords: volume rendering, two-level volume rendering, interactive visualization, graphics hardware

1 Introduction

Two-level volume rendering [5, 6] is a powerful approach to rendering segmented volume data that makes it possible to employ different rendering modes for different objects embedded in the volume data. Rendering modes in this context are, for example, direct volume rendering (DVR), maximum intensity projection (MIP), surface rendering (with non-polygonal iso-surfaces), value integration ("X-ray summation"), and non-photorealistic rendering (NPR); see section 2. Figure 1 shows an example image of two-level volume rendering of a medical data set.

Figure 1: Two-level volume rendering of a medical data set (parts of a human hand): bones are rendered using DVR, surface rendering is used for the vessels, and non-photorealistic rendering is used for the skin.

In this case, three different rendering modes have been employed in order to highlight specific objects or features embedded in the data set. The bones are rendered with DVR, whereas the vessels are rendered as surfaces. The skin is shown only as context, with minimal use of screen space, which is achieved by showing only its contours via NPR. In general, the best rendering mode for a specific object depends not only on the object and the data set itself, but also on the requirements of the user. Two-level volume rendering allows maximum flexibility with regard to which rendering mode should be used for which object.

2 Rendering modes

The basic framework of two-level volume rendering allows choosing different rendering modes for different objects embedded in a volumetric data set. Naturally, the actual usefulness of such a framework is closely tied to the rendering modes that are actually available. This section briefly describes each of the rendering modes that have been implemented in the original 2lVR implementation, most of which are also possible in the hardware implementation:

Semi-transparent volume rendering (DVR) - the most common approach to rendering a volume without any intermediate geometric representation. A transfer function maps density values to optical properties, most commonly directly to RGBA values. In 2lVR, different transfer functions may be used for different objects. In hardware, we realize DVR with per-object transfer functions via 2D RGBA texture look-up tables and dependent texturing, i.e., using colors sampled from one texture as coordinates to index a subsequent texture. One coordinate of the look-up corresponds to the object id, whereas the other one is mapped to the volume density. That is, the 1D transfer functions of all embedded objects are stored in a single 2D texture map, sorted according to their respective object ids.
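
As a minimal sketch of this layout (the function name and parameters are ours, not from the paper's implementation, and a current OpenGL context is assumed), the per-object 1D transfer functions can be packed into the rows of one 2D RGBA texture like this:

    #include <GL/gl.h>

    /* Pack per-object 1D transfer functions into one 2D RGBA8 texture:
     * row y = object id, column x = density value (illustrative layout). */
    GLuint createTransferFunctionTable(const unsigned char *rgba, /* numObjects * numDensities * 4 bytes */
                                       int numDensities, int numObjects)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        /* Nearest filtering along the object-id axis avoids blending the
         * transfer functions of adjacent object ids into each other. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, numDensities, numObjects,
                     0, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
        return tex;
    }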

Maximum intensity projection (MIP) - projects the maximum value encountered along a ray onto the corresponding pixel. Although it lacks depth information, MIP can be very useful for making features visible that would be occluded or hard to discern in DVR. In hardware volume rendering, selection of the maximum value along a ray can be achieved via the OpenGL [12] extension EXT_blend_minmax, which provides the blend equation GL_MAX_EXT for computing the maximum of all values rendered on top of any given pixel.
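
A minimal sketch of the corresponding blend setup, assuming a context in which EXT_blend_minmax is available and its prototype is exposed:

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Configure the frame-buffer blend stage for MIP compositing. */
    void configureMIPBlending(void)
    {
        glEnable(GL_BLEND);
        /* With a min/max blend equation the blend factors are ignored;
         * each incoming fragment keeps the per-pixel maximum. */
        glBlendEquationEXT(GL_MAX_EXT);
    }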

Surface rendering - can also be achieved without any intermediate geometric representation, by using an appropriate transfer function [8]. In hardware volume rendering, non-polygonal iso-surfaces are usually implemented with the OpenGL alpha test [13], which is also the approach that we are using. Multiple iso-surfaces are possible with a look-up texture that maps two adjacent density values to the appropriate iso-surface and its respective color and transparency [4].
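
One common way to set this up, sketched here under the assumption that fragments carry the density (or a value re-mapped from it) in their alpha channel; the iso-value parameter is illustrative:

    #include <GL/gl.h>

    /* Pass only fragments whose alpha (here: density) exceeds the
     * iso-value; all other fragments are discarded by the alpha test. */
    void configureIsoSurfaceAlphaTest(float isoValue)
    {
        glDisable(GL_BLEND);
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GREATER, isoValue);
    }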

Value integration (X-ray summation) - tries to mimic typical X-ray images by summing up the contributions of values along a ray, thus generating monochrome images that look very similar to real X-ray images. This summation can be achieved in OpenGL via the blend function glBlendFunc(GL_ONE, GL_ONE). However, the result must also be normalized, which cannot easily be achieved in hardware volume rendering. Thus, this mode is currently not implemented in our hardware implementation.
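
For completeness, the additive blend setup looks as follows (the missing normalization noted above is not addressed here):

    #include <GL/gl.h>

    /* Sum fragment contributions along the viewing direction.
     * Without a final normalization pass, the sums may saturate. */
    void configureXRayBlending(void)
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);  /* dst = src + dst */
    }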

Non-photorealistic (NPR) volume rendering - has been introduced to the volume rendering community only recently [3] and is especially useful for visualizing context, e.g., the contours of objects [2], such as the human skin, when the actual focus is on structures beneath the skin (see figure 1). NPR rendering modes can be implemented on recent graphics hardware, although our implementation does not support any such modes yet.

3 Two-level volume rendering basics

The original implementation of 2lVR was done in the context of RTVR [9], a very fast Java-based software volume renderer. The flexibility allowed by pure software rendering turns out to be especially useful for the separation of objects required by two-level volume rendering. Basically, the input to the algorithm is an already segmented data set, where in addition to the volume comprised of one density value per voxel, an object id is also known for each voxel. For each object id contained in the data set, a rendering mode can be chosen. All voxels that have been assigned the same object id will be rendered using the corresponding rendering mode. In addition to these local (per-object) rendering modes, one global compositing mode must be chosen, which is used to combine the different, locally composited, object contributions into the final image.

The actual renderer of RTVR uses a shear-warp factorization [7] of the viewing transform, together with nearest-neighbor approximation within volume slices. For two-level volume rendering, two render buffers of equal size are employed: first, a local compositing buffer for repeatedly compositing the contributions of voxels along a ray that belong to the same object, and second, a global compositing buffer, which is used to combine the contributions of different objects each time a ray passes an object boundary from one object to the next. After the entire volume has been processed and a final merge step of both buffers has been performed, the global compositing buffer contains the final output image.

Figure 2: Object segmentation implicitly causes viewing rays to be partitioned into segments (one segment per object intersection).

Figure 2 shows that each viewing ray, corresponding to exactly one final pixel in the output image, potentially traverses multiple objects, which implies that different compositing modes have to be used along individual rays. The different objects encountered along a single ray subdivide this ray into segments. In principle, each segment has to be rendered separately, in order to enable the use of different rendering modes for different segments, and the composition of segments using the global rendering mode. However, a very efficient implementation of this concept is possible when the local compositing buffer stores object ids in addition to composited values. Each time a new value is composited into this buffer, a check is performed whether the new value's object id matches the object id stored in the buffer. If this is the case, the ray is still in the same object, and the new value can simply be composited using the local compositing mode corresponding to the stored object id. If not, the value stored in the local buffer is composited into the global buffer using the global rendering mode, the corresponding location in the local buffer is cleared, and the new value is simply written into the local buffer, along with its id.
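
The following C sketch restates this per-sample decision; all types and names are ours for illustration, with "over" compositing standing in for arbitrary local and global modes:

    /* One pixel of the local buffer: composited color plus the id of the
     * object the ray is currently inside (illustrative types). */
    typedef struct { float r, g, b, a; int objectId; } LocalEntry;
    typedef struct { float r, g, b, a; } GlobalEntry;

    /* Example local mode: back-to-front "over" compositing; MIP,
     * summation, etc. would slot in the same way. */
    static void overLocal(LocalEntry *d, float r, float g, float b, float a)
    {
        d->r = a * r + (1.0f - a) * d->r;
        d->g = a * g + (1.0f - a) * d->g;
        d->b = a * b + (1.0f - a) * d->b;
        d->a = a + (1.0f - a) * d->a;
    }

    /* Example global mode: "over" compositing of a finished segment. */
    static void overGlobal(GlobalEntry *d, const LocalEntry *s)
    {
        d->r = s->a * s->r + (1.0f - s->a) * d->r;
        d->g = s->a * s->g + (1.0f - s->a) * d->g;
        d->b = s->a * s->b + (1.0f - s->a) * d->b;
        d->a = s->a + (1.0f - s->a) * d->a;
    }

    /* Composite one new sample into the two-buffer scheme. */
    void compositeSample(LocalEntry *local, GlobalEntry *global,
                         float r, float g, float b, float a, int objectId)
    {
        if (objectId == local->objectId) {
            /* Still inside the same object: keep compositing locally. */
            overLocal(local, r, g, b, a);
        } else {
            /* Object boundary: merge the finished segment globally,
             * then restart the local buffer with the new sample. */
            overGlobal(global, local);
            local->r = r; local->g = g; local->b = b; local->a = a;
            local->objectId = objectId;
        }
    }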

In section 4 we will describe how this scheme can also be implemented in a volume renderer that exploits graphics hardware instead of a software shear-warp factorization.

4 Hardware two-level volume rendering

Volume rendering on general-purpose consumer graphics hardware, such as the NVIDIA GeForce 4 [10] or the ATI Radeon 8500 [1], is in the process of becoming much more popular due to the high degree of programmability of these architectures, as well as the increased fill rate and texture memory of current and upcoming hardware. Texture-mapping hardware can be exploited for volume rendering by storing the volume data in one or several texture maps and rendering proxy geometry in order to resample and display the volume. Common approaches include non-polygonal iso-surfaces [13], on-the-fly interpolation of additional slices [11], and pre-integrated classification [4].
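
A minimal sketch of such proxy geometry, assuming OpenGL 1.2 3D textures, a current context with the volume bound as a 3D texture, and a view for which increasing z is back-to-front; the axis-aligned slice placement is our simplification:

    #include <GL/gl.h>

    /* Draw numSlices axis-aligned quads through a bound 3D texture,
     * back to front, each quad resampling one slice of the volume. */
    void renderAxisAlignedSlices(int numSlices)
    {
        glEnable(GL_TEXTURE_3D);
        for (int i = 0; i < numSlices; ++i) {
            float t = (i + 0.5f) / (float)numSlices; /* texture-space depth */
            float z = 2.0f * t - 1.0f;               /* object-space depth  */
            glBegin(GL_QUADS);
            glTexCoord3f(0.0f, 0.0f, t); glVertex3f(-1.0f, -1.0f, z);
            glTexCoord3f(1.0f, 0.0f, t); glVertex3f( 1.0f, -1.0f, z);
            glTexCoord3f(1.0f, 1.0f, t); glVertex3f( 1.0f,  1.0f, z);
            glTexCoord3f(0.0f, 1.0f, t); glVertex3f(-1.0f,  1.0f, z);
            glEnd();
        }
    }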

The major problem in two-level volume rendering on graphics hardware is the use of different rendering modes for different objects. The primary reason for this is that neither the pixel shader nor the blend mode can be changed on a per-pixel basis. In order to be able to use different modes for different objects, multi-pass rendering has to be employed, with each pass selecting only those voxels that correspond to the rendering mode configured for that pass. The following section outlines our method for 2lVR on graphics hardware.

4.1 Algorithm

In our hardware volume renderer, a rendering mode is described as a combination of a pixel shader and a blend mode. The former determines how fragments are generated and shaded, and the latter is responsible for how the composition with pixels already stored in the frame buffer is performed. As outlined in section 3, 2lVR basically requires two render buffers, which are realized in our hardware implementation as OpenGL pbuffers (off-screen render buffers that can also be used as textures in subsequent rendering operations):

The global compositing buffer is exclusively used in conjunction with the global rendering mode. For each slice of the volume being rendered, at most one rendering pass is performed that transfers certain pixels from the local into the global compositing buffer. After all rendering passes have been executed, the global compositing buffer contains the final output image.

The local (object) compositing buffer is used for intermediate results, together with the per-object rendering modes. Each slice of the volume has to be rendered into the local buffer as many times as there are different rendering modes in that slice (which is equal to or less than the number of different object ids contained in the slice). For each of these slice passes, a different rendering mode must be activated. After all passes corresponding to a single slice have been executed, the local buffer has to be transferred into the global buffer at those pixel locations where a change of the object id will occur in the next slice.

Table 1 outlines the high-level operation of hardware 2lVR:

    FOR each slice DO
        FOR each object id contained in slice DO
            ConfigureFragmentRejectionForObjectID();
            RenderSliceIntoLocalPbuffer();
        TransferLocalPbufferIntoGlobalPbuffer();
        ClearTransferredPixelsInLocalPbuffer();
    TransferGlobalPbufferIntoRenderingWindow();

Table 1: Basic algorithm for two-level volume rendering on graphics hardware.

The volume consists of a stack of slices, which are rendered from back to front. In a preprocess, the object ids contained in each slice are determined. This information is then used at run time to determine the number of rendering passes necessary, together with the corresponding rendering modes. Object ids sharing the same rendering mode are handled in a single pass. In addition to the volume data needed by the basic rendering mode, which always includes the volume density and might include additional information like gradients, we also store the segmentation mask in an object id volume that allows the pixel shader to retrieve an object id for each voxel contained in the original volume.
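
A sketch of such a preprocess (the array layout and names are ours): for every slice of the object id volume, record which ids occur in it:

    /* Mark, for every slice z, which object ids occur in it.
     * idVolume holds one 8-bit object id per voxel, slice-major.
     * sliceHasId is assumed to be zero-initialized by the caller. */
    void computeSliceIdSets(const unsigned char *idVolume,
                            int dimX, int dimY, int dimZ,
                            unsigned char sliceHasId[][256])
    {
        for (int z = 0; z < dimZ; ++z) {
            for (int i = 0; i < dimX * dimY; ++i) {
                sliceHasId[z][idVolume[(long)z * dimX * dimY + i]] = 1;
            }
        }
    }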

ConfigureFragmentRejectionForObjectID() configures the rejection of fragments that do not correspond to the rendering mode that is currently of interest. Due to limitations in the fragment-culling support of the hardware architectures we are using, we do not use actual fragment culling, but instead force those fragments that should be rejected to become a null operation in the blend stage. For example, if the blend operation is alpha blending, e.g., for DVR, the fragments that should be rejected are forced to have an alpha of zero.
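
A small software model of this trick; this is not shader code for the hardware of the time, and the types and names are ours:

    /* Per-fragment model: instead of discarding a fragment outright,
     * force its alpha to zero so that standard alpha blending
     * (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) leaves the pixel unchanged. */
    typedef struct { float r, g, b, a; int objectId; } Fragment;

    Fragment rejectByObjectId(Fragment f, int activeObjectId)
    {
        if (f.objectId != activeObjectId) {
            f.a = 0.0f; /* turns the subsequent blend into a no-op */
        }
        return f;
    }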

RenderSliceIntoLocalPbuffer() renders the current slice into the local compositing buffer, employing a pixel shader corresponding to both the desired rendering mode and the rejection of unwanted fragments.

TransferLocalPbufferIntoGlobalPbuffer() transfers the local compositing buffer into the global buffer after all passes for the current slice have been executed. Pixels are only transferred at those locations where the object id will change in the next slice. In order to determine where the id will change, three simultaneous textures are needed in this pass: one contains the local buffer itself, and the other two contain the object ids of the current and the next slice, respectively. The object id texture of the next slice must be projected onto the current slice such that it covers exactly the same pixel locations as it will in the next pass.
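
A per-pixel software model of this transfer pass (types, names, and the choice of "over" as global mode are ours); note that it also folds in the clearing step described next:

    typedef struct { float r, g, b, a; } Pixel;

    /* Example global mode: back-to-front "over" compositing. */
    static void compositeOver(Pixel *dst, const Pixel *src)
    {
        dst->r = src->a * src->r + (1.0f - src->a) * dst->r;
        dst->g = src->a * src->g + (1.0f - src->a) * dst->g;
        dst->b = src->a * src->b + (1.0f - src->a) * dst->b;
        dst->a = src->a + (1.0f - src->a) * dst->a;
    }

    /* Transfer local into global wherever the object id changes between
     * the current slice and the next one, then clear those local pixels
     * (the role of ClearTransferredPixelsInLocalPbuffer()). */
    void transferAtIdBoundaries(Pixel *local, Pixel *global,
                                const unsigned char *idCurrent,
                                const unsigned char *idNext,
                                int numPixels)
    {
        static const Pixel zero = { 0.0f, 0.0f, 0.0f, 0.0f };
        for (int p = 0; p < numPixels; ++p) {
            if (idCurrent[p] != idNext[p]) {
                compositeOver(&global[p], &local[p]);
                local[p] = zero;
            }
        }
    }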

ClearTransferredPixelsInLocalPbuffer() clears those pixels in the local compositing buffer that have just been transferred into the global buffer. This is necessary in order to make room for subsequently generated fragments that use a different rendering mode. Similarly to the preceding step, two textures with the object ids of the current and the next slice are needed.

TransferGlobalPbufferIntoRenderingWindow() is only necessary if the global compositing buffer and the back buffer of the rendering window are not the same.

4.2 Performance considerations

The algorithm outlined in section 4.1 has several severe implications for rendering performance. The major performance problem is the number of rendering passes that are required. Each slice through the volume is rendered as many times as there are different rendering modes contained in its corresponding segmentation-mask slice. After all the passes for a single slice, another pass is performed that transfers the local into the global buffer at specific pixel locations. Additionally, many of these passes render directly into textures and require as input textures that have previously been rendered into. It is therefore especially crucial to minimize the number of passes, e.g., by rendering all object ids that share the same rendering mode in a single pass. Also, since actual fragment culling cannot be used, even the unnecessary voxels are still rendered and thus cost performance, although they do not erroneously affect the output.

5 Conclusions and future work

We have shown first steps toward a hardware implementation of two-level volume rendering. However, although the fill rate of current consumer graphics hardware is astonishing, the demands of hardware 2lVR on texture-fetch and fill rate are tremendous, and the performance of our algorithm needs to be improved further. Our current implementation does not yet contain all desired features, especially with respect to rendering modes and flexibility, which we are planning to implement. Additionally, some already known opportunities for performance improvements have not been implemented yet. Another problem is that the segmentation masks used have the same resolution as the volume itself and cannot easily be filtered, since filtering of object ids is not a meaningful concept. Therefore, the boundary between different objects is sometimes clearly visible, as the boundary voxels can be discerned individually.

References

[1] ATI web page. http://www.ati.com/.

[2] B. Csebfalvi, L. Mroz, H. Hauser, A. König, and M. E. Gröller. Fast visualization of object contours by non-photorealistic volume rendering. In Proceedings of EUROGRAPHICS 2001, pages 452-460, 2001.

[3] D. Ebert and P. Rheingans. Volume illustration: non-photorealistic rendering of volume models. In Proceedings of IEEE Visualization 2000, pages 195-202, 2000.

[4] K. Engel, M. Kraus, and T. Ertl. High-quality pre-integrated volume rendering using hardware-accelerated pixel shading. In Proceedings of Graphics Hardware 2001, pages 9-16, 2001.

[5] H. Hauser, L. Mroz, G.-I. Bischi, and M. E. Gröller. Two-level volume rendering - fusing MIP and DVR. In Proceedings of IEEE Visualization 2000, pages 211-218, 2000.

[6] H. Hauser, L. Mroz, G.-I. Bischi, and M. E. Gröller. Two-level volume rendering - fusing MIP and DVR. IEEE Transactions on Visualization and Computer Graphics, 7(3):242-252, 2001.

[7] P. Lacroute and M. Levoy. Fast volume rendering using a shear-warp factorization of the viewing transformation. In Proceedings of SIGGRAPH 94, pages 451-458, 1994.

[8] M. Levoy. Display of surfaces from volume data. IEEE Computer Graphics and Applications, 8(3):29-37, May 1988.

[9] L. Mroz and H. Hauser. RTVR - a flexible Java library for interactive volume rendering. In Proceedings of IEEE Visualization 2001, pages 279-286, 2001.

[10] NVIDIA web page. http://www.nvidia.com/.

[11] C. Rezk-Salama, K. Engel, M. Bauer, G. Greiner, and T. Ertl. Interactive volume rendering on standard PC graphics hardware using multi-textures and multi-stage rasterization. In Proceedings of Graphics Hardware 2000, 2000.

[12] M. Segal and K. Akeley. The OpenGL Graphics System: A Specification. http://www.opengl.org.

[13] R. Westermann and T. Ertl. Efficiently using graphics hardware in volume rendering applications. In Proceedings of SIGGRAPH 98, pages 169-178, 1998.