Anisotropic Texture Filtering using Line Integral Textures


Blended Mipmap | The Proposed Method | 16x Anisotropic Filtering

Figure 1: Our algorithm takes as input an ordinary texture with 8 bits per channel and converts it into a line integral texture with 32 bits per channel. The line integral texture allows us to achieve anisotropic filtering while using fewer samples than hardware. These images show a section of a typical game scene rendered using blended mipmapping, the line integral method, and 16x hardware anisotropic filtering.

Abstract

In real-time applications, textured rendering is one of the most important algorithms for enhancing realism. In this work, we propose a line integral texture filtering algorithm that is both a fast and an accurate method for improving filtered image quality. In particular, we present an efficient shader implementation of the algorithm that can be compared with 16x hardware anisotropic filtering. We show that the quality of the result is similar to 16x anisotropic filtering, but that the line integral method requires fewer samples, making it a more efficient rendering option.

CR Categories: I.3.7 [Computer Graphics]: Real-time Rendering, Texture Filtering

Keywords: texture, filter, anisotropic, real-time rendering

1 Introduction

Textures are a popular way to add detail to 3D scenes. However, because they are an approximation of actual geometry, they must be filtered to achieve satisfactory results. Traditional methods to filter textures include nearest neighbor, bilinear, bicubic, trilinear, and anisotropic filtering. These methods vary in reconstruction quality, particularly when the texture in question is viewed at an extreme angle. Nearest neighbor filtering suffers from extreme aliasing, especially at glancing angles or when minified.
Bilinear filtering improves the situation by taking four samples, and modern graphics hardware optimizes it to be nearly as fast. However, when minifying, aliasing reemerges. Trilinear filtering effectively allows the bilinear filter's footprint to cover any size of square area in texture space, resolving aliasing during minification. However, mipmaps suffer from overblurring and/or undersampling when the texture is viewed at an angle, because the necessary sampling area is anisotropic. Traditional hardware anisotropic filtering algorithms improve on these techniques by taking multiple samples inside this anisotropic sampling region. Although this approach is simple to implement, the result is merely an approximation. Also, for good results, many samples must often be taken. For some angles, texture quality suffers unavoidably.

The proposed line integral method addresses these issues. The line integral method computes a more accurate average sample inside the texture footprint, thus leading to more accurate visual results. The line integral method rivals the quality of even the best hardware anisotropic filtering but tends to require fewer texture samples than traditional anisotropic filtering, saving texture bandwidth and therefore rendering time. The algorithm, implemented in a GLSL shader, already provides performance and quality similar to those of highly optimized current hardware. If the line integral method were also implemented in hardware, it would perform faster than classic anisotropic filtering, due to its greater efficiency.

2 Previous Work

Mipmapping: Mipmapping, also known in the context of two-dimensional texturing as trilinear filtering, has been used since 1983 [Williams 1983]. Mipmapping works by sampling from a prefiltered texture image.
By choosing an appropriate mipmap level, there will be less aliasing of high frequencies in the texture. Although mipmapping solves the aliasing problem inherent to texture minification, the prefiltered images are unfortunately not directionally aware, and so when the minified texture is sampled anisotropically, the texture's quality suffers through either overblurring or undersampling.

Summed Area Tables: Another approach introduces the summed area table to compute areas inside a sampling region [Crow 1984]. The algorithm works by adding and subtracting rectangular sums.
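The rectangle evaluation can be written in a few lines. The following is a minimal NumPy sketch of a summed area table (the function names and inclusive-corner convention are ours, not Crow's):

```python
import numpy as np

def build_sat(img):
    """Summed-area table: sat[y, x] = sum of img[0:y+1, 0:x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(sat, x0, y0, x1, y1):
    """Sum of img[y0:y1+1, x0:x1+1] using at most four table lookups:
    add the far corner, subtract the two strips, re-add the overlap."""
    total = sat[y1, x1]
    if x0 > 0:
        total -= sat[y1, x0 - 1]
    if y0 > 0:
        total -= sat[y0 - 1, x1]
    if x0 > 0 and y0 > 0:
        total += sat[y0 - 1, x0 - 1]
    return total
```

Note that only axis-aligned rectangles can be formed this way.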

The summed area table can easily compute a rectangular region, but it cannot handle rectangles that are not aligned with the axes of the table, nor arbitrary quadrilaterals. In addition, precision issues prevent the table from being practical for 32 bit textures, especially for large image sizes.

Anisotropic Filtering: Traditional hardware anisotropic filtering improves upon standard texture filtering, usually being used in conjunction with trilinear or standard bilinear filtering. Anisotropic filtering takes multiple samples from within the sampling area, allowing the hardware to average them together for a more accurate result [NVIDIA 2000]. Unfortunately, the algorithm is an approximation to an actual average sample, and thus a large number of samples must be taken for good quality. The proposed algorithm typically requires fewer samples than hardware anisotropic filtering, and is capable of producing better results.

Other: Some other approaches, such as Texture Potential Mipmapping [Cant and Shrubsole 2000] and Fast Footprint Mipmapping [Huttner and Strasser 1999], are hybrids of summed area tables and mipmaps (and variations on these techniques). By combining prefiltering and summed area tables, these authors were better able to sample textures anisotropically. However, both summed area tables and mipmapping remain inherently limited to axis-aligned sampling rectangles. The best known of these approaches is the RIP-map structure, which prefilters images in each direction. This allows the sample footprint to run in the U or V texture directions, but it still remains orthogonally confined. Others have attempted to refine the basic technique of classic anisotropic filtering by weighting and choosing their samples carefully.
Such techniques include Feline [McCormack et al. 1999] and the Elliptical Weighted Average [Greene and Heckbert 1986]. These can provide very compelling results, because they weight samples nearest the pixel center more strongly, thus more accurately approximating a sinc function. Still others seek to invent new hardware to solve the anisotropic filtering problem. Texram [Schilling et al. 1996; Shin et al. 2006], a dedicated chip for anisotropic filtering, did something very similar to present-day hardware anisotropic filtering.

3 Line Integral Texture Filtering

For practical purposes, the world can be thought of as an analog function defined everywhere and everywhere continuous. When a digital picture is taken, the real world is quantized into a two-dimensional collection of samples from this analog signal. Equivalently, when a texture is created from scratch, the imagined world is sampled in the same way, being reduced to a finite number of samples. Ideally, in the application, we would like to represent the original analog signal again. To do this, we must use a reconstruction filter to recreate an everywhere-defined continuous function.

A texture can be thought of as a two-dimensional grid of samples. The ideal reconstruction filter for a texture is the sinc function. Therefore, the ideal color of a texture sample can be found by weighting samples under a sinc function originating at that point. In practice, because the sinc function has infinite support, we use discrete approximations to it instead. The two most common are the nearest neighbor and bilinear reconstruction kernels. These are implementation standards, and are available on nearly every graphics platform.
However, there is a second sampling and reconstruction phase that is often overlooked: the now continuously defined reconstructed texture is sampled again at each pixel it subtends, before being reconstructed on the screen as a pixel (itself a 2D rect function).

3.1 Motivation

The one sample the pixel nominally takes in the second sampling step is often a poor representation of the now continuously defined texture. Even if that sample were representative of the texture at that location, the pixel's color would be accurate only at that one location; other positions on the pixel would be inaccurate. Of course, a pixel can only display one color, so the ideal color is an average of all the possible samples under the pixel footprint (the quadrilateral a pixel's outline subtends when projected into texture space). This is almost never the case for a single sample.

3.2 Algorithm

The line integral method is based on the observation that these two sampling/reconstruction phases can be combined into a single operation. Traditional approaches first reconstruct the texture with a nearest neighbor or bilinear filter. Then, they sample that reconstructed texture at each pixel's center, before finally reconstructing it on the screen with the pixel itself. The line integral method combines these operations into a single step, so that the texture reconstruction and the pixel reconstruction happen all at once, ensuring that the pixel's final color is representative of the ideal colors inside a pixel footprint.

The line integral method defines a new kind of texture, the line integral texture, constructed from an original, ordinary source image. An example of a texture, its line integral texture, and a depiction of the following algorithm may be found in Figure 2.
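To make the pixel footprint concrete: it can be approximated by pushing the pixel's four corners through a local linear approximation of the screen-to-texture mapping, built from the per-pixel texture-coordinate derivatives. The helper below is a hypothetical illustration, not code from the paper:

```python
def pixel_footprint(uv, duv_dx, duv_dy):
    """Approximate a pixel's footprint in texture space: push the four
    corners of the unit pixel square through the local linear
    approximation of the screen-to-texture mapping.

    uv     -- texture coordinate (u, v) at the pixel center
    duv_dx -- derivative of (u, v) with respect to screen x
    duv_dy -- derivative of (u, v) with respect to screen y
    """
    u, v = uv
    corners = []
    for sx, sy in [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]:
        corners.append((u + sx * duv_dx[0] + sy * duv_dy[0],
                        v + sx * duv_dx[1] + sy * duv_dy[1]))
    return corners
```

A strongly anisotropic view simply produces a long, thin quadrilateral from this construction.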
Each texel in the line integral texture corresponds to the sum of all texels below it in the same column of the original source image. That is, for a texture M by N texels with sampling function f(m, n), the point P = (m, n) in the line integral texture has the value:

    S(x, y) = ∫₀^y f(x, t) dt

Note that because standard byte textures may only hold 256 possible values per channel, whereas a given sample could store the accumulated values of hundreds or a few thousand texels, the line integral texture must be floating point. This gives the proposed method the same memory footprint as a summed area table. However, unlike classic summed area tables, in which precision becomes a problem due to integrating over a very large area, we integrate over only a one-dimensional region, so the spread and error of values in the line integral texture are not extreme. In practice, single precision is more than sufficient for the line integral texture. In addition, because the line integral texture only stores one-dimensional integrals, arbitrary sampling quadrilaterals may be computed.

The algorithm continues by computing each pixel's footprint in texture space. An edge-walking algorithm then steps across the pixel footprint, finding the total texel sum along each column by sampling from the line integral texture. The sum of a column segment may be found as the difference of two samples from the line integral texture. Once the samples are accumulated, the algorithm divides the final sum by the number of texels underneath the pixel footprint. The resultant color retrieved with the line integral method is then an accurate average color of the original texture under the pixel footprint.

3.3 Implementation Notes

The line integral method is implemented in GLSL. By contrast, anisotropic filtering is implemented in hardware. This makes direct comparison of performance impossible.
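The construction and per-column sampling described in Section 3.2 can be sketched on the CPU as follows (a NumPy illustration with whole-texel column spans; the shader's edge walking and fractional coverage at the footprint boundary are omitted, and all names are ours):

```python
import numpy as np

def build_line_integral_texture(img):
    """Line integral texture: lit[y, x] = sum of img[0:y, x], i.e. each
    texel stores the sum of all texels below it in the same column.
    One extra row is kept so that lit[0, x] == 0.  Stored as float32,
    since accumulated sums overflow an 8-bit channel."""
    lit = np.zeros((img.shape[0] + 1, img.shape[1]), dtype=np.float32)
    lit[1:] = img.cumsum(axis=0)
    return lit

def column_segment_sum(lit, x, y0, y1):
    """Sum of img[y0:y1, x], found as the difference of two samples."""
    return lit[y1, x] - lit[y0, x]

def footprint_average(lit, spans):
    """Average color over a footprint given as (x, y0, y1) column
    spans, one per column crossed by the footprint."""
    total = sum(column_segment_sum(lit, x, y0, y1) for x, y0, y1 in spans)
    count = sum(y1 - y0 for _, y0, y1 in spans)
    return total / count
```

Each column of the footprint thus costs two fetches regardless of how tall it is, which is where the sample-count savings over multi-tap anisotropic filtering come from.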

Online Submission ID: 0180

Figure 2: Left: For a given point A, the line integral texture will store the sum of all the texels in the original image below A in the same column. Sums over any vertical line segment may be found by subtracting one sample from another (for example, the sum between points B and C is equal to the sample at B minus the sample at C). By using this technique, we can find the sum under any given sampling quadrilateral (grey). Center: A source image. Right: The line integral texture created from the source image, tonemapped to fit in the visual range.

To calculate the pixel footprint in texture space, we had to calculate the per-pixel derivative of the texture coordinate. However, the built-in GLSL functions dFdx(...) and dFdy(...) operate on four-fragment areas simultaneously. This causes four fragments to share the same derivative value. To correct this, we implemented a two-pass algorithm, wherein the texture coordinates are written out to a floating-point framebuffer in a prepass before being fed into the line integral algorithm. We find that this produces more accurate results.

We found that hardware bilinear interpolation of floating point textures is done with 8-bit precision, yet the algorithm depends on taking accurate interpolated samples from a floating point texture. Sampling from areas close together in texture space would return the same texture value, leading to a sum of zero. In practice, this manifested itself as 256 black bands between individual texels, instead of a smooth gradient. We corrected this issue by taking four nearest neighbor samples and performing the bilinear interpolation manually in the shader.

We present framerate benchmarks to demonstrate that even with all these handicaps, the line integral method still produces realtime framerates. However, we provide texture sample counts as a more realistic comparison of performance.
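The manual interpolation fix can be sketched as follows; this is a NumPy stand-in for the shader code, assuming texel centers at integer coordinates and clamp-to-edge addressing (both our assumptions, not the paper's):

```python
import numpy as np

def bilinear_sample(tex, u, v):
    """Bilinear interpolation done manually at full float precision:
    four nearest-neighbor fetches blended with exact weights, replacing
    the hardware filter whose blend weights carry only 8-bit precision."""
    h, w = tex.shape[:2]
    x0 = max(int(np.floor(u)), 0); y0 = max(int(np.floor(v)), 0)
    x1 = min(x0 + 1, w - 1); y1 = min(y0 + 1, h - 1)
    fx = u - np.floor(u); fy = v - np.floor(v)
    top = tex[y0, x0] * (1.0 - fx) + tex[y0, x1] * fx
    bottom = tex[y1, x0] * (1.0 - fx) + tex[y1, x1] * fx
    return top * (1.0 - fy) + bottom * fy
```

In GLSL this costs four texelFetch-style reads plus a handful of multiply-adds per sample, which is why the shader must take four times as many fetches as a hardware-filtered implementation would.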
4 Results

We began initial tests in a simple demo that renders a single planar surface using the line integral method. We then integrated the shader into code based on the Quake 3 engine to demonstrate that the algorithm is robust and practical enough for realtime use.

Figure 3: Quantitative tests. Left to right: ground truth, 16x hardware filtering, the line integral method.

Figure 4: Source images used for the qualitative tests.

4.1 Comparison to Ground Truth

In the controlled environment of the simple plane demo, we show how the algorithm's accuracy compares with standard techniques and with a ground-truth rendering, established with a ray tracer using 256 random samples per pixel (Mental Ray in Maya 2011).

Method             Scene 1   Scene 2   Scene 3
16x Anisotropic     27.133     6.593    50.090
Line Integral       20.66      7.38     35.661

Table 1: MSE metric (lower is better) for measuring the error of the proposed algorithm versus hardware 16x filtering.

The MSE and MS-SSIM readings demonstrate that the line integral algorithm provides better quality than standard mipmapping, showing that the algorithm functions as a solution to anisotropic texture reconstruction. Quantitatively, the algorithm's quality exceeds that of hardware anisotropic filtering. We attribute most of the error in the line integral algorithm to our handling of polygon edges, which could be improved. However, even with these unnecessarily large errors, the line integral method regularly beats hardware filtering.

Unfortunately, no ground-truth reference was available for the realtime engine, so no mean squared error results were computed; instead, we direct the reader to the accompanying video, which demonstrates qualitatively the practical effectiveness of the line integral method.

4.2 Demonstration of Performance

To demonstrate the algorithm's usefulness in real-time scenarios, we use two heuristics.

On our testing machines, there is no extra performance penalty for sampling from a 32 bit texture as opposed to an 8 bit texture. We tested this by sparsely sampling over a large texture, so as to avoid caching optimizations. Thus, it makes sense to compare sample counts between 16x hardware filtering and the line integral method, because each sample takes the same amount of time to retrieve.

Method             Scene 1   Scene 2   Scene 3
16x Anisotropic     0.980     0.975     0.515
Line Integral       0.987     0.982     0.676

Table 2: MS-SSIM metric (higher is better) for measuring the performance of the proposed algorithm versus hardware 16x filtering.

We begin by directly comparing the framerates of the application using different texture filtering modes. The results are summarized in Table 3.

Method             Average frame rate
Mipmap             267.7 fps
16x Anisotropic    265.1 fps
Line Integral      119.1 fps

Table 3: Average framerates of the proposed algorithm in software versus traditional rendering algorithms in hardware.

The results in the table clearly demonstrate that the line integral method is already easily fast enough for realtime use in an actual game engine. However, recall that, due to implementation specifics (see Section 3.3, Implementation Notes), the proposed method is disadvantaged in this comparison because it is not implemented in hardware and because it must take four times as many samples due to inaccurate hardware bilinear interpolation.

To provide a more adequate comparison, in our second heuristic we compare the number of texture fetches each algorithm requires. Texture sampling is the largest bottleneck in almost any shader; by comparing the number of samples each algorithm requires, we gain a better measure of how the algorithms' performance would compare if they were implemented similarly. We visualize the results in Figure 5. Green-tinted pixels represent areas where our algorithm uses fewer texture samples than 16x anisotropic filtering, while red-tinted pixels represent areas where it uses more. Grey-tinted pixels contain magnified texels, and so are irrelevant to both techniques. As the figure shows, our algorithm tends to use fewer samples than 16x anisotropic filtering.
This result suggests that, because the line integral method requires fewer samples, it would outperform classic anisotropic texture filtering if both were implemented in hardware.

Figure 5: Comparison of anisotropic texture filtering algorithms. Left to right: standard mipmapping, 16x hardware anisotropic filtering, the proposed line integral method, and the texture sample count benchmark; green-tinted pixels are areas where the line integral method requires fewer than 16 samples, and red-tinted pixels where it requires more. Grey-tinted pixels are magnified, and should not be considered.

4.3 Further Analysis

The quantitative quad-demo MSE and MS-SSIM results demonstrate that, in controlled environments, the algorithm produces very good results, comparable to 16x anisotropic filtering. The algorithm is robust and fast enough to function in a realtime game engine. In the game engine, the line integral method visually matches the quality provided by 16x anisotropic filtering but, tellingly, uses fewer texture fetches when the scene is considered as a whole (again, see Figure 5). Therefore, in theory, if the line integral method and traditional anisotropic texture filtering were implemented in a similar framework, the line integral method would perform faster.

5 Conclusion

We have presented a novel algorithm for sampling and reconstructing textures that is both efficient and practical in real-time applications, through the application of a new data structure, the line integral texture. The line integral method frequently uses fewer samples per pixel than hardware anisotropic filtering, and it can provide equivalent or superior image quality. The line integral method, implemented in a GLSL shader, already provides realtime performance despite a number of implementation difficulties. If implemented in hardware, its performance would exceed that of current, state-of-the-art anisotropic filtering techniques.

The line integral method provides the greatest gain in efficiency when the sampling quadrilaterals are anisotropic in the same direction as the line integral texture. Future work includes the use of two line integral textures running at right angles to each other; the algorithm would then select the better of the two given the characteristics of the pixel footprint.

Acknowledgements

The images used in this work were either generated by the authors or taken from the public domain.

References

CANT, R. J., AND SHRUBSOLE, P. A. 2000. Texture potential mip mapping, a new high-quality texture antialiasing algorithm. ACM Trans. Graph. 19 (July), 164-184.

CROW, F. C. 1984. Summed-area tables for texture mapping. In Proceedings of the 11th annual conference on Computer graphics and interactive techniques, ACM, New York, NY, USA, SIGGRAPH 84, 207-212.

GREENE, N., AND HECKBERT, P. S. 1986. Creating raster omnimax images from multiple perspective views using the elliptical weighted average filter. IEEE Comput. Graph. Appl. 6 (June), 21-27.

HUTTNER, T., AND STRASSER, W. 1999. Fast footprint mipmapping. In Proceedings of the ACM SIGGRAPH/EUROGRAPHICS workshop on Graphics hardware, ACM, New York, NY, USA, HWWS 99, 35-44.

MCCORMACK, J., PERRY, R., FARKAS, K. I., AND JOUPPI, N. P. 1999. Feline: fast elliptical lines for anisotropic texture mapping. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, SIGGRAPH 99, 243-250.

NVIDIA, 2000. EXT_texture_filter_anisotropic. http://www.opengl.org/registry/specs/ext/texture_filter_anisotropic.txt, Apr.

SCHILLING, A., KNITTEL, G., AND STRASSER, W. 1996. Texram: A smart memory for texturing. IEEE Comput. Graph. Appl. 16 (May), 32-41.

SHIN, H.-C., LEE, J.-A., AND KIM, L.-S. 2006. A cost-effective VLSI architecture for anisotropic texture filtering in limited memory bandwidth. IEEE Trans. Very Large Scale Integr. Syst. 14 (March), 254-267.

WILLIAMS, L. 1983. Pyramidal parametrics. In Proceedings of the 10th annual conference on Computer graphics and interactive techniques, ACM, New York, NY, USA, SIGGRAPH 83, 1-11.