Computer Graphics: Three-Dimensional Graphics VI
Guoying Zhao
Texture Mapping
Objectives
- Introduce mapping methods: texture mapping, environment mapping, bump mapping
- Consider basic strategies: forward vs. backward mapping, point sampling vs. area averaging
The Limits of Geometric Modeling
- Although graphics cards can render over 10 million polygons per second, that number is insufficient for many phenomena: clouds, grass, terrain, skin
Modeling an Orange
- Consider the problem of modeling an orange (the fruit)
- Start with an orange-colored sphere: too simple
- Replace the sphere with a more complex shape: does not capture the surface characteristics (small dimples), and takes too many polygons to model all the dimples
Modeling an Orange (2)
- Take a picture of a real orange, scan it, and paste it onto a simple geometric model
- This process is known as texture mapping
- Still might not be sufficient because the resulting surface will be smooth
- Need to change the local shape: bump mapping
Three Types of Mapping
- Texture mapping: uses images to fill the inside of polygons
- Environment (reflection) mapping: uses a picture of the environment as a texture map; allows simulation of highly specular surfaces
- Bump mapping: emulates altering normal vectors during the rendering process
Texture Mapping
[figure: geometric model (left) vs. texture-mapped model (right)]
Environment Mapping
Bump Mapping
Where Does Mapping Take Place?
- Mapping techniques are implemented at the end of the rendering pipeline
- Very efficient because few polygons make it past the clipper
Is It Simple?
- Although the idea is simple (map an image to a surface), there are 3 or 4 coordinate systems involved
- 2D image
- 3D surface
Coordinate Systems
- Parametric coordinates: may be used to model curves and surfaces
- Texture coordinates: used to identify points in the image to be mapped
- Object or world coordinates: conceptually, where the mapping takes place
- Window coordinates: where the final image is really produced
Texture Mapping
[figure: parametric coordinates → texture coordinates → world coordinates → window coordinates]
Mapping Functions
- The basic problem is how to find the maps
- Consider mapping from texture coordinates (s, t) to a point (x, y, z) on a surface
- We appear to need three functions:
  x = x(s,t)
  y = y(s,t)
  z = z(s,t)
- But we really want to go the other way
Backward Mapping
- We really want to go backwards: given a pixel, we want to know the point on an object to which it corresponds; given a point on an object, we want to know the point in the texture to which it corresponds
- Need a map of the form
  s = s(x,y,z)
  t = t(x,y,z)
- Such functions are difficult to find in general
Two-Part Mapping: First Mapping
- The (x,y,z) → (s,t) mapping can be difficult
- One solution is to first map the texture to a simple intermediate surface
- Example: map to a cylinder
Cylindrical Mapping
- Parametric cylinder:
  x = r cos(2πu)
  y = r sin(2πu)
  z = v/h
  maps a rectangle in (u, v) space to a cylinder of radius r and height h in world coordinates
- s = u, t = v maps from texture space
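The cylinder equations above can be sketched in Python (illustrative function names, not OpenGL calls):

```python
import math

def cylinder_point(u, v, r=1.0, h=1.0):
    """Map (u, v) in the unit square to the cylinder, using the
    slide's convention z = v/h."""
    x = r * math.cos(2 * math.pi * u)
    y = r * math.sin(2 * math.pi * u)
    z = v / h
    return (x, y, z)

def texcoords(u, v):
    # the map from texture space is the identity: s = u, t = v
    return (u, v)

# u = 0.25 is a quarter turn around the axis
x, y, z = cylinder_point(0.25, 0.5)
print(round(x, 6), round(y, 6), z)  # 0.0 1.0 0.5
```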
Spherical Map
- We can use a parametric sphere
  x = r cos(2πu)
  y = r sin(2πu) cos(2πv)
  z = r sin(2πu) sin(2πv)
  in a similar manner to the cylinder, but we have to decide where to put the distortion
- Spheres are used in environment maps
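A matching Python sketch for the parametric sphere; note that the whole v range collapses to one point wherever sin(2πu) = 0, which is where the distortion ends up:

```python
import math

def sphere_point(u, v, r=1.0):
    # parametric sphere from the slide
    x = r * math.cos(2 * math.pi * u)
    y = r * math.sin(2 * math.pi * u) * math.cos(2 * math.pi * v)
    z = r * math.sin(2 * math.pi * u) * math.sin(2 * math.pi * v)
    return (x, y, z)

# at u = 0 every v maps to the same pole (r, 0, 0): a distortion point
print(sphere_point(0.0, 0.1))  # (1.0, 0.0, 0.0)
```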
Box Mapping
- Easy to use with a simple orthographic projection
- Also used in environment maps
Two-Part Mapping: Second Mapping
- Map from the intermediate object to the actual object, using one of:
  - normals from intermediate to actual
  - normals from actual to intermediate
  - vectors from the center of the intermediate object
[figure: actual object inside the intermediate surface]
Aliasing
- Point sampling of the texture can lead to aliasing errors
[figure: point samples in u,v (or x,y,z) space map to point samples in texture space and miss the blue stripes]
Area Averaging
- A better but slower option is to use area averaging
[figure: preimage of a pixel; note that the preimage of a pixel is curved]
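A 1-D Python sketch of the difference (the striped texture and the sample positions are made up): point samples can land only on the bright stripes, while averaging over the pixel's preimage sees both colors:

```python
def texel(s):
    # hypothetical striped texture: dark on every even 0.1-wide band
    return 0.0 if int(s * 10) % 2 == 0 else 1.0

def point_sample(s):
    return texel(s)

def area_average(s0, s1, n=100):
    # average n point samples across the pixel's preimage [s0, s1)
    return sum(texel(s0 + (s1 - s0) * (i + 0.5) / n) for i in range(n)) / n

# these pixel centers happen to land only on bright stripes:
print([point_sample(0.15 + 0.2 * k) for k in range(5)])  # all 1.0
print(area_average(0.0, 1.0))  # 0.5 -- averaging sees the dark stripes too
```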
OpenGL Texture Mapping
Objectives
- Introduce the OpenGL texture functions and options
Basic Strategy
Three steps to applying a texture:
1. Specify the texture: read or generate the image, assign it to a texture, enable texturing
2. Assign texture coordinates to vertices: the proper mapping function is left to the application
3. Specify texture parameters: wrapping, filtering
Texture Mapping
[figure: (s, t) image mapped onto (x, y, z) geometry and shown on the display]
Texture Example
- The texture (below) is a 256 × 256 image that has been mapped to a rectangular polygon viewed in perspective
Texture Mapping and the OpenGL Pipeline
- Images and geometry flow through separate pipelines that join at the rasterizer, so complex textures do not affect geometric complexity
[figure: vertices → geometry pipeline, image → pixel pipeline, both → rasterizer]
Specifying a Texture Image
- Define a texture image from an array of texels (texture elements) in CPU memory:
  GLubyte my_texels[512][512];
- Define it as any other pixel map: scanned image, or generated by application code
- Enable texture mapping:
  glEnable(GL_TEXTURE_2D);
- OpenGL supports 1- to 4-dimensional texture maps
Define Image as a Texture
glTexImage2D(target, level, components, w, h, border, format, type, texels);
- target: type of texture, e.g. GL_TEXTURE_2D
- level: used for mipmapping (discussed later)
- components: elements per texel
- w, h: width and height of the image in texels
- border: used for smoothing (discussed later)
- format and type: describe the texels
- texels: pointer to the texel array
Example:
glTexImage2D(GL_TEXTURE_2D, 0, 3, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, my_texels);
Converting a Texture Image
- OpenGL requires texture dimensions to be powers of 2
- If the image dimensions are not powers of 2:
  gluScaleImage(format, w_in, h_in, type_in, *data_in, w_out, h_out, type_out, *data_out);
- data_in is the source image, data_out is the destination image
- The image is interpolated and filtered during scaling
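The scaling target can be computed with a small helper (hypothetical name, a Python sketch); gluScaleImage would then resample the image to those dimensions:

```python
def next_pow2(n):
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

# a 640 x 480 scan would be scaled up to 1024 x 512
print(next_pow2(640), next_pow2(480))  # 1024 512
```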
Mapping a Texture
- Based on parametric texture coordinates
- glTexCoord*() specified at each vertex
[figure: texture space with corners (0,0), (1,0), (0,1), (1,1); object-space triangle ABC with (s, t) = (0.2, 0.8) at A, (0.4, 0.2) at B, (0.8, 0.4) at C]
Typical Code
glBegin(GL_POLYGON);
  glColor3f(r0, g0, b0);   // if no shading used
  glNormal3f(u0, v0, w0);  // if shading used
  glTexCoord2f(s0, t0);
  glVertex3f(x0, y0, z0);
  glColor3f(r1, g1, b1);
  glNormal3f(u1, v1, w1);
  glTexCoord2f(s1, t1);
  glVertex3f(x1, y1, z1);
  ...
glEnd();
Note that we can use vertex arrays to increase efficiency.
Interpolation
- OpenGL uses interpolation to find the proper texels from the specified texture coordinates
- There can be distortions
[figure: good vs. poor selection of texture coordinates; texture stretched over a trapezoid showing the effects of bilinear interpolation]
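What a single bilinear lookup does can be sketched in Python on a tiny made-up texture; this is the general technique, not OpenGL's internal implementation:

```python
def bilinear(tex, x, y):
    """Bilinearly interpolate a 2-D list of intensities at
    continuous texel coordinates (x, y)."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    x1 = min(x0 + 1, len(tex[0]) - 1)  # clamp at the texture edge
    y1 = min(y0 + 1, len(tex) - 1)
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy

tex = [[0.0, 1.0],
       [1.0, 0.0]]
print(bilinear(tex, 0.5, 0.5))  # 0.5 -- the average of all four texels
```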
Texture Parameters
OpenGL has a variety of parameters that determine how a texture is applied:
- Wrapping parameters determine what happens if s and t are outside the (0,1) range
- Filter modes allow us to use area averaging instead of point samples
- Mipmapping allows us to use textures at multiple resolutions
- Environment parameters determine how texture mapping interacts with shading
Wrapping Mode
- Clamping: if s,t > 1 use 1, if s,t < 0 use 0
- Wrapping: use s,t modulo 1
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
[figure: texture tiled with GL_REPEAT wrapping vs. edge-extended with GL_CLAMP wrapping]
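The two wrap rules, written out as a Python sketch (the texture lookup itself is omitted):

```python
def clamp(s):
    # GL_CLAMP: coordinates outside [0, 1] stick to the edge
    return min(max(s, 0.0), 1.0)

def repeat(s):
    # GL_REPEAT: use s modulo 1, so the texture tiles
    return s % 1.0

print(clamp(1.7), clamp(-0.25))    # 1.0 0.0
print(repeat(-0.25))               # 0.75 -- wraps around from the other side
```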
Magnification and Minification
- More than one pixel can cover a texel (magnification), or more than one texel can cover a pixel (minification)
- Can use point sampling (nearest texel) or linear filtering (2 × 2 filter) to obtain texture values
Filter Modes
Modes are set by glTexParameteri(target, type, mode):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
Note that linear filtering requires a border of an extra texel for filtering at edges (border = 1).
Mipmapped Textures
- Mipmapping allows for prefiltered texture maps of decreasing resolutions
- Lessens interpolation errors for smaller textured objects
- Declare the mipmap level during texture definition:
  glTexImage2D(GL_TEXTURE_*D, level, …)
- GLU mipmap builder routines will build all the textures from a given image:
  gluBuild*DMipmaps(…)
Calculating Mipmap Level
- Calculate the mipmap level so that a unit step in pixels corresponds to a unit step in texels:
  level = log2 max( √((∂u/∂x)² + (∂v/∂x)²), √((∂u/∂y)² + (∂v/∂y)²) )
- If you move one step in x (screen), how many steps do you move along the texture? How many levels do you have to filter down by factors of two so that distances on screen and in texture space match?
- The same question applies to one step in y (screen)
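The formula as a Python sketch, assuming the four screen-space derivatives are given:

```python
import math

def mipmap_level(du_dx, dv_dx, du_dy, dv_dy):
    # length of the longer texel-space step for a one-pixel screen step
    rho = max(math.hypot(du_dx, dv_dx), math.hypot(du_dy, dv_dy))
    return math.log2(rho)

# a one-pixel step covers 4 texels: filter down 2 levels (4:1 per axis)
print(mipmap_level(4.0, 0.0, 0.0, 4.0))  # 2.0
print(mipmap_level(1.0, 0.0, 0.0, 1.0))  # 0.0 -- full-resolution texture
```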
Example of Mipmap Level Calculation
- Environment map on a teapot
- Approximately 9 texels will contribute to 1 pixel
- Get the texels from the levels surrounding 9:1, that is, from level 1 (4:1) and level 2 (16:1)
- Linearly interpolate those
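Working the numbers in Python: 9 texels per pixel is a 3× scale per axis, so the level falls between 1 and 2, and the two levels' filtered samples are blended linearly:

```python
import math

scale = math.sqrt(9)        # 9 texels per pixel = 3x in each direction
level = math.log2(scale)    # ≈ 1.585, between level 1 (4:1) and level 2 (16:1)
lo, hi = int(level), int(level) + 1
w = level - lo              # weight of the coarser level, ≈ 0.585

def blend_levels(sample_lo, sample_hi):
    # linear interpolation between the two mipmap levels' samples
    return (1 - w) * sample_lo + w * sample_hi

print(lo, hi, round(w, 3))  # 1 2 0.585
```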
Example
[figure: point sampling, linear filtering, mipmapped point sampling, mipmapped linear filtering]
Quality vs. Time
- The cheapest filtering is no filtering: just do point sampling, i.e., choose the texel nearest to the current pixel
- The next cheapest is prefiltering: choose the closest mipmap level and perform point sampling there
- Next is bilinear interpolation: works for both minification and magnification; needs to fetch 4 texels => more memory accesses => slower
- Combining the nearest mipmap level with bilinear interpolation: a bit more calculation, again fetches 4 texels
- Trilinear filtering gives the best results but has the highest cost: calculate the mipmap level, do bilinear filtering on both neighboring levels, then interpolate linearly between them; needs to fetch 8 texels, a big burden on memory bandwidth!
Texture Functions
- Controls how the texture is applied:
  glTexEnv{fi}[v](GL_TEXTURE_ENV, prop, param)
- GL_TEXTURE_ENV_MODE modes:
  - GL_MODULATE: modulates with the computed shade
  - GL_BLEND: blends with an environment color
  - GL_REPLACE: use only the texture color
- Example:
  glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
- Set the blend color with GL_TEXTURE_ENV_COLOR
Perspective Correction Hint
- Texture coordinates and colors are interpolated either linearly in screen space or using depth/perspective values (slower)
- Noticeable for polygons on edge
- glHint(GL_PERSPECTIVE_CORRECTION_HINT, hint), where hint is one of GL_DONT_CARE, GL_NICEST, GL_FASTEST
Generating Texture Coordinates
- OpenGL can generate texture coordinates automatically: glTexGen{ifd}[v]()
- Specify a plane; texture coordinates are generated based upon distance from the plane
- Generation modes: GL_OBJECT_LINEAR, GL_EYE_LINEAR, GL_SPHERE_MAP (used for environment maps)
Texture Objects
- Texture is part of the OpenGL state
- If we have different textures for different objects, OpenGL will be moving large amounts of data from processor memory to texture memory
- Recent versions of OpenGL have texture objects: one image per texture object
- Texture memory can hold multiple texture objects
Applying Textures II
1. Specify textures in texture objects
2. Set the texture filter
3. Set the texture function
4. Set the texture wrap mode
5. Set the optional perspective correction hint
6. Bind the texture object
7. Enable texturing
8. Supply texture coordinates for vertices (coordinates can also be generated)
Other Texture Features
- Environment maps:
  - Start with an image of the environment taken through a wide-angle lens; can be either a real scanned image or an image created in OpenGL
  - Use this texture to generate a spherical map
  - Use automatic texture coordinate generation
- Multitexturing: apply a sequence of textures through cascaded texture units
Compositing and Blending
Objectives
- Learn to use the A component in RGBA color for: blending for translucent surfaces, compositing images, antialiasing
Opacity and Transparency
- Opaque surfaces permit no light to pass through
- Transparent surfaces permit all light to pass
- Translucent surfaces pass some light: translucency = 1 − opacity (α)
- An opaque surface has α = 1
The Alpha Channel
- In addition to RGB, we store an alpha value for every pixel; the set of alpha values for an image is called the alpha channel
- Two interpretations of α:
  - coverage: how large a portion of the pixel is covered
  - opaqueness: how much light is blocked from passing through the pixel (opaqueness = 1 − transparency)
- Values: transparent / no coverage when α = 0; opaque / full coverage when α = 1; otherwise partially transparent / partially covered
- Relationship between α and RGB: computed at the same time, needs comparable resolution, and can be manipulated in almost exactly the same way
Physical Models
- Dealing with translucency in a physically correct manner is difficult due to the complexity of the internal interactions of light and matter
- Instead, we use a simpler model suited to a pipeline renderer
Writing Model
- Use the A component of RGBA (or RGBα) color to store opacity
- During rendering we can expand our writing model to use RGBA values
[figure: source component × source blending factor + destination component × destination blending factor → blend → color buffer]
Blending Equation
- We can represent source and destination pixels with four-element (RGBA) arrays:
  s = [sr, sg, sb, sα]
  d = [dr, dg, db, dα]
- Suppose that the source and destination blending factors are
  b = [br, bg, bb, bα]
  c = [cr, cg, cb, cα]
- Blend as
  d′ = [br sr + cr dr, bg sg + cg dg, bb sb + cb db, bα sα + cα dα]
How Does Blending Work?
- Source (Rs, Gs, Bs, As): the new value
- Destination (Rd, Gd, Bd, Ad): the value in the frame buffer
- Blending factors: source (Sr, Sg, Sb, Sa) and destination (Dr, Dg, Db, Da)
- Result: multiply each color by its factor, then add:
  (Rs Sr, Gs Sg, Bs Sb, As Sa) + (Rd Dr, Gd Dg, Bd Db, Ad Da)
  = (Rs Sr + Rd Dr, Gs Sg + Gd Dg, Bs Sb + Bd Db, As Sa + Ad Da)
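The multiply-and-add above, as a Python sketch of one blend (the pixel values are made up; the factors are those of GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA):

```python
def blend(src, dst, src_factor, dst_factor):
    # per component: result = src * S + dst * D, clamped to [0, 1]
    return tuple(min(1.0, s * sf + d * df)
                 for s, sf, d, df in zip(src, src_factor, dst, dst_factor))

src = (1.0, 0.0, 0.0, 0.5)   # translucent red fragment, alpha 0.5
dst = (0.0, 0.0, 1.0, 1.0)   # opaque blue already in the frame buffer
a = src[3]
out = blend(src, dst, (a, a, a, a), (1 - a, 1 - a, 1 - a, 1 - a))
print(out)  # (0.5, 0.0, 0.5, 0.75)
```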
OpenGL Blending and Compositing
- Must enable blending and pick source and destination factors:
  glEnable(GL_BLEND);
  glBlendFunc(source_factor, destination_factor);
- Only certain factors are supported: GL_ZERO, GL_ONE, GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA; see the Redbook for the complete list
Example
- Suppose that we start with the opaque background color (R0, G0, B0, 1); this color becomes the initial destination color
- We now want to blend in a translucent polygon with color (R1, G1, B1, α1)
- Select GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA as the source and destination blending factors:
  R′1 = α1 R1 + (1 − α1) R0, and similarly for G and B
- Note this formula is correct if the polygon is either opaque or transparent
Blending Examples
- Blend 2 images 50% : 50%
  - Draw the first image with src factor GL_ONE (1,1,1,1), dst factor GL_ZERO (0,0,0,0)
  - Draw the second image with an alpha of .5: src factor GL_SRC_ALPHA (.5,.5,.5,.5), dst factor GL_SRC_ALPHA (.5,.5,.5,.5)
- Blend 2 images 75% : 25%
  - Draw the first image with src factor GL_ONE (1,1,1,1), dst factor GL_ZERO (0,0,0,0)
  - Draw the second image with an alpha of .25: src factor GL_SRC_ALPHA (.25,.25,.25,.25), dst factor GL_ONE_MINUS_SRC_ALPHA (.75,.75,.75,.75)
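Checking the arithmetic of both recipes in Python on a single made-up channel value from each image:

```python
def blend(src, dst, sf, df):
    return src * sf + dst * df

first, second = 0.8, 0.4   # one color channel of each (hypothetical) image

# 50% : 50% -- first pass (ONE, ZERO), second pass (SRC_ALPHA, SRC_ALPHA), alpha .5
buf = blend(first, 0.0, 1.0, 0.0)   # buffer now holds the first image
buf = blend(second, buf, 0.5, 0.5)
print(round(buf, 6))  # 0.6 -- the average of 0.8 and 0.4

# 75% : 25% -- second pass (SRC_ALPHA, ONE_MINUS_SRC_ALPHA), alpha .25
buf = blend(first, 0.0, 1.0, 0.0)
buf = blend(second, buf, 0.25, 0.75)
print(round(buf, 6))  # 0.7
```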
Clamping and Accuracy
- All components (RGBA) are clamped to stay in the range (0,1)
- However, in a typical system, RGBA values are only stored to 8 bits
- Can easily lose accuracy if we add many components together
- Example: add together n images
  - Divide all color components by n to avoid clamping
  - Blend with source factor = 1, destination factor = 1
  - But division by n loses bits
Order Dependency
- Is this image correct? Probably not
- Polygons are rendered in the order they pass down the pipeline
- Blending functions are order dependent
Opaque and Translucent Polygons
- Suppose that we have a group of polygons, some opaque and some translucent
- How do we use hidden-surface removal?
- Opaque polygons block all polygons behind them and affect the depth buffer
- Translucent polygons should not affect the depth buffer: render with glDepthMask(GL_FALSE), which makes the depth buffer read-only
- Sort the polygons first to remove the order dependency
Fog
- We can composite with a fixed color and have the blending factors depend on depth; this simulates a fog effect
- Blend the source color Cs and fog color Cf by
  C′s = f Cs + (1 − f) Cf
  where f is the fog factor
- Fog factor variants: exponential, Gaussian, linear (depth cueing)
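The three fog-factor variants and the blend, as a Python sketch (the parameter names and defaults are illustrative):

```python
import math

def fog_factor(z, mode="exp", density=0.5, start=0.0, end=1.0):
    # f = 1 means no fog, f = 0 means fully fogged
    if mode == "linear":                  # depth cueing
        return (end - z) / (end - start)
    if mode == "exp":
        return math.exp(-density * z)
    if mode == "exp2":                    # the Gaussian variant
        return math.exp(-(density * z) ** 2)
    raise ValueError(mode)

def apply_fog(cs, cf, f):
    # C's = f*Cs + (1 - f)*Cf, per component
    return tuple(f * s + (1 - f) * c for s, c in zip(cs, cf))

print(fog_factor(0.0))                                   # 1.0 at the eye
print(apply_fog((1.0, 0.0, 0.0), (0.5, 0.5, 0.5), 0.5))  # (0.75, 0.25, 0.25)
```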
Fog Functions
OpenGL Fog Functions
GLfloat fcolor[4] = { };
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_EXP);
glFogf(GL_FOG_DENSITY, 0.5);
glFogfv(GL_FOG_COLOR, fcolor);
Line Aliasing
- An ideal raster line is one pixel wide
- All line segments, other than vertical and horizontal segments, partially cover pixels
- Simple algorithms color only whole pixels, leading to the "jaggies", or aliasing
- Similar issue for polygons
Antialiasing
- Can try to color a pixel by adding a fraction of its color to the frame buffer
- The fraction depends on the percentage of the pixel covered by the fragment
- The fraction also depends on whether there is overlap
[figure: fragments with no overlap vs. overlap]
Area Averaging
- Use the average area α1 + α2 − α1α2 as the blending factor
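The α1 + α2 − α1α2 factor is the expected combined coverage of two fragments whose positions within the pixel are independent; in Python:

```python
def union_coverage(a1, a2):
    # expected covered area of the union of two fragments, assuming
    # their placement within the pixel is uncorrelated
    return a1 + a2 - a1 * a2

print(union_coverage(0.5, 0.5))  # 0.75
print(union_coverage(1.0, 0.3))  # 1.0 -- a fully covering fragment stays full
```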
OpenGL Antialiasing
- Can enable antialiasing separately for points, lines, or polygons:
  glEnable(GL_POINT_SMOOTH);
  glEnable(GL_LINE_SMOOTH);
  glEnable(GL_POLYGON_SMOOTH);
  glEnable(GL_BLEND);
  glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Accumulation Buffer
- Compositing and blending are limited by the resolution of the frame buffer, typically 8 bits per color component
- The accumulation buffer is a high-resolution buffer (16 or more bits per component) that avoids this problem
- Write into it or read from it with a scale factor
- Slower than direct compositing into the frame buffer
Applications
- Compositing
- Image filtering (convolution)
- Whole-scene antialiasing
- Motion effects