Image I/O and OpenGL Textures


Creating Images
Using dynamic memory allocation we can create an array for an RGB image. The easiest way to do this is as follows:
- Create an array based on the width, height and number of components per pixel (the image depth)
- Loop through these and fill in each RGB component
- Write to some image file format
- Free the array
Most of these steps are simple, but saving the image file relies on another library.

ImageMagick / Magick++
Magick++ provides a simple C++ API to the ImageMagick image-processing library, which supports reading and writing a huge number of image formats as well as a broad spectrum of traditional image-processing operations. Magick++ provides access to most of the features available from the C API in a simple, object-oriented and well-documented framework. For more details see http://www.imagemagick.org/Magick++/
The simplest operation with the Magick++ library is dumping an array to an image file. This is used in the following example.

#include <iostream>
#include <cstdlib>
#include <cmath>
#include <Magick++.h>

// define width and height of image
const int WIDTH=720;
const int HEIGHT=576;

int main(void)
{
  // allocate an array of char for the image, where data is packed in RGB
  // format, 0 = no intensity, 255 = full intensity
  char *image = new char[WIDTH*HEIGHT*3];
  // index into our image array
  unsigned long int index=0;
  // now loop over the width and height of the image and fill in the pixels
  for(int y=0; y<HEIGHT; ++y)
  {
    for(int x=0; x<WIDTH; ++x)
    {
      // set red channel to full, G & B to off
      image[index]=255;
      image[index+1]=0;
      image[index+2]=0;
      // now skip to next RGB block
      index+=3;
    } // end of width loop
  } // end of height loop
  // now create an image data block
  Magick::Image output(WIDTH,HEIGHT,"RGB",Magick::CharPixel,image);
  // set the output image depth to 16 bit
  output.depth(16);
  // write the file
  output.write("test.tiff");
  // delete the image data
  delete [] image;
  return EXIT_SUCCESS;
}

Building
Like SDL, ImageMagick has a config script we can use in qmake:

TEMPLATE = app
TARGET = SimpleImageWrite
CONFIG -= app_bundle
DEPENDPATH += .
INCLUDEPATH += .
QMAKE_CXXFLAGS += $$system(Magick++-config --cppflags)
LIBS += $$system(Magick++-config --ldflags --libs)
macx:INCLUDEPATH += /usr/local/include/ImageMagick
macx:LIBS += -L/usr/local/lib -lMagick++-Q16
# Input
SOURCES += SimpleImageWrite.cpp

Why not use char[][]?
You will notice that the array used for the image data is a char []. You may think it would be easier to use a two-dimensional array for the x,y co-ordinates. However, as we will see in various examples, this is not the case: it doesn't take much code to let us set individual pixels in a single char [] array.

new / delete
The first example uses new and delete for the image data array. Obviously we could forget to delete the array, so the next examples will use the boost smart pointers; in particular we will use the boost::scoped_array template.

Setting Individual Pixels
We can set individual pixels by indexing into the memory data based on x,y co-ordinates. We then set the block of 3 values for the R,G,B components. The following code does this:

void setPixel(char *_data, unsigned int _x, unsigned int _y, char _r, char _g, char _b)
{
  unsigned int index=(_y*WIDTH*3)+_x*3;
  _data[index]=_r;
  _data[index+1]=_g;
  _data[index+2]=_b;
}

Setting Background Colour
Once the setPixel function is written we can use it to set the background colour:

void setBgColour(char *_data, char _r, char _g, char _b)
{
  for(unsigned int y=0; y<HEIGHT; ++y)
  {
    for(unsigned int x=0; x<WIDTH; ++x)
    {
      setPixel(_data,x,y,_r,_g,_b);
    }
  }
}

Example

int main()
{
  boost::scoped_array<char> image(new char[WIDTH*HEIGHT*3]);
  // clear to white
  setBgColour(image.get(),255,255,255);
  int checkSize=20;
  for(int y=0; y<HEIGHT; ++y)
  {
    for(int x=0; x<WIDTH; ++x)
    {
      if(abs((x/checkSize + y/checkSize)) % 2 < 1)
        setPixel(image.get(),x,y,255,0,0);
      else
        setPixel(image.get(),x,y,255,255,255);
    }
  }
  Magick::Image output(WIDTH,HEIGHT,"RGB",Magick::CharPixel,image.get());
  output.depth(16);
  output.write("test.png");
  return EXIT_SUCCESS;
}

The % (modulus) Operator
The remainder operator (%) returns the integer remainder of dividing the first operand by the second. For example, the value of 7 % 2 is 1, since 7/2 = 3 (integer division), 3 * 2 = 6 and 7 - 6 = 1. Similarly, 299 % 100 = 99, since 299/100 = 2, 2 * 100 = 200 and 299 - 200 = 99.

The % (modulus) Operator
The magnitude of m % n must always be less than the divisor n. The table below shows some results of the % operator:

3 % 5 = 3      5 % 3 = 2
4 % 5 = 4      5 % 4 = 1
5 % 5 = 0      15 % 5 = 0
6 % 5 = 1      15 % 6 = 3
7 % 5 = 2      15 % -7 varies (1 under gcc)
8 % 5 = 3      15 % 0 is undefined (core dump under gcc)

x%40 y %40 int main() boost::scoped_array<char > image(new char [WIDTH*HEIGHT*3*sizeof(char)]); // clear to white setbgcolour(image.get(),255,255,255); for(int y=0; y<height; ++y) for(int x=0; x<width; ++x) if( (y%20) && (x%20)) setpixel(image.get(),x,y,255,0,0); else setpixel(image.get(),x,y,255,255,255); Magick::Image output(width,height,"rgb",magick::charpixel,image.get()); output.depth(16); output.write("test.png"); return EXIT_SUCCESS; x%20 y %10 x%100 y %2

Fake Sphere
The following function is used to describe a sphere:

// code modified from Computer Graphics with OpenGL, F.S. Hill
// get the value on the sphere at co-ord s,t
float fakeSphere(float _s, float _t)
{
  float r=sqrt((_s-0.5)*(_s-0.5)+(_t-0.5)*(_t-0.5));
  if(r<0.5)
    return 1-r/0.5;
  else
    return 1.0;
}

Fake Sphere
This function works for any value of s and t in the range 0-1. The value is 1.0 (white) outside the sphere, falls to 0 (black) at the sphere's edge, and rises back to white in the centre, as shown in the image above. Using the template code, add this function and draw a sphere.

int main()
{
  boost::scoped_array<float> image(new float[WIDTH*HEIGHT*3]);
  // index into our data structure
  unsigned long int index=0;
  // our step in texture space from 0-1 within the width of the image
  float sStep=1.0/WIDTH;
  float tStep=1.0/HEIGHT;
  // actual S,T value for texture space
  float s=0.0;
  float t=0.0;
  // loop for the image dimensions
  for(int y=0; y<HEIGHT; ++y)
  {
    for(int x=0; x<WIDTH; ++x)
    {
      // fill the data values with sphere values
      image[index]=fakeSphere(s,t);
      image[index+1]=fakeSphere(s,t);
      image[index+2]=fakeSphere(s,t);
      // update the S value
      s+=sStep;
      // step to the next image index
      index+=3;
    }
    // update the T value
    t+=tStep;
    // reset S to the left hand value
    s=0.0;
  }
  Magick::Image output(WIDTH,HEIGHT,"RGB",Magick::FloatPixel,image.get());
  output.depth(16);
  output.write("test.png");
  return EXIT_SUCCESS;
}

Repeating Patterns
As the previous function works over 0-1, if we make the sphere values range over 0-8 and use only the part after the decimal point, we can create a pattern as shown above. To do this we use the C++ function fmod. The fmod() function computes the floating-point remainder of x/y. Specifically, it returns the value x - i*y for some integer i such that, if y is non-zero, the result has the same sign as x and magnitude less than the magnitude of y. So to make the values of s and t repeat 8 times we would use:

float ss=fmod(s*8,1);
float tt=fmod(t*8,1);

int main()
{
  boost::scoped_array<float> image(new float[WIDTH*HEIGHT*3]);
  // index into our data structure
  unsigned long int index=0;
  // our step in texture space from 0-1 within the width of the image
  float sStep=1.0/WIDTH;
  float tStep=1.0/HEIGHT;
  // actual S,T value for texture space
  float s=0.0;
  float t=0.0;
  float ss,tt;
  // loop for the image dimensions
  for(int y=0; y<HEIGHT; ++y)
  {
    for(int x=0; x<WIDTH; ++x)
    {
      ss=fmod(s*40,1.0);
      tt=fmod(t*40,1.0);
      // fill the data values with sphere values
      image[index]=fakeSphere(ss,tt);
      image[index+1]=fakeSphere(ss,tt);
      image[index+2]=fakeSphere(ss,tt);
      // update the S value
      s+=sStep;
      // step to the next image index
      index+=3;
    }
    // update the T value
    t+=tStep;
    // reset S to the left hand value
    s=0.0;
  }
  Magick::Image output(WIDTH,HEIGHT,"RGB",Magick::FloatPixel,image.get());
  output.depth(16);
  output.write("test.png");
  return EXIT_SUCCESS;
}

The other images on the slide use: ss=fmod(s*6,1), tt=fmod(t*4,1); ss=fmod(s*2,1), tt=fmod(t*6,1); ss=fmod(s*4,2), tt=fmod(t*4,2); and ss=fmod(s*16,4), tt=fmod(t*16,4).

Reading Images
The read method of the image will attempt to read the image file and determine its format. We can then access the different elements of the image (size, pixels etc.) via the different methods. The following example loads an image and generates mipmaps.

Mipmapping
Mipmapping is a technique where an image is repeatedly reduced in size by a power of 2. This is done by sampling the image and storing the averaged pixels in the new mipmap. The algorithm used can vary, using different filtering techniques.

Magick::Image image;
image.read(argv[1]);
int width=image.size().width();
int height=image.size().height();
// only going to deal with RGB for now
unsigned char *sourceImage = new unsigned char[width*height*3];
unsigned int i=0;
// this is slow and we could use image.getPixels to access the raw data, however
// this would mean we have to manage bits per pixel and other type information;
// the method below is easy to use as the quantum will always be converted for
// us to the correct type (uchar)
Magick::Color c;
for(int h=0; h<height; ++h)
{
  for(int w=0; w<width; ++w)
  {
    c=image.pixelColor(w,h);
    sourceImage[i++]=c.redQuantum();
    sourceImage[i++]=c.greenQuantum();
    sourceImage[i++]=c.blueQuantum();
  }
}

Getting Data

void getRGB(const unsigned char *_data, int _x, int _y,
            unsigned char &o_r, unsigned char &o_g, unsigned char &o_b,
            int _width)
{
  o_r=_data[((_width*3)*_y)+(_x*3)];
  o_g=_data[((_width*3)*_y)+(_x*3)+1];
  o_b=_data[((_width*3)*_y)+(_x*3)+2];
}

// loop until we run out of mip levels
int mipLevel=2;
for(int ml=width/2; ml>=2; ml/=2)
{
  unsigned char *destImage = new unsigned char[(width/mipLevel)*(height/mipLevel)*3];
  i=0;
  unsigned char r1,g1,b1;
  unsigned char r2,g2,b2;
  unsigned char r3,g3,b3;
  unsigned char r4,g4,b4;
  // now loop and average the source image data into the new one
  for(int h=0; h<height/mipLevel; ++h)
  {
    for(int w=0; w<width/mipLevel; ++w)
    {
      int dw=w*mipLevel;
      int dh=h*mipLevel;
      getRGB(sourceImage,dw,dh,r1,g1,b1,width);
      getRGB(sourceImage,dw+1,dh,r2,g2,b2,width);
      getRGB(sourceImage,dw,dh+1,r3,g3,b3,width);
      getRGB(sourceImage,dw+1,dh+1,r4,g4,b4,width);
      // root-mean-square average of the four samples
      destImage[i]=sqrt((r1*r1+r2*r2+r3*r3+r4*r4)/4);
      destImage[i+1]=sqrt((g1*g1+g2*g2+g3*g3+g4*g4)/4);
      destImage[i+2]=sqrt((b1*b1+b2*b2+b3*b3+b4*b4)/4);
      i+=3;
    }
  }
  // write out image and close
  Magick::Image output(width/mipLevel,height/mipLevel,"RGB",Magick::CharPixel,destImage);
  output.depth(16);
  char str[40];
  static int f=0;
  sprintf(str,"%02dmipmap%dx%d.png",f++,ml,ml);
  output.write(str);
  delete [] destImage;
  mipLevel*=2;
} // end of each mip

QImage
Qt has a built-in image loading class called QImage. It is built as a wrapper around other system image libraries, a bit like ImageMagick. It should load the same types of images as ImageMagick, but not always. The following example loads in an image and uses the red channel to generate the height of the geometry.

In this example we get the width and height from the image and use these for the steps. We then generate a series of points equally spaced in x,z, with y set to the value of the red channel.

// load our image and get size
QImage image(m_imageName.c_str());
int imageWidth=image.size().width()-1;
int imageHeight=image.size().height()-1;
std::cout<<"image size "<<imageWidth<<" "<<imageHeight<<"\n";
// calculate the deltas for the x,z values of our point
float wStep=_width/(float)imageWidth;
float dStep=_depth/(float)imageHeight;
// now we assume that the grid is centred at 0,0,0 so we make
// it flow from -w/2, -d/2
float xPos=-(_width/2.0);
float zPos=-(_depth/2.0);
// now loop from top left to bottom right and generate points
std::vector<ngl::Vec3> gridPoints;
for(int z=0; z<=imageHeight; ++z)
{
  for(int x=0; x<=imageWidth; ++x)
  {
    // grab the colour and use for the Y (height); only use the red channel
    QColor c(image.pixel(x,z));
    gridPoints.push_back(ngl::Vec3(xPos,c.redF()*4,zPos));
    // now store the colour as well
    gridPoints.push_back(ngl::Vec3(c.redF(),c.greenF(),c.blueF()));
    // calculate the new position
    xPos+=wStep;
  }
  // now increment to next z row
  zPos+=dStep;
  // we need to re-set the xPos for the new row
  xPos=-(_width/2.0);
}

Indices
Next we create a series of indices for the triangle strip. Once we have a complete row, we add a special index value to indicate that we are at the end of the row. This will be used later by the OpenGL primitive restart command.

std::vector<GLuint> indices;
// some unique index value to indicate we have finished with a row and
// want to draw a new one
GLuint restartFlag=imageWidth*imageHeight+9999;
for(int z=0; z<imageHeight; ++z)
{
  for(int x=0; x<imageWidth; ++x)
  {
    // vertex in actual row
    indices.push_back(z*(imageWidth+1)+x);
    // vertex row below
    indices.push_back((z+1)*(imageWidth+1)+x);
  }
  // now we have a row of tri strips, signal a re-start
  indices.push_back(restartFlag);
}

// we could use an ngl::VertexArrayObject but in this case this will show how to
// create our own as a demo / reminder
// so first create a vertex array
glGenVertexArrays(1,&m_vaoID);
glBindVertexArray(m_vaoID);
// now a VBO for the grid point data
GLuint vboID;
glGenBuffers(1,&vboID);
glBindBuffer(GL_ARRAY_BUFFER,vboID);
glBufferData(GL_ARRAY_BUFFER,gridPoints.size()*sizeof(ngl::Vec3),&gridPoints[0].m_x,GL_STATIC_DRAW);
// and one for the index values
GLuint iboID;
glGenBuffers(1,&iboID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,iboID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,indices.size()*sizeof(GLuint),&indices[0],GL_STATIC_DRAW);
// setup our attribute pointers; we are using 0 for the verts (note the stride is
// 2*Vec3 as position and colour are interleaved)
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,sizeof(ngl::Vec3)*2,0);
// this one is the colour pointer and we need to offset by 3 floats
glVertexAttribPointer(1,3,GL_FLOAT,GL_FALSE,sizeof(ngl::Vec3)*2,((float *)NULL + (3)));
// enable the pointers
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

glEnable(GL_PRIMITIVE_RESTART);
glPrimitiveRestartIndex(restartFlag);

We now tell OpenGL to enable the primitive restart system, and we tell it what index value should trigger a restart. This is very similar to the old glBegin / glEnd style commands but works on indexed buffer data. When glDrawElements encounters the restartFlag value it will re-start the draw, beginning a new triangle strip.

Texture Mapping
The realism of an image is greatly enhanced by adding surface textures to the various faces of a mesh object. In part a) images have been pasted onto each face of a box. Part b) shows an image which has been wrapped around a cylinder. The wall also appears to be made of bricks; however, it is just a flat plane with a repeated texture applied to it.

Basic Texture Techniques
The basic technique begins with some texture function, texture(s,t), in texture space, which is traditionally parameterised by s and t. The function texture(s,t) produces a colour or intensity value for each value of s and t between 0 and 1. The figure shows two examples of texture functions, where the value of texture(s,t) varies between 0 (dark) and 1 (light). Part a shows a bitmap texture and part b shows a procedurally produced texture.

Bitmap Textures
Textures are often formed from bitmap representations of images. Such a representation consists of an array of colour values, such as texture[c][r], often called texels. If the array has C columns and R rows, the indices c and r vary from 0 to C-1 and R-1 respectively. In the simplest case the function texture(s,t) accesses samples in the array as in the code:

Colour texture(float s, float t)
{
  return texture[(int)(s*C)][(int)(t*R)];
}

Bitmap Textures
In this case Colour holds an RGB triple. For example, if R=400 and C=600, then texture(0.261,0.783) evaluates to texture[156][313]. Note the variation in s from 0 to 1 encompasses 600 pixels, whereas the same variation in t encompasses 400 pixels. To avoid distortion during rendering, this texture must be mapped onto a rectangle with aspect ratio 6/4.

Procedural Textures
An alternative way to define a texture is to use a mathematical function or procedure. For instance, the spherical shapes that appear in the last image could be generated by the following code:

float fakeSphere(float s, float t)
{
  float r=sqrt((s-0.5)*(s-0.5)+(t-0.5)*(t-0.5));
  if(r<0.3)
    return 1-r/0.3;
  else
    return 0.2;
}

This function varies from 1 (white) at the centre to 0 (black) at the edges of the apparent sphere. Anything that can be computed can provide a texture: smooth blends and swirls of colour, fractals, solid objects etc. This is the way most modern rendering tools provide their shaders.

Pasting Textures onto a Flat Surface
Since texture space is flat, it is simplest to paste texture onto a flat surface. The figure above shows a texture image mapped to a portion of a planar polygon F. We must specify how to associate points on the texture with points on F. In OpenGL 2.x we use the function glTexCoord2f() to associate a point in texture space, Pi=(si,ti), with each vertex Vi of the face. The function glTexCoord2f(s,t) sets the current texture coordinates to (s,t), and they are attached to subsequently defined vertices.

Pasting Textures onto a Flat Surface II
Normally each call to glVertex3f is preceded by a call to glTexCoord2f, so each vertex gets a new pair of texture coordinates. For example, to define a quadrilateral face and to position a texture on it, we send OpenGL four texture coordinates and four 3D points as follows:

glBegin(GL_QUADS);
  glTexCoord2f(0.0,0.0); glVertex3f(1.0,2.5,1.5);
  glTexCoord2f(0.0,0.6); glVertex3f(1.0,3.7,1.5);
  glTexCoord2f(0.8,0.6); glVertex3f(2.0,3.7,1.5);
  glTexCoord2f(0.8,0.0); glVertex3f(2.0,2.5,1.5);
glEnd();

Attaching a Pi to each Vi is equivalent to prescribing a polygon P in texture space that has the same number of vertices as F. Usually P has the same shape as F, so the mapping is linear and adds little distortion.

OpenGL 3.x
In OpenGL 3.2 and above we just pass in the texture co-ordinates using attributes. We then access these values in the shader to determine the s,t values. Depending upon how these are created we may also have to do other transformations on the co-ordinates.

struct VertData
{
  GLfloat u;  // texture co-ords
  GLfloat v;
  GLfloat nx; // normal co-ords
  GLfloat ny;
  GLfloat nz;
  GLfloat x;  // vert co-ords
  GLfloat y;
  GLfloat z;
};

Mapping a Square to a Rectangle
The figure shows the common case in which the four corners of the texture square are associated with the four corners of a rectangle. In this example the texture is a 640 by 480 pixel bitmap, and it is pasted onto a rectangle with aspect ratio 640/480, so it appears without distortion. Note that the texture coordinates still range from 0 to 1 even though the image size is 640 by 480.

Repeating Textures
The above figure shows the use of texture coordinates that tile the texture, making it repeat. To do this, some texture coordinates that lie outside the interval [0,1] are used. When the rendering routine encounters a value of s or t outside the unit square, such as s=2.67, it ignores the integral part and uses only the fractional part, 0.67.

Repeating Textures II
Thus the point on a face that requires (s,t)=(2.6,3.77) is textured with texture(0.6,0.77). By default OpenGL tiles textures this way; if desired, it may be set to clamp texture values instead. Thus a coordinate pair (s,t) is sent down the pipeline along with each vertex of the face. The notion is that points inside F will be filled with texture values lying inside P by finding the internal coordinate values (s,t) through interpolation.

OpenGL Texture Mapping Steps
To use texture mapping, you perform the following steps:
1. Create a texture object and specify a texture for that object.
2. Indicate how the texture is to be applied to each pixel.
3. Enable texture mapping.
4. Draw the scene, supplying both texture and geometric coordinates.

Creating a Texture Object
A texture is usually thought of as being a 2D image, but it can also be a 1D modulation value or a 3D volume data set. The data describing the texture may consist of one, two, three or four elements per texel. Typically image data is loaded from an image file to represent either R,G,B or R,G,B,A data. However, procedural texture functions may also be used, as shown below:

float fakeSphere(float s, float t)
{
  float r=sqrt((s-0.5)*(s-0.5)+(t-0.5)*(t-0.5));
  if(r<0.3)
    return 1-r/0.3;
  else
    return 0.2;
}

Indicate how the Texture is to be Applied to Each Pixel
You can choose any of four possible functions for computing the final RGBA value from the fragment colour and the texture image data. One possibility is simply to use the texture colour as the final colour (replace mode). Another method is to use the texture to modulate, or scale, the fragment's colour. In modern OpenGL this is done in the shader.

Enable Texture Mapping
Texture mapping must be enabled before drawing the scene with textures. Texturing is enabled or disabled using glEnable() and glDisable(). The type of texturing to enable is then specified using either GL_TEXTURE_1D, GL_TEXTURE_2D or GL_TEXTURE_3D.

Specifying a Texture

void glTexImage2D(GLenum target, GLint level,
                  GLint internalFormat,
                  GLsizei width, GLsizei height,
                  GLint border, GLenum format,
                  GLenum type, const GLvoid *texels);

The function glTexImage2D defines a 2D texture; it takes several arguments as shown above. The target is set to either GL_TEXTURE_2D or GL_PROXY_TEXTURE_2D. level is used to specify the level for multiple images (mipmaps); if this is not used, set it to 0. internalFormat specifies the format of the data; there are 38 different constants, but the most common are GL_RGB and GL_RGBA.

Specifying a Texture II
width and height specify the extents of the image; values must be a power of 2 (128, 256, 512 etc.). border specifies the width of a border, which is either 0 (no border) or 1. format and type specify the format and data type of the texture image data. format is usually GL_RGB, GL_RGBA or GL_LUMINANCE. type tells how the data in the image is actually stored (i.e. unsigned int, float, char etc.) and is set using GL_BYTE, GL_INT, GL_FLOAT, GL_UNSIGNED_BYTE etc. Finally, texels contains the texture image data.

glTexParameter

void glTexParameter{if}(GLenum target, GLenum pname, TYPE param);

glTexParameter is used to specify how textures behave. It has many different parameters. The target parameter is GL_TEXTURE_[1D,2D,3D] depending on the texture type. The pname and param types are shown in the following table.

glTexParameter values

Parameter               Values
GL_TEXTURE_WRAP_S       GL_CLAMP, GL_CLAMP_TO_EDGE, GL_REPEAT
GL_TEXTURE_WRAP_T       GL_CLAMP, GL_CLAMP_TO_EDGE, GL_REPEAT
GL_TEXTURE_WRAP_R       GL_CLAMP, GL_CLAMP_TO_EDGE, GL_REPEAT
GL_TEXTURE_MAG_FILTER   GL_NEAREST, GL_LINEAR
GL_TEXTURE_MIN_FILTER   GL_NEAREST, GL_LINEAR, GL_NEAREST_MIPMAP_NEAREST, GL_NEAREST_MIPMAP_LINEAR, GL_LINEAR_MIPMAP_NEAREST, GL_LINEAR_MIPMAP_LINEAR
GL_TEXTURE_BORDER_COLOR any four colour values in [0.0, 1.0]
GL_TEXTURE_PRIORITY     [0.0, 1.0] for the current texture object
GL_TEXTURE_MIN_LOD      any floating point value
GL_TEXTURE_MAX_LOD      any floating point value
GL_TEXTURE_BASE_LEVEL   any non-negative integer
GL_TEXTURE_MAX_LEVEL    any non-negative integer

Creating a Texture Object with OpenGL

GLuint textureName;
float *data = /* some image data */;

glGenTextures(1,&textureName);
glBindTexture(GL_TEXTURE_2D,textureName);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,size,size,0,GL_RGB,GL_FLOAT,data);
glTexEnvf(GL_TEXTURE_ENV,GL_TEXTURE_ENV_MODE,GL_REPLACE);

In the above example textureName is the id of the texture object, and data is an array of the RGB tuple data created for the texture (either procedurally or loaded in from a file).

Texture Co-ordinates
The following example shows how to specify vertex and texture co-ordinates in OpenGL 3.x. First we create an array of vertices and texture co-ordinates:

GLfloat vertices[] = {
  -1,1,-1, 1,1,-1, 1,-1,-1,  -1,1,-1, -1,-1,-1, 1,-1,-1,  // back
  -1,1,1, 1,1,1, 1,-1,1,  -1,-1,1, 1,-1,1, -1,1,1,        // front
  -1,1,-1, 1,1,-1, 1,1,1,  -1,1,1, 1,1,1, -1,1,-1,        // top
  -1,-1,-1, 1,-1,-1, 1,-1,1,  -1,-1,1, 1,-1,1, -1,-1,-1,  // bottom
  -1,1,-1, -1,1,1, -1,-1,-1,  -1,-1,-1, -1,-1,1, -1,1,1,  // left
  1,1,-1, 1,1,1, 1,-1,-1,  1,-1,-1, 1,-1,1, 1,1,1         // right
};

GLfloat texture[] = {
  0,0, 0,1, 1,1,  0,0, 1,0, 1,1,  // back
  0,1, 1,0, 1,1,  0,0, 1,0, 0,1,  // front
  0,0, 1,0, 1,1,  0,1, 1,1, 0,0,  // top
  0,0, 1,0, 1,1,  0,1, 1,1, 0,0,  // bottom
  1,0, 1,1, 0,0,  0,0, 0,1, 1,1,  // left
  1,0, 1,1, 0,0,  0,0, 0,1, 1,1   // right
};

// now we repeat for the UV data using the second VBO
glBindBuffer(GL_ARRAY_BUFFER, vboID[1]);
// note sizeof(texture) already gives the size of the array in bytes
glBufferData(GL_ARRAY_BUFFER, sizeof(texture), texture, GL_STATIC_DRAW);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);
...
shader->bindAttribute("TextureShader", 0, "inVert");
shader->bindAttribute("TextureShader", 1, "inUV");

Vertex Shader

#version 400
/// @brief MVP passed from app
uniform mat4 MVP;
// first attribute: the vertex values from our VAO
layout (location=0) in vec3 inVert;
// second attribute: the UV values from our VAO
layout (location=1) in vec2 inUV;
// we use this to pass the UV values to the frag shader
out vec2 vertUV;

void main()
{
  // calculate the vertex position
  gl_Position = MVP * vec4(inVert, 1.0);
  // pass the UV values to the frag shader
  vertUV = inUV.st;
}

Fragment Shader

#version 400
// this is a reference to the current 2D texture object
uniform sampler2D tex;
// the vertex UV
smooth in vec2 vertUV;
// the final fragment colour
layout (location=0) out vec4 outColour;

void main()
{
  // set the fragment colour to the current texture
  outColour = texture(tex, vertUV);
}

Loading Images

There are many ways to load image data, and a number of libraries are available under Linux. Qt provides us with QImage, and we can use this to load the image data. Ultimately, when dealing with images for OpenGL, we need the data in a contiguous block of RGB(A) memory. To get this data we can build a simple texture structure that loads from file and stores the data.

Loading Images

void GLWindow::loadTexture()
{
  QImage image;
  bool loaded = image.load("textures/crate.bmp");
  if(loaded == true)
  {
    int width = image.width();
    int height = image.height();
    unsigned char *data = new unsigned char[width * height * 3];
    unsigned int index = 0;
    QRgb colour;
    for(int y = 0; y < height; ++y)
    {
      for(int x = 0; x < width; ++x)
      {
        colour = image.pixel(x, y);
        data[index++] = qRed(colour);
        data[index++] = qGreen(colour);
        data[index++] = qBlue(colour);
      }
    }
    glGenTextures(1, &m_textureName);
    glBindTexture(GL_TEXTURE_2D, m_textureName);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
    glGenerateMipmap(GL_TEXTURE_2D); // allocate the mipmaps
    delete [] data; // OpenGL now owns its own copy of the data
  }
}

Qt Texture Loading

QGLWidget provides a static method convertToGLFormat. This takes any QImage and returns a QImage suitable for OpenGL texturing, as shown in the following code.

// QGLWidget has a static method to convert a QImage to a format
// suitable for OpenGL; we call this and then load to OpenGL
finalImage = QGLWidget::convertToGLFormat(finalImage);
// the image is now in RGBA format and unsigned byte; load it ready for later
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, finalImage.width(), finalImage.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, finalImage.bits());

ngl::Texture

ngl has a very simple texture class which will load an image file using QImage. It will determine if the image is RGB or RGBA and allocate the correct texture data. By default it will make the current active texture unit texture 0. However, we can select other texture units by calling setMultiTexture before generating the texture id.

MultiTexture

In the previous examples only one texture unit is active at a time. This can be quite limiting, as we may have several texture maps we need to access in the shader at the same time. To do this we use OpenGL's multitexture features.

Normal Mapping

In the following example we will be using three texture maps, as shown: one for the base colour, one for the normals and one for the specular highlights.

// set our samplers for each of the textures; these correspond to the
// multitexture ids below
shader->setShaderParam1i("tex", 0);
shader->setShaderParam1i("spec", 1);
shader->setShaderParam1i("normalMap", 2);
// load and set a texture for the colour
ngl::Texture t("textures/trollcolour.tiff");
t.setMultiTexture(0);
t.setTextureGL();
// mip map the textures
glGenerateMipmap(GL_TEXTURE_2D);
// now one for the specular map
ngl::Texture spec("textures/2k_troll_spec_map.jpg");
spec.setMultiTexture(1);
spec.setTextureGL();
// mip map the textures
glGenerateMipmap(GL_TEXTURE_2D);
// this is our normal map
ngl::Texture normal("textures/2k_ct_normal.tif");
normal.setMultiTexture(2);
normal.setTextureGL();
// mip map the textures
glGenerateMipmap(GL_TEXTURE_2D);

Normal Mapping

In this case we are using normal maps generated from ZBrush, which are expressed in tangent space. When normal mapping we calculate the normal, tangent and bi-tangent (sometimes called the binormal) of the current surface point, so that all our calculations are done in the same space. Thus we must do some calculations on our data.

Normal Mapping

We are going to load our mesh from an obj file and use the normals in it to calculate the tangent and bitangent. These will then be passed to our shader and used to transform our lights into tangent space for the shading calculations. The following data structure will be passed to the shader for each vertex.

// a simple structure to hold our vertex data
struct VertData
{
  GLfloat u;  // tex coords from obj
  GLfloat v;  // tex coords
  GLfloat nx; // normal from obj mesh
  GLfloat ny;
  GLfloat nz;
  GLfloat x;  // position from obj
  GLfloat y;
  GLfloat z;
  GLfloat tx; // tangent
  GLfloat ty;
  GLfloat tz;
  GLfloat bx; // binormal
  GLfloat by;
  GLfloat bz;
};

ngl::Obj

The ngl::Obj class will load an obj file and allow us access to the stored vertex, uv and normal data. It also gives us the face structure, through which we can access all of the data. The following code shows the basic parsing of the face data:

std::vector<ngl::Vec3> verts = mesh.getVertexList();
std::vector<ngl::Face> faces = mesh.getFaceList();
std::vector<ngl::Vec3> tex = mesh.getTextureCordList();
std::vector<ngl::Vec3> normals = mesh.getNormalList();

for(unsigned int i = 0; i < nFaces; ++i)
{
  // now for each triangle in the face (remember we ensured tri above)
  for(int j = 0; j < 3; ++j)
  {
    // pack in the vertex data first
    d.x = verts[faces[i].m_vert[j]].m_x;
    d.y = verts[faces[i].m_vert[j]].m_y;
    d.z = verts[faces[i].m_vert[j]].m_z;
    d.nx = normals[faces[i].m_norm[j]].m_x;
    d.ny = normals[faces[i].m_norm[j]].m_y;
    d.nz = normals[faces[i].m_norm[j]].m_z;
    d.u = tex[faces[i].m_tex[j]].m_x;
    d.v = tex[faces[i].m_tex[j]].m_y;
    // ... the tangent calculation on the next slide continues inside this loop

Tangent Calculations

    // now we calculate the tangent / binormal based on the article here
    // http://www.terathon.com/code/tangent.html
    ngl::Vec3 c1 = normals[faces[i].m_norm[j]].cross(ngl::Vec3(0.0, 0.0, 1.0));
    ngl::Vec3 c2 = normals[faces[i].m_norm[j]].cross(ngl::Vec3(0.0, 1.0, 0.0));
    ngl::Vec3 tangent;
    ngl::Vec3 binormal;
    if(c1.length() > c2.length())
    {
      tangent = c1;
    }
    else
    {
      tangent = c2;
    }
    // now we normalize the tangent so we don't need to do it in the shader
    tangent.normalize();
    // now we calculate the binormal using the model normal and tangent (cross)
    binormal = normals[faces[i].m_norm[j]].cross(tangent);
    // normalize again so we don't need to in the shader
    binormal.normalize();
    d.tx = tangent.m_x;
    d.ty = tangent.m_y;
    d.tz = tangent.m_z;
    d.bx = binormal.m_x;
    d.by = binormal.m_y;
    d.bz = binormal.m_z;
    // finally add it to our mesh VAO structure
    vboMesh.push_back(d);
  } // end triangle loop
} // end face loop
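The tangent construction above can be exercised outside ngl. This sketch, using a hypothetical minimal Vec3 rather than ngl::Vec3, follows the same recipe: cross the normal with the Z and Y axes, keep the longer result as the tangent, then take the binormal as normal x tangent. The resulting frame is orthogonal whenever the normal is unit length, which is what lets the vertex shader use plain dot products to move vectors into tangent space.

```cpp
#include <cmath>

// Minimal stand-in for ngl::Vec3, just enough for the tangent recipe.
struct Vec3
{
    float x, y, z;
    Vec3 cross(const Vec3 &r) const
    {
        return { y * r.z - z * r.y, z * r.x - x * r.z, x * r.y - y * r.x };
    }
    float length() const { return std::sqrt(x * x + y * y + z * z); }
    void normalize() { float l = length(); x /= l; y /= l; z /= l; }
    float dot(const Vec3 &r) const { return x * r.x + y * r.y + z * r.z; }
};

// Build a tangent and binormal for a unit normal, as in the slide code.
void tangentBasis(const Vec3 &n, Vec3 &tangent, Vec3 &binormal)
{
    Vec3 c1 = n.cross(Vec3{0.0f, 0.0f, 1.0f});
    Vec3 c2 = n.cross(Vec3{0.0f, 1.0f, 0.0f});
    // keep whichever cross product is further from degenerate
    tangent = (c1.length() > c2.length()) ? c1 : c2;
    tangent.normalize();
    binormal = n.cross(tangent);
    binormal.normalize();
}
```

For n = (0, 0, 1) the first cross product degenerates to zero, so the Y-axis branch is taken and the tangent and binormal come out along -X and -Y; all three vectors are mutually perpendicular.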

Vertex Shader

#version 400
// first attribute: the vertex values from our VAO
layout (location=0) in vec3 inVert;
// second attribute: the UV values from our VAO
layout (location=1) in vec2 inUV;
// third attribute: the normal values from our VAO
layout (location=2) in vec3 inNormal;
// fourth attribute: the tangent values from our VAO
layout (location=3) in vec3 inTangent;
// fifth attribute: the binormal values from our VAO
layout (location=4) in vec3 inBinormal;
...
void main()
{
  // calculate the vertex position
  gl_Position = MVP * vec4(inVert, 1.0);
  // pass the UV values to the frag shader
  vertUV = inUV.st;
  vec4 worldPosition = MV * vec4(inVert, 1.0);
  // now fill the array of light pos and half vectors for the available lights
  for(int i = 0; i < 3; ++i)
  {
    vec3 lightDir = normalize(light[i].position.xyz - worldPosition.xyz);
    // transform light and half angle vectors by the tangent basis
    // this is based on code from here
    // http://www.ozone3d.net/tutorials/bump_mapping.php
    // as our values are already normalized we don't need to here
    lightVec[i].x = dot(lightDir, inTangent);
    lightVec[i].y = dot(lightDir, inBinormal);
    lightVec[i].z = dot(lightDir, inNormal);
    vec3 halfVector = normalize(worldPosition.xyz + lightDir);
    halfVec[i].x = dot(halfVector, inTangent);
    halfVec[i].y = dot(halfVector, inBinormal);
    halfVec[i].z = dot(halfVector, inNormal);
  }
}

Fragment Shader

/// @brief our output fragment colour
out vec4 fragColour;

void main()
{
  // lookup normal from normal map, move from [0,1] to [-1,1] range, normalize
  vec3 normal = normalize(texture(normalMap, vertUV.st).xyz * 2.0 - 1.0);
  // we need to flip the z as this is done in ZBrush
  normal.z = -normal.z;
  // default material values to be accumulated
  float lambertFactor;
  vec4 diffuseMaterial = texture(tex, vertUV.st);
  // compute specular lighting
  vec4 specularMaterial = texture(spec, vertUV.st);
  float shininess;
  for(int i = 0; i < 3; ++i)
  {
    lambertFactor = max(dot(lightVec[i], normal), 0.0);
    // light is hitting us here, so calculate and accumulate values
    if(lambertFactor > 0.0)
    {
      // get the phong / blinn values
      shininess = pow(max(dot(halfVec[i], normal), 0.0), specPower);
      fragColour += diffuseMaterial * light[i].diffuse * lambertFactor;
      //fragColour += specularMaterial * light[i].specular * shininess;
    }
  }
}
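The decode on the first line of main() maps each stored normal-map channel c in [0,1] back to a component in [-1,1] via 2c - 1, so a flat normal-map pixel (0.5, 0.5, 1.0) decodes to the straight-up normal (0, 0, 1). A small sketch of just that mapping (the function name is ours):

```cpp
// Map a normal-map colour channel from the stored [0,1] range back to
// the [-1,1] component range, exactly as the fragment shader does.
float decodeChannel(float c)
{
    return c * 2.0f - 1.0f;
}
```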

References

Computer Graphics with OpenGL, 2nd Ed., F. S. Hill Jr.
The OpenGL Programming Guide, 4th Ed., Shreiner et al.
http://www.ozone3d.net/tutorials/bump_mapping.php
http://en.wikipedia.org/wiki/Normal_mapping