CS452/552; EE465/505. Image Formation


CS452/552; EE465/505 Image Formation 1-15-15

Outline! Image Formation! Introduction to WebGL, continued: draw a colored triangle using WebGL. Read: Angel, Chapters 2 & 3. Homework #1 will be available on GitHub tomorrow at noon.

Image Formation! Fundamental imaging notions! Physical basis for image formation Light Color Perception! Synthetic camera model! Other models

Image Formation! In computer graphics, we form images, which are generally two dimensional, using a process analogous to how images are formed by physical imaging systems: cameras, microscopes, telescopes, the human visual system

Elements of Image Formation! Objects! Viewer! Light source(s)! Attributes that govern how light interacts with the materials in the scene! Note the independence of the objects, the viewer, and the light source(s)

Light! Light is the part of the electromagnetic spectrum that causes a reaction in our visual systems! Generally these are wavelengths in the range of about 350-750 nm (nanometers)! Long wavelengths appear as reds and short wavelengths as blues

Luminance and Color Images! Luminance image: monochromatic; values are gray levels; analogous to working with black-and-white film or television! Color image: has the perceptual attributes of hue, saturation, and lightness. Do we have to match every frequency in the visible spectrum? No!

Three-Color Theory! The human visual system has two types of sensors: Rods: monochromatic, night vision; Cones: color sensitive, three types of cones. Only three values (the tristimulus values) are sent to the brain! Need only match these three values Need only three primary colors

Additive and Subtractive Color! Additive color Form a color by adding amounts of three primaries CRTs, projection systems, positive film Primaries are Red (R), Green (G), Blue (B)! Subtractive color Form a color by filtering white light with Cyan (C), Magenta (M), and Yellow (Y) filters Light-material interactions Printing Negative film
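As a small illustration of the relationship between the two systems (not from the slides): the subtractive primaries are the complements of the additive ones, so an RGB color can be converted to CMY by subtracting each component from 1. A minimal sketch in JavaScript; the function and variable names are illustrative:

// Convert an additive RGB color (components in [0, 1]) to subtractive CMY.
function rgbToCmy(r, g, b) {
    return { c: 1.0 - r, m: 1.0 - g, y: 1.0 - b };
}

console.log(rgbToCmy(1.0, 0.0, 0.0));  // pure red -> { c: 0, m: 1, y: 1 }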

Pinhole Camera! Use trigonometry to find the projection of a point at (x, y, z): x_p = -x/(z/d), y_p = -y/(z/d), z_p = d. These are the equations of simple perspective.
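A quick numeric check of these equations (a sketch, not part of the lecture code; the function name is illustrative):

// Simple perspective: project (x, y, z) through a pinhole at the origin onto the plane z = d.
function pinholeProject(x, y, z, d) {
    return { xp: -x / (z / d), yp: -y / (z / d), zp: d };
}

// Example: a point at (1, 2, 10) with d = 5 projects to (-0.5, -1, 5).
console.log(pinholeProject(1, 2, 10, 5));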

Synthetic Camera Model [Figure: a projector from a point p passes through the center of projection and intersects the image plane at the projection of p]

Advantages! Separation of objects, viewer, light sources! Two-dimensional graphics is a special case of three-dimensional graphics! Leads to simple software API Specify objects, lights, camera, attributes Let implementation determine image! Leads to fast hardware implementation

Global vs Local Lighting! Cannot compute color or shade of each object independently Some objects are blocked from light Light can reflect from object to object Some objects might be translucent

Ray Tracing and Geometric Optics! One way to form an image is to follow rays of light from a point source, finding which rays enter the lens of the camera. However, each ray of light may have multiple interactions with objects before being absorbed or going to infinity. Can handle global effects: multiple reflections, translucent objects. Slow. Must have the whole database available at all times.

Why not ray tracing?! Ray tracing seems more physically based, so why don't we use it to design a graphics system?! Possible, and actually simple for simple objects such as polygons and quadrics with simple point sources! In principle, it can produce global lighting effects such as shadows and multiple reflections, but ray tracing is slow and not well suited for interactive applications! Ray tracing with GPUs is close to real time

Practical Approach! Process objects one at a time, in the order they are generated by the application Can consider only local lighting! Pipeline architecture [Figure: application program → graphics pipeline → display]! All steps can be implemented in hardware on the graphics card

Immediate Mode Graphics! Geometry specified by vertices Locations in space (2 or 3 dimensional) Points, lines, circles, polygons, curves, surfaces! Immediate mode Each time a vertex is specified in the application, its location is sent to the GPU Old style uses glVertex Creates a bottleneck between CPU and GPU Removed from OpenGL 3.1 and OpenGL ES 2.0

Retained Mode Graphics! Put all vertex attribute data in an array! Send the array to the GPU to be rendered immediately! Almost OK, but the problem is we would have to send the array over each time we need another render of it! Better to send the array once and store it on the GPU for multiple renderings
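A minimal sketch of this retained-mode approach, assuming a WebGL context gl, a vertices array, and the flatten() helper from MV.js already exist (it mirrors the square.js code shown later in this lecture):

// Send the vertex data to the GPU once...
var bufferId = gl.createBuffer();
gl.bindBuffer( gl.ARRAY_BUFFER, bufferId );
gl.bufferData( gl.ARRAY_BUFFER, flatten(vertices), gl.STATIC_DRAW );

// ...then every render reuses the data already stored on the GPU.
function render() {
    gl.clear( gl.COLOR_BUFFER_BIT );
    gl.drawArrays( gl.TRIANGLES, 0, 3 );
}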

Execution in Browser

Example: red triangle, revisited! How to draw lines? points?! How is the data sent to the GPU, and interpreted as the variables in the shaders?

Shaders! We assign names to the shaders that we can use in the JS file! These are trivial pass-through (do nothing) shaders that set the two required built-in variables gl_Position and gl_FragColor! Note both shaders are full programs! Note the vector type vec4! Must set the precision in the fragment shader

Files
../Common/webgl-utils.js: standard utilities for setting up a WebGL context (in the Common directory on the website)
../Common/initShaders.js: JS and WebGL code for reading, compiling, and linking the shaders
../Common/MV.js: our matrix-vector package
square.js: the application file

Example Code

<!DOCTYPE html>
<html>
<head>
<script id="vertex-shader" type="x-shader/x-vertex">
attribute vec4 vPosition;
void main() {
    gl_Position = vPosition;
}
</script>
<script id="fragment-shader" type="x-shader/x-fragment">
precision mediump float;
void main() {
    gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );
}
</script>

Notes
onload: determines where to start execution when all code is loaded
canvas: gets the WebGL context from the HTML file
vertices: use the vec4 type from MV.js; a JS array is not the same as a C or Java array (it is an object with methods, e.g. vertices.length // 4); values are in clip coordinates

Notes
initShaders: used to load, compile, and link the shaders to form a program object
Load data onto the GPU by creating a vertex buffer object on the GPU
Note the use of flatten() to convert the JS array into an array of float32s
Finally, we must connect each variable in the program with the corresponding variable in the shader; we need its name, type, and location in the buffer
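The transcript picks up the application file partway through. A hedged sketch of what the beginning might look like, consistent with the notes above (the vertex values are assumptions; vec2 is used to match the size-2 vertexAttribPointer call on the later slide, although the notes mention vec4):

window.onload = function init() {
    var canvas = document.getElementById( "gl-canvas" );
    var gl = WebGLUtils.setupWebGL( canvas );
    if ( !gl ) { alert( "WebGL isn't available" ); }

    // Three vertices of the triangle, in clip coordinates (MV.js vec2 type)
    var vertices = [
        vec2(  0,  1 ),
        vec2( -1, -1 ),
        vec2(  1, -1 )
    ];
    // ... continues with the "Configure WebGL" code on the next slide ...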

JS File (cont)

// Configure WebGL
gl.viewport( 0, 0, canvas.width, canvas.height );
gl.clearColor( 1.0, 1.0, 1.0, 1.0 );

// Load shaders and initialize attribute buffers
var program = initShaders( gl, "vertex-shader", "fragment-shader" );
gl.useProgram( program );

// Load the data into the GPU
var bufferId = gl.createBuffer();
gl.bindBuffer( gl.ARRAY_BUFFER, bufferId );
gl.bufferData( gl.ARRAY_BUFFER, flatten(vertices), gl.STATIC_DRAW );

JS File (cont)

// Associate our shader variables with our data buffer
var vPosition = gl.getAttribLocation( program, "vPosition" );
gl.vertexAttribPointer( vPosition, 2, gl.FLOAT, false, 0, 0 );
gl.enableVertexAttribArray( vPosition );

render();
};

function render() {
    gl.clear( gl.COLOR_BUFFER_BIT );
    gl.drawArrays( gl.TRIANGLES, 0, 3 );
}
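To answer the earlier question about points and lines: with this setup, only the primitive type passed to drawArrays needs to change (a sketch; for gl.POINTS the vertex shader should also set gl_PointSize):

gl.drawArrays( gl.POINTS, 0, 3 );      // three separate points
gl.drawArrays( gl.LINE_LOOP, 0, 3 );   // the outline of the triangle
gl.drawArrays( gl.TRIANGLES, 0, 3 );   // the filled triangle, as above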

Example: colored triangle! Let's make each vertex of the triangle a different color: red, green, and blue. What will this look like?

Example: colored triangle! Let's make each vertex of the triangle a different color: red, green, and blue.

Normalized Device Coordinate System [Figure: the square from (-1, -1) to (1, 1), with (0, 0) at the center and corners at (-1, 1), (1, 1), (-1, -1), (1, -1)] (slide credit: Jeff Chastine)

Coordinates of our triangle: (0.0, 0.5, 0.0), (-0.5, -0.5, 0.0), (0.5, -0.5, 0.0) (slide credit: Jeff Chastine)

ColorFillTriangle.html

<!-- ColorFillTriangle.html, based on (c) 2012 matsuda -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<title>Draw a triangle with different vertex colors</title>
<script id="vertex-shader" type="x-shader/x-vertex">
    // on next slide
</script>
<script id="fragment-shader" type="x-shader/x-fragment">
    // on next slide
</script>
<script type="text/javascript" src="../Common/webgl-utils.js"></script>
<script type="text/javascript" src="../Common/initShaders.js"></script>
<script type="text/javascript" src="ColorFillTriangle.js"></script>
</head>

Vertex Shader

<script id="vertex-shader" type="x-shader/x-vertex">
attribute vec4 a_Position;
attribute vec4 a_Color;
varying vec4 v_Color;
void main() {
    gl_Position = a_Position;
    v_Color = a_Color;
}
</script>

Fragment Shader

<script id="fragment-shader" type="x-shader/x-fragment">
precision mediump float;
varying vec4 v_Color;
void main() {
    gl_FragColor = v_Color;
}
</script>

ColorFillTriangle.html, cont.

<!DOCTYPE html>
<html lang="en">
<head>
    <!-- on previous slide -->
</head>
<body>
<canvas id="gl-canvas" width="400" height="400">
    Please use a browser that supports "canvas"
</canvas>
</body>
</html>

ColorFillTriangle.js

// ColorFillTriangle.js, based on (c) 2012 matsuda
window.onload = function init() {
    // Retrieve <canvas> element
    var canvas = document.getElementById( "gl-canvas" );

    // Get the rendering context for WebGL
    var gl = WebGLUtils.setupWebGL( canvas );
    if (!gl) {
        console.log('Failed to get the rendering context for WebGL');
        return;
    }

    gl.viewport( 0, 0, canvas.width, canvas.height );
    gl.clearColor( 0.0, 0.0, 0.0, 1.0 );

ColorFillTriangle.js, cont.

    // Create shading program
    var program = initShaders( gl, "vertex-shader", "fragment-shader" );
    gl.useProgram( program );
    gl.program = program;

    // initVertexBuffers() is on the next slide
    var n = initVertexBuffers(gl);
    if (n < 0) {
        console.log('Failed to set the vertex information');
        return;
    }

    // Specify the color for clearing <canvas>
    gl.clearColor(0.0, 0.0, 0.0, 1.0);

    // Clear <canvas>
    gl.clear(gl.COLOR_BUFFER_BIT);

    // Draw the triangle
    gl.drawArrays(gl.TRIANGLES, 0, n);
};

ColorFillTriangle.js, cont.

function initVertexBuffers(gl) {
    var verticesColors = new Float32Array([
        // Vertex coordinates and color
         0.0,  0.5,   1.0, 0.0, 0.0,
        -0.5, -0.5,   0.0, 1.0, 0.0,
         0.5, -0.5,   0.0, 0.0, 1.0,
    ]);
    var n = 3;

    // Create a buffer object
    var vertexColorBuffer = gl.createBuffer();
    if (!vertexColorBuffer) {
        console.log('Failed to create the buffer object');
        return -1;
    }
    // continued on next slide

ColorFillTriangle.js, cont.

    // Bind the buffer object to target
    gl.bindBuffer(gl.ARRAY_BUFFER, vertexColorBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, verticesColors, gl.STATIC_DRAW);

    var FSIZE = verticesColors.BYTES_PER_ELEMENT;

    // Get the storage location of a_Position, assign & enable buffer
    var a_Position = gl.getAttribLocation(gl.program, 'a_Position');
    if (a_Position < 0) {
        console.log('Failed to get the storage location of a_Position');
        return -1;
    }
    // Position is 2 floats per vertex, at offset 0 in each 5-float record
    gl.vertexAttribPointer(a_Position, 2, gl.FLOAT, false, FSIZE * 5, 0);
    gl.enableVertexAttribArray(a_Position);  // Enable the assignment of the buffer object

ColorFillTriangle.js, cont.

    // Get the storage location of a_Color, assign buffer and enable
    var a_Color = gl.getAttribLocation(gl.program, 'a_Color');
    if (a_Color < 0) {
        console.log('Failed to get the storage location of a_Color');
        return -1;
    }
    // Color is 3 floats per vertex, starting 2 floats into each 5-float record
    gl.vertexAttribPointer(a_Color, 3, gl.FLOAT, false, FSIZE * 5, FSIZE * 2);
    gl.enableVertexAttribArray(a_Color);  // Enable the assignment of the buffer object

    // Unbind the buffer object
    gl.bindBuffer(gl.ARRAY_BUFFER, null);

    return n;
}

Vertex Processing! Much of the work in the pipeline is in converting object representations from one coordinate system to another Object coordinates Camera (eye) coordinates Screen coordinates! Every change of coordinates is equivalent to a matrix transformation! Vertex processor also computes vertex colors
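A hedged sketch (not part of this lecture's code) of how an application might supply those coordinate-change matrices using the MV.js helpers; the uniform names modelViewMatrix and projectionMatrix, and the use of the canvas and program variables from the earlier code, are assumptions:

// Build camera and projection matrices with MV.js...
var modelViewMatrix  = lookAt( vec3(0.0, 0.0, 2.0), vec3(0.0, 0.0, 0.0), vec3(0.0, 1.0, 0.0) );
var projectionMatrix = perspective( 45.0, canvas.width / canvas.height, 0.1, 10.0 );

// ...and send them to the vertex shader, which applies them to each vertex.
gl.uniformMatrix4fv( gl.getUniformLocation(program, "modelViewMatrix"),
                     false, flatten(modelViewMatrix) );
gl.uniformMatrix4fv( gl.getUniformLocation(program, "projectionMatrix"),
                     false, flatten(projectionMatrix) );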

Projection! Projection is the process that combines the 3D viewer with the 3D objects to produce the 2D image Perspective projections: all projectors meet at the center of projection Parallel projection: projectors are parallel, center of projection is replaced by a direction of projection

Primitive Assembly Vertices must be collected into geometric objects before clipping and rasterization can take place Line segments Polygons Curves and surfaces

Clipping! Just as a real camera cannot see the whole world, the virtual camera can only see part of the world or object space (the view volume). Objects that are not within this volume are said to be clipped out of the scene.

Rasterization! If an object is not clipped out, the appropriate pixels in the frame buffer must be assigned colors! Rasterizer produces a set of fragments for each object! Fragments are potential pixels Have a location in the frame buffer Color and depth attributes! Vertex attributes are interpolated over objects by the rasterizer

Fragment Processing! Fragments are processed to determine the color of the corresponding pixel in the frame buffer! Colors can be determined by texture mapping or interpolation of vertex colors! Fragments may be blocked by other fragments closer to the camera Hidden-surface removal
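In WebGL, hidden-surface removal via the depth buffer has to be requested explicitly; a small sketch (the depth test is off by default):

gl.enable( gl.DEPTH_TEST );                              // turn on the depth test
gl.clear( gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT );   // clear color and depth each frame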

Lack of Object Orientation! No version of OpenGL is object oriented, so there are multiple functions for a given logical function! Example: sending values to shaders gl.uniform3f gl.uniform2i gl.uniform3fv! Underlying storage mode is the same

WebGL function format! Example: gl.uniform3f(x, y, z): "gl" means the function belongs to the WebGL canvas (context), "uniform3f" is the function name, "3" is the dimension, and "f" means x, y, z are floating-point variables! gl.uniform3fv(p): the "v" suffix means p is an array
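In an actual WebGL call the first argument is the uniform's location; a short sketch showing the two forms side by side (the uniform name u_color is an assumption):

var loc = gl.getUniformLocation( program, "u_color" );
gl.uniform3f( loc, 1.0, 0.0, 0.0 );       // three separate floats
gl.uniform3fv( loc, [1.0, 0.0, 0.0] );    // the same values passed as an array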

WebGL constants! Most constants are defined in the canvas object In desktop OpenGL, they were in #include files such as gl.h! Examples: desktop OpenGL: glEnable(GL_DEPTH_TEST); WebGL: gl.enable(gl.DEPTH_TEST), gl.clear(gl.COLOR_BUFFER_BIT)

WebGL and GLSL! WebGL requires shaders and is based less on a state machine model than a data flow model! Most state variables, attributes and related pre 3.1 OpenGL functions have been deprecated! Action happens in shaders! Job of application is to get data to GPU

GLSL! OpenGL Shading Language! C-like, with matrix and vector types (2, 3, and 4 dimensional), overloaded operators, and C++-like constructors! Similar to Nvidia's Cg and Microsoft's HLSL! Code is sent to the shaders as source code! WebGL functions compile, link, and get information to the shaders

Coordinate Systems! The units in points are determined by the application and are called object, world, model or problem coordinates! Viewing specifications usually are also in object coordinates! Eventually pixels will be produced in window coordinates! WebGL also uses some internal representations that usually are not visible to the application but are important in the shaders! Most important is clip coordinates

Coordinate Systems and Shaders! Vertex shader must output in clip coordinates! Input to the fragment shader from the rasterizer is in window coordinates! Application can provide vertex data in any coordinate system, but the shader must eventually produce gl_Position in clip coordinates! Simple example uses clip coordinates

WebGL Camera! WebGL places a camera at the origin in object space pointing in the negative z direction! The default viewing volume is a box centered at the origin with sides of length 2
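A hedged sketch of what this implies: geometry outside the default 2×2×2 volume is clipped, so an application viewing a larger scene typically maps it into that volume first, for example with ortho() from MV.js (the projectionMatrix uniform name is an assumption):

// View a 10-unit cube centered at the origin by mapping it into the default volume.
var projectionMatrix = ortho( -5.0, 5.0, -5.0, 5.0, -5.0, 5.0 );
gl.uniformMatrix4fv( gl.getUniformLocation(program, "projectionMatrix"),
                     false, flatten(projectionMatrix) );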

Viewports! Do not have to use the entire window for the image: gl.viewport(x, y, w, h)! Values are in pixels (window coordinates)
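For example, to render into only the lower-left quarter of the canvas (a sketch, assuming canvas is the canvas element used earlier):

gl.viewport( 0, 0, canvas.width / 2, canvas.height / 2 );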

Next Topic! Transformations & Viewing! User Interaction