Introduction to Game Programming, Autumn 2017
04. Graphics for games
Juha Vihavainen, University of Helsinki

Outline

Creating and drawing game entities, from 2D images (sprites) to 3D models: the position, dimensions, and visual data of game objects as represented for computer graphics; 3D models ~ triangle meshes ~ geometry/shape (plus other info). On the rendering pipeline. Creating and using a virtual (synthetic) in-world camera.

Literature on 3D graphics

(Madhav, 2014) Ch. 2 2D Graphics, Ch. 4 3D Graphics, Ch. 8 Cameras
(Gregory, 2014) Ch. 10 The Rendering Engine, pp. 443-541; 10.2 The Rendering Pipeline, p. 489
Fletcher Dunn, Ian Parberry: 3D Math Primer for Graphics and Game Development. A K Peters/CRC Press, 2011.
Edward Angel, Dave Shreiner: Interactive Computer Graphics: A Top-Down Approach with Shader-Based OpenGL, 6th Ed. Addison-Wesley, 2011.
Tomas Akenine-Möller, Eric Haines, Naty Hoffman: Real-Time Rendering, 3rd Ed. A K Peters/CRC Press, 2008.
JungHyun Han: 3D Graphics for Game Programming. CRC, 2011.

Motivation for linear algebra

A game is (essentially) a mathematical model of a virtual world simulated on a computer: a game engine needs to keep track of the positions, orientations, and scales of objects, animate them in the game world, and transform them into screen space so they can be rendered on screen. 3D objects are (almost always) made up of triangles, the vertices of which are represented by vectors/points. Mathematics pervades everything in game development: games make use of many branches of mathematics, e.g., trigonometry, algebra, and calculus (often probability, too), but the most important kind of mathematics for a game programmer is vector and matrix math, i.e., linear algebra.
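
As a minimal illustration of that vector/matrix machinery, here is a sketch using MonoGame's Microsoft.Xna.Framework math types (the concrete values and the class name are invented for the example):

    using System;
    using Microsoft.Xna.Framework;   // MonoGame/XNA math types

    class TransformDemo
    {
        static void Main()
        {
            // A vertex of some model, given in the model's own (local) space.
            Vector3 localPos = new Vector3(1, 0, 0);

            // XNA/MonoGame uses row vectors, so transforms compose left to right:
            // first scale, then rotate, then translate into the game world.
            Matrix world = Matrix.CreateScale(2f)
                         * Matrix.CreateRotationZ(MathHelper.PiOver2)
                         * Matrix.CreateTranslation(10, 5, 0);

            // Transform the local-space point into world space.
            Vector3 worldPos = Vector3.Transform(localPos, world);
            Console.WriteLine(worldPos);   // roughly (10, 7, 0)
        }
    }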

Some 3D terminology

Types of art assets (Joseph Hocking, Unity in Action, 2015, p. 70): textures + shaders.

Rasterization: a set of algorithms that draw 3D objects into a 2D color buffer (part of the frame buffer, alongside other data). Modern computers use a graphics processing unit (GPU) that can do most rasterization, but we need to tell the GPU what we want to draw and how. Due to limited resources, we can't achieve photo-realistic images: a graphic (visual) artifact is some unwanted result (anomaly) of digital image processing, arising from approximate algorithms or from imprecise or corrupted data.

A graphics system (sketch)

Consists of: input devices, CPU, GPU (for general application, matrix, and other computations), memory, frame buffer, output devices. E.g., PCs, workstations, mobile phones, video game consoles, GPS systems, etc.

CRT monitor basics

A 2D image is an array of picture elements known as pixels. For color displays, each pixel contains a red, green, and blue sub-pixel: an RGBA value, for Red, Green, and Blue plus Alpha (alpha = 0 => totally transparent image/part). Resolution, width x height, determines the number of pixels: 300 x 200 means each row (or scan line) has 300 pixels and there is a total of 200 rows. CRTs used an electron gun to activate the various pixels. Modern display technologies (plasma, LCD (liquid-crystal display), etc.) have replaced CRT displays in (most) applications.
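
To make the pixel-array view concrete, a small sketch in plain C# (the names and the chosen pixel are invented) that allocates an RGBA color buffer of the resolution mentioned above and addresses one pixel:

    using System;

    class ColorBufferDemo
    {
        const int Width = 300, Height = 200;   // resolution from the slide

        static void Main()
        {
            // One RGBA pixel = 4 bytes, so the buffer is width * height * 4 bytes.
            byte[] colorBuffer = new byte[Width * Height * 4];
            Console.WriteLine($"buffer size: {colorBuffer.Length} bytes");  // 240000

            // Pixels are stored scan line by scan line; pixel (x, y) starts at:
            int x = 10, y = 20;
            int offset = (y * Width + x) * 4;

            // Write an opaque red pixel (R, G, B, A).
            colorBuffer[offset + 0] = 255;  // red
            colorBuffer[offset + 1] = 0;    // green
            colorBuffer[offset + 2] = 0;    // blue
            colorBuffer[offset + 3] = 255;  // alpha = 255 => fully opaque
        }
    }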

CRT monitor basics (historically)

The electron gun draws one scan line at a time. When it gets to the bottom-right corner, it takes some time to get the aim back to the top left (the vertical blank interval, or VBLANK); a VSYNC signal reports that the drawing is complete. [Figure: basic CRT drawing.] LCD displays require timing signals of their own: VSYNC (reset row pointer to top), HSYNC (reset column pointer to edge), and LCDCLK (the LCD clock that controls the display refresh rate).

Color buffer

A buffer is memory that stores the color data for all the pixels on the screen. With only a single color buffer, writing to it while still displaying the old image results in screen tearing: updating video data while drawing mashes up info from two or more frames (the new image and the old contents) in a single screen draw. [Figure: screen tearing caused by updating video data while drawing.]

Double buffering (as a game pattern)

Use two color buffers A and B: while buffer A is written to, display buffer B; then, next frame, write to B and display A. An interrupt service routine can run to modify data in the video display memory while it is not being read (signaled by a special signal from the device). The Double Buffer game programming pattern makes a series of sequential operations appear instantaneous or simultaneous (i.e., atomic); e.g., in game graphics, the scene must update smoothly and quickly, displaying a series of complete frames, each appearing instantly. Sometimes even triple buffering (three color buffers) is used: if the back buffer becomes filled while the display is not ready, start filling a third buffer; a potential drawback is input lag. (A code sketch of the pattern follows below.)

What is a graphics engine?

An API for real-time 2D and 3D applications (games); it often runs the "main loop", making it a software framework. It provides a development environment for programmers and may target multiple platforms, e.g., PC, consoles, smartphones, tablets. A graphics engine provides a level of abstraction: it wraps DirectX, OpenGL, or Vulkan inside the provided API, although API calls may still closely mirror the low-level C API. Shader programs are written separately and run on the GPU.
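
A minimal sketch of the Double Buffer pattern in plain C# (all names are invented; a real implementation would hand the front buffer to the display hardware at VSYNC):

    class DoubleBufferedRenderer
    {
        const int Width = 800, Height = 480;

        // Two color buffers: one being shown, one being drawn into.
        byte[] frontBuffer = new byte[Width * Height * 4];
        byte[] backBuffer  = new byte[Width * Height * 4];

        public void Frame()
        {
            DrawScene(backBuffer);    // render the next frame off-screen

            // Swap the references; ideally timed to VSYNC so the display
            // never reads a half-drawn buffer (no tearing).
            (frontBuffer, backBuffer) = (backBuffer, frontBuffer);

            Present(frontBuffer);     // hand the finished frame to the display
        }

        void DrawScene(byte[] target) { /* rasterize into target */ }
        void Present(byte[] frame)    { /* copy/flip to video memory */ }
    }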

"Getting started is easier.." Writing graphic applications can be hard to get started create your "graphics device" lots of low-level level initialization and set-up code handle input and other events (in a platform-independent way) load/unload various file formats of graphic/audio resources A graphics engine tries to hide (most of) the complexity the framework provides already running "empty" application Ideally, tools are integrated (XNA/MonoGame, Unreal, Unity) games are written and debugged within an IDE or builder separate project kinds/builds for consoles, smartphones content pipeline modifies assets (textures, 3D models) to a form more suitable for the game program (e.g., part of import or build phase) plus audio tools to author, organize, and play audio assets, etc Implementation of sprites Sprites are 2D visual objects that represent game characters other dynamic objects for simple games, backgrounds, too 13 Making a 2D game entity ("sprite") A"sprite" is often used to mean an element of a 2D game usually both a texture and a position on the screen (state) the texture is often shared but the position changes possibly using "speed speed" " variable (say, new vector2 2 (3, 3)) The position of a 2D object on the screen is given as coordinates based on screen pixels some appropriate window size (e.g., 800 * 480 pixels) these define the range for display coordinates Specify source rectangle (from a sprite sheet) ) if want to draw only a part of an image Scale an image (uniform, non-uniform along x and y, rectangle) Sprite depth (float)) between 0 (= front) and 1 (= back) We can draw things off the screen but then nothing is seen Painter's algorithm Sprites are drawn from back to front: nearest images are placed at the top so that background images become hidden (more or less) 14 The corresponding image format can vary depending on the platform (PNG, TGA, and PVR are potential options) class Sprite ImageFile image; // (current) texture int draworder; // using an int value, here int x, y; // specify location (e.g., top left corner) function Draw ( ) // draw the image at the correct (x, y) ).. Painter s algorithm applied to a 2D space scene 15 16

From 2D to 3D

3D is definitely more complex than 2D graphics. 2D is like painting on a canvas: the placement of sprites on a 2D screen. 3D simulates more-or-less "solid" game entities located in a 3D world, recording them with a freely-moving video camera.

Left- vs. right-handed coordinate systems

Left-handed: DirectX, Unity 3D. Right-handed: OpenGL, XNA/MonoGame, Unreal (with a z-up world).

"3D scene" in computer graphics

A 3D scene is "geometry" (triangles, lines, points, and more) with material: the surface properties of the geometry and how to draw it (a shader program), plus textures (images to glue onto the geometry) and light sources and their properties. A triangle consists of 3 vertices; a vertex is a 3D position, and may include normals and more. To take a picture, we need a camera: a virtual one decides what will end up in the final image.

"Virtual" (synthetic) 3D camera

Defined by position, direction vector, up vector, field of view (angle), and near and far planes. Used by OpenGL, DirectX, ray tracing, image processing, etc. The renderer creates the image of those shapes located inside the frustum. [Figure: the view frustum between the near and far planes, with the camera point, direction vector, and fov angle; the 3D world is projected onto a 2D image.]
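
In MonoGame/XNA terms, those camera parameters map directly onto two matrices; a sketch (the concrete values are invented):

    using Microsoft.Xna.Framework;

    class CameraDemo
    {
        static void Main()
        {
            Vector3 position = new Vector3(0, 2, 10);   // where the camera sits
            Vector3 direction = -Vector3.UnitZ;         // where it looks (right-handed: -z is forward)
            Vector3 up = Vector3.Up;                    // which way is "up" for the camera

            // Position + direction + up vector => view matrix.
            Matrix view = Matrix.CreateLookAt(position, position + direction, up);

            // Field of view (angle) + near and far planes => projection matrix.
            Matrix projection = Matrix.CreatePerspectiveFieldOfView(
                MathHelper.PiOver4,   // 45-degree vertical field of view
                800f / 480f,          // aspect ratio of the back buffer
                0.1f,                 // near plane
                1000f);               // far plane
        }
    }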

3D camera (cont.)

We can have objects in a scene that aren't (currently) visible. Objects can be rotated around three axes, and a freely moving camera can also be rotated around all 3 axes. The camera determines what shows up on screen: of course, if the camera is pointing in the opposite direction from an object, we won't see it.

Models in 3D games

3D games consist of large numbers of visible objects. In practice, these objects are not created by game code by defining each individual primitive (triangle). Instead, artists create objects for a game world, game designers focus on the behavior of those objects, and programmers mostly concentrate on support systems. Special 3D modeling tools are used, e.g., Blender, 3ds Max, Maya... Modeling tools permit separate editing of models, and plug-in tools allow models to be saved in a number of different file formats. [Figure: a sphere made of triangles; a code sketch for generating such a mesh follows below.]

Drawing the triangles in GPU

In 3D graphics engines, all rendering is ultimately handled by the GPU and shader programs. Shaders are small special-purpose programs that execute inside the GPU and define how the data received from a game program is processed in the programmable stages of the rendering pipeline. Shader programs are defined using a high-level shader language (e.g., Microsoft's HLSL, OpenGL's GLSL); Vulkan supports shaders in a bytecode format, called SPIR-V, as opposed to human-readable syntax like GLSL and HLSL. Shaders provide a wide range of visual effects; that is not part of this course, but see (Gregory) for explanations and examples of GPU programming. Luckily, the game IDE / tools often provide many predefined default shaders.
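
Purely as an illustration of "everything is triangles" (not how production assets are made, as noted above), a sketch that builds a latitude/longitude unit sphere as vertices plus triangle indices, using MonoGame's vertex types (all names are invented):

    using System;
    using System.Collections.Generic;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    static class SphereMesh
    {
        // Builds a unit sphere from (rings x segments) quads, two triangles each.
        public static void Build(int rings, int segments,
            out VertexPositionNormalTexture[] vertices, out short[] indices)
        {
            var verts = new List<VertexPositionNormalTexture>();
            for (int r = 0; r <= rings; r++)
            {
                float phi = MathHelper.Pi * r / rings;              // 0..pi, pole to pole
                for (int s = 0; s <= segments; s++)
                {
                    float theta = MathHelper.TwoPi * s / segments;  // 0..2pi around
                    // On a unit sphere, the position doubles as the surface normal.
                    var n = new Vector3(
                        (float)(Math.Sin(phi) * Math.Cos(theta)),
                        (float)Math.Cos(phi),
                        (float)(Math.Sin(phi) * Math.Sin(theta)));
                    verts.Add(new VertexPositionNormalTexture(
                        n, n, new Vector2((float)s / segments, (float)r / rings)));
                }
            }

            var idx = new List<short>();
            int stride = segments + 1;                 // vertices per ring
            for (int r = 0; r < rings; r++)
                for (int s = 0; s < segments; s++)
                {
                    short i0 = (short)(r * stride + s);
                    // Two triangles per quad, sharing two of their vertices.
                    idx.AddRange(new[] { i0, (short)(i0 + 1), (short)(i0 + stride) });
                    idx.AddRange(new[] { (short)(i0 + 1), (short)(i0 + stride + 1), (short)(i0 + stride) });
                }

            vertices = verts.ToArray();
            indices = idx.ToArray();
        }
    }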

Rendering data

Data shared between the CPU and the GPU: 3D model data (streams of vertices specifying triangles, ...), matrices that represent object transforms and skeletal animation, lighting parameters, and other kinds of "shader constants" used for computations. Data (almost exclusively) produced and managed by the GPU: vertex and index buffers, frame buffers.

Drawing shapes as sets of triangles

3D shapes (in games) are (ultimately) composed of triangles. A triangle has 3 points, one for each corner, which is the minimum number of vertices (points) to define a plane; these vertices need to be sent to the GPU for processing. For example:

    vert01 = new VertexPositionColor(new Vector3(0, 1, 0),  // the location of one vertex in a triangle
                                     Color.Blue);           // the color of the vertex (to be interpolated over)

A vertex structure may include other relevant info processed by shader programs; there are many alternatives.

Typical vertex structures (used by graphics API)

1. VertexPositionColor: x, y, z plus a color for each vertex; useful for basic shapes with basic colors.
2. VertexPositionTexture: x, y, z plus relative u, v coordinates into a texture (bitmap); can overlay a bitmap texture onto a shape.
3. VertexPositionNormalTexture: x, y, z plus u, v coordinates plus a normal vector; the normal permits calculating lighting effects.
4. VertexPositionColorTexture: x, y, z plus u, v coordinates plus color information; the color changes the color of the bitmap texture, so the same texture can potentially be reused in different contexts.

What are texture coordinates?

VertexPositionTexture takes two parameters: a Vector3 represents the position (x, y, z) of a vertex, and a Vector2 represents the relative texture coordinates u and v, telling which image point (texel) hits the vertex. u is a horizontal coordinate in the range 0.0..1.0 and v is a vertical coordinate in the range 0.0..1.0: the top-left corner = { 0, 0 }, a point exactly in the middle of the texture = { 0.5, 0.5 }, and the bottom-right corner = { 1, 1 }. Note that these relative (u, v) coordinates are independent of the size of the image (good for potential scaling or other changes).
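
Putting the vertex structures to use, a minimal MonoGame sketch that draws one colored triangle with BasicEffect, one of the predefined default shaders (the class name and values are invented):

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public class TriangleGame : Game
    {
        GraphicsDeviceManager graphics;
        BasicEffect basicEffect;
        VertexPositionColor[] triangle;

        public TriangleGame()
        {
            graphics = new GraphicsDeviceManager(this);
        }

        protected override void LoadContent()
        {
            // A predefined default shader; use the per-vertex colors.
            basicEffect = new BasicEffect(GraphicsDevice) { VertexColorEnabled = true };

            // A single triangle as three VertexPositionColor vertices.
            triangle = new[]
            {
                new VertexPositionColor(new Vector3(-1, 0, 0), Color.Red),
                new VertexPositionColor(new Vector3( 0, 1, 0), Color.Blue),
                new VertexPositionColor(new Vector3( 1, 0, 0), Color.Green),
            };
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            basicEffect.World = Matrix.Identity;                    // model -> world
            basicEffect.View = Matrix.CreateLookAt(                 // world -> camera
                new Vector3(0, 0, 5), Vector3.Zero, Vector3.Up);
            basicEffect.Projection = Matrix.CreatePerspectiveFieldOfView(
                MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio, 0.1f, 100f);

            foreach (var pass in basicEffect.CurrentTechnique.Passes)
            {
                pass.Apply();   // bind the default shader for this pass
                GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleList, triangle, 0, 1);
            }
            base.Draw(gameTime);
        }
    }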

What is a virtual game camera

In game graphics, a virtual camera is implemented by software to simulate the way a real camera (or an eye) would work in real-world situations. In the game/engine, the virtual camera is made up of mathematical calculations that determine how objects will finally be rendered, based on the location and angle of the virtual camera in the program. 3D is like a video camera recording: what the camera sees (inside its viewing frustum) is shown on screen.

Implementing a 3D graphics system (GPU)

Vertex and index buffers are used to stream lots of 3D points (vertices) to the graphics device for rendering; indices are just indirect pointers to shared vertices (to save space and bandwidth). Separately, we must tell how consecutive vertices are to be interpreted as primitive shapes (e.g., triangles forming a 3D object): whether adjacent triangles use shared vertices or not (lists, strips, fans). (A buffer-creation sketch follows below.) Shaders and related data structures (registers, memory) in the GPU handle the processing of vertices, primitives, and pixels; nowadays, shaders (default/custom) always need to be used. They apply matrix transformations (the Transform component) to vertices, i.e., translation (movement), rotation, and scale; apply colors and textures to triangles; and specify lighting and its location (plus other properties).

Painter's algorithm (again)

The painter's algorithm doesn't work well for 3D scenes. We would have to sort every triangle/pixel back to front; worse, the camera is dynamic, so we'd have to re-sort! In 3D, there is the possibility that no single triangle is the furthest back. [Figure: the overlapping-triangles failure case.] The painter's algorithm also has a lot of overdraw, unnecessarily redrawing the same pixel multiple times per frame => inefficient.

Solution: Z-buffering

Z-buffering determines which elements (parts) of a rendered scene are visible and which are hidden. Create a new screen-sized buffer called the z-buffer (depth buffer): it stores the depth (z coordinate) of every pixel that is drawn (or actually, of the last one). As we draw pixels in the scene, a pixel is drawn only if the depth stored in the z-buffer is greater than the depth of the current pixel (i.e., the stored pixel lies further out in the scene) => pixels can be drawn in any appropriate order, simplifying the overall processing of triangles. [Figure: a sample scene and its corresponding z-buffer (Wikipedia).]
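
A sketch of creating and drawing from vertex and index buffers in MonoGame, reusing the sphere-mesh arrays from the earlier sketch (assumed to run inside a Game subclass like the one above; note that the four-argument DrawIndexedPrimitives overload is MonoGame's, while classic XNA 4 uses a longer signature):

    // Upload the shared vertices once; indices then refer to them by number,
    // so a vertex shared by several triangles is stored (and transformed) only once.
    SphereMesh.Build(16, 32, out var vertices, out var indices);

    var vertexBuffer = new VertexBuffer(GraphicsDevice,
        typeof(VertexPositionNormalTexture), vertices.Length, BufferUsage.WriteOnly);
    vertexBuffer.SetData(vertices);

    var indexBuffer = new IndexBuffer(GraphicsDevice,
        IndexElementSize.SixteenBits, indices.Length, BufferUsage.WriteOnly);
    indexBuffer.SetData(indices);

    // Per frame: bind the buffers and tell the GPU how to interpret the stream.
    GraphicsDevice.SetVertexBuffer(vertexBuffer);
    GraphicsDevice.Indices = indexBuffer;
    GraphicsDevice.DrawIndexedPrimitives(
        PrimitiveType.TriangleList,   // consecutive index triples form triangles
        0,                            // base vertex
        0,                            // start index
        indices.Length / 3);          // primitive (triangle) count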

On the rendering pipeline

Note on "rasterization": a scene with 3D models (shapes) is converted into a raster image (consisting of pixels or dots); we need only represent the surface/outer boundary of an object. It is the most popular technique for producing real-time 3D computer graphics, used for nearly all visual models in game graphics; colors and textures can be applied. A pipeline is a sequence of stages operating in a fixed order (and potentially in parallel), e.g., vertex processing, rasterization, pixel (fragment) processing, ...; each stage receives its input from the prior stage and sends its output to the subsequent stage. A 3D application (game) sends the GPU a sequence of vertices batched into geometric "primitives", typically triangles (in buffers).

[Gregory, 10.2 The Rendering Pipeline, p. 489] Here, the pipeline includes the tools used to create scenes and game entities:

1. Tools stage (offline): defining geometry and surface properties (materials), animations, audio resources, etc.
2. Asset conditioning (offline): geometry and materials are processed by the asset conditioning pipeline (ACP) into an engine-ready format.
3. Application stage (CPU): potentially visible mesh instances are identified and submitted to the graphics hardware, along with their materials, for rendering.
4. Geometry processing (GPU): vertices are transformed and lit and projected into (homogeneous) clip space; triangles are processed by the optional geometry shader and then clipped to the frustum (lines and surfaces outside the view volume are removed).
5. Rasterization stage (GPU): triangles are converted into fragments that are shaded, passed through various tests (z test, alpha test, stencil test, etc.), and finally blended into the frame buffer.

The rendering pipeline (simplified overview)

[Figure: the five stages above shown as a pipeline, (1) and (2) offline, (3) CPU, (4) and (5) GPU; ACP = asset conditioning pipeline; fragment = potential pixel (OpenGL). From (Gregory, Fig. 10.41, p. 491).]

The rendering pipeline transforms data. The tools and asset conditioning stages deal with meshes and materials. The application stage deals in terms of mesh instances and submeshes, each of which is associated with a single material. During the geometry stage, each submesh is broken down into individual vertices, which are processed largely in parallel; at its conclusion, the triangles are reconstructed from the fully transformed and shaded vertices. In the rasterization stage, each triangle is broken into fragments, and these fragments are either discarded or eventually written into the frame buffer as colors.
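
Stage 4's "projected into (homogeneous) clip space" is just matrix math. A small sketch with the MonoGame types (the matrices are set up as in the earlier camera sketch; the values are invented):

    using Microsoft.Xna.Framework;

    class ClipSpaceDemo
    {
        static void Main()
        {
            Matrix world = Matrix.CreateTranslation(0, 0, -5);
            Matrix view = Matrix.CreateLookAt(new Vector3(0, 0, 10), Vector3.Zero, Vector3.Up);
            Matrix projection = Matrix.CreatePerspectiveFieldOfView(
                MathHelper.PiOver4, 800f / 480f, 0.1f, 1000f);

            // The per-vertex work of the geometry stage: one combined matrix...
            Matrix worldViewProjection = world * view * projection;

            // ...applied to a vertex position, giving homogeneous clip-space coordinates.
            Vector4 clip = Vector4.Transform(new Vector4(1, 1, 0, 1), worldViewProjection);

            // After clipping, the perspective divide by w yields normalized device coordinates.
            Vector3 ndc = new Vector3(clip.X, clip.Y, clip.Z) / clip.W;
            System.Console.WriteLine(ndc);
        }
    }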

Z-buffering in GPU hardware

When using z-buffering, we don't draw triangles back to front, or many draws would become wasted (when overwritten). Transparent objects, such as water, need some special arrangements: first draw all non-transparent objects, and then the transparent ones (partly disabling z-buffering, in order to blend); otherwise transparent objects would totally hide the objects behind them (the opaque background objects would not be drawn). Occlusion culling techniques eliminate, at the application stage, entire scene objects that cannot be visible in the current frame, e.g., a tree totally occluded by a building; spatial partitioning is used for efficiency. (A sketch of the opaque/transparent two-pass ordering follows after the pseudocode.)

The z-buffered rasterization algorithm in pseudocode, from (3D Math Primer for Graphics and Game Development, 2011), p. 346:

    // clear the frame and depth buffers
    fillColorBuffer(backgroundColor);
    fillDepthBuffer(infinity);

    // the outer loop iterates over all the primitives (usually triangles)
    for each geometric primitive do
        // first rasterize the primitive..
        for each pixel x, y in the projection of the primitive do
            // test the depth buffer, to see if a closer pixel has been written
            float primDepth = getDepthOfPrimitiveAtPixel(x, y);
            if primDepth > readDepthBuffer(x, y)
                // this pixel of the primitive is obscured, discard it
                continue;
            // determine the primitive's color at this pixel
            Color c = getColorOfPrimitiveAtPixel(x, y);
            // update the color and depth buffers
            writeColorBuffer(x, y, c);
            writeDepthBuffer(x, y, primDepth);
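
The opaque-first, transparent-second ordering from the slide, sketched with MonoGame render states (DepthRead keeps the z-test on but disables z-writes, which is the "partly disabling z-buffering" part; the object lists, SortBackToFront, and the Draw helper are assumed names):

    // Pass 1: opaque geometry, full z-buffering, any order
    // (front-to-back is cheapest, since hidden pixels fail the z-test early).
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    GraphicsDevice.BlendState = BlendState.Opaque;
    foreach (var obj in opaqueObjects)
        obj.Draw(view, projection);

    // Pass 2: transparent geometry, sorted back to front so blending layers correctly.
    // DepthRead: still test against the opaque depths, but write no new depths,
    // so transparent surfaces cannot hide what lies behind them.
    GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;
    GraphicsDevice.BlendState = BlendState.AlphaBlend;
    foreach (var obj in SortBackToFront(transparentObjects, cameraPosition))
        obj.Draw(view, projection);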