OpenGL and RenderMan Rendering Architectures


HELSINKI UNIVERSITY OF TECHNOLOGY
Telecommunications Software and Multimedia Laboratory
T Seminar on Computer Graphics, Spring 2002: Advanced Rendering Techniques

OpenGL and RenderMan Rendering Architectures

Piia Pulkkinen 51009R

OpenGL and RenderMan Rendering Architectures

Piia Pulkkinen
HUT, Telecommunications Software and Multimedia Laboratory

Abstract

OpenGL and RenderMan are two different rendering architectures. OpenGL is used in real-time applications and RenderMan in movies. OpenGL is a low-level, platform-independent API that offers access to the graphics hardware functions. RenderMan is based on the REYES architecture, which defines a photorealistic-quality image production algorithm. The high image quality is mainly achieved with programmable shaders and micropolygons, which are sub-pixel-sized quadrilaterals. The lack of programmability restricts image quality improvement in OpenGL, but so far it has not been possible to use RenderMan in real-time rendering.

1 INTRODUCTION

Computer graphics is nowadays used in two main fields: interactive applications and motion pictures. Both are demanding and challenging subjects, but they differ from each other in one particular sense. Interactive applications, such as computer games, require real-time rendering, even at the cost of reduced image quality, while movies demand high image quality and do not pay much attention to rendering speed. So far no rendering architecture implementation meets the needs of both parties, so real-time applications and movies use different rendering architectures.

The best-known real-time rendering architecture is OpenGL. It is an application programming interface (API) to graphics hardware and is designed to work on several different hardware platforms (Segal and Akeley, 1994). The core OpenGL is a low-level API that provides commands only for basic primitives. It comprises about 250 commands, but it can be extended with optional libraries, which may also include higher-level commands (Akeley and Hanrahan, 2001a). OpenGL satisfies the demand for real-time rendering, but the image quality, although improving all the time, is not nearly photorealistic.
RenderMan is the rendering architecture used in movies. The RenderMan API is more compact than the OpenGL API, containing only about 100 commands (Akeley and Hanrahan, 2001b). However, it is more flexible than OpenGL, and it is therefore capable of producing photorealistic imagery. RenderMan is not used in real-time applications because it does not meet the requirements of real-time rendering.

This paper discusses the OpenGL and RenderMan architectures and their differences. Section 2 reviews their history and development, and Section 3 briefly presents both architectures. Section 4 concentrates on rendering quality issues and describes the fundamental features of both architectures. Finally, Section 5 offers some conclusions on the current state and future of the architectures.

2 HISTORICAL AND PHILOSOPHICAL BACKGROUND

OpenGL and RenderMan were developed for different purposes and from different starting points. Both architectures have stayed alive for quite a long time, and they are nowadays remarkably popular in their own fields.

2.1 OpenGL and real-time rendering

The history of OpenGL dates back to the late 1970s, when its progenitor was written. Silicon Graphics' IRIS GL was based on that early graphics library, and when Silicon Graphics started the development of OpenGL in 1990, IRIS GL was taken as the design model. The OpenGL designers could have designed a totally new API, but they wanted to take all possible advantage of a working and widely accepted system. As a result, OpenGL is very much an improved IRIS GL. OpenGL was intended to become a standard real-time rendering architecture. Kurt Akeley, one of the designers of OpenGL, stated in the lecture slides of a Stanford University computer graphics course (Akeley and Hanrahan, 2001a) that the design goals for OpenGL were industry-wide acceptance, consistent and innovative implementations, innovative and differentiated applications, long life and high quality. What OpenGL was not designed for, according to Akeley, was to make graphics programming easy or to integrate digital media and 3D graphics. Originally OpenGL was developed by Silicon Graphics, who still owns the OpenGL trademark, but the current development is controlled by the ARB, the Architecture Review Board. The ARB members include companies like IBM, Intel, Microsoft, SGI, Sun and NVIDIA (Akeley and Hanrahan, 2001a).
Committee-driven development helps to avoid compatibility problems, but it also leads to slow progress, because the members must approve all actions. On the other hand, when the decisions are finally made, they tend to be good, because they really are deliberated. The latest version of OpenGL was released in 2001 under the version number 1.3. One important goal for OpenGL was to offer complete access to graphics hardware functions and at the same time to be device independent (Segal and Akeley, 1994). OpenGL has redeemed this promise pretty well. OpenGL implementations can easily be used on different platforms and on different levels of hardware.

2.2 RenderMan and photorealism

RenderMan is based on the REYES architecture, or to be exact, RenderMan is an implementation of the REYES architecture. REYES, which stands for Renders Everything You Ever Saw, was developed by Lucasfilm, nowadays Pixar, in the mid-1980s. The researchers at Lucasfilm felt that no rendering algorithm was capable of creating the kind of special effects they wanted for movies, so they decided to write an algorithm of their own. They came up with the idea of a scanline rendering algorithm that they named the REYES architecture (Apodaca and Gritz, 2000). The REYES architecture was designed to solve the problems that the algorithms of the time suffered from, so the philosophy behind it was something totally new. Other rendering algorithms assumed that the world consisted of polygons, which can be proved wrong just by looking around. REYES took into account that the world in fact contains a vast number of complex geometric primitives that cannot be modelled realistically with polygons. In order to achieve realism, the complexity of the rendered image needs to be as large as in a photograph, which makes great demands on geometry, lighting and texturing (Akeley and Hanrahan, 2001b). However, because beauty is in the eye of the beholder, the rendered images need not be wholly realistic, only look realistic, which allows a surprising amount of cheating. Image quality was the most important issue in the development of REYES. The rendered images were intended for huge screens that would reveal every incorrectly rendered pixel, so the rendering quality was greatly improved. In addition, motion blur was an effect that had earlier been neglected, but REYES corrected this defect as well (Apodaca and Gritz, 2000). Apodaca and Gritz (2000) define the RenderMan Interface as a standard communications protocol between modeling programs and rendering programs capable of producing photorealistic-quality images.
The specification was highly ambitious when it was published, and it has not lost its ambitiousness over time: after more than ten years the specification still contains features that no other open specification for scene description has (Apodaca and Gritz, 2000). In the world of motion pictures RenderMan is a household name, and it has been used in many different types of movies, from Bug's Life to Mulan and from Stuart Little to Episode I. The importance of RenderMan for the film industry was recognised in 2001, when the designers of REYES, Rob Cook, Loren Carpenter and Ed Catmull, were honored with a Scientific and Technical Oscar. The award was given for significant advancements to the field of motion picture rendering as exemplified in Pixar's RenderMan (Pixar, 2001).

3 ARCHITECTURES

Differences between the architectures are based on their philosophical backgrounds. OpenGL was designed to be fast, RenderMan to be accurate.

3.1 OpenGL architecture

The OpenGL architecture is based on a state machine model. The state machine can be put into several different modes or states, which remain in effect until they are explicitly changed. The state variables include the colour, the viewing and projection transformations, the lighting conditions and the object material definitions (Woo et al., 1999). The functionality of the OpenGL architecture can be illustrated with the OpenGL block diagram. Most OpenGL implementations perform the rendering operations in the order shown in Figure 1, although the ordering is not restrictive. Geometric (vertex) data, which consists of vertices, lines and polygons, takes a different route in some parts of the pipeline than pixel data, which consists of pixels, images and bitmaps. The final steps, rasterization, per-fragment operations and writing into the framebuffer, are the same for both types of data.

Figure 1: The OpenGL block diagram (Segal and Akeley, 1994). The diagram describes the order of operations in the OpenGL rendering pipeline.

The OpenGL block diagram consists of the framebuffer and seven other main parts (Woo et al., 1999):

Display list: It is not always possible or desirable to process data immediately. Both vertex and pixel data can be stored in display lists for future or current use. The same display list can be executed any number of times, which is especially useful in real-time applications that often need to render the same data for several frames.

Evaluator: All geometric primitives need to be described by vertices. However, parametric curves and surfaces may be described by control points, and evaluators are used for deriving the vertices from the control points.

Per-vertex operations and primitive assembly: Vertices are converted into geometric primitives in the per-vertex operations stage. In the same stage the lighting parameters are used for calculating the right colour value and, if textures are used, texture coordinates can also be generated. Primitive assembly consists mainly of clipping, but also of perspective division, viewport and depth operations and culling.

Pixel operations: The pixel data read from the system memory is converted into the right size and format in the pixel operations stage. The result is then written into texture memory or moved to the next stage.
Pixel data can also be read from the framebuffer, in which case the data is stored into the system memory after the operations.

Texture memory: All textures are stored in the texture memory.

Rasterization: In the rasterization stage both types of data are converted into fragments. The conversion has to be done in order to store data into the framebuffer, because one pixel in the framebuffer is represented by one fragment. Before leaving the stage each fragment square gets its own colour and depth values.

Per-fragment operations: In the final stage before the framebuffer the fragment may go through the scissor, alpha, stencil and depth-buffer tests. If texturing is used, the texel is applied to the fragment. Fog calculations, blending and dithering are also performed.

3.2 RenderMan architecture

The REYES architecture defines the rendering pipeline that RenderMan uses. Compared to the pipelines in modern graphics hardware, the REYES pipeline has special geometric operations, and it also gathers and stores detailed information about the image being rendered, which helps to achieve a high image quality. Otherwise the REYES pipeline and hardware pipelines do not differ that much from each other (Apodaca and Gritz, 2000). The outline of the algorithm, as seen in Figure 2, is quite straightforward.

Figure 2: The REYES rendering pipeline (Cook et al., 1987) consists of the main pipeline and one splitting loop.

At first the primitive is bounded, which means that it is surrounded with a camera-space axis-aligned box. All RenderMan primitives are finite, so every primitive can be bound. The bounding box is then checked to see whether it is on-screen or wholly outside it. If it is outside, or the primitive is one-sided and totally back-facing, it is culled; otherwise it continues in the pipeline. The algorithm does not perform any global illumination calculations, so even if the primitive would somehow affect the visible scene, it will still be culled if the bounding box is outside the screen. In the diceable test the size of the primitive is tested. To proceed to the dicing phase the primitive needs to be small enough. If it is not, it is split into smaller parts that are sent back to the bounding stage. Dicing means converting the primitive into a grid consisting of micropolygons. Micropolygons are quadrilaterals whose size is about one quarter of a pixel's area, and they are the basic geometric units in RenderMan. Half a pixel is the Nyquist limit for images, which makes the shading of micropolygons easy. Dicing must not produce a grid with too many micropolygons or micropolygons of widely varying size; that is why the diceable test is needed. The shaders used in the shading phase are defined in RenderMan Shading Language files. Actually the whole shading system is an interpreter for the Shading Language. There are several different types of shaders, which are applied in a specific order to the grid. Texture information is also used in the shading calculations. After the grid is shaded, the sampling phase separates the micropolygons from each other and passes them one by one to a mini-version of the primitive loop. Micropolygons are bounded, tested for on-screen visibility and culled if needed. As micropolygons are very small, they never have to be split, because they are either inside or outside the screen. Next, in the visibility phase the location of the micropolygon inside the pixels is determined. If the micropolygon covers any of the pre-determined sampling points, the colour, opacity and depth of the micropolygon are stored as a visible point for that location. When all primitives that affect a certain pixel have been processed, a reconstruction filter is used for generating the final pixel colour based on the visible point values.

As shown in Figure 2, in the REYES pipeline shading is done before determining visibility. This feature is not common in rendering pipelines, because usually the hidden pixels are removed before anything is shaded. However, shading before visibility determination has proved to have several advantages. One of them is enabling displacement shading (Apodaca and Gritz, 2000). The shader is free to move the points wherever it needs to, because no visibility calculations have been made yet.
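The bound-cull-split-dice front end of the primitive loop can be sketched roughly as follows. This is a hypothetical Python model, not RenderMan code: a "primitive" is reduced to a screen-space rectangle, the grid-size cap (`MAX_GRID_SIZE`) is an assumed value, and dicing aims at one micropolygon per half pixel in each direction, following the Nyquist argument above.

```python
MAX_GRID_SIZE = 256  # diceable test: assumed cap on micropolygons per grid

def reyes_front_end(prim, screen_w, screen_h, grids):
    """Bound, cull, split and dice one primitive (a screen-space rectangle)."""
    x0, y0, x1, y1 = prim
    # Cull: bounding box wholly off-screen.
    if x1 < 0 or y1 < 0 or x0 >= screen_w or y0 >= screen_h:
        return
    w, h = x1 - x0, y1 - y0
    # Dice one micropolygon per half pixel (the Nyquist limit) in each axis.
    nu, nv = max(1, int(w * 2)), max(1, int(h * 2))
    if nu * nv <= MAX_GRID_SIZE:
        grids.append((prim, nu, nv))          # diceable: emit a grid
    else:
        # Not diceable: split into four parts and send them back to bounding.
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        for part in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                     (x0, ym, xm, y1), (xm, ym, x1, y1)]:
            reyes_front_end(part, screen_w, screen_h, grids)
```

For example, a 40-by-40-pixel rectangle on a 640x480 screen would need an 80x80 grid, so it is split recursively until each part yields a 10x10 grid; a rectangle entirely off-screen is culled without producing any grids.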
This would not be possible after visibility is determined, since moving a point could affect pixel visibility. The most serious disadvantage is that, depending on the depth complexity of the scene, the number of pixels that are shaded but later covered by other pixels can be high. However, this is not usually a big problem in movies, where RenderMan is mostly used nowadays, because the depth complexity is seldom large. The viewing angles have been defined beforehand, and hidden objects are probably not even modelled. The situation is totally different in interactive applications such as computer games, where in general all viewing angles are possible and chosen by the user. As Z-buffer algorithms handle all pixels, even if a whole object is occluded by another object, large depth complexity causes problems in computer games.

4 QUALITY ISSUES

Due to the architectural differences and details, the images rendered with OpenGL and RenderMan do not look much alike. Although the architectures share some features, OpenGL implements them with rendering speed in mind, which reduces image quality.

4.1 Fragments and micropolygons

The OpenGL block diagram and the REYES rendering pipeline show that image information is processed differently in OpenGL and RenderMan. In the rasterization phase of the OpenGL block diagram both pixel and geometric data are converted into fragments, and these fragments are the smallest primitives OpenGL can handle (Woo et al., 1999). One fragment corresponds to one pixel in the framebuffer, which is actually quite a large area for images with small details. For RenderMan the pixel-sized fragments are too coarse, so the primitives are cut into micropolygons, which are smaller than pixels. This operation in the REYES pipeline is called dicing, and it creates a two-dimensional array of micropolygons called a grid (Cook et al., 1987). The detail of dicing depends on the primitive and the estimate of the primitive's size on the screen. The detail level of dicing defines how many micropolygons to create from the primitive. To avoid creating unnecessary and overly tiny micropolygons on the one hand, and to ensure that all visual details are captured on the other, RenderMan uses adaptive subdivision in the creation of micropolygons (Apodaca and Gritz, 2000). This means that the sizes of the micropolygons in raster space are nearly equal, although their sizes in parametric space can vary. Thus, objects far away from the camera have fewer micropolygons than objects close to the camera, which often leads to a situation where two adjacent grids have a different number of micropolygons along their common edge. This can also happen inside one primitive when it is divided into smaller subprimitives before dicing, because the adjacent subprimitives may project to different sizes on the screen and therefore have different tessellation rates in the dicing phase. The result is that the micropolygons in adjacent grids differ in size in parametric space. The filtering calculations that use the parametric size of the micropolygons break down at the grid boundary, which causes artifacts that can be visible in the final image.
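The way screen-space size drives tessellation can be sketched with a small hypothetical helper. The function name and the default rate of one quarter pixel of area per micropolygon are illustrative assumptions, not RenderMan's actual implementation:

```python
import math

def dice_rate(screen_w, screen_h, area_per_mp=0.25):
    """Estimate grid dimensions so that each micropolygon covers roughly
    `area_per_mp` pixels of screen area (0.25 = a quarter pixel, the
    sub-Nyquist default assumed here)."""
    side = math.sqrt(area_per_mp)          # micropolygon edge length in pixels
    nu = max(1, math.ceil(screen_w / side))
    nv = max(1, math.ceil(screen_h / side))
    return nu, nv
```

A patch projecting to 8x4 pixels dices into a 16x8 grid, while an adjacent patch projecting to only 2x1 pixels dices into 4x2; along their shared edge the micropolygon counts (and hence parametric sizes) disagree, which is exactly the boundary mismatch described above.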
In addition, only primitives that have common vertices and common edges between them are defined as connected. If the adjacent grids differ in size, this is not the case. Therefore, edges and vertices that should be connected may diverge from each other, which causes a hole, a crack, between them. These problems are nearly solved in the current version of RenderMan, which uses micropolygons of slightly varying size in parametric space. The grid boundaries can be glued together more easily when the calculations are done with close approximations of the real sizes.

The shading rate specifies how many shading samples per pixel must be taken to adequately capture the colour variations of the primitive (Apodaca and Gritz, 2000). In the dicing phase the shading rate is used in determining the number of micropolygons in the grid, so that the final shading result is as good as possible. Shading is done to all micropolygons in a grid at the same time. Because micropolygons are very small, Nyquist-sized, one colour per micropolygon is enough for an adequate shading result (Cook et al., 1987). In practice this means that flat shading can be used with micropolygons.

When the micropolygons are transformed to screen space, the area of each pixel is stochastically sampled to determine the rendering parameters for the pixel. Stochastic sampling (Cook, 1986) means supersampling where the samples are not regularly but stochastically spaced. Sufficient randomness in spacing is achieved by adding noise separately to the x and y locations of each sample point. This process is also called jittering. In supersampling, more than one sample is taken from the area of the image that represents one pixel on the screen. Stochastically spaced samples give a result that the human eye considers better than regularly spaced samples, even though the number of samples is equal. Figure 3 shows the micropolygon grid and the jittered samples taken from the area of one pixel. The final pixel values are calculated as a weighted average of the supersamples. The averaging is done with a reconstruction filter, and the result depends on the values of the filter parameters. The same samples produce different outputs with different reconstruction filters (Apodaca and Gritz, 2000). Flat shading makes the implementation of stochastic sampling efficient; without it the implementation would be far less useful.

Figure 3: Transforming the micropolygon grid to screen space (Cook et al., 1987). The area of each pixel is stochastically sampled.

4.2 Visibility

The depth-buffer method used in OpenGL has a serious restriction. The Z-buffer can only deal with opaque surfaces, which in practice means that each pixel can have only one visible surface (Hearn and Baker, 1997). Transparent and semi-transparent surfaces have to be implemented in another way. RenderMan uses the A-buffer method, where one pixel can have several surfaces. This is implemented with a per-pixel linked list, which contains the parameters of all the surfaces. The A-buffer handles transparent and semi-transparent surfaces and also object antialiasing (Hearn and Baker, 1997).

4.3 Programmability

As images get more detailed all the time, programmability will become an even more important feature of rendering software than it is now. However, programmability generally increases rendering complexity, which is not always suitable. The OpenGL architecture does not include a programming language. OpenGL was designed for interactive applications running on ordinary home computers with limited processing power and capacity. One design goal was to keep the OpenGL API close to the hardware and in that way maximise performance, and a programming language would have conflicted with this goal (Segal and Akeley, 1994).
In the designers' opinion, replacing the fixed operation order in graphics hardware with constantly changing algorithms would have separated the API from the hardware and reduced performance.

The lack of a programming language makes OpenGL restricted. All functions are defined in advance, which gives the user only the opportunity to turn operations on or off and to change parameter values (Segal and Akeley, 1994). Therefore, the images created with OpenGL are actually only combinations of pre-defined, fixed features. RenderMan, on the other hand, has a programming language. The Shading Language, as it is called, looks almost like C, and it can be used for describing the behaviour of lights and surfaces. The RenderMan Interface Specification lists five different types of these descriptions, called shaders (Apodaca and Gritz, 2000). Surface shaders define the characteristics of the surface and the behaviour of light on the surface. Displacement shaders describe the bumpiness and unevenness of surfaces, and the lighting details are specified with light shaders. Volume shaders, also called atmosphere shaders, are used for creating realistic-looking air. They define what happens to light when it passes through smoke or fog, for example. The fifth type, imager shaders, describes the final colour value transformations applied to a pixel before it is rendered. However, programmable imager shaders are not supported by all RenderMan architecture implementations. The most popular shaders are surface shaders, whereas the other shaders are used more occasionally. Volume shaders are an important feature when the rendered images have to look realistic. OpenGL does not pay any attention to air and the behaviour of light in it (Woo et al., 1999). It is possible to have fog at a certain distance from the camera, but the thickness of the fog is calculated using a fixed linear or exponential function, which does not result in anything realistic-looking. Additionally, the light does not react to the fog in any other way than by attenuating as the fog gets thicker.
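The fixed fog functions mentioned above are defined in the OpenGL specification: the fog blend factor is computed from the fragment's eye-space distance, clamped to [0, 1], and used to mix the fragment colour with the fog colour. A sketch of those formulas (function names here are hypothetical, the equations follow the specification):

```python
import math

def fog_factor(z, mode="linear", start=0.0, end=1.0, density=1.0):
    """Fog blend factor f as defined by the OpenGL specification,
    where z is the eye-space distance of the fragment."""
    if mode == "linear":
        f = (end - z) / (end - start)       # GL_LINEAR
    elif mode == "exp":
        f = math.exp(-density * z)          # GL_EXP
    elif mode == "exp2":
        f = math.exp(-((density * z) ** 2)) # GL_EXP2
    else:
        raise ValueError(mode)
    return min(1.0, max(0.0, f))            # clamped to [0, 1]

def apply_fog(fragment_rgb, fog_rgb, f):
    # Final colour: C = f * C_fragment + (1 - f) * C_fog
    return tuple(f * c + (1 - f) * cf for c, cf in zip(fragment_rgb, fog_rgb))
```

Whatever the mode, the factor depends only on distance and two or three scalar parameters, which is why OpenGL fog cannot model light scattering in the medium the way a volume shader can.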
In the same situation RenderMan uses volume shaders (Apodaca and Gritz, 2000), which describe the behaviour of light in the foggy air. As can be seen in Figure 4, volume shaders result in highly realistic-looking images. Shaders can also be used for producing effects like clouds, fire and explosions, which are not easy to construct with ordinary computer graphics.

Figure 4: Smoky air is easy to implement in RenderMan with shaders (Apodaca and Gritz, 2000).
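The contrast with OpenGL's fixed pipeline can be made concrete: OpenGL evaluates one built-in lighting equation, a sum of ambient, diffuse and specular terms, so only the material parameters can vary, never the formula itself. A simplified scalar sketch for a single light (the function name and the dictionary-based material are illustrative assumptions; real OpenGL lighting works per vertex on RGB values):

```python
def fixed_shading(n_dot_l, n_dot_h, mat):
    """One-light, OpenGL-style fixed lighting: the formula never changes,
    only the material parameters do. `mat` holds scalar ambient, diffuse
    and specular reflectances plus a shininess exponent."""
    diffuse = max(n_dot_l, 0.0) * mat["diffuse"]
    specular = (max(n_dot_h, 0.0) ** mat["shininess"]) * mat["specular"]
    return mat["ambient"] + diffuse + specular
```

Every material rendered this way is some point in this small parameter space, which is why OpenGL surfaces tend toward the same plastic-like look; a RenderMan surface shader, by contrast, can replace this whole function with arbitrary Shading Language code.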

The problem with OpenGL is that it has a fixed shading model. This can be clearly seen in the absence of atmospheric effects and also in the materials of the surfaces. In OpenGL all surfaces look more or less the same. Usually the material looks like plastic, sometimes like some kind of metal, depending on the given parameter values. Because the shading model cannot be changed, OpenGL is unable to present any other kinds of surface materials. RenderMan can apply a different surface shader to every object, which usually is the case, and make the surface materials look at least nearly realistic (Apodaca and Gritz, 2000). Of course, there are some surface materials, like the reflecting side of a CD, which simply cannot be rendered even with shaders, because the lighting calculations would be prohibitively complex. However, the importance of programmability is self-evident for all renderable materials.

4.4 Motion blur and depth of field

In order to make a scene look realistic, some special effects are needed. Two of the most important ones are motion blur and depth of field. Both are effects that the viewer does not actively notice when they exist, but if they are missing, the viewer notices instantly that the image cannot be real. When filming is done with a real camera, fast-moving objects seem to leave a blurry streak behind them. The effect is caused by the properties of the camera, and it is called motion blur. The faster the object moves, the stronger the effect. Nowadays it is also possible to implement this with rendering software. The lamp in Figure 5 is evidently jumping, not hanging in the air, thanks to the motion blur effect.

Figure 5: The motion blur effect appears clearly in the lamp stand when the lamp is jumping (Lasseter and Ostby, 1986).

The depth of field effect is also caused by the properties of the camera.
When the camera is focused at a certain distance, objects near the focusing point are rendered sharply, but objects far from the focusing point are blurry. In OpenGL both effects are implemented with an accumulation buffer (Segal and Akeley, 1994). The same scene is rendered several times from slightly different viewing angles into the accumulation buffer, and the combination of the scenes results in the final picture. The accumulation buffer method is one application of the multipass algorithms, whose basic idea is to render the same primitive more than once. Multipass algorithms suit OpenGL because they are easy to implement in the OpenGL architecture. They handle only a small number of parameters, whose values can be efficiently changed without any effect on the other parameter values.

RenderMan uses stochastic sampling to implement the motion blur and depth of field effects (Apodaca and Gritz, 2000). The stochastic sampling algorithm is used in the visibility phase of the rendering pipeline, which requires that for the motion blur effect the whole motion path of the object be included in the bounding box calculations. Motion blur is created by taking samples at different times during one frame (Cook, 1986). The frame time is divided into slices, and each sample point is assigned one randomly chosen slice. The exact sampling time is defined by jittering. If motion blur is applied, the primitives can move only linearly, and the shaded micropolygons cannot change colour during the motion. In depth of field, the lens parameters and focusing equations define the amount of blur for each primitive, and stochastic sampling is used for calculating which blurry micropolygons are visible. The sampling algorithm needs to handle a huge number of samples, because usually a large number of objects need to be blurred. Creating accurate depth of field requires considerable processing capacity, and therefore a simple approximation of the effect is more popular despite some of its restrictions. The approximation is implemented with blurred composited layers, and it is much faster than true depth of field.
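The time-slicing and jittering scheme for motion blur can be sketched as follows. This is a hypothetical Python model of the idea (slice assignment plus jitter within the slice, and linear motion only), not Cook's actual sampler; the function names are invented for illustration:

```python
import random

def jittered_times(n_samples, rng=None):
    """Divide the frame time into n slices, assign each visibility sample
    one randomly chosen slice, and jitter the exact time inside it."""
    rng = rng or random.Random(0)          # fixed seed for reproducibility
    slices = list(range(n_samples))
    rng.shuffle(slices)                    # one randomly chosen slice per sample
    return [(s + rng.random()) / n_samples for s in slices]

def motion_blur_coverage(x0, x1, pixel_x, times):
    """Fraction of jittered samples at which a point moving linearly from
    x0 to x1 over the frame lies inside pixel `pixel_x` (1-D for brevity)."""
    hits = 0
    for t in times:
        x = x0 + (x1 - x0) * t             # linear motion only, as in REYES
        if pixel_x <= x < pixel_x + 1:
            hits += 1
    return hits / len(times)
```

A point sweeping across 16 pixels during the frame spends 1/16 of the frame time inside any one pixel, so that pixel accumulates roughly 1/16 of the samples and renders as a faint streak, while a stationary point covers its pixel at every sample.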
5 CONCLUSIONS

The problem in real-time graphics is that there is no rendering technique that would produce photorealistic-quality images and at the same time be capable of real-time rendering. RenderMan would solve the image quality problems in the twinkling of an eye, but currently certain parts of RenderMan are highly impractical to implement on most graphics accelerators (Segal and Akeley, 1994). Although this may well change in the not so distant future, there is still a lot to do before the first working implementation of real-time RenderMan is on the market.

Improving the image quality of OpenGL is not an easy task either. The most obvious extension that would make the images look better is programmability. Programmability was dropped from the original design of OpenGL for performance reasons, but actually there is nothing in the OpenGL structure that would prevent adding it. The design of OpenGL is heading towards programmability, and it is possible that the next version of OpenGL (2.0) will even have some programmable features. Programmability would give an opportunity to add some features of RenderMan to OpenGL. One possible implementation would be to add programmable processors to different parts of the OpenGL rendering pipeline. In the evaluator stage a programmable processor could be used for creating the same kind of shading as with displacement shaders in RenderMan. Fast parametric surfaces could also be generated. Surface shader-like shading could be applied in the per-fragment operations stage, and a programmable framebuffer might give an opportunity to use imager shaders. In addition, if the lighting calculations were done not in the per-vertex operations stage but in the per-fragment operations stage with a programmable processor, the shading model could be more freely defined. This would be a great improvement toward getting rid of the fixed shading model.

Whether the two architectures continue developing on their own or try to influence each other somehow, the expectations of the audience pose the same problem for both of them. Although Moore's law seems to apply to the development of computing hardware, the viewers of movies and the users of applications also become more demanding all the time. They want more detailed rendering, more special effects, more reality. The appetite for details cannot grow infinitely, but in the meantime film studios, programmers, designers and all involved should keep their feet on the ground and not let the growth of rendering complexity exceed the limit of Moore's law.

REFERENCES

Akeley, K.; Hanrahan, P. 2001a. The OpenGL Graphics System. Lecture slides of the course cs448a Real-Time Graphics Architecture, Stanford University, Autumn 2001.

Akeley, K.; Hanrahan, P. 2001b. The Design of RenderMan. Lecture slides of the course cs448a Real-Time Graphics Architecture, Stanford University, Autumn 2001.

Apodaca, A.A.; Gritz, L. 2000. Advanced RenderMan. First edition. San Diego, California, USA. Morgan Kaufmann Publishers. 543 p.

Cook, R. L. 1986. Stochastic Sampling in Computer Graphics. ACM Transactions on Graphics, Vol. 5, No. 1, January 1986.

Cook, R. L.; Carpenter, L.; Catmull, E. 1987. The Reyes Image Rendering Architecture. Proceedings of SIGGRAPH 87, Anaheim, USA, July 1987. ACM Computer Graphics, Volume 21, Number 4, July 1987.

Hearn, D.; Baker, M. P. 1997. Computer Graphics, C Version. Second edition. Upper Saddle River, New Jersey, USA. Prentice Hall, Inc. 652 p.

Lasseter, J.; Ostby, E. 1986. Pixar Christmas Card.
Pixar Animation Studios Pixar s Catmull, Carpenter & Cook Receive Academy Award of Merit. Press Release. Emeryville, California, USA, March 5., Segal, M.; Akeley, K The Design of the OpenGL Graphics Interface. Mountain View, California, USA. Silicon Graphics Computer Systems. 10 p. Woo, M.; Neider, J.; Davis, T.; Shreiner, D OpenGL Programming Guide. Third edition. USA. Addison-Wesley. 730 p. 12
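The benefit of moving lighting from the per-vertex to the per-fragment stage, discussed in the conclusions, can be made concrete with a small sketch (Python is used here purely for exposition; the vectors and the simple Lambertian model are illustrative assumptions, not taken from either API):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert(normal, light_dir):
    # Diffuse (Lambertian) intensity: max(N . L, 0)
    return max(sum(a * b for a, b in zip(normalize(normal), light_dir)), 0.0)

def lerp(a, b, t):
    # Component-wise linear interpolation between two vectors
    return tuple(x + (y - x) * t for x, y in zip(a, b))

light = normalize((0.0, 0.0, 1.0))

# Two vertices of a curved surface whose normals tilt away from each
# other; the light faces the midpoint of the edge head-on.
n0 = (-1.0, 0.0, 1.0)
n1 = (1.0, 0.0, 1.0)

# Per-vertex (Gouraud-style): shade at the vertices, interpolate intensity.
gouraud_mid = (lambert(n0, light) + lambert(n1, light)) / 2

# Per-fragment (Phong-style): interpolate the normal, shade the fragment.
phong_mid = lambert(lerp(n0, n1, 0.5), light)

print(round(gouraud_mid, 3))  # 0.707: averaged vertex intensities miss the peak
print(round(phong_mid, 3))    # 1.0: the interpolated normal faces the light
```

Per-vertex shading caps the edge midpoint at the averaged vertex intensity (about 0.71), while shading the interpolated normal per fragment recovers the full highlight (1.0). This illustrates why a programmable per-fragment processor, rather than the fixed per-vertex lighting stage, would allow the shading model to be defined more freely.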

Application of Parallel Processing to Rendering in a Virtual Reality System Application of Parallel Processing to Rendering in a Virtual Reality System Shaun Bangay Peter Clayton David Sewry Department of Computer Science Rhodes University Grahamstown, 6140 South Africa Internet:

More information

POWERVR MBX. Technology Overview

POWERVR MBX. Technology Overview POWERVR MBX Technology Overview Copyright 2009, Imagination Technologies Ltd. All Rights Reserved. This publication contains proprietary information which is subject to change without notice and is supplied

More information

Acknowledgement: Images and many slides from presentations by Mark J. Kilgard and other Nvidia folks, from slides on developer.nvidia.

Acknowledgement: Images and many slides from presentations by Mark J. Kilgard and other Nvidia folks, from slides on developer.nvidia. Shadows Acknowledgement: Images and many slides from presentations by Mark J. Kilgard and other Nvidia folks, from slides on developer.nvidia.com Practical & Robust Stenciled Shadow Volumes for Hardware-Accelerated

More information

CSE 167: Lecture #5: Rasterization. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2012

CSE 167: Lecture #5: Rasterization. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2012 CSE 167: Introduction to Computer Graphics Lecture #5: Rasterization Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2012 Announcements Homework project #2 due this Friday, October

More information

Could you make the XNA functions yourself?

Could you make the XNA functions yourself? 1 Could you make the XNA functions yourself? For the second and especially the third assignment, you need to globally understand what s going on inside the graphics hardware. You will write shaders, which

More information

Complexity Reduction of Catmull-Clark/Loop Subdivision Surfaces

Complexity Reduction of Catmull-Clark/Loop Subdivision Surfaces EUROGRAPHICS 2001 / Jonathan C. Roberts Short Presentations Complexity Reduction of Catmull-Clark/Loop Subdivision Surfaces Eskil Steenberg The Interactive Institute, P.O. Box 24081, SE 104 50 Stockholm,

More information