
HELSINKI UNIVERSITY OF TECHNOLOGY
Telecommunications Software and Multimedia Laboratory
T-111.500 Seminar on Computer Graphics, Spring 2002: Advanced Rendering Techniques
16.4.2002

OpenGL and RenderMan Rendering Architectures
Piia Pulkkinen, 51009R

OpenGL and RenderMan Rendering Architectures

Piia Pulkkinen
HUT, Telecommunications Software and Multimedia Laboratory
Piia.Pulkkinen@hut.fi

Abstract

OpenGL and RenderMan are two different rendering architectures. OpenGL is used in real-time applications and RenderMan in movies. OpenGL is a low-level, platform-independent API that offers access to graphics hardware functions. RenderMan is based on the REYES architecture, which defines an algorithm for producing photorealistic-quality images. The high image quality is achieved mainly with programmable shaders and micropolygons, which are sub-pixel-sized quadrilaterals. The lack of programmability restricts image quality improvements in OpenGL, but so far it has not been possible to use RenderMan in real-time rendering.

1 INTRODUCTION

Computer graphics is nowadays used in two main fields: interactive applications and motion pictures. Both are demanding and challenging subjects, but they differ from each other in one particular sense. Interactive applications, such as computer games, require real-time rendering, even at the cost of reduced image quality, while movies demand high image quality and pay much less attention to rendering speed. So far there is no rendering architecture implementation that meets the needs of both parties, so real-time applications and movies use different rendering architectures.

The best-known real-time rendering architecture is OpenGL. It is an application programming interface (API) to graphics hardware and is designed to work on several different hardware platforms (Segal and Akeley, 1994). The core OpenGL is a low-level API that provides commands only for basic primitives. It contains about 250 commands, but it can be extended with optional libraries, which may also include higher-level commands (Akeley and Hanrahan, 2001a). OpenGL satisfies the demand for real-time rendering, but the image quality, although improving all the time, is not nearly photorealistic.

RenderMan is the rendering architecture used in movies. The RenderMan API is more compact than the OpenGL API, containing only about 100 commands (Akeley and Hanrahan, 2001b). However, it is more flexible than OpenGL, and therefore it is capable of producing photorealistic imagery. RenderMan is not used in real-time applications because it does not meet the requirements of real-time rendering.

This paper discusses the OpenGL and RenderMan architectures and their differences. Section 2 covers their history and development, and Section 3 presents both architectures briefly. Section 4 concentrates on rendering quality issues and describes the fundamental features of both architectures. Finally, Section 5 offers some conclusions on the current state and the future of the architectures.

2 HISTORICAL AND PHILOSOPHICAL BACKGROUND

OpenGL and RenderMan were developed for different purposes and from different starting points. Both architectures have stayed alive for quite a long time, and they are nowadays remarkably popular in their own fields.

2.1 OpenGL and real-time rendering

The history of OpenGL dates back to the late 1970s, when its progenitor was written. Silicon Graphics' IRIS GL was based on that early graphics library, and when Silicon Graphics started the development of OpenGL in 1990, IRIS GL was taken as the design model. The OpenGL designers could have designed a totally new API, but they wanted to take all possible advantage of a working and widely accepted system. As a result, OpenGL is very much like an improved IRIS GL.

OpenGL was intended to become a standard real-time rendering architecture. Kurt Akeley, one of the designers of OpenGL, stated in the lecture slides of the Stanford University computer graphics course (Akeley and Hanrahan, 2001a) that the design goals for OpenGL were industry-wide acceptance, consistent and innovative implementations, innovative and differentiated applications, long life and high quality. What OpenGL was not designed for, according to Akeley, was to make graphics programming easy or to integrate digital media and 3D graphics.

OpenGL was originally developed by Silicon Graphics, which still owns the OpenGL trademark, but the current development is controlled by the ARB, the Architecture Review Board. The ARB members include companies such as IBM, Intel, Microsoft, SGI, Sun and NVIDIA (Akeley and Hanrahan, 2001a). Committee-driven development helps to avoid compatibility problems, but it also leads to slow progress, because the members must approve all actions. On the other hand, when the decisions are finally made, they tend to be good, because they really have been deliberated. The latest version of OpenGL was released in 2001 under the version number 1.3.

One important goal for OpenGL was to offer complete access to graphics hardware functions and at the same time to be device independent (Segal and Akeley, 1994). OpenGL has redeemed this promise pretty well. OpenGL implementations can easily be used on different platforms and on different levels of hardware.

2.2 RenderMan and photorealism

RenderMan is based on the REYES architecture, or to be exact, RenderMan is an implementation of the REYES architecture. REYES, which stands for Renders Everything You Ever Saw, was developed by Lucasfilm, nowadays Pixar, in the mid-1980s.

The researchers at Lucasfilm felt that no existing rendering algorithm was capable of creating the kinds of special effects they wanted for movies, so they decided to write an algorithm of their own. They came up with the idea of a scanline rendering algorithm that they named the REYES architecture (Apodaca and Gritz, 2000).

The REYES architecture was designed to solve the problems that the algorithms of the time suffered from, so the philosophy behind it was something totally new. The other rendering algorithms assumed that the world consisted of polygons, which can be proved wrong just by looking around. REYES took into account that the world in fact contains a vast number of complex geometric primitives that cannot be modelled realistically with polygons. In order to achieve realism, the complexity of the rendered image needs to be as great as in a photograph, which makes great demands on geometry, lighting and texturing (Akeley and Hanrahan, 2001b). However, because beauty is in the eye of the beholder, the rendered images need not be wholly realistic, only look like it, which allows a surprising amount of cheating.

Image quality was the most important issue in the development of REYES. The rendered images were intended for huge screens that would reveal every incorrectly rendered pixel, so the rendering quality was greatly improved. In addition, motion blur was an effect that had earlier been neglected, and REYES corrected this defect as well (Apodaca and Gritz, 2000).

Apodaca and Gritz (2000) define the RenderMan Interface as a standard communications protocol between modeling programs and rendering programs capable of producing photorealistic-quality images. The specification was highly ambitious when it was published, and it has not lost its ambitiousness in the course of time, because after more than ten years the specification still contains features that no other open specification for scene description has (Apodaca and Gritz, 2000).

In the world of motion pictures RenderMan is a household name, and it has been used in various different types of movies, from A Bug's Life to Mulan and from Stuart Little to Episode I. The importance of RenderMan for the film industry was recognised in 2001, when the designers of REYES, Rob Cook, Loren Carpenter and Ed Catmull, were honored with a Scientific and Technical Oscar. The award was given for significant advancements to the field of motion picture rendering as exemplified in Pixar's RenderMan (Pixar, 2001).

3 ARCHITECTURES

The differences between the architectures are based on their philosophical backgrounds. OpenGL was designed to be fast, RenderMan to be accurate.

3.1 OpenGL architecture

The OpenGL architecture is based on a state machine model. The state machine can be put into several different modes or states, which remain in effect until they are explicitly changed. The state variables include the colour, viewing and projection transformations, lighting conditions and the object material definitions (Woo et al., 1999).
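As a minimal illustration of the state machine model, the following C sketch (OpenGL 1.x style; window and context setup are omitted, and the geometry is purely illustrative) enables lighting and sets material state that stays in effect for all subsequent drawing until it is changed again.

```c
#include <GL/gl.h>

/* A sketch of OpenGL's state machine: state set here stays in effect
 * for every primitive drawn afterwards, until it is explicitly changed. */
void configure_and_draw(void)
{
    const GLfloat light_pos[4] = { 1.0f, 1.0f, 1.0f, 0.0f };
    const GLfloat red[4]       = { 0.8f, 0.1f, 0.1f, 1.0f };

    glEnable(GL_LIGHTING);                       /* turn a state on             */
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
    glMaterialfv(GL_FRONT, GL_DIFFUSE, red);     /* set a state variable value  */
    glShadeModel(GL_SMOOTH);

    glBegin(GL_TRIANGLES);                       /* everything below is lit red */
    glNormal3f(0.0f, 0.0f, 1.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();

    glDisable(GL_LIGHTING);                      /* later drawing is unlit      */
}
```

The same mode-setting style applies to every other piece of OpenGL state, such as texturing, blending and depth testing.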

The functionality of the OpenGL architecture can be illustrated with the OpenGL block diagram. Most OpenGL implementations perform the rendering operations in the order shown in Figure 1, although the ordering is not restrictive. Geometric (vertex) data, which consists of vertices, lines and polygons, takes a different route in some parts of the pipeline than pixel data, which consists of pixels, images and bitmaps. The final steps, rasterization, per-fragment operations and writing into the framebuffer, are the same for both types of data.

Figure 1: The OpenGL block diagram (Segal and Akeley, 1994). The diagram describes the order of operations in the OpenGL rendering pipeline: vertex data and pixel data flow through the display list, evaluator, per-vertex operations and primitive assembly, pixel operations and texture memory, rasterization and per-fragment operations into the framebuffer.

The OpenGL block diagram consists of the framebuffer and seven other main parts (Woo et al., 1999):

Display list. It is not always possible or desirable to process data immediately. Both vertex and pixel data can be stored in display lists for future or current use. The same display list can be executed countless times, which is especially useful in real-time applications that often need to render the same data for several frames (see the sketch after this list).

Evaluator. All geometric primitives need to be described by vertices. However, parametric curves and surfaces may be described by control points, and evaluators are used for deriving the vertices from the control points.

Per-vertex operations and primitive assembly. Vertices are converted into geometric primitives in the per-vertex operations stage. In the same stage the lighting parameters are used for calculating the correct colour value and, if textures are used, texture coordinates can also be generated. Primitive assembly consists mainly of clipping, but also of perspective division, viewport and depth operations and culling.

Pixel operations. The pixel data read from system memory is converted into the right size and format in the pixel operations stage. The result is then written into texture memory or moved to the next stage. Pixel data can also be read from the framebuffer, in which case the data is stored into system memory after the operations.

Texture memory. All textures are stored in the texture memory.

Rasterization. In the rasterization stage both types of data are converted into fragments. The conversion has to be done in order to store data into the framebuffer, because one pixel in the framebuffer is represented by one fragment. Before leaving the stage, each fragment square gets its own colour and depth values.

Per-fragment operations. In the final stage before the framebuffer, the fragment may go through scissor, alpha, stencil and depth-buffer tests. If texturing is used, the texel is applied to the fragment. Fog calculations, blending and dithering are also performed.
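A minimal sketch of display list usage (OpenGL 1.x; the list handle and the quad geometry are illustrative): the commands are recorded once and then replayed cheaply every frame.

```c
#include <GL/gl.h>

static GLuint scene_list;   /* illustrative display list handle */

/* Record the geometry once, e.g. at start-up. */
void build_scene_list(void)
{
    scene_list = glGenLists(1);
    glNewList(scene_list, GL_COMPILE);   /* commands are stored, not executed */
    glBegin(GL_QUADS);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f,  1.0f, 0.0f);
    glVertex3f(-1.0f,  1.0f, 0.0f);
    glEnd();
    glEndList();
}

/* Replay the stored commands every frame. */
void draw_frame(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glCallList(scene_list);
}
```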

3.2 RenderMan architecture

The REYES architecture defines the rendering pipeline that RenderMan uses. Compared to the pipelines in modern graphics hardware, the REYES pipeline has special geometric operations, and it also gathers and stores detailed information about the image being rendered, which helps to achieve a high image quality. Otherwise the REYES pipeline and hardware pipelines do not differ that much from each other (Apodaca and Gritz, 2000). The outline of the algorithm, as seen in Figure 2, is quite straightforward.

Figure 2: The REYES rendering pipeline (Cook et al., 1987) consists of the main pipeline (bound, dice, shade with textures, sample, visibility, filter) and one splitting loop; primitives that are not on screen are culled, and primitives that are not yet diceable are split and re-enter the loop.

At first the primitive is bounded, which means that it is surrounded with a camera-space axis-aligned box. All RenderMan primitives are finite, so every primitive can be bound. The bounding box is then checked to see whether it is on screen or wholly outside it. If it is outside, or if the primitive is one-sided and totally back-facing, it is culled; otherwise it continues in the pipeline. The algorithm does not perform any global illumination calculations or the like, so even if the primitive would somehow affect the visible scene, it is still culled if its bounding box is outside the screen.

In the diceable test the size of the primitive is tested. To proceed to the dicing phase the primitive needs to be small enough. If it is not, it is split into smaller parts that are sent back to the bounding stage. Dicing means converting the primitive into a grid consisting of micropolygons. Micropolygons are quadrilaterals whose size is about a quarter of a pixel area, and they are the basic geometric units in RenderMan. Half a pixel is the Nyquist limit for images, which makes the shading of micropolygons easy. Dicing must not produce a grid with too many or too variably sized micropolygons; that is why the diceable test is needed.

The shaders used in the shading phase are defined in RenderMan Shading Language files. In fact the whole shading system is an interpreter for the Shading Language. There are several different types of shaders, which are applied to the grid in a specific order. Texture information is also used in the shading calculations.

After the grid is shaded, the sampling phase separates the micropolygons from each other and passes them one by one to a mini-version of the primitive loop. Micropolygons are bounded, tested for on-screen visibility and culled if needed. As micropolygons are very small, they never have to be split, because they are either inside or outside the screen. Next, in the visibility phase the location of the micropolygon inside the pixels is determined. If the micropolygon covers any of the pre-determined sampling points, the colour, opacity and depth of the micropolygon are stored as a visible point for that location. When all primitives that affect a certain pixel have been processed, a reconstruction filter is used for generating the final pixel colour from the visible point values.

As shown in Figure 2, in the REYES pipeline shading is done before determining visibility. This feature is not common in rendering pipelines, because usually the hidden pixels are removed before anything is shaded. However, shading before visibility determination has proved to have several advantages. One of them is enabling displacement shading (Apodaca and Gritz, 2000). The shader is free to move the points wherever it needs to because no visibility calculations have been made yet. This would not be possible after visibility is determined, since moving a point could affect pixel visibility. The most serious disadvantage is that, depending on the depth complexity of the scene, the number of pixels that are shaded but later covered by other pixels can be high. However, this is not usually a big problem in movies, where RenderMan is mostly used nowadays, because the depth complexity is seldom large. The viewing angles have been defined beforehand, and hidden objects are probably not even modelled. The situation is totally different in interactive applications such as computer games where, in general, all viewing angles are possible and chosen by the user. As Z-buffer algorithms handle all pixels, no matter whether the whole object is occluded by another object, large depth complexity causes problems in computer games.
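The control flow described above can be summarised with a short sketch. The following C toy is only illustrative: the primitive is reduced to an axis-aligned screen-space rectangle, and the threshold, grid sizes and function names are invented stand-ins, but the bound-cull-split-dice recursion follows the structure of Figure 2.

```c
#include <stdio.h>
#include <stdbool.h>

/* Toy stand-in for a REYES primitive: an axis-aligned rectangle in screen
 * space. Real primitives are patches, quadrics, subdivision surfaces etc.,
 * bounded in camera space. */
typedef struct { float x0, y0, x1, y1; } Primitive;

#define SCREEN_W 640.0f
#define SCREEN_H 480.0f
#define MAX_DICE_AREA 1024.0f   /* "small enough to dice" threshold (toy value) */

static bool on_screen(Primitive p)
{
    return p.x1 > 0.0f && p.y1 > 0.0f && p.x0 < SCREEN_W && p.y0 < SCREEN_H;
}

static bool diceable(Primitive p)
{
    return (p.x1 - p.x0) * (p.y1 - p.y0) <= MAX_DICE_AREA;
}

/* Dice into a grid of roughly quarter-pixel micropolygons, then shade and
 * sample the grid. Here we only report the grid size to keep the sketch short. */
static void dice_shade_sample(Primitive p)
{
    int nu = (int)((p.x1 - p.x0) * 2.0f) + 1;   /* ~2 micropolygons per pixel in x */
    int nv = (int)((p.y1 - p.y0) * 2.0f) + 1;
    printf("diced grid of %d x %d micropolygons\n", nu, nv);
}

/* The REYES primitive loop of Figure 2: bound, cull, split until diceable,
 * then dice, shade and sample. */
static void render_primitive(Primitive p)
{
    if (!on_screen(p))              /* bound + cull */
        return;
    if (diceable(p)) {
        dice_shade_sample(p);
        return;
    }
    /* Split along the longer axis; both halves re-enter the loop. */
    Primitive a = p, b = p;
    if (p.x1 - p.x0 > p.y1 - p.y0)
        a.x1 = b.x0 = 0.5f * (p.x0 + p.x1);
    else
        a.y1 = b.y0 = 0.5f * (p.y0 + p.y1);
    render_primitive(a);
    render_primitive(b);
}

int main(void)
{
    Primitive model = { -100.0f, 50.0f, 500.0f, 400.0f };
    render_primitive(model);
    return 0;
}
```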

4 QUALITY ISSUES

Due to the architectural differences and details, the images rendered with OpenGL and RenderMan do not look much alike. Although both architectures share some features, these are implemented differently in OpenGL, with rendering speed in mind, which has reduced image quality.

4.1 Fragments and micropolygons

The OpenGL block diagram and the REYES rendering pipeline show that image information is processed in a different way in OpenGL and RenderMan. In the rasterization phase of the OpenGL block diagram both pixel and geometric data are converted into fragments, and these fragments are the smallest primitives OpenGL can handle (Woo et al., 1999). One fragment corresponds to one pixel in the framebuffer, which is actually quite a large area for images with small details. For RenderMan the pixel-sized fragments are too coarse, so the primitives are cut into micropolygons, which are smaller than pixels. This operation in the REYES pipeline is called dicing, and it creates a two-dimensional array of micropolygons called a grid (Cook et al., 1987). The detail of dicing depends on the primitive and the estimate of the primitive's size on the screen; the detail level defines how many micropolygons are created from the primitive.

On the one hand to avoid creating unnecessary and overly tiny micropolygons, and on the other hand to ensure that all visual details are captured, RenderMan uses adaptive subdivision when creating micropolygons (Apodaca and Gritz, 2000). This means that the sizes of the micropolygons in raster space are nearly equal, although their sizes in parametric space can vary. Thus, objects far away from the camera have fewer micropolygons than objects close to the camera, which often leads to a situation where two adjacent grids have a different number of micropolygons along their common edge. This can also happen inside one primitive when it is divided into smaller subprimitives before dicing, because the adjacent subprimitives may project to different sizes on the screen and therefore have different tessellation rates in the dicing phase. The result is that the micropolygons in adjacent grids have different sizes in parametric space. The filtering calculations that use the parametric size of the micropolygons get confused at the grid boundary, which causes artifacts that can be visible in the final image. In addition, only primitives that have common vertices and common edges between them are defined as connected. If the adjacent grids differ in size, this is not the case. Therefore, the edges and vertices that should be connected may diverge from each other, which causes a hole, a crack, between them. These problems are nearly solved in the current version of RenderMan, which allows slightly varying micropolygon sizes in parametric space. The grid boundaries can be glued together more easily when the calculations are done with close approximations of the real sizes.

The shading rate specifies how many shading samples per pixel must be taken to adequately capture the colour variations of the primitive (Apodaca and Gritz, 2000). In the dicing phase the shading rate is used to determine the number of micropolygons in the grid, so that the final shading result is as good as possible. Shading is done to all micropolygons in a grid at the same time. Because micropolygons are very small, Nyquist-sized, one colour per micropolygon is enough for an adequate shading result (Cook et al., 1987). In practice this means that flat shading can be used with micropolygons.

When the micropolygons are transformed to screen space, the area of each pixel is stochastically sampled to determine the rendering parameters for the pixel. Stochastic sampling (Cook, 1986) means supersampling where the samples are not regularly but stochastically spaced. Sufficient randomness in the spacing is achieved by adding noise separately to the x and y locations of each sample point. This process is also called jittering. In supersampling, more than one sample is taken from the area of the image that represents one pixel on the screen. Stochastically spaced samples give a better result (or the human eye considers the result better) than regularly spaced samples, although the number of samples is equal.
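A minimal sketch of jittered supersampling, assuming a regular n x n grid of cells per pixel with one randomly displaced sample per cell (the cell layout and sample count are illustrative, not RenderMan's actual pattern):

```c
#include <stdlib.h>

/* A jittered sample position inside a pixel, in screen coordinates. */
typedef struct { float x, y; } Sample;

static float frand(void) { return (float)rand() / (float)RAND_MAX; }

/* Fills out[] with n*n sample positions for pixel (px, py): the pixel is
 * divided into n x n cells and each sample is displaced randomly (jittered)
 * within its own cell. */
void jittered_samples(int px, int py, int n, Sample *out)
{
    for (int j = 0; j < n; j++) {
        for (int i = 0; i < n; i++) {
            out[j * n + i].x = (float)px + ((float)i + frand()) / (float)n;
            out[j * n + i].y = (float)py + ((float)j + frand()) / (float)n;
        }
    }
}
```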

Figure 3 shows the micropolygon grid and the jittered samples taken from the area of one pixel. The final pixel values are calculated as a weighted average of the supersamples. The averaging is done with a reconstruction filter, and the result depends on the filter parameter values: the same samples produce different outputs with different reconstruction filters (Apodaca and Gritz, 2000). Flat shading makes the implementation of stochastic sampling efficient; without it the implementation would be far less useful.

Figure 3: Transforming the micropolygon grid to screen space (Cook et al., 1987). The area of each pixel is stochastically sampled with jittered sample points.

4.2 Visibility

The depth-buffer method used in OpenGL has a serious restriction. The Z-buffer can only deal with opaque surfaces, which in practice means that each pixel can have only one visible surface (Hearn and Baker, 1997). Transparent and semi-transparent surfaces have to be implemented in another way. RenderMan uses the A-buffer method, where one pixel can have several surfaces. This is implemented with a per-pixel linked list, which contains the parameters of all the surfaces. The A-buffer handles transparent and semi-transparent surfaces and also object antialiasing (Hearn and Baker, 1997), as sketched below.
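A highly simplified sketch of the per-pixel list idea (the field names and insertion policy are illustrative, not RenderMan's actual data structures): each pixel keeps a depth-sorted list of surface entries with colour, opacity and depth, which can later be composited front to back.

```c
/* One visible-surface entry for a pixel: colour, opacity and depth,
 * chained into a per-pixel linked list (illustrative field names). */
typedef struct SurfaceNode {
    float r, g, b;
    float opacity;
    float depth;
    struct SurfaceNode *next;
} SurfaceNode;

/* Insert a surface into the pixel's list, keeping it sorted by depth so
 * that semi-transparent surfaces can be composited front to back. */
void insert_surface(SurfaceNode **pixel_list, SurfaceNode *node)
{
    while (*pixel_list && (*pixel_list)->depth < node->depth)
        pixel_list = &(*pixel_list)->next;
    node->next = *pixel_list;
    *pixel_list = node;
}
```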

4.3 Programmability

As images get more detailed all the time, programmability will become an even more important feature of rendering software than it is now. However, programmability generally increases rendering complexity, which is not always acceptable.

The OpenGL architecture does not include a programming language. OpenGL was designed for interactive applications running on ordinary home computers with limited processing power and capacity. One design goal was to keep the OpenGL API close to the hardware and in that way maximise performance, and a programming language would have conflicted with this goal (Segal and Akeley, 1994). In the designers' opinion, replacing the fixed operation order in graphics hardware with constantly changing algorithms would have separated the API from the hardware and reduced performance.

The lack of a programming language makes OpenGL restricted. All functions are defined in advance, which gives the user the opportunity only to turn operations on or off and to change parameter values (Segal and Akeley, 1994). Therefore, the images created with OpenGL are actually only combinations of pre-defined, fixed features.

RenderMan, on the other hand, has a programming language. The Shading Language, as it is called, looks almost like C, and it can be used for describing the behaviour of lights and surfaces. The RenderMan Interface Specification lists five different types of these descriptions, shaders (Apodaca and Gritz, 2000). Surface shaders define the characteristics of the surface and the behaviour of light on the surface. Displacement shaders describe the bumpiness and unevenness of surfaces, and the lighting details are specified with light shaders. Volume shaders, also called atmosphere shaders, are used for creating realistic-looking open air: they define what happens to light when it passes through smoke or fog, for example. The fifth type, imager shaders, describes the final colour-value transformations applied to a pixel before it is rendered. However, programmable imager shaders are not supported by all implementations of the RenderMan architecture. The most popular shaders are surface shaders, whereas the other shaders are used more occasionally.

Volume shaders are an important feature when the rendered images have to look realistic. OpenGL does not pay any attention to air and the behaviour of light in it (Woo et al., 1999). It is possible to have fog at a certain distance from the camera, but the thickness of the fog is calculated using a fixed linear or exponential function, which does not result in anything realistic-looking. Additionally, the light does not react to the fog in any way other than attenuating as the fog gets thicker. In the same situation RenderMan uses volume shaders (Apodaca and Gritz, 2000), which describe the behaviour of light in the foggy air. As can be seen in Figure 4, volume shaders result in highly realistic-looking images. Shaders can also be used for producing effects like clouds, fire and explosions, which are not easy to construct with ordinary computer graphics.

Figure 4: Smoky air is easy to implement in RenderMan with shaders (Apodaca and Gritz, 2000).
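To give an idea of the kind of per-surface control the Shading Language offers, the following C-style sketch shows the computation a very simple diffuse surface shader might perform for one shading point on a micropolygon grid. Real RenderMan shaders are written in the Shading Language, not C, and all names and parameters here are illustrative; the vectors are assumed to be normalised.

```c
/* Illustrative 3-component vector type for the sketch. */
typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Lambertian surface colour for one shading point: the base colour scaled
 * by an ambient term plus the cosine-weighted contribution of one light. */
Vec3 shade_point(Vec3 base_colour, Vec3 normal, Vec3 to_light,
                 float ambient, float light_intensity)
{
    float ndotl   = dot3(normal, to_light);
    float diffuse = ndotl > 0.0f ? ndotl * light_intensity : 0.0f;
    Vec3 out = {
        base_colour.x * (ambient + diffuse),
        base_colour.y * (ambient + diffuse),
        base_colour.z * (ambient + diffuse)
    };
    return out;
}
```

A surface shader in the actual Shading Language expresses the same idea with built-in globals and functions for the shading point, the surface normal and the lights, and it can be swapped per object without touching the renderer.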

The problem with OpenGL is that it has a fixed shading model. This can clearly be seen in the absence of air and also in the materials of the surfaces. In OpenGL all surfaces look more or less the same. Usually the material looks like plastic, sometimes like some kind of metal, depending on the given parameter values. Because the shading model cannot be changed, OpenGL is unable to present any other kinds of surface materials. RenderMan can use a different surface shader for every object, which usually is the case, and make the surface materials look at least nearly realistic (Apodaca and Gritz, 2000). Of course, there are some surface materials, like the reflecting side of a CD, which simply cannot be rendered even with shaders, because the lighting calculations would be prohibitively complex. However, the importance of programmability is self-evident with all renderable materials.

4.4 Motion blur and depth of field

In order to make a scene look realistic, some special effects are needed. Two of the most important ones are motion blur and depth of field. Both are effects that the viewer does not actively notice when they are present, but if they are missing, the viewer notices instantly that the image cannot be real.

When filming is done with a real camera, fast-moving objects seem to leave a blurry streak behind them. The effect is caused by the properties of the camera, and it is called motion blur. The faster the object moves, the stronger the effect is. Nowadays it is also possible to implement this with rendering software. The lamp in Figure 5 is evidently jumping, not hanging in the air, thanks to the motion blur effect.

Figure 5: The motion blur effect appears clearly in the lamp stand when the lamp is jumping (Lasseter and Ostby, 1986).

The depth of field effect is also caused by the properties of the camera. When the camera is focused at a certain distance, objects near the focusing point are rendered sharp, but objects far from the focusing point are blurry.

In OpenGL both effects are implemented with an accumulation buffer (Segal and Akeley, 1994). The same scene is rendered several times, at slightly different moments in time or from slightly different viewing positions, into the accumulation buffer, and the combination of the renderings results in the final picture.
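A minimal sketch of the accumulation buffer approach for motion blur (OpenGL 1.x), assuming a hypothetical application callback draw_scene(t) that renders the whole scene at time t; each pass is weighted equally and the average is returned to the framebuffer.

```c
#include <GL/gl.h>

/* Hypothetical application callback that draws the whole scene at time t. */
extern void draw_scene(float t);

/* Accumulation-buffer motion blur: render several moments within the
 * frame's exposure interval and average them into one image. */
void draw_motion_blurred_frame(float frame_time, float exposure, int passes)
{
    glClear(GL_ACCUM_BUFFER_BIT);
    for (int i = 0; i < passes; i++) {
        float t = frame_time + exposure * (float)i / (float)passes;
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        draw_scene(t);
        glAccum(GL_ACCUM, 1.0f / (float)passes);  /* add this pass, weighted */
    }
    glAccum(GL_RETURN, 1.0f);                     /* copy the average back   */
}
```

Depth of field works the same way, except that each pass perturbs the viewing position over a virtual lens aperture instead of varying the time.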

The accumulation buffer method is one application of multipass algorithms, whose basic idea is to render the same primitive more than once. Multipass algorithms are suitable for OpenGL because they are easy to implement in the OpenGL architecture. They handle only a small number of parameters whose values can be changed efficiently without any effect on the other parameter values.

RenderMan uses stochastic sampling to implement the motion blur and depth of field effects (Apodaca and Gritz, 2000). The stochastic sampling algorithm is used in the visibility phase of the rendering pipeline, which requires that for the motion blur effect the whole motion path of the object be included in the bounding box calculations. Motion blur is created by taking samples at different times during one frame (Cook, 1986). The frame time is divided into slices, and each sample point is assigned one randomly chosen slice. The exact sampling time is defined by jittering. If motion blur is applied, the primitives can move only linearly, and the shaded micropolygons cannot change colour during the motion. For depth of field, the lens parameters and focusing equations define the circle of confusion for each primitive, and stochastic sampling is used for calculating which blurry micropolygons are visible. The sampling algorithm needs to handle a huge number of samples, because usually a large number of objects need to be blurred. Creating accurate depth of field requires quite a lot of processing capacity, and therefore a simple approximation of the effect is more popular despite some of its restrictions. The approximation is implemented with blurred composited layers, and it is much faster than the real depth of field.

5 CONCLUSIONS

The problem in real-time graphics is that there is no rendering technique that produces photorealistic-quality images and at the same time is capable of real-time rendering. RenderMan would solve the image quality problems in the twinkling of an eye, but currently certain parts of RenderMan are highly impractical to implement on most graphics accelerators (Segal and Akeley, 1994). Although this may well change in the not so distant future, there is still a lot to do before the first working implementation of real-time RenderMan is out on the market.

Improving the image quality of OpenGL is not an easy task either. The most obvious extension that would make the images look better is programmability. Programmability was dropped from the original design of OpenGL for performance reasons, but actually there is nothing in the OpenGL structure that would prevent adding it. The design of OpenGL is heading towards programmability, and it is possible that the next version of OpenGL (2.0) will even have some programmable features.

Programmability would give an opportunity to add some features of RenderMan to OpenGL. One possible implementation would be to add programmable processors to different parts of the OpenGL rendering pipeline. In the evaluator stage a programmable processor could be used for creating the same kind of shading as with displacement shaders in RenderMan. Fast parametric surfaces could also be generated. Surface-shader-like shading could be applied in the per-fragment operations stage, and a programmable framebuffer might give an opportunity to use imager shaders.

In addition, if the lighting calculations were done not in the per-vertex operations stage but in the per-fragment operations stage with a programmable processor, the shading model used could be defined more freely. This would be a great improvement towards getting rid of the fixed shading model.

Whether the two architectures continue developing on their own or somehow try to influence each other, the expectations of the audience cause the same problem for both of them. Although Moore's law seems to apply to the development of computing hardware, the viewers of movies and the users of applications also become more demanding all the time. They want more detailed rendering, more special effects, more reality. The appetite for detail cannot grow infinitely, but in the meantime film studios, programmers, designers, and all others involved should keep their feet on the ground and not let the growth of rendering complexity exceed the limit of Moore's law.

REFERENCES

Akeley, K.; Hanrahan, P. 2001a. The OpenGL Graphics System. Lecture slides of the course cs448a Real-Time Graphics Architecture, Stanford University. Autumn 2001.

Akeley, K.; Hanrahan, P. 2001b. The Design of RenderMan. Lecture slides of the course cs448a Real-Time Graphics Architecture, Stanford University. Autumn 2001.

Apodaca, A. A.; Gritz, L. 2000. Advanced RenderMan. First edition. San Diego, California, USA. Morgan Kaufmann Publishers. 543 p.

Cook, R. L. 1986. Stochastic Sampling in Computer Graphics. ACM Transactions on Graphics, Vol. 5, No. 1, January 1986, pp. 51-72.

Cook, R. L.; Carpenter, L.; Catmull, E. 1987. The Reyes Image Rendering Architecture. Proceedings of SIGGRAPH 87, Anaheim, USA, July 27-31, 1987. ACM Computer Graphics, Volume 21, Number 4, July 1987, pp. 95-102.

Hearn, D.; Baker, M. P. 1997. Computer Graphics, C Version. Second edition. Upper Saddle River, New Jersey, USA. Prentice Hall, Inc. 652 p.

Lasseter, J.; Ostby, E. 1986. Pixar Christmas Card.

Pixar Animation Studios. 2001. Pixar's Catmull, Carpenter & Cook Receive Academy Award of Merit. Press Release. Emeryville, California, USA, March 5, 2001.

Segal, M.; Akeley, K. 1994. The Design of the OpenGL Graphics Interface. Mountain View, California, USA. Silicon Graphics Computer Systems. 10 p.

Woo, M.; Neider, J.; Davis, T.; Shreiner, D. 1999. OpenGL Programming Guide. Third edition. USA. Addison-Wesley. 730 p.