Computer Graphics - Week 8

Bengt-Olaf Schneider
IBM T.J. Watson Research Center

Questions about Last Week?

Comments about the Assignment
- Deadline for the 2nd assignment extended to Friday, March 12, at 5:30 pm. Regular rules for late submission apply.
- Post-mortem on the 1st assignment:
  - We will subtract points for bad Readme files.
  - Programs have to run!! At the very least, tell us they don't work.
  - Programming should involve some upfront thinking; drawing each planet separately is somewhat inelegant.
  - Formatting and comments!!!
- Questions are encouraged... but read the assignment and read our answers to your questions.
- If not explicitly stated otherwise, teamwork is not permitted.

Overview of Week 8
- Aliasing and anti-aliasing
- Summary of the raster graphics pipeline
- VRML

Anti-aliasing
- Aliasing refers to artifacts introduced by sampling the object geometry, e.g. staircasing, broken-up polygons, blinking objects in animations, etc.
- First, a short overview of sampling theory to further the understanding of sampling artifacts.
- Then, we will study different techniques to limit or eliminate aliasing effects: anti-aliasing.
- Last, we will look at techniques to implement anti-aliasing, e.g. using the alpha channel.

Aliasing: Examples
- Staircasing
- Broken-up objects
- Blinking (missed) objects

Aliasing: Root Cause
- Aliasing is caused by sampling objects with insufficient sampling frequency.
- Shannon's sampling theorem states that a sampled signal can only be reconstructed if the signal is sampled at a frequency of at least twice the highest frequency component of the signal. In practice, an even higher sampling frequency is necessary.
- Most objects in computer graphics contain infinite frequency components: at the edges the signal changes instantaneously. Therefore, aliasing is a ubiquitous problem in graphics.

Anti-aliasing: Super-sampling
- An obvious remedy to aliasing is to increase the sampling frequency, i.e. to render images at a higher pixel resolution.
- Yet, super-sampling is still a point-sampling process. Obviously, this will alleviate but not solve the problem: super-sampling misses fewer objects and creates smaller staircases, but it does not ensure proper sampling of all features.

Anti-aliasing: Area Sampling (1)
- Point sampling is a hit-or-miss proposition.
- Instead of that binary decision, we prefer to have a sliding scale that indicates how much an object contributes to a pixel. This approach is called area sampling.
(Figure: a primitive over a pixel grid, annotated with per-pixel coverage values.)

Anti-aliasing: Area Sampling (2)
- Compute the contribution an object makes to a pixel. Primitives are assumed to cover a finite area, even points and lines.
- The computation is either analytical or by oversampling.
- Analytical approaches: compute the exact pixel area covered by a primitive for all pixels contributing to the anti-aliasing filter, weight the area by that filter function, and write the pixel.
- Oversampling approximates true area sampling: sample the image at a higher frequency, compute a weighted average of all subpixels within the filter area, and write the pixel.

Anti-aliasing: Filtering (1)
- Once the primitive's contribution to the pixel has been determined, it must be translated into a pixel intensity.
- The simplest approach is to use the coverage percentage as a scale factor for the object's intensity (a box filter). This creates non-linear transitions between pixels, and the distance of the object from the pixel center is ignored.

Anti-aliasing: Filtering (2)
- For a given pixel, the contributions of objects in a neighborhood around the pixel must be combined using a weighted average.
- The averaging method must satisfy several criteria:
  - The neighborhood is larger than 1 pixel.
  - An object contributes less if it is farther away from the pixel center.
  - The total contribution of an object to all affected pixels is constant.

Anti-aliasing: Filtering (3)
- Filter shapes with a fall-off away from the pixel center and extending beyond the pixel borders satisfy these criteria.
(Figure: pixel centers with the filter support extending around each.)

Anti-aliasing: Filtering (4)
- Different filter shapes satisfy these criteria: triangular filters, Gaussian filters, ...
- However, the actual image quality still differs between these filters. In general, the more complicated the filter, the better the results. This observation is explained by sampling theory. (A sketch of such kernels follows below.)
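As a minimal illustration (not from the slides), here is a sketch in C of three 1D filter kernels of the kinds just named, evaluated at a distance d (in pixels) from the pixel center. The function names and the Gaussian width are illustrative choices:

  /* Sketch: 1D anti-aliasing filter kernels. */
  #include <math.h>

  double box_filter(double d)       /* constant inside +/- 0.5 pixel */
  {
      return (fabs(d) <= 0.5) ? 1.0 : 0.0;
  }

  double triangle_filter(double d)  /* linear fall-off, 2-pixel support */
  {
      double a = fabs(d);
      return (a < 1.0) ? 1.0 - a : 0.0;
  }

  double gaussian_filter(double d)  /* smooth fall-off, sigma = 0.5 px */
  {
      double s = 0.5;
      return exp(-(d * d) / (2.0 * s * s));
  }

In use, the weights over all contributing samples are normalized so they sum to 1, matching the requirement (stated later) that the filter neither attenuate nor amplify the signal.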

Sampling Theory: Overview
- Sampling theory investigates and describes the effects of sampling continuous signals.
- Examples of signals: time-variant (e.g. sound), space-variant (e.g. images), one-dimensional (e.g. a scanline), two-dimensional (e.g. an image).
- The signal is considered either in the time/spatial domain or in the frequency domain.

Sampling Theory: Sampling Process
- A signal is sampled at intervals; the size of these intervals determines the sampling frequency.
- The signal is multiplied by a comb function consisting of several pulses (a.k.a. Dirac functions or delta functions). The sampled signal consists of unconnected (discrete) values.
- Reconstruction constructs a new signal that approximates the original signal. Reconstruction is done using a filter applied to the sampled signal.
(Figure: original signal, sampled signal, and reconstructed signal.)

Sampling Theory: Scan Conversion
- Scan conversion samples the input geometry: determine the visible object and its attributes at the pixel center.
- Pixel centers form a two-dimensional array of delta functions.

Sampling Theory: Frequency Domain
- Fourier analysis allows us to decompose a signal into a mix of contributing sine waves, each with an amplitude and a phase.
- For periodic signals: Fourier series, i.e. an (infinite) sum of discrete frequencies.
- For aperiodic signals: Fourier transformation, i.e. an (infinite) integral over various frequencies.
- The totality of all frequencies forming a signal is called the spectrum.

Sampling Theory: Notation
- We denote the signal with lower-case letters, for example f(x) or g(x,y).
- The Fourier transform is written with capital letters, for example F(u) or G(u,v). The variable u is the frequency.

Sampling Theory: Signal and Spectrum
- The Fourier transformation connects the signal and frequency domains. The Fourier transform returns amplitude and phase; often only the amplitude spectrum is of interest.
- The DC value is F(0).
- Typically the spectrum falls off rapidly towards high frequencies; higher frequencies indicate higher energy.
(Figure: a signal f(x) and its amplitude spectrum F(u).)
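For reference, the transform pair in this notation is the standard definition with frequency u in cycles per unit (the slides do not spell it out):

  \[
  F(u) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i u x}\, dx,
  \qquad
  f(x) = \int_{-\infty}^{\infty} F(u)\, e^{+2\pi i u x}\, du
  \]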

Sampling Theory: Filters
- A filter modifies the frequency spectrum by scaling every component of the spectrum: G(u) = F(u) · H(u).
- Several filters are frequently used: low-pass (eliminate high frequencies), high-pass (eliminate/reduce low frequencies), band-pass (attenuate all frequencies outside a certain range).

Sampling Theory: Convolution (1)
- The Fourier transformation relates multiplication in one domain to convolution in the other domain.
- Convolution of a signal f and a filter h at a point x is the integral of the product of f and h centered at x:

  g(x) = f(x) * h(x) = ∫ f(t) h(x − t) dt

Sampling Theory: Convolution (2)
- Convolution computes an average of the signal f around x, weighted by the filter function h. (A discrete version is sketched below.)

Sampling Theory: Sampling (1)
- Sampling is multiplying the original signal with a comb function.
- In the frequency domain, this corresponds to a convolution of the signal's spectrum with the Fourier transform of the comb function.
- The Fourier transform of a comb function is another comb function: for pulse spacing t0 in the spatial domain, the pulses in the frequency domain are spaced ω = 2π/t0. The distance between the Dirac pulses is determined by the sampling frequency.
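A discrete sketch in C of the convolution just described, the finite analogue of g(x) = ∫ f(t) h(x − t) dt. The function name and the zero-padding convention at the edges are illustrative choices, not from the lecture:

  #include <stddef.h>

  void convolve(const double *f, size_t n,   /* signal samples        */
                const double *h, size_t m,   /* kernel, m odd         */
                double *g)                   /* output, length n      */
  {
      long half = (long)m / 2;
      for (long x = 0; x < (long)n; x++) {
          double acc = 0.0;
          for (long k = -half; k <= half; k++) {
              long t = x - k;                /* signal index          */
              if (t >= 0 && t < (long)n)     /* zero-pad the edges    */
                  acc += f[t] * h[half + k];
          }
          g[x] = acc;                        /* weighted average at x */
      }
  }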

Sampling Theory: Sampling (2)
- Convolving the spectrum with a comb function replicates the spectrum of the signal, centered around the Dirac pulses.
- If the sampling frequency is too low, the replicated spectra overlap. This generates aliasing.
(Figure: the spectrum F(u), the comb H(u), and the replicated spectra F(u) * H(u), with and without overlap.)

Sampling Theory: Nyquist Frequency
- The Nyquist frequency is the minimum sampling frequency at which the replicated spectra do not overlap. Therefore:

  f0 = 1/t0 ≥ 2 · fmax
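A quick sanity check using the standard audio example (not from the slides): audible sound contains frequencies up to roughly f_max = 20 kHz, so

  \[ f_0 \;\ge\; 2 f_{\max} \;=\; 2 \cdot 20\,\mathrm{kHz} \;=\; 40\,\mathrm{kHz}, \]

which is why CD audio is sampled at 44.1 kHz, slightly above the Nyquist rate.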

Sampling Theory: Low-pass Filter (1)
- In order to eliminate the extra copies of the spectrum, a low-pass filter is applied.
- If the copies of the spectrum do not overlap, the signal can be fully reconstructed. Otherwise, the signal is only approximated: high frequencies are lost and the signal appears blurred.

Sampling Theory: Low-pass Filter (2)
- Filtering is multiplication in the frequency domain and convolution in the spatial domain.
- The ideal low-pass filter (a.k.a. box filter in the frequency domain) has a sharp cut-off frequency. It corresponds to the sinc function in the spatial domain: sinc(x) = sin(x) / x.

Sampling Theory: Sinc Function
- The amplitude A and the zero crossings of the sinc function are determined by the sampling frequency W: A = 2W preserves the energy of the signal. A lower sampling frequency widens the sinc function and reduces its amplitude.
- Sinc is problematic: very wide non-zero support, and negative lobes.

Sampling Theory: Other Filters (1)
- Box filter: sinc in the frequency domain (wide non-zero, negative support); an unweighted average, very simple, but large errors.
- Triangular filter: sinc² in the frequency domain (smaller, positive support); a weighted average, fairly simple.
- Gaussian filter (e^(−x²)): Gaussian in the frequency domain (small, positive support); a better weighted average, but computationally expensive.

Sampling Theory: Other Filters (2)
- Windowed filters, e.g. the windowed sinc function:

  sw(x) = sinc(x) if |x| < const., 0 otherwise

- Requirements for any filter function:
  - The integral over the filter function is 1. This ensures that the signal is neither attenuated nor amplified.
  - The filter function should have finite support in the spatial domain. This makes it practical to compute the filter function.

Sampling Theory: Summary
- Scan conversion is a sampling process with subsequent reconstruction.
- Sampling theory tells us that reconstruction of a sampled signal can only be accomplished if the sampling frequency is above the Nyquist frequency; then the copies of the signal's spectrum do not overlap. To meet this condition, the signal must be band-limited.
- Reconstruction is accomplished with a low-pass filter. Low-pass filtering means computing a weighted average in the spatial domain, over a neighborhood larger than one pixel! Different low-pass filters can be used.

Anti-aliasing: Implementation
- Proper filtering of the rendered primitives can be difficult and computationally expensive. Various techniques approximate the proper calculation of the area contributions: super-sampling, and the alpha channel for image compositing.
- Additional problems arise if hidden-surface removal is combined with anti-aliasing: how much of the pixel does the object cover, and how much of the object is visible in the pixel?

Anti-aliasing: Super-sampling (1)
- The area covered by the object is computed by point sampling the pixel itself. The sample locations are known as sub-pixels.
- Sub-pixel coverage approximates the true coverage. It is simple to compute and applicable to all primitive types.
- Filtering weights and combines the sub-pixel values according to the filter function into the final pixel value. (A sketch follows below.)
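A minimal sketch in C of super-sampling one pixel on a 4x4 sub-pixel grid with a box filter. inside() and shade() are hypothetical stand-ins for the point-in-primitive test and shading; they are assumptions, not part of the lecture's code:

  typedef struct { double r, g, b; } Color;

  extern int   inside(double x, double y);  /* hypothetical coverage test */
  extern Color shade(double x, double y);   /* hypothetical shading       */

  Color supersample_pixel(int px, int py)
  {
      const int n = 4;                      /* 4x4 sub-pixels             */
      Color sum = { 0.0, 0.0, 0.0 };
      for (int j = 0; j < n; j++) {
          for (int i = 0; i < n; i++) {
              /* sample at the sub-pixel center */
              double x = px + (i + 0.5) / n;
              double y = py + (j + 0.5) / n;
              if (inside(x, y)) {
                  Color c = shade(x, y);
                  sum.r += c.r; sum.g += c.g; sum.b += c.b;
              }
          }
      }
      /* box filter: unweighted average over all sub-pixel samples;
         sub-pixels outside the primitive contribute background (0) */
      sum.r /= n * n; sum.g /= n * n; sum.b /= n * n;
      return sum;
  }

A weighted filter (triangle, Gaussian) would replace the plain average with per-sub-pixel weights normalized to sum to 1.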

Anti-aliasing: Super-sampling (2)
- Determining visibility only at the pixel center creates wrong results if visibility changes within the pixel, but requires only a standard z-buffer and frame buffer.
- Determining visibility at every sub-pixel properly resolves visibility within the pixel: adjust the depth value to the sub-pixel location. This requires a high-resolution z-buffer storing color and z per sub-pixel.
(Figure: z at the pixel center vs. z at the sub-pixel centers.)

Anti-aliasing: Compositing (1)
- Compositing was developed for combining images, pixel by pixel. It was first described by Porter & Duff in 1984.
- At every pixel we store the current coverage information in the alpha channel. For a new fragment we compute new color and alpha information based on the pixel and fragment color/alpha.
- We will now look at compositing two fragments, each of which has only one edge crossing its pixel area. More edges per pixel are rare cases and are not treated explicitly; no edges are covered by setting α = 1.

Anti-aliasing: Compositing (2)
- The alpha value does not provide information about how the object is oriented within a pixel, so there is no unique definition of how to combine the current pixel value and the new fragment value.
- There are several possibilities for how two fragments might interact. For image synthesis and anti-aliasing, only one is relevant: A over B. See Foley et al., chapter 17.6, for the other operations.

Anti-aliasing: Compositing (3)
- A over B: determine which object is in front of the other (visibility); both fragments contribute.
- Assume a random orientation of edges within the pixel. Then, on average, (1 − α_A) of fragment B is visible (e.g. α_A = 0.3, α_B = 0.5):

  C = α_A · A + (1 − α_A) · α_B · B
  α = 1 − (1 − α_A) · (1 − α_B)
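A sketch in C of this "A over B" step, using straight (non-premultiplied) colors exactly as in the equations above. The RGBA struct is an illustrative choice, not from the lecture:

  typedef struct { double r, g, b, a; } RGBA;

  RGBA over(RGBA A, RGBA B)   /* A is in front of B */
  {
      RGBA out;
      double wA = A.a;                     /* weight of fragment A    */
      double wB = (1.0 - A.a) * B.a;       /* visible fraction of B   */
      out.r = wA * A.r + wB * B.r;
      out.g = wA * A.g + wB * B.g;
      out.b = wA * A.b + wB * B.b;
      out.a = 1.0 - (1.0 - A.a) * (1.0 - B.a);  /* combined coverage  */
      return out;
  }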

Anti-aliasing: Compositing (4)
- Notice that the fragment colors A and B are always used weighted by the coverage values α:

  C = α_A · A + (1 − α_A) · α_B · B
  α = 1 − (1 − α_A) · (1 − α_B)

- To avoid computing the term α·A at every pixel, colors are stored premultiplied by α: instead of (R,G,B,α) we store (αR,αG,αB,α).

Anti-aliasing: OpenGL (1)
- Anti-aliasing in OpenGL may differ between OpenGL implementations. The quality of anti-aliasing is controlled via "hints" (glHint).
- Point and line anti-aliasing: glEnable(GL_POINT_SMOOTH) and glEnable(GL_LINE_SMOOTH).
- Polygon anti-aliasing: if drawn as points or lines, point/line anti-aliasing applies; otherwise, glEnable(GL_POLYGON_SMOOTH). This generates coverage information in a fragment's alpha value. Then set up the alpha-blending function to blend between the incoming fragment and the stored pixel value. (A minimal setup is sketched below.)
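A minimal legacy-OpenGL setup for smoothed lines, assuming a standard RGBA framebuffer (a sketch, not the lecture's code; for GL_POLYGON_SMOOTH, glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE) is the commonly used variant):

  #include <GL/gl.h>

  void enable_line_antialiasing(void)
  {
      glEnable(GL_LINE_SMOOTH);                 /* coverage -> alpha   */
      glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);   /* quality hint        */
      glEnable(GL_BLEND);                       /* use the alpha       */
      glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
  }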

Anti-aliasing: OpenGL (2)
- Super-sampling is supported in OpenGL via the accumulation buffer, a very deep frame buffer.
- Successive rendering passes are accumulated in the accumulation buffer. At the end, the pixel values are divided by the number of rendering passes (averaging).
- Super-sampling is implemented by jittering the scene position slightly (by sub-pixel distances) between passes. (A sketch of such a loop follows after the summary below.)

Anti-aliasing: Summary
- Aliasing is the result of under-sampling the primitives; geometric primitives introduce infinite frequencies along edges.
- Sampling theory indicates that the signal must be band-limited before sampling to allow full reconstruction of the signal. The ideal low-pass filter is hard to implement.
- Approximations compute a weighted average of the pixel area covered by a primitive (area sampling). Area sampling is hard to perform accurately.
- Super-sampling and compositing approximate correct area sampling and the resolution of visibility between several fragments partially covering a pixel.
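The accumulation-buffer loop referenced in the OpenGL (2) slide, as a sketch. draw_scene() and the jitter table are assumptions; the glClear/glAccum calls are standard legacy OpenGL:

  #include <GL/gl.h>

  /* hypothetical: clears the color buffer and renders the scene
     offset by (jx, jy) in sub-pixel units */
  extern void draw_scene(double jx, double jy);

  void render_antialiased(void)
  {
      static const double jitter[4][2] = {    /* 4 sub-pixel offsets   */
          { 0.25, 0.25 }, { 0.75, 0.25 },
          { 0.25, 0.75 }, { 0.75, 0.75 }
      };
      glClear(GL_ACCUM_BUFFER_BIT);
      for (int i = 0; i < 4; i++) {
          draw_scene(jitter[i][0], jitter[i][1]);
          glAccum(GL_ACCUM, 1.0f / 4.0f);     /* add this pass, scaled */
      }
      glAccum(GL_RETURN, 1.0f);               /* write average back    */
  }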

VRML
- VRML = Virtual Reality Modeling Language.
- Derived from SGI's OpenInventor. Version 2.0 has been adopted by ISO as the VRML'97 standard.
- VRML was designed for the definition, interaction, and transmission of 3D models in a web environment.
- Based on a scene-graph model. VRML supports several non-graphics constructs to build virtual worlds, e.g. audio, event routing, customizable nodes (prototypes), a concept of time, and a scripting interface.
- We will focus here on the graphics-specific elements of VRML, i.e. scene graphs and basic event processing. Also see the paper handed out in class.

VRML: Overview
- Scenes are constructed by building a scene graph, basically identical to the model hierarchies we discussed earlier.
- Different nodes represent objects, transformations, attributes, and grouping semantics. More formally, nodes are classified as geometry (shape) nodes, appearance nodes, geometric property nodes, and grouping nodes.
- Each node has zero, one, or several fields that set the relevant node parameters. Each node field has a name and a default value; fields not explicitly set assume their default value.

VRML: Files
- VRML files have the extension ".wrl" (for "world").
- A VRML file consists of a very brief header followed by the description of the world:

  #VRML V2.0 utf8
  Shape {
    appearance Appearance {
      material Material {
        diffuseColor 1.0 0.0 0.0
        specularColor 1.0 1.0 1.0
        shininess 10
      }
    }
    geometry Cone {
      bottomRadius 1.0
      height 2.0
    }
  }

VRML: Nodes
- Nodes compose the scene graph describing the world.
- Syntax:
  - Unnamed nodes: <nodeType> { <body> }
  - Named nodes: DEF <name> <nodeType> { <body> }
  - Named nodes can be instantiated anywhere a node is expected: USE <name>
- The body of a node specifies the values of its fields and the events it can send and receive. (See the Cone example above; a DEF/USE example follows after the next slide.)

VRML: Fields
- Fields set a node's parameters. All fields have default values, which are used if no specific value is assigned.
- Syntax:
  - Single-valued fields (prefix SF): <fieldName> <fieldValue>
  - Multi-valued fields (prefix MF): <fieldName> [<fieldValues>], with multiple values separated by commas. The brackets may be omitted if only one value is assigned.
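A small DEF/USE illustration (the syntax follows the VRML'97 standard; the scene itself is made up for this example):

  #VRML V2.0 utf8
  DEF RedBall Shape {
    appearance Appearance {
      material Material { diffuseColor 1 0 0 }
    }
    geometry Sphere { radius 0.5 }
  }
  Transform {
    translation 2 0 0
    children [ USE RedBall ]   # second instance of the same node
  }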

VRML: Field Types
- Every field can only assume values of a specific type.
- Types include Bool, Int32, Float, Vec2f, Vec3f, String, Color, Image, Time, Rotation, and Node.
- Most field types are available in single-valued and multi-valued versions (except Bool and Image).

VRML: Node Types (1)
- Shapes and geometry: Box, Cone, Coordinate, Cylinder, ElevationGrid, Extrusion, IndexedFaceSet, IndexedLineSet, Normal, PointSet, Shape, Sphere, Text
- Appearance: Appearance, Color, FontStyle, ImageTexture, Material, MovieTexture, PixelTexture, TextureCoordinate, TextureTransform
- Grouping: Anchor, Billboard, Collision, Group, Inline, LOD, Switch, Transform
- Environment: AudioClip, Background, DirectionalLight, Fog, PointLight, Sound, SpotLight

VRML: Node Types (2)
- Viewing: NavigationInfo, Viewpoint
- Animation (interpolators): ColorInterpolator, CoordinateInterpolator, NormalInterpolator, OrientationInterpolator, PositionInterpolator, ScalarInterpolator, TimeSensor
- Interaction (sensors): CylinderSensor, PlaneSensor, ProximitySensor, SphereSensor, TouchSensor, VisibilitySensor
- Miscellaneous: Script, WorldInfo

VRML: Shape and Geometry Nodes (1)
- Shape node: a container node that collects geometry and appearance data; various other nodes specify these fields.
- 3D primitives:
  - Box: axis-aligned box with specified dimensions
  - Cone: cone around the Y axis, centered at the origin
  - Sphere: obvious

VRML: Shape and Geometry Nodes (2)
- IndexedFaceSet is used to build shapes as a collection of planar polygons. Polygons are defined by indices into arrays of vertices, colors, normals, and texture coordinates.
- The node contains at least a list of vertex coordinates (field 'coord') and a list of vertex indices defining the faces (field 'coordIndex'); an index of -1 ends a face.
- Optionally, attributes can be defined at the vertices: colors, normals, texture coordinates.
- IndexedLineSet is a similar construct for building a collection of lines.

VRML: Shape and Geometry Nodes (3)
Example: a pyramid

  #VRML V2.0 utf8
  Shape {
    geometry IndexedFaceSet {
      coord Coordinate {
        point [ -1 -1 0,  1 -1 0,  1 1 0,  -1 1 0,  0 0 2 ]
      }
      color Color {
        color [ 1 0 0,  1 0 1,  0 0 1,  0 1 1,  1 1 0 ]
      }
      colorPerVertex TRUE
      coordIndex [ 0 3 2 1 -1,   # bottom
                   0 1 4 -1,  1 2 4 -1,  2 3 4 -1,  3 0 4 -1 ]
    }
  }

VRML: Grouping Nodes (1)
- The simplest one is Group: it collects several children into a group; the field 'children' contains those nodes.
- The Transform node is similar to the Group node, but it scales, rotates, and translates (in that order) its children.
- Billboard always orients the local Z axis to point to the viewer by rotating its children around a specified axis. It supports the display of objects that always face the viewer, e.g. "flat trees", signs, etc.

VRML: Grouping Nodes (2)
- Anchor node: loads another URL when the user clicks on any of its children, replacing the current world.
- Inline node: reads the content of another URL and merges its nodes into the current world (similar to #include in the C preprocessor); it does not replace the current world.

VRML: Grouping Nodes (3)
- Switch selects one of its children for display; the 'whichChoice' field can be changed via events. Useful, e.g., for simple animations or state changes.
- LOD (level of detail) selects from several representations (levels) of the same object. Switching depends on the distance from the viewer; switching thresholds are specified for each level. Useful to increase rendering efficiency by displaying a simpler representation for distant objects. (A small example follows after the next slide.)

VRML: Appearance Nodes
- Appearance nodes are container nodes for various surface properties: materials and textures.
- Textures are specified as images:
  - PixelTexture: specifies the image in the VRML file
  - ImageTexture: specifies the image by reference to an external file
  - MovieTexture: a time-dependent texture loaded from an external MPEG-1 file
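A minimal LOD sketch (geometry and switching ranges invented for illustration; the first level is used when the viewer is closest):

  LOD {
    center 0 0 0
    range [ 10, 50 ]                                   # switch at 10 and 50 units
    level [
      Shape { geometry Sphere { radius 1 } },              # near: full detail
      Shape { geometry Cylinder { radius 1 height 2 } },   # medium distance
      Shape { geometry Box { size 2 2 2 } }                # far: crude stand-in
    ]
  }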

VRML: Lighting Models and Materials
- Lighting models and material properties are very similar to OpenGL.
- Lights are placed in the scene like regular objects; they are subject to the standard transformations.
- Check out the following nodes: DirectionalLight, PointLight, SpotLight, Fog, and Material.

VRML: Viewing and Navigation
- Viewpoint selection and navigation are primarily controlled by the browser.
- Viewpoints can be specified in the VRML file: the Viewpoint node allows predefining several viewpoints, which can then be selected from the browser. Viewpoints can also be manipulated automatically by changing the parent transformation node.
- Navigation specifies how the browser manipulates the viewpoint: the NavigationInfo node supports various modes, e.g. WALK, FLY, EXAMINE. (A small example follows below.)
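A small sketch of both nodes (field values are illustrative choices, not from the lecture):

  Viewpoint {
    position    0 1.6 10        # eye at head height, 10 units back
    orientation 0 1 0 0         # rotation axis + angle (radians)
    description "Entrance"      # shown in the browser's viewpoint list
  }
  NavigationInfo {
    type [ "WALK", "ANY" ]      # let the user walk through the world
    headlight TRUE
  }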

VRML: Authoring and Viewing
- Obviously, creating VRML files with a text editor is (at best) inconvenient. The text-based format makes it easy to generate VRML from authoring applications, e.g. 3D editors, scientific visualization packages, etc.
- VRML files are viewable through various tools. The most popular viewers are known as VRML browsers; plug-ins are available for most web browsers.

Summary
- Aliasing and anti-aliasing: the root cause is sampling below the Nyquist frequency; area sampling is an implementation of low-pass filtering the image; super-sampling and compositing.
- VRML: basic concepts and syntactical elements. We have not covered advanced VRML features.

Homework
- Prepare for the midterm exam.
  - The exam is a closed-book exam; you are allowed to bring 1 letter-sized sheet of notes.
  - Read the lecture notes and the corresponding chapters in the textbook.
  - You must demonstrate understanding of the material and be able to apply it. Fully understand the polygon raster pipeline, i.e. its sequence, function, and algorithms.
  - You should be able to apply the algorithms; except in very simple cases, you do not need to derive formulae.
  - Exam questions will test knowledge, understanding, and the ability to use what you learned.

Assignment
- Create a VRML cityscape: streets, buildings, cars, people, trees, ...
- Although it is not the preferred way, create the world from scratch, i.e. using a text editor.
- There are very few requirements. So go out, be wild, have some fun... and come back with a cool VRML world.

Next Week... Spring break!!!