Advanced Image Based Rendering Techniques
Herbert Grasberger
Institute of Computer Graphics and Algorithms, TU Vienna, Austria
herbert.grasberger@gmail.com
June 13, 2006

Abstract

Image Based Rendering is a rendering approach based on light rays. No geometry has to be known to construct images of an object; only a sufficient set of reference images has to be available. This document covers some advanced techniques of Image Based Rendering: Plenoptic Modeling, where the Plenoptic Function is simplified in a certain way; Hardware Lumigraph Rendering, where the texture mapping capabilities of current 3D accelerator hardware are used to reconstruct a Lumigraph; and the Delta Tree, a storage-efficient representation of reference images. The document closes with some applications of Image Based Rendering in film and animation.

1 Introduction

Traditional geometry based rendering approaches rely on a model of the geometry to be drawn. This is often a problem when an object in the real world needs to be modeled. To create a model for image based rendering, we only need a set of photos of the object, captured at specific positions around it. The resulting data still has to be processed for storage and rendering speed reasons, but the model is already complete at that point. The initial techniques were published in 1996: Light Field Rendering [Marc Levoy 96] and The Lumigraph [Steven J. Gortler 96]. The techniques described in this paper are based on those two approaches to Image Based Rendering.

2 Plenoptic Modeling

Plenoptic Modeling is an Image Based Rendering method based on sampling, reconstructing and resampling the Plenoptic Function. The method was first proposed in [McMillan & Bishop 95].
2.1 Plenoptic Function

The Plenoptic Function is a parameterized function which describes the radiance of light from any given direction (θ, φ) at any given point of view (Vx, Vy, Vz) at any time (t) at any wavelength (λ):

    p = P(θ, φ, λ, Vx, Vy, Vz, t)    (1)

Figure 1: Visualisation of the Plenoptic Function [McMillan & Bishop 95]

A complete sample of the plenoptic function is a full spherical map for a given viewpoint and time value. In Plenoptic Modeling a simplified version of the Plenoptic Function is used. Whereas the original Plenoptic Function includes a time parameter and all wavelengths of light, the version used in Plenoptic Modeling is time-invariant and is reduced to three wavelengths: 560nm (red), 530nm (green) and 430nm (blue).

2.2 Plenoptic Sample Representation

One problem is the internal representation of the Plenoptic Function. There are several possibilities, such as spherical, cubic or cylindrical representations. The spherical version would be the most natural one, but it lacks a representation suitable for storage. This is due to the various distortions which arise when projecting a sphere onto a plane. A cubic projection would be better, especially when it comes to distortions,
but it is overdefined at the corners of the cube. The cylindrical projection was chosen because it can easily be unrolled onto a plane. The cylinder has no endcaps, which limits the vertical field of view. The advantage of the cylindrical projection is the simplicity of acquisition: only a video camera and a tripod are required. Two captured planar projections of such a scene are related by a transformation of the form

    (u, v, w)^T = (a11 a12 a13; a21 a22 a23; a31 a32 a33) (x, y, 1)^T    (2)

    x' = u/w,  y' = v/w

In this formula, x and y represent the pixel coordinates of an image I, and x' and y' the corresponding pixel in an image I'. With the restriction that the pictures are acquired in a panning movement, the focal length and the position of the focal point are the same in each image. As a result, two images differ only by a rotation about the vertical axis. In order to project two or more pictures onto the cylindrical projection, the most common technique uses four common points between two images. With these four common points a mapping can be computed, which makes it possible to map the Nth image onto image N-1. In this approach it is not necessary to determine an entire camera model. This set of transformations H_i can be split up into an intrinsic transformation S, which is determined only by camera properties, and an extrinsic transformation R_i, determined by the rotation around the camera's center of projection:

    u = H_i x = S^-1 R_i S x    (3)

Due to this decomposition the projection and the rotation components of the transformation are decoupled. The intrinsic part is invariant over all of the images, which allows splitting the cylindrical projection problem into the determination of the extrinsic rotation R_i followed by the determination of the intrinsic projection component S.
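To make the decomposition concrete, here is a small Python sketch of equation (3). The focal length, principal point and pan angle are assumed illustrative values, not taken from the paper; S is written here as the intrinsic transform that maps pixels to viewing rays, so that S^-1 R_i S maps pixels of one frame to pixels of the next.

```python
import math

def matmul(A, B):
    """Product of a 3x3 matrix with a 3xN matrix."""
    return [[sum(A[i][k] * B[k][j] for k in range(3))
             for j in range(len(B[0]))] for i in range(3)]

def apply_homography(H, x, y):
    """Map pixel (x, y) via (u, v, w)^T = H (x, y, 1)^T and dehomogenize."""
    (u,), (v,), (w,) = matmul(H, [[x], [y], [1.0]])
    return u / w, v / w

# Intrinsic transform S mapping pixels to viewing rays (assumed focal
# length f and principal point (cx, cy) -- hypothetical values).
f, cx, cy = 500.0, 320.0, 240.0
S = [[1.0 / f, 0.0, -cx / f], [0.0, 1.0 / f, -cy / f], [0.0, 0.0, 1.0]]
S_inv = [[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]]

# Extrinsic rotation R_i: a 5 degree pan about the vertical axis.
a = math.radians(5.0)
R = [[math.cos(a), 0.0, math.sin(a)],
     [0.0, 1.0, 0.0],
     [-math.sin(a), 0.0, math.cos(a)]]

# H_i = S^-1 R_i S, exactly the decoupling of equation (3).
H = matmul(matmul(S_inv, R), S)
print(apply_homography(H, cx, cy))  # the principal point shifts by f*tan(5 deg)
```

Composing such homographies for successive pan angles is what lets all frames of the panning sequence be registered against each other without recovering a full camera model.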
Due to the invariance of the intrinsic part, the whole mapping between the ith and the jth image of the series can be written as:

    I_i = S^-1 R_{i+1} R_{i+2} ... R_j S I_j    (4)

By modifying S the images can be reprojected onto arbitrary surfaces. Due to this fact, all of the acquired images can be mapped onto a cylinder easily.
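As a hedged illustration of the reprojection onto the cylinder, the sketch below maps a viewing ray to a column and row of the unrolled cylindrical image; the image size and the vertical field of view are hypothetical values, and the missing endcaps show up as rays that cannot be mapped at all.

```python
import math

def ray_to_cylinder(dx, dy, dz, width=1024, height=256,
                    half_vfov=math.radians(40.0)):
    """Project a viewing ray onto the unrolled cylindrical image.
    The azimuth around the vertical axis selects the column; the
    elevation selects the row.  Rays steeper than half_vfov fall off
    the missing endcaps of the cylinder and return None."""
    azimuth = math.atan2(dx, dz) % (2.0 * math.pi)
    elevation = math.atan2(dy, math.hypot(dx, dz))
    if abs(elevation) > half_vfov:
        return None  # outside the vertical field of view
    col = int(azimuth / (2.0 * math.pi) * width) % width
    row = int((0.5 - elevation / (2.0 * half_vfov)) * height)
    return col, min(max(row, 0), height - 1)

print(ray_to_cylinder(0.0, 0.0, 1.0))   # straight ahead: column 0, centre row
print(ray_to_cylinder(0.0, 1.0, 0.1))   # nearly straight up: off the endcap
```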
2.3 Determining Image Flow Fields

It is possible to use more than one cylinder for reconstructing a certain viewpoint. If only one cylinder is used, the result is similar to QuickTime VR, which means that no real motion is possible, only zooming. With more than one cylinder it is possible to achieve an impression of depth during motion. In order to work with more than one cylinder, it is necessary to estimate the relative positions of the cylinders and to compute the depth information through disparity images. In order to compute the relative position between two cylinders, corresponding points have to be found and marked manually. The relative positions of the centers of projection can be determined automatically, but only up to a scale factor. This is because it is not possible to determine from a set of pictures whether the observer is looking at a small object or a whole scene, which is why corresponding points have to be specified manually. If the same camera properties were used for the two cylinders, the unknown transformation between them can be reduced to a translation and a rotation. The parameters are estimated by optimizing over the rays through the corresponding points, where the value to minimize is the sum of distances between the rays. When this is done, only one parameter of the transformation matrices is missing: the distance between the eye points of the cylinders, which must be estimated physically. Epipolar methods enable us to identify two corresponding points on different cylinders and compute a sinusoid-like curve between those two points. This enables us to establish image flow fields. There are several methods to compute such a flow field:

- correlation methods
- relaxation methods
- the method of differences

2.4 Plenoptic Function Reconstruction

The core of an Image Based Rendering algorithm is the computation of new viewpoints.
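Before turning to the reconstruction itself, a brief aside: the correlation methods listed above can be sketched as a simple block-matching search. This toy version (window size and pixel values are made up) matches one scanline against another and returns the shift with the smallest sum of squared differences.

```python
def disparity_ssd(row_a, row_b, x, radius, max_disp):
    """Toy block matching: find the disparity of pixel x in row_a by
    minimising the sum of squared differences against row_b over a
    small window -- a stand-in for the correlation methods above."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        cost = sum((row_a[x + k] - row_b[x + d + k]) ** 2
                   for k in range(-radius, radius + 1)
                   if 0 <= x + k < len(row_a) and 0 <= x + d + k < len(row_b))
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# row_b is row_a shifted right by 3 pixels, so the expected disparity is 3.
row_a = [0, 0, 0, 10, 80, 10, 0, 0, 0, 0, 0, 0]
row_b = [0, 0, 0] + row_a[:-3]
print(disparity_ssd(row_a, row_b, x=4, radius=1, max_disp=5))  # -> 3
```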
With Plenoptic Modeling, computing a new viewpoint, represented by another cylinder, out of two cylinders A, B and the disparity image is done with an algorithm that can also handle occlusion and perspective effects. In figure 2 the point P in the scene is seen under the angle θ from cylinder A. Taking into account the disparity disp(A, B, P) = α between A and B, the point P is seen under the angle θ + α from B. As a result, it is possible to compute, from the positions of A and B and the angles, the position of P on the new cylinder V and the angle θ + β, where β = disp(A, V, P) [McMillan & Bishop 95].

Figure 2: Reconstructing a new Cylinder [McMillan & Bishop 95]

Actually, P isn't computed exactly; only the angles and positions are used to compute V. Thanks to some transformation techniques it is possible to compute the transformation interactively. It is also possible to do the projection onto a plane in the same computation step. One problem of this algorithm is that more than one pixel of the old cylinders might be projected onto the same pixel of the new cylinder, but it is possible to use the painter's algorithm to draw the right one of the old pixels. In this algorithm the whole scenery is painted back to front, which can be done even if the object geometry is not known. This is established by projecting the eye's position onto the cylinder's surface and dividing it into four toroidal sheets. In order to achieve the right projection, the sheets farther from the eye point have to be drawn first (see figure 3).

2.5 Conclusion

With Plenoptic Modeling it is possible to capture and to reconstruct a whole scenery with ordinary hardware. The only restriction of the viewing angle is due to the missing endcaps of the cylinder. It shares the same problems with other Image Based Rendering techniques, such as the fact that the quality of the reconstruction depends on the original
footage.

Figure 3: Image flow field [McMillan & Bishop 95]

3 Hardware Lumigraph Rendering

The most important Image Based Rendering methods, the Lumigraph [Steven J. Gortler 96] and Light Field Rendering [Marc Levoy 96], need many resources. It is possible to improve the performance of the Lumigraph using current 3D hardware [M.F.Cohen 97]. The Lumigraph is based on the Plenoptic Function, reduced to a 4D function. The principle of the Lumigraph is intersecting a ray with a pair of planes. The ordinary Lumigraph uses more than one pair; to illustrate the possibilities of hardware optimization, only one pair will be used in this example. Due to the finite memory of the computer the Lumigraph has to be discretized. It has been found that 32x32 nodes on the (s,t) plane and 256x256 nodes on the (u,v) plane give good results.(1)

(1) Because those values were published in 97, it can be assumed that today they can be scaled up, due to more computing power; for consistency I will use the values of 97.

3.1 Reconstruction using Texture Mapping

In the reconstruction process with 3D hardware, the planes are used as textures, because interpolation would need too much computing power compared to the texture mapping approach. The whole (u,v) plane is used as a texture, and as a result triangles instead of pixels are drawn. Interpolation is done with blending. Each vertex of an (s,t) triangle has a corresponding triangle in the (u,v) plane, and these triangles are drawn 3 times with different alpha values. If the alpha values are
1.0 at the corresponding triangle and 0.0 at the two other ones, the original image is reconstructed. The texture coordinates of the (u,v) texture are computed by intersecting the rays starting at the camera. In order to draw the whole image, each triangle of the (s,t) texture has to be drawn 3 times, which current 3D hardware can do without any problems. The bottleneck of this method is memory: when a full Lumigraph with 6 plane pairs and RGB color values is used, more than one gigabyte of memory is needed.(2)

Figure 4: Reconstruction as Alpha Blended Texture Triangles [M.F.Cohen 97]

There are several methods to optimize this approach.

3.2 Reducing Data

In general, dynamically adjusting the tessellation is used to compress the (s,t) plane. The limited set of (s,t) nodes makes it possible to draw fewer polygons.

(2) 256x256x32x32x6x3 bytes = 1.125GB; because the bottleneck of all IBR methods is texture size and quality, higher texture resolutions are recommended, and as a result the memory consumption will exceed the memory sizes of current 3D accelerators.
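The memory figure in the footnote is straightforward to reproduce; a quick sketch using the discretization values quoted above:

```python
def lumigraph_bytes(uv=256, st=32, planes=6, bytes_per_texel=3):
    """Memory for a fully sampled Lumigraph: a uv x uv texture for each
    of the st x st nodes, over all plane pairs, with RGB texels."""
    return uv * uv * st * st * planes * bytes_per_texel

size = lumigraph_bytes()
print(size, size / 2**30)  # 1207959552 bytes = 1.125 GiB, matching the footnote
```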
3.2.1 Subsampling

The easiest way of compressing the (s,t) plane is simply reducing the resolution by a factor of 2. This reduces the size by a factor of 4, but the resulting image is going to look blurred, due to the alpha blending described above. To compensate for the loss of picture quality, it is possible to use mip-mapping on the (s,t) plane.

3.2.2 Fixed pattern of (s,t) nodes

Most images have their most important information in the center, because the viewer concentrates on this area. As a result, it is possible to use only the nodes in the center of the (s,t) plane, e.g. to pick only the 9 most central nodes and to reconstruct the image with these nodes. These 9 nodes are blended with alpha values as above (1,0,0). In order to complete the picture, fictitious nodes are assumed in the rest of the (s,t) plane. These nodes are only drawn two times, because no texture is assigned here. See figure 5 for an example.

Figure 5: Reconstruction using the 9 centered nodes [M.F.Cohen 97]

Triangle I is drawn first using the (u,v) plane associated with node (a) and with alphas of 1.0 at node (a), 0.0 at node (b) and 0.5 at the fictitious node labeled (b,c). A similar pattern is used from the point of view of node (b). Similarly, triangle II is drawn with texture (a) with alphas (1.0, 1.0, 0.5) and with texture (b) with alphas (0.0, 0.0, 0.5)
(Example out of [M.F.Cohen 97].)

This small amount of nodes can even be used for small motions, but if the motion gets too big, the nodes have to be changed.

3.2.3 Nodes along a line

In some cases the user's motion can be predicted, especially if the eyepoint is moving along a line. In this case, the (s,t) nodes can be aligned along an appropriate line. As a result the (s,t) plane is not divided into triangles but into strips, which span the space between adjacent nodes along the line. Here, each strip is drawn twice; the alphas are set to 1.0 along the line at the current node and 0.0 at the other ones.

3.2.4 Using Projective Textures

This method uses the current image as a texture map for nearby images that have to be reconstructed. If the approximate geometry is known, this texture is warped around the viewpoint. In this approach a matrix is constructed from the initial image, which is then used to project the texture onto the next frame. This is done in an animation, where the motion vector of the eyepoint is known, and as a result the appropriate transformation matrix can be computed. In this case every second frame of the animation might be computed with the exact image and the transformation. Unfortunately such an interpolation is not possible for an animation where the angle or the position of the eyepoint differs greatly, because artefacts can occur in this case. An appropriate algorithm might use this technique of interpolation while computing a new exact image in the background; when this image is completed, it is put into the animation and interpolated, until the next exact image is ready.

3.2.5 Progressive Refinement

The methods above might be combined with a method of progressive refinement. Only the nodes of the (s,t) plane that are necessary in the current frame are used; the rest is stored in a database or similar. If the algorithm detects missing nodes and unnecessary nodes that are currently in use, it can delete the unnecessary ones and load the missing ones.
In combination with the right alpha values while drawing the animation, the user should not be aware of the node streaming, and the resulting image should look almost as if it had been reconstructed from the whole (s,t) plane.
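All of these node schemes rest on the same three-pass alpha blending introduced in section 3.1, which amounts to barycentric interpolation of the three node textures across each (s,t) triangle. A small sketch of the equivalent arithmetic (the triangle and texel values below are illustrative, not from the paper):

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c) on the (s,t) plane."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    wa = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    wb = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return wa, wb, 1.0 - wa - wb

def blend(p, tri, texel_values):
    """Drawing the triangle three times with per-vertex alphas (1,0,0),
    (0,1,0), (0,0,1) and additive blending is equivalent to this
    barycentric mix of the three node textures."""
    weights = barycentric(p, *tri)
    return sum(w * t for w, t in zip(weights, texel_values))

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
# Texel values fetched from the (u,v) textures of the three (s,t) nodes.
print(blend((0.25, 0.25), tri, (100.0, 200.0, 40.0)))  # -> 110.0
```

The fictitious nodes of section 3.2.2 simply drop one of the three terms, which is why their triangles only need two passes.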
3.3 Conclusion

The original approach [Steven J. Gortler 96], where the whole (s,t) plane is used for reconstruction, would consume too much memory, but with the modifications described above it is possible to get very good results, because high resolution images can be used too.

4 The Delta Tree

The Delta Tree [Dally et al. 96] is a data structure that represents an object using reference images. An arbitrary reprojection of an object can be achieved by traversing the Delta Tree. Each node in the tree stores an image taken from a certain point on a sampling sphere. The main goal is to store each surface pixel of the object only once in order to minimize storage, because storage is the main bottleneck of all Image Based Rendering approaches. Each image is stored in the frequency domain. The reason for this is that reprojecting an image via the Delta Tree is similar to an image warp, where it is also possible to handle level of detail and antialiasing in one step.

4.1 Motivation

With Plenoptic Modeling it is only possible to get an interior view of a single scene, although it is possible to move within this scene. It is not possible to set up an exterior view of an object and move around it. This can be achieved using the Delta Tree. The Delta Tree is an object-centered Image Based Rendering approach, where a certain number of reference images are stored in the tree and the object viewed from a certain viewpoint can be reconstructed by traversing the tree. It is possible to choose any viewpoint, as long as the viewpoint is outside the hull of the sampling sphere. If the point lies inside the hull, the object has to be divided into several objects. These objects can be combined again by combining the trees. The same method is used to set up complex scenes.

4.2 Definitions

A sampling sphere is a sphere that surrounds the object completely. Its surface is split up into a regular grid of sampling points.
A pixel stores the object properties at a certain point. The information can be stored in a disparity map. In the current version, only color values are
stored; information about transparency, specularity and gloss may be added in the future. A view V_P is a photometric representation of the object stored in the Delta Tree seen from position P with a specified field of view. A subview is a rectangular subset of the view, which might be reduced in height and width. [Dally et al. 96]

4.3 Sampling

The reference images captured during sampling can be acquired from real world objects, but it is also possible to use computer generated ones. As said before, the reference images have to be captured from a sampling sphere, where every position can be described by a pair of angles (θ, ψ). It is not necessary to know the radius of the sphere. The surface of the sphere is split up into a regular grid of reference points; the density of these points is proportional to the bumpiness of the object. The bumpier the object, the higher the density of the sampling points has to be, in order to cover all of the surface features and to store each surface pixel of the object.

4.4 Structure

To describe the tree structure, it is necessary to describe the surface of the sampling sphere. The sampling points can be unrolled into a rectangular plane (see figure 6), where the height and width represent the two angles (θ, ψ) from 0 to 45 degrees. Every black point on this plane corresponds to a camera position on the sampling sphere where a reference image was captured. On the right side of figure 6 a Delta Tree, which has the basic structure of a quadtree, is shown. Each node stores one reference image, and a certain region of the sampling sphere is attached to it.

Figure 6: Comparison of Delta Tree and viewing space [Dally et al. 96]

E.g. the root node in figure 6 stores the reference view at (0,0) and the
whole region from (0,0) to (45,45) is attached to the node. Each child node stores the reference view seen from one corner of its region. This continues until the leaves of the tree are reached, which store the pixels. In the left square the regions stored in the leaves are shown via grey values. A complete Delta Tree has 4P root nodes, each root node corresponding to one quadrant; typically P = 6, according to the six cardinal directions.

4.5 Redundancy

In order to reduce the number of stored pixels, every node only saves the pixels which are not yet stored by an ancestor node. As a result, each node apart from the root node only stores a certain part of the reference image. Some of the nodes are even empty, because an ancestor stores the whole reference view. In this case only a pointer to the reference view is stored in the node (see nodes B0, C00, ... in figure 6). Sometimes more than one node needs certain pixels. In this case the pixels are stored in memory once, and all of the nodes point to this storage area. This occurs under certain conditions:

- The four trees corresponding to one quadrant share views along their common edges. The views are shared by reference in this case.
- One child of each node shares a view with its ancestor. E.g. B0 shares a view with A. In this case B0 doesn't need to store any information, because everything is already sampled by its ancestor A. In the figure, such views are shown cross-hatched.
- First cousins within a tree may share a view, e.g. C11 shares a view V_p1 with C23, and even four cousins share V_p2 (C02, C12, C22, C32). In this case the cousins store different partial views, which means for example that C11 stores only those pixels that are not adequately sampled by A or B1.
- Any pixels that are common are shared only by reference.
- Reference views on the border of the root region are shared between two Delta Trees.

It can be seen in the figure that nodes further down the tree store less information than nodes in the upper region.

4.6 Resampling

Reconstructing an arbitrary view located between some of the sample views on the sampling sphere is done simply by traversing the tree. Due to the layout of the Delta Tree, this traversal follows a spiral shape when viewed on the planar projection of the sampling sphere. For the reconstruction, the leaves surrounding the viewpoint P are visited, their views are warped towards the viewpoint via a
warp operation, and the image is reconstructed using a z-buffer. The z-buffer is required to eliminate hidden surfaces along the path. In figure 7 this traversal is illustrated. The reason for visiting the surrounding leaves is that the new view is reconstructed from the reference views.

Figure 7: Traversing the tree [Dally et al. 96]

In the situation that the view is off the sampling sphere, the reconstruction process gets a little more complicated. In this situation it is necessary to recursively subdivide the generated view while traversing the Delta Tree. The subdivision is performed by projecting the region boundary on the sampling sphere onto the view. The projected boundary divides the pixels into two subviews, which may then have to be divided another time, depending on whether they straddle further boundary edges. In figure 8 this is illustrated: the part V1 of the image is reconstructed using R1 and V2 is reconstructed using R2.

Figure 8: Subdividing the view [Dally et al. 96]

The computation of the final image is done with Plenoptic Modeling of planes.
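As a rough illustration of sections 4.4 and 4.5, the sketch below models a Delta Tree node that stores only delta pixels and shares complete views with an ancestor by reference; the class and field names are hypothetical, not from the paper.

```python
class DeltaTreeNode:
    """One node of the Delta Tree (a quadtree over the sampling plane).

    `partial_view` holds only the pixels not already stored by an
    ancestor; when a node shares its entire view with its ancestor (as
    B0 does with A in figure 6), `shared_with` points there instead and
    `partial_view` stays empty, so every surface pixel is stored once.
    """
    def __init__(self, corner, region, partial_view=None, shared_with=None):
        self.corner = corner            # (theta, psi) of the reference view
        self.region = region            # attached region of the sampling sphere
        self.partial_view = partial_view or {}   # pixel -> value, deltas only
        self.shared_with = shared_with  # ancestor node storing the full view
        self.children = []              # four children until leaves are reached

    def view_pixels(self):
        """Resolve the reference view, following shared-view links."""
        if self.shared_with is not None:
            return self.shared_with.view_pixels()
        return self.partial_view

root = DeltaTreeNode(corner=(0, 0), region=((0, 0), (45, 45)),
                     partial_view={(10, 10): (128, 128, 128)})
b0 = DeltaTreeNode(corner=(0, 0), region=((0, 0), (22, 22)), shared_with=root)
root.children.append(b0)
print(b0.view_pixels())   # resolved from the ancestor, stored only once
```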
4.7 Level of Detail

Different levels of detail can be achieved very easily. Because all sample images are stored in the frequency domain, it is possible to reconstruct the whole image with low-pass filtering. To optimize level of detail, the images might be stored as mip maps during the construction of the Delta Tree. In this case it is possible to immediately choose the right level of detail for the resampling process. In the worst case, this approach requires reprojecting four times as many pixels as appear on the screen, but no level of detail hierarchy with multiple representations is required.

4.8 Conclusion

The current version of the Delta Tree lacks information about specularity and transparency, but this can easily be added in future versions. One problem it shares with many Image Based Rendering approaches is the fact that an object can only be reconstructed at a certain viewpoint if reference views in the region of the eyepoint exist.

5 IBR in Applications & Film

Image Based Rendering techniques are used in many ways when it comes to applications and film. For each of these techniques the following questions can be asked: What is the model and how are the images used? What is the rendering method? What is the effect, and is what we see really there?

5.1 Golden Gate Movie Map

Movie maps [Naimark 97] were first made in the late 1970s at MIT. Basically a movie map is based on filmed material and on panoramic images. This material is processed onto a video disc, where the filmed material is used for the motion parts of the interactive environment and the panoramic images are used at certain points. In 1987 the Golden Gate Videodisc Exhibit was produced as an aerial moviemap. A helicopter flew a 1 by 1 mile grid of the Bay Area at a constant ground speed. The grid was determined by satellite navigation, and as a result the helicopter filmed every 30 feet (figure 9).
The 35mm camera was always pointed at the center of the Golden Gate Bridge, so no turn sequences were needed in the movie map. All in all a 10 by 10 mile area was captured and put on a laserdisc.
The playback system used a trackball to control speed and direction, and as a result it was possible to move freely over the Bay Area at higher speeds than normal.

Figure 9: Schematic view of the Bay Area [Debevec 99]

5.2 Matte Painting

Matte painting is an old technique, used since the feature film Gone With the Wind (1939). The principle is that instead of building a set, it can be drawn, these days mostly digitally. Then the actor, or the foreground image, can be composed into the matte background. As a result it is possible to show the actor in some large space without really building that space. Some films even used panning and zooming of the matte painting, which gives the illusion of motion. These days the matte painting technique is used in many films, where the scenes are composed of real footage and computer generated backgrounds, and sometimes in fully computer generated movies. In the latter case the technique helps to reduce rendering time. E.g. in the film Final Fantasy everything was done with computer graphics. In many scenes only the actors and some items are actually moving, not the background. In such a case the background was rendered once and then composed with the moving characters (which were also rendered separately). This also makes changing the scene easier: if some parts of the background change, only these parts are rendered alone and composed into the scene (even if it is only smoke). The difference between computer generated matte painting and a real model can be demonstrated with the famous Death Star scene from Star Wars Episode IV: A New Hope. This movie was filmed in the 1970s, and here
a model of the Death Star was used for some footage. When ILM did the special edition of this movie in the late 1990s, they replaced the footage of the model with computer generated material (figure 10).

Figure 10: Comparison between CG and matte painting. Images courtesy of Lucas Digital, Lucasfilm, and Industrial Light and Magic [Debevec 99]

Figure 11 shows a scene out of the original Return of the Jedi, where the actors were blended into the matte background. To hide the blending artefacts, smoke was inserted afterwards. Below, the picture shows the composition from the side.

Figure 11: Matte painting scene out of Return of the Jedi [Star Wars Directions]
5.3 SFX in The Matrix

Figure 12: Setup of the cameras used for Bullet Time [wikipedia.com]

The famous Bullet Time effect from the film The Matrix was created with a technique called Virtual Camera. This system contains a set of still cameras which surround a certain scene and shoot it at the same time. As a result it is possible to achieve an effect which looks as if time has stopped. It is possible to change the view by choosing another camera and warping the current image to the image of the next camera. In The Matrix the cameras were aligned along a line (see the black points in figure 12), and at each end of the line a conventional motion camera was placed. With this setup it is possible to fly around the object while it is moving.

5.4 IBR in Character Animation

Figure 13: Video Model [Debevec Course 99]

Another application of Image Based Rendering is character animation
[Debevec Course 99]. Here, not only does a large amount of images have to be captured, the images have to be classified and tracked too. After this is done it is possible to estimate the motion of the mouth and eyes, from which the spoken sounds and facial expression can be detected. This information can be stored, and in a further step a video model (figure 13) can be built out of some phonetic footage, the head pose and the mouth shape. This video model can be used to produce a new 2D animation of a speaking person. Another application concerning character animation is tracking the motion of a moving character and classifying it by the movement. This information can then be used to reconstruct a complete moving and speaking character out of reference images.

5.5 Conclusion

Image Based Rendering in applications and in film has a long history. Some things are not true Image Based Rendering methods (e.g. matte painting), but they are connected to this technique in a certain way. These techniques enable many new effects and have become standard tools in filmmaking.

References

[Marc Levoy 96] Marc Levoy, Pat Hanrahan. Light Field Rendering. Computer Graphics Proceedings, Annual Conference Series (Proc. SIGGRAPH 96), pages 31-42, 1996

[Steven J. Gortler 96] Steven J. Gortler, Radek Grzeszczuk, Richard Szeliski, Michael F. Cohen. The Lumigraph. Computer Graphics Proceedings, Annual Conference Series (Proc. SIGGRAPH 96), pages 43-54, 1996

[McMillan & Bishop 95] Leonard McMillan, Gary Bishop. Plenoptic Modeling. Computer Graphics Proceedings, Annual Conference Series (Proc. SIGGRAPH 95), pages 39-46, 1995

[M.F.Cohen 97] S.J. Gortler, P. Sloan, M.F. Cohen. Time-Critical Lumigraph Rendering. Computer Graphics Proceedings, Annual Conference Series (Proc. SIGGRAPH 97), pages 17-23, 1997

[Dally et al. 96] William J. Dally, Leonard McMillan, Gary Bishop, Henry Fuchs. The Delta Tree: An Object-Centered Approach to Image-Based Rendering.
MIT AI Lab Technical Memo 1604, May 1996

[Debevec 99] Paul Debevec. Applications of IBMR in Art and Cinema. SIGGRAPH Course,

[Naimark 97] Michael Naimark. A 3D Moviemap and a 3D Panorama. SPIE Proceedings, Vol. 3012, San Jose, 1997

[wikipedia.com] Wikipedia, Virtual Camera. Retrieved 31st May
[Star Wars Directions] Star Wars Digital Directions. Retrieved 31st May 2006

[Debevec Course 99] Paul Debevec. IBMR Techniques for Animating People. SIGGRAPH Course,
More informationHybrid Rendering for Collaborative, Immersive Virtual Environments
Hybrid Rendering for Collaborative, Immersive Virtual Environments Stephan Würmlin wuermlin@inf.ethz.ch Outline! Rendering techniques GBR, IBR and HR! From images to models! Novel view generation! Putting
More informationModeling Light. Slides from Alexei A. Efros and others
Project 3 Results http://www.cs.brown.edu/courses/cs129/results/proj3/jcmace/ http://www.cs.brown.edu/courses/cs129/results/proj3/damoreno/ http://www.cs.brown.edu/courses/cs129/results/proj3/taox/ Stereo
More informationReal-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images
Real-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images Abstract This paper presents a new method to generate and present arbitrarily
More informationModeling Light. Michal Havlik
Modeling Light Michal Havlik 15-463: Computational Photography Alexei Efros, CMU, Spring 2010 What is light? Electromagnetic radiation (EMR) moving along rays in space R( ) is EMR, measured in units of
More informationModeling Light. Michal Havlik : Computational Photography Alexei Efros, CMU, Fall 2011
Modeling Light Michal Havlik 15-463: Computational Photography Alexei Efros, CMU, Fall 2011 What is light? Electromagnetic radiation (EMR) moving along rays in space R(λ) is EMR, measured in units of power
More informationModeling Light. On Simulating the Visual Experience
Modeling Light 15-463: Rendering and Image Processing Alexei Efros On Simulating the Visual Experience Just feed the eyes the right data No one will know the difference! Philosophy: Ancient question: Does
More informationModeling Light. Michal Havlik : Computational Photography Alexei Efros, CMU, Fall 2007
Modeling Light Michal Havlik 15-463: Computational Photography Alexei Efros, CMU, Fall 2007 The Plenoptic Function Figure by Leonard McMillan Q: What is the set of all things that we can ever see? A: The
More informationTexture. Texture Mapping. Texture Mapping. CS 475 / CS 675 Computer Graphics. Lecture 11 : Texture
Texture CS 475 / CS 675 Computer Graphics Add surface detail Paste a photograph over a surface to provide detail. Texture can change surface colour or modulate surface colour. Lecture 11 : Texture http://en.wikipedia.org/wiki/uv_mapping
More informationCS 475 / CS 675 Computer Graphics. Lecture 11 : Texture
CS 475 / CS 675 Computer Graphics Lecture 11 : Texture Texture Add surface detail Paste a photograph over a surface to provide detail. Texture can change surface colour or modulate surface colour. http://en.wikipedia.org/wiki/uv_mapping
More informationMultiple View Geometry
Multiple View Geometry Martin Quinn with a lot of slides stolen from Steve Seitz and Jianbo Shi 15-463: Computational Photography Alexei Efros, CMU, Fall 2007 Our Goal The Plenoptic Function P(θ,φ,λ,t,V
More informationProjective Texture Mapping with Full Panorama
EUROGRAPHICS 2002 / G. Drettakis and H.-P. Seidel Volume 21 (2002), Number 3 (Guest Editors) Projective Texture Mapping with Full Panorama Dongho Kim and James K. Hahn Department of Computer Science, The
More informationMore and More on Light Fields. Last Lecture
More and More on Light Fields Topics in Image-Based Modeling and Rendering CSE291 J00 Lecture 4 Last Lecture Re-review with emphasis on radiometry Mosaics & Quicktime VR The Plenoptic function The main
More informationCSE528 Computer Graphics: Theory, Algorithms, and Applications
CSE528 Computer Graphics: Theory, Algorithms, and Applications Hong Qin State University of New York at Stony Brook (Stony Brook University) Stony Brook, New York 11794--4400 Tel: (631)632-8450; Fax: (631)632-8334
More informationLight Fields. Johns Hopkins Department of Computer Science Course : Rendering Techniques, Professor: Jonathan Cohen
Light Fields Light Fields By Levoy and Hanrahan, SIGGRAPH 96 Representation for sampled plenoptic function stores data about visible light at various positions and directions Created from set of images
More informationImage-based modeling (IBM) and image-based rendering (IBR)
Image-based modeling (IBM) and image-based rendering (IBR) CS 248 - Introduction to Computer Graphics Autumn quarter, 2005 Slides for December 8 lecture The graphics pipeline modeling animation rendering
More informationRe-live the Movie Matrix : From Harry Nyquist to Image-Based Rendering. Tsuhan Chen Carnegie Mellon University Pittsburgh, USA
Re-live the Movie Matrix : From Harry Nyquist to Image-Based Rendering Tsuhan Chen tsuhan@cmu.edu Carnegie Mellon University Pittsburgh, USA Some History IEEE Multimedia Signal Processing (MMSP) Technical
More informationAdvanced 3D-Data Structures
Advanced 3D-Data Structures Eduard Gröller, Martin Haidacher Institute of Computer Graphics and Algorithms Vienna University of Technology Motivation For different data sources and applications different
More informationModeling Light. Michal Havlik
Modeling Light Michal Havlik 15-463: Computational Photography Alexei Efros, CMU, Fall 2007 What is light? Electromagnetic radiation (EMR) moving along rays in space R(λ) is EMR, measured in units of power
More informationReal Time Rendering. CS 563 Advanced Topics in Computer Graphics. Songxiang Gu Jan, 31, 2005
Real Time Rendering CS 563 Advanced Topics in Computer Graphics Songxiang Gu Jan, 31, 2005 Introduction Polygon based rendering Phong modeling Texture mapping Opengl, Directx Point based rendering VTK
More informationMorphable 3D-Mosaics: a Hybrid Framework for Photorealistic Walkthroughs of Large Natural Environments
Morphable 3D-Mosaics: a Hybrid Framework for Photorealistic Walkthroughs of Large Natural Environments Nikos Komodakis and Georgios Tziritas Computer Science Department, University of Crete E-mails: {komod,
More informationThe Light Field and Image-Based Rendering
Lecture 11: The Light Field and Image-Based Rendering Visual Computing Systems Demo (movie) Royal Palace: Madrid, Spain Image-based rendering (IBR) So far in course: rendering = synthesizing an image from
More informationAdaptive Point Cloud Rendering
1 Adaptive Point Cloud Rendering Project Plan Final Group: May13-11 Christopher Jeffers Eric Jensen Joel Rausch Client: Siemens PLM Software Client Contact: Michael Carter Adviser: Simanta Mitra 4/29/13
More informationmove object resize object create a sphere create light source camera left view camera view animation tracks
Computer Graphics & Animation: CS Day @ SIUC This session explores computer graphics and animation using software that will let you create, display and animate 3D Objects. Basically we will create a 3
More informationRay tracing. Computer Graphics COMP 770 (236) Spring Instructor: Brandon Lloyd 3/19/07 1
Ray tracing Computer Graphics COMP 770 (236) Spring 2007 Instructor: Brandon Lloyd 3/19/07 1 From last time Hidden surface removal Painter s algorithm Clipping algorithms Area subdivision BSP trees Z-Buffer
More informationImage-based Modeling and Rendering: 8. Image Transformation and Panorama
Image-based Modeling and Rendering: 8. Image Transformation and Panorama I-Chen Lin, Assistant Professor Dept. of CS, National Chiao Tung Univ, Taiwan Outline Image transformation How to represent the
More informationSpatial Data Structures
15-462 Computer Graphics I Lecture 17 Spatial Data Structures Hierarchical Bounding Volumes Regular Grids Octrees BSP Trees Constructive Solid Geometry (CSG) March 28, 2002 [Angel 8.9] Frank Pfenning Carnegie
More informationImage or Object? Is this real?
Image or Object? Michael F. Cohen Microsoft Is this real? Photo by Patrick Jennings (patrick@synaptic.bc.ca), Copyright 1995, 96, 97 Whistler B. C. Canada Modeling, Rendering, and Lighting 1 A mental model?
More informationOptimizing an Inverse Warper
Optimizing an Inverse Warper by Robert W. Marcato, Jr. Submitted to the Department of Electrical Engineering and Computer Science in Partial Fulfillment of the Requirements for the Degrees of Bachelor
More informationCS 563 Advanced Topics in Computer Graphics QSplat. by Matt Maziarz
CS 563 Advanced Topics in Computer Graphics QSplat by Matt Maziarz Outline Previous work in area Background Overview In-depth look File structure Performance Future Point Rendering To save on setup and
More informationThe Delta Tree: An Object-Centered Approach to Image-Based Rendering
The Delta Tree: An Object-Centered Approach to Image-Based Rendering William J. Dally *, Leonard McMillan, Gary Bishop, and Henry Fuchs * Artificial Intelligence Laboratory, Massachusetts Institute of
More informationComputer Graphics. Lecture 9 Environment mapping, Mirroring
Computer Graphics Lecture 9 Environment mapping, Mirroring Today Environment Mapping Introduction Cubic mapping Sphere mapping refractive mapping Mirroring Introduction reflection first stencil buffer
More informationToday s lecture. Image Alignment and Stitching. Readings. Motion models
Today s lecture Image Alignment and Stitching Computer Vision CSE576, Spring 2005 Richard Szeliski Image alignment and stitching motion models cylindrical and spherical warping point-based alignment global
More informationComputer Graphics. Lecture 14 Bump-mapping, Global Illumination (1)
Computer Graphics Lecture 14 Bump-mapping, Global Illumination (1) Today - Bump mapping - Displacement mapping - Global Illumination Radiosity Bump Mapping - A method to increase the realism of 3D objects
More informationAnnouncements. Mosaics. How to do it? Image Mosaics
Announcements Mosaics Project artifact voting Project 2 out today (help session at end of class) http://www.destination36.com/start.htm http://www.vrseattle.com/html/vrview.php?cat_id=&vrs_id=vrs38 Today
More informationComputer Graphics I Lecture 11
15-462 Computer Graphics I Lecture 11 Midterm Review Assignment 3 Movie Midterm Review Midterm Preview February 26, 2002 Frank Pfenning Carnegie Mellon University http://www.cs.cmu.edu/~fp/courses/graphics/
More informationChapter 3 Image Registration. Chapter 3 Image Registration
Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation
More informationImage-Based Deformation of Objects in Real Scenes
Image-Based Deformation of Objects in Real Scenes Han-Vit Chung and In-Kwon Lee Dept. of Computer Science, Yonsei University sharpguy@cs.yonsei.ac.kr, iklee@yonsei.ac.kr Abstract. We present a new method
More informationSurface Rendering. Surface Rendering
Surface Rendering Surface Rendering Introduce Mapping Methods - Texture Mapping - Environmental Mapping - Bump Mapping Go over strategies for - Forward vs backward mapping 2 1 The Limits of Geometric Modeling
More informationImage Based Rendering
Image Based Rendering an overview Photographs We have tools that acquire and tools that display photographs at a convincing quality level 2 1 3 4 2 5 6 3 7 8 4 9 10 5 Photographs We have tools that acquire
More informationSpatial Data Structures
15-462 Computer Graphics I Lecture 17 Spatial Data Structures Hierarchical Bounding Volumes Regular Grids Octrees BSP Trees Constructive Solid Geometry (CSG) April 1, 2003 [Angel 9.10] Frank Pfenning Carnegie
More informationVolumetric Scene Reconstruction from Multiple Views
Volumetric Scene Reconstruction from Multiple Views Chuck Dyer University of Wisconsin dyer@cs cs.wisc.edu www.cs cs.wisc.edu/~dyer Image-Based Scene Reconstruction Goal Automatic construction of photo-realistic
More informationLecture 13: Reyes Architecture and Implementation. Kayvon Fatahalian CMU : Graphics and Imaging Architectures (Fall 2011)
Lecture 13: Reyes Architecture and Implementation Kayvon Fatahalian CMU 15-869: Graphics and Imaging Architectures (Fall 2011) A gallery of images rendered using Reyes Image credit: Lucasfilm (Adventures
More informationWhat have we leaned so far?
What have we leaned so far? Camera structure Eye structure Project 1: High Dynamic Range Imaging What have we learned so far? Image Filtering Image Warping Camera Projection Model Project 2: Panoramic
More informationCS6670: Computer Vision
CS6670: Computer Vision Noah Snavely Lecture 7: Image Alignment and Panoramas What s inside your fridge? http://www.cs.washington.edu/education/courses/cse590ss/01wi/ Projection matrix intrinsics projection
More informationCS 354R: Computer Game Technology
CS 354R: Computer Game Technology Texture and Environment Maps Fall 2018 Texture Mapping Problem: colors, normals, etc. are only specified at vertices How do we add detail between vertices without incurring
More informationMosaics. Today s Readings
Mosaics VR Seattle: http://www.vrseattle.com/ Full screen panoramas (cubic): http://www.panoramas.dk/ Mars: http://www.panoramas.dk/fullscreen3/f2_mars97.html Today s Readings Szeliski and Shum paper (sections
More informationCPSC / Texture Mapping
CPSC 599.64 / 601.64 Introduction and Motivation so far: detail through polygons & materials example: brick wall problem: many polygons & materials needed for detailed structures inefficient for memory
More informationSpatial Data Structures
CSCI 420 Computer Graphics Lecture 17 Spatial Data Structures Jernej Barbic University of Southern California Hierarchical Bounding Volumes Regular Grids Octrees BSP Trees [Angel Ch. 8] 1 Ray Tracing Acceleration
More informationA Warping-based Refinement of Lumigraphs
A Warping-based Refinement of Lumigraphs Wolfgang Heidrich, Hartmut Schirmacher, Hendrik Kück, Hans-Peter Seidel Computer Graphics Group University of Erlangen heidrich,schirmacher,hkkueck,seidel@immd9.informatik.uni-erlangen.de
More informationStructure from Motion and Multi- view Geometry. Last lecture
Structure from Motion and Multi- view Geometry Topics in Image-Based Modeling and Rendering CSE291 J00 Lecture 5 Last lecture S. J. Gortler, R. Grzeszczuk, R. Szeliski,M. F. Cohen The Lumigraph, SIGGRAPH,
More informationSpatial Data Structures
CSCI 480 Computer Graphics Lecture 7 Spatial Data Structures Hierarchical Bounding Volumes Regular Grids BSP Trees [Ch. 0.] March 8, 0 Jernej Barbic University of Southern California http://www-bcf.usc.edu/~jbarbic/cs480-s/
More informationTopics and things to know about them:
Practice Final CMSC 427 Distributed Tuesday, December 11, 2007 Review Session, Monday, December 17, 5:00pm, 4424 AV Williams Final: 10:30 AM Wednesday, December 19, 2007 General Guidelines: The final will
More informationGraphics for VEs. Ruth Aylett
Graphics for VEs Ruth Aylett Overview VE Software Graphics for VEs The graphics pipeline Projections Lighting Shading VR software Two main types of software used: off-line authoring or modelling packages
More informationform are graphed in Cartesian coordinates, and are graphed in Cartesian coordinates.
Plot 3D Introduction Plot 3D graphs objects in three dimensions. It has five basic modes: 1. Cartesian mode, where surfaces defined by equations of the form are graphed in Cartesian coordinates, 2. cylindrical
More informationAnnouncements. Mosaics. Image Mosaics. How to do it? Basic Procedure Take a sequence of images from the same position =
Announcements Project 2 out today panorama signup help session at end of class Today mosaic recap blending Mosaics Full screen panoramas (cubic): http://www.panoramas.dk/ Mars: http://www.panoramas.dk/fullscreen3/f2_mars97.html
More informationSpatial Data Structures
Spatial Data Structures Hierarchical Bounding Volumes Regular Grids Octrees BSP Trees Constructive Solid Geometry (CSG) [Angel 9.10] Outline Ray tracing review what rays matter? Ray tracing speedup faster
More informationImage-Bas ed R endering Using Image Warping. Conventional 3-D Graphics
Image-Bas ed R endering Using Image Warping Leonard McMillan LCS Computer Graphics Group MIT Conventional 3-D Graphics Simulation Computer Vis ion Analys is T he Image-Bas ed Approach T rans formation
More information3D Modeling using multiple images Exam January 2008
3D Modeling using multiple images Exam January 2008 All documents are allowed. Answers should be justified. The different sections below are independant. 1 3D Reconstruction A Robust Approche Consider
More informationRendering Grass Terrains in Real-Time with Dynamic Lighting. Kévin Boulanger, Sumanta Pattanaik, Kadi Bouatouch August 1st 2006
Rendering Grass Terrains in Real-Time with Dynamic Lighting Kévin Boulanger, Sumanta Pattanaik, Kadi Bouatouch August 1st 2006 Goal Rendering millions of grass blades, at any distance, in real-time, with:
More informationImage stitching. Digital Visual Effects Yung-Yu Chuang. with slides by Richard Szeliski, Steve Seitz, Matthew Brown and Vaclav Hlavac
Image stitching Digital Visual Effects Yung-Yu Chuang with slides by Richard Szeliski, Steve Seitz, Matthew Brown and Vaclav Hlavac Image stitching Stitching = alignment + blending geometrical registration
More informationCHAPTER 1 Graphics Systems and Models 3
?????? 1 CHAPTER 1 Graphics Systems and Models 3 1.1 Applications of Computer Graphics 4 1.1.1 Display of Information............. 4 1.1.2 Design.................... 5 1.1.3 Simulation and Animation...........
More informationVolume Rendering. Computer Animation and Visualisation Lecture 9. Taku Komura. Institute for Perception, Action & Behaviour School of Informatics
Volume Rendering Computer Animation and Visualisation Lecture 9 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Volume Rendering 1 Volume Data Usually, a data uniformly distributed
More informationIMAGE-BASED RENDERING TECHNIQUES FOR APPLICATION IN VIRTUAL ENVIRONMENTS
IMAGE-BASED RENDERING TECHNIQUES FOR APPLICATION IN VIRTUAL ENVIRONMENTS Xiaoyong Sun A Thesis submitted to the Faculty of Graduate and Postdoctoral Studies in partial fulfillment of the requirements for
More informationCapturing and View-Dependent Rendering of Billboard Models
Capturing and View-Dependent Rendering of Billboard Models Oliver Le, Anusheel Bhushan, Pablo Diaz-Gutierrez and M. Gopi Computer Graphics Lab University of California, Irvine Abstract. In this paper,
More informationMulti-view stereo. Many slides adapted from S. Seitz
Multi-view stereo Many slides adapted from S. Seitz Beyond two-view stereo The third eye can be used for verification Multiple-baseline stereo Pick a reference image, and slide the corresponding window
More informationEfficient Rendering of Glossy Reflection Using Graphics Hardware
Efficient Rendering of Glossy Reflection Using Graphics Hardware Yoshinori Dobashi Yuki Yamada Tsuyoshi Yamamoto Hokkaido University Kita-ku Kita 14, Nishi 9, Sapporo 060-0814, Japan Phone: +81.11.706.6530,
More informationHigh-Quality Interactive Lumigraph Rendering Through Warping
High-Quality Interactive Lumigraph Rendering Through Warping Hartmut Schirmacher, Wolfgang Heidrich, and Hans-Peter Seidel Max-Planck-Institut für Informatik Saarbrücken, Germany http://www.mpi-sb.mpg.de
More informationCMSC427: Computer Graphics Lecture Notes Last update: November 21, 2014
CMSC427: Computer Graphics Lecture Notes Last update: November 21, 2014 TA: Josh Bradley 1 Linear Algebra Review 1.1 Vector Multiplication Suppose we have a vector a = [ x a y a ] T z a. Then for some
More informationMore Mosaic Madness. CS194: Image Manipulation & Computational Photography. Steve Seitz and Rick Szeliski. Jeffrey Martin (jeffrey-martin.
More Mosaic Madness Jeffrey Martin (jeffrey-martin.com) CS194: Image Manipulation & Computational Photography with a lot of slides stolen from Alexei Efros, UC Berkeley, Fall 2018 Steve Seitz and Rick
More informationGame Architecture. 2/19/16: Rasterization
Game Architecture 2/19/16: Rasterization Viewing To render a scene, need to know Where am I and What am I looking at The view transform is the matrix that does this Maps a standard view space into world
More informationComputer Science 426 Midterm 3/11/04, 1:30PM-2:50PM
NAME: Login name: Computer Science 46 Midterm 3//4, :3PM-:5PM This test is 5 questions, of equal weight. Do all of your work on these pages (use the back for scratch space), giving the answer in the space
More informationGraphics for VEs. Ruth Aylett
Graphics for VEs Ruth Aylett Overview VE Software Graphics for VEs The graphics pipeline Projections Lighting Shading Runtime VR systems Two major parts: initialisation and update loop. Initialisation
More informationCOMP environment mapping Mar. 12, r = 2n(n v) v
Rendering mirror surfaces The next texture mapping method assumes we have a mirror surface, or at least a reflectance function that contains a mirror component. Examples might be a car window or hood,
More informationImage-Based Rendering using Image-Warping Motivation and Background
Image-Based Rendering using Image-Warping Motivation and Background Leonard McMillan LCS Computer Graphics Group MIT The field of three-dimensional computer graphics has long focused on the problem of
More informationIntroduction to 3D Concepts
PART I Introduction to 3D Concepts Chapter 1 Scene... 3 Chapter 2 Rendering: OpenGL (OGL) and Adobe Ray Tracer (ART)...19 1 CHAPTER 1 Scene s0010 1.1. The 3D Scene p0010 A typical 3D scene has several
More informationPAPER Three-Dimensional Scene Walkthrough System Using Multiple Acentric Panorama View (APV) Technique
IEICE TRANS. INF. & SYST., VOL.E86 D, NO.1 JANUARY 2003 117 PAPER Three-Dimensional Scene Walkthrough System Using Multiple Acentric Panorama View (APV) Technique Ping-Hsien LIN and Tong-Yee LEE, Nonmembers
More information2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into
2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into the viewport of the current application window. A pixel
More informationNext-Generation Graphics on Larrabee. Tim Foley Intel Corp
Next-Generation Graphics on Larrabee Tim Foley Intel Corp Motivation The killer app for GPGPU is graphics We ve seen Abstract models for parallel programming How those models map efficiently to Larrabee
More informationMosaics, Plenoptic Function, and Light Field Rendering. Last Lecture
Mosaics, Plenoptic Function, and Light Field Rendering Topics in Image-ased Modeling and Rendering CSE291 J00 Lecture 3 Last Lecture Camera Models Pinhole perspective Affine/Orthographic models Homogeneous
More informationLi-wei He. Michael F. Cohen. Microsoft Research. March 19, Technical Report MSTR-TR Advanced Technology Division.
Rendering Layered Depth Images Steven J. Gortler Harvard University Li-wei He Stanford University Michael F. Cohen Microsoft Research March 9, 997 Technical Report MSTR-TR-97-9 Microsoft Research Advanced
More informationCS 352: Computer Graphics. Hierarchical Graphics, Modeling, And Animation
CS 352: Computer Graphics Hierarchical Graphics, Modeling, And Animation Chapter 9-2 Overview Modeling Animation Data structures for interactive graphics CSG-tree BSP-tree Quadtrees and Octrees Visibility
More informationSculpting 3D Models. Glossary
A Array An array clones copies of an object in a pattern, such as in rows and columns, or in a circle. Each object in an array can be transformed individually. Array Flyout Array flyout is available in
More informationCSE 167: Lecture #5: Rasterization. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2012
CSE 167: Introduction to Computer Graphics Lecture #5: Rasterization Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2012 Announcements Homework project #2 due this Friday, October
More informationTexture Mapping II. Light maps Environment Maps Projective Textures Bump Maps Displacement Maps Solid Textures Mipmaps Shadows 1. 7.
Texture Mapping II Light maps Environment Maps Projective Textures Bump Maps Displacement Maps Solid Textures Mipmaps Shadows 1 Light Maps Simulates the effect of a local light source + = Can be pre-computed
More informationImage Warping and Mosacing
Image Warping and Mosacing 15-463: Rendering and Image Processing Alexei Efros with a lot of slides stolen from Steve Seitz and Rick Szeliski Today Mosacs Image Warping Homographies Programming Assignment
More informationCSL 859: Advanced Computer Graphics. Dept of Computer Sc. & Engg. IIT Delhi
CSL 859: Advanced Computer Graphics Dept of Computer Sc. & Engg. IIT Delhi Point Based Representation Point sampling of Surface Mesh construction, or Mesh-less Often come from laser scanning Or even natural
More informationrecording plane (U V) image plane (S T)
IEEE ONFERENE PROEEDINGS: INTERNATIONAL ONFERENE ON IMAGE PROESSING (IIP-99) KOBE, JAPAN, VOL 3, PP 334-338, OTOBER 1999 HIERARHIAL ODING OF LIGHT FIELDS WITH DISPARITY MAPS Marcus Magnor and Bernd Girod
More informationScene Modeling for a Single View
Scene Modeling for a Single View René MAGRITTE Portrait d'edward James with a lot of slides stolen from Steve Seitz and David Brogan, Breaking out of 2D now we are ready to break out of 2D And enter the
More information