Interactive Light Field Editing and Compositing

Billy Chen*, Daniel Horn*, Gernot Ziegler†, Hendrik P. A. Lensch*
*Stanford University, †MPI Informatik
{billyc danielrh lensch}@graphics.stanford.edu, gziegler@mpi-sb.mpg.de

Figure 1: Our system enables a user to interactively edit and composite light fields. This figure shows some of the operations that can be accomplished by the system. In (a) we create a light field by compositing multiple copies of a giraffe and a flower light field; each copy is twisted or turned differently to create a visually interesting scene. (b) is an image where objects have been selectively focused or defocused to create an artistic effect. Even though objects are defocused, occlusions between them are maintained and defocus blur blends with objects hidden behind occluders; this effect cannot be accomplished with a single image of each object. Image (c) shows light fields of refractive objects (glass balls) composited with a background light field. Because the background is a light field, the refracted image exhibits parallax and visibility change in different views. Note that we have chosen to focus on the virtual image in the refracted glass ball; consequently, the real image of the background is defocused. In (d) we approximate soft shadows cast from the giraffe light field onto the flower light field. Notice the penumbra of the shadow cast on the ground due to the light, and notice that the shadow interacts with the flower believably.

Abstract

Light fields can be used to represent an object's appearance with a high degree of realism. However, unlike their geometric counterparts, these image-based representations lack user control for editing and manipulating them. We present a system that allows a user to interactively edit and composite light fields. We introduce five basic operators, alpha-compositing, apply, multiplication, warping, and projection, which may be used in combination. To describe how multiple operators and light fields are combined, we introduce a compositing language similar to those used for combining shader programs. To enable real-time compositing, we have developed a framework for programmable graphics hardware that implements the five basic operators and provides mechanisms for users to develop combinations of these. We built an interactive system using this framework and demonstrate its use for editing and authoring new light fields.

Keywords: light fields, image-based modeling and rendering, 3D photography

1 Introduction

A light field [Levoy and Hanrahan 1996; Gortler et al. 1996] is a four-dimensional function mapping rays in free space to radiance. Using light fields to represent scene information has become a popular technique for rendering photo-realistic images. Thus, the need arose to manipulate these datasets as flexibly as images and 3D models. In this paper, we present a system that allows a user to interactively edit and composite 4D light fields. Interactive rendering rates are achieved by exploiting programmable graphics hardware. Users can perform light field insertion, deformation and removal, refocusing, rapid prototyping of photo-realistic scenes, or simulation of refractive effects in our framework. To support these capabilities, we introduce five basic operators: alpha-compositing, apply, multiplication, warping, and projection. In combination, these operators enable a large number of ways to edit a light field.
The operators are designed to be simple so that they can be implemented as basic functions on programmable graphics hardware. Complex operators can be created by combining several basic operators together. To describe how this combining occurs, we introduce a compositing language, similar to those used to describe shaders [Cook 1984; Proudfoot et al. 2001] or pixel streams [Perlin 1985]. The language provides a structured foundation to composite light fields. This language also serves as an interface between the user and the graphics hardware. We have developed a simple hardware-accelerated framework for representing light fields, their operators, and the data flow. The framework provides a high-level abstraction layer for manipulating light fields, rather than working directly at the pixel level. We use this framework to interactively edit and composite light fields.

The rest of the paper is organized as follows. First, we describe related work in the areas of compositing and image-based editing. Second, we present the basic operators and how to combine them. Third, we show how these operators can be implemented on programmable graphics hardware to enable interactive editing. Finally, we describe four useful editing operations built from the basic operators and demonstrate their use on a variety of light fields.

2 Related work

The idea of compositing images was originally used in film and video production to simulate effects like placing an image of an actor in front of a different background [Fielding 1972]. In 1984, Porter and Duff introduced the alpha channel in order to composite digital images [Porter and Duff 1984]. Recently, Shum and Sun extended 2D compositing to 4D light fields [Shum and Sun 2004]. They built a system that allows a user to manually segment a light field into layers; the system composites these layers together using a technique called coherence matting. Our framework offers a similar but simpler compositing operator amongst a larger gamut of operators. Our focus is on simple operators that can be combined easily on the graphics hardware.

Compositing is one way to edit images. Many researchers have investigated other editing operations for either a single image or up to a handful of images. Oh et al. built a system that takes a single image as input and allows the user to augment it with depth information [Oh et al. 2001]. Subsequent editing operations, like cloning and filtering, can be performed with respect to scene depth. Barsky has used a similar image representation to simulate how a scene would be observed through a human optical system [Barsky 2004]. To handle multiple images of a scene, Seitz and Kutulakos developed a system that builds a voxel representation [Seitz and Kutulakos 1998]; 2D editing operations, like scissoring or morphing, are propagated to the other images using the voxel representation. In our system, we use sufficiently sampled light fields as an alternative representation of the scene, and our operators manipulate each light field individually.

For editing light fields, several researchers have built systems that take light fields as input, perform a specific editing task, and return a new light field. Two such systems allow for warping light fields [Zhang et al. 2002; Chen et al. 2005]. In the former, two input light fields are combined to produce a morphed version; in the latter, an animator can interactively deform an object represented by a light field. Our system offers a general editing framework where warping is one of many operations.

The idea of combining operators to produce novel ones is inspired by early work on flexible shading models [Cook 1984]. Cook uses a shading tree to represent a wide range of shading characteristics. In our paper, we describe a language that allows light fields to be combined.

3 Basic light field operators

A light field is a four-dimensional function that takes a ray as input and returns the radiance along that ray. We augment the radiance at each ray with a scalar alpha quantity that represents the opacity of the light field at that ray, and call the result the RGBA color. Given this representation of a light field, we now define five basic operators that enable light field compositing and editing. The first four operators composite and manipulate light fields; the last operator computes 2D projections of light fields.
3.1 Basic operators

Notation and conventions. Each of the following operators takes one or two light fields as input and returns a single light field; the exception is the projection operator, which returns a 2D image. When operating on colors, we perform the same operation component-wise on each channel unless noted. To obtain the scalar opacity value from a color c we write c[α].

Alpha compositing. The alpha compositing operator enables correct depth compositing of multiple light fields in a scene. It can perform all twelve of Porter and Duff's [1984] compositing operations on 4D light fields.¹ Similar to their formulation, we assume colors are premultiplied by the alpha channel. Given a ray r, we define the compositing operator C as

    C_{F_A, F_B}(L_A, L_B)(r) = L_A(r) F_A(L_B(r)) + L_B(r) F_B(L_A(r))    (1)

where F_A(c) and F_B(c) are functions that take a color and return a scalar. As a shorthand, when F_A(c) or F_B(c) is defined as

    F(c) = 0,    F(c) = 1,    F(c) = c[α],    or    F(c) = 1 − c[α],

we name these functions 0, 1, α, and 1 − α, respectively. Thus the traditional over, addition, and out operators can be represented as follows:

    L_A over L_B = C_{1, 1−α}(L_A, L_B)
    L_A + L_B    = C_{1, 1}(L_A, L_B)
    L_A out L_B  = C_{1−α, 0}(L_A, L_B)

Other binary operators: apply and multiplication. While we can use the compositing operator above to define addition by setting the blending fractions F_A and F_B to 1, compositing does not support arbitrary binary operations on colors. We therefore define the operator apply, which takes two light fields and a binary function mapping two colors to a color; apply returns a new light field where the binary function has been applied to the colors of corresponding rays in both light fields:

    A_F(L_A, L_B)(r) = F(L_A(r), L_B(r))    (2)

We can then define light field multiplication by passing a binary function, color multiply, to apply, where color multiply takes two colors and performs a component-wise multiplication in each color channel, including alpha:

    L_A ⊗ L_B = A_{color multiply}(L_A, L_B)    (3)

¹Our compositing operators can also be applied to 2D images, since an image can be expressed as a simple 4D light field.
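To make Equations 1 through 3 concrete, the following is a minimal GLSL sketch in the spirit of the paper's fragment-program operator functions; the function names and the integer encoding of the blending fractions are our own illustration, not the paper's actual source.

    // Blending fractions from Eq. 1, selected by an integer code:
    // 0 -> F(c) = 0, 1 -> F(c) = 1, 2 -> F(c) = c[alpha], 3 -> F(c) = 1 - c[alpha]
    float blendFraction(int f, vec4 c) {
        if (f == 0) return 0.0;
        if (f == 1) return 1.0;
        if (f == 2) return c.a;
        return 1.0 - c.a;
    }

    // Eq. 1, with colors premultiplied by alpha:
    // C_{FA,FB}(L_A, L_B)(r) = L_A(r) FA(L_B(r)) + L_B(r) FB(L_A(r))
    vec4 composite(int fA, int fB, vec4 la, vec4 lb) {
        return la * blendFraction(fA, lb) + lb * blendFraction(fB, la);
    }

    vec4 over(vec4 la, vec4 lb) { return composite(1, 3, la, lb); } // C_{1, 1-alpha}
    vec4 add (vec4 la, vec4 lb) { return composite(1, 1, la, lb); } // C_{1, 1}
    vec4 out_(vec4 la, vec4 lb) { return composite(3, 0, la, lb); } // C_{1-alpha, 0}; 'out' is reserved in GLSL

    // Eqs. 2-3: apply with a component-wise color multiply realizes light
    // field multiplication on the colors of a pair of corresponding rays.
    vec4 colorMultiply(vec4 la, vec4 lb) { return la * lb; }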

Warping. The warping operator deforms a light field L by transforming the directions of input rays. It takes two inputs, the original light field L and any warping function F that maps input rays to output rays:

    W(L, F)(r) = L(F(r))    (4)

Projection. The task of the projection operator is to map a 4D light field to a 2D image for viewing. The projection operator selects rays from the input light field and organizes them into an image. In our system we use a canonical pinhole camera model centered at (0,0,0), facing towards negative z, with a field of view of 90 degrees and an aspect ratio of 1. Given that L is the space of all 4D light field functions and I is the space of all images,

    P : L → I    (5)

Additionally, we provide an abstraction over our canonical P that takes the location of an arbitrary pinhole and the extent of an image plane and creates a projection from that new camera location. It accomplishes this by applying a warping function to the input light field to orient it to our underlying canonical projection operation. In our paper, we write this abstraction P_{h,c,r,u}: a 4D-to-2D pinhole projection at location h onto an image plane parameterized by its center, right, and up vectors (c, r, u).

3.2 A light field compositing language

In order to describe how operators interact, we introduce a compositing language that combines operators into an expression similar to code in a shading language. For example, the expression

    P_{h,c,r,u}(L_windowframe over (L_glass ⊗ (L_flower over L_table)))    (6)

describes a pinhole image of a light field created by alpha-compositing a light field of a flower with a table and viewing it through a window frame containing tinted glass. The pinhole camera is located at h with an image plane whose center, right, and up vectors are c, r, and u, respectively.

4 Implementation on the GPU architecture

In order to keep the compositing process interactive, we make extensive use of the compute power and bandwidth available in modern programmable graphics hardware. Our framework provides a general interface for loading and manipulating 4D light fields on the GPU. The framework consists of two parts: the data representation for light fields, and the functions that represent light field operators.

To represent a light field, we use the two-plane parameterization [Levoy and Hanrahan 1996]. Hence, the light field is an array of images, where (u,v) describes the 2D location of a pinhole camera in the UV-plane and the (s,t) coordinates describe the location of a pixel in the ST-plane. All camera images are aligned to the ST-plane. A ray r is parameterized by its two intersection points with the UV- and ST-planes. Because the light field is discretized, a color L(r) and its alpha are interpolated from nearby samples using quadrilinear interpolation.

A light field using this representation is stored as a 3D texture on the graphics hardware. The UV axes are vectorized into the z axis of the 3D texture (many graphics cards limit the z-dimension of a 3D texture to 512, so we tile multiple camera images in a single xy-planar slice). The ST axes map directly to each xy-planar slice. In other words, the z axis selects a camera in (u,v) coordinates, and the xy-plane is the image plane in (s,t) coordinates. The 3D texture is compressed using S3 texture compression [Iourcha et al. 1999], which provides a 4:1 compression ratio for RGBA data. The compression is key to being able to store multiple light fields in texture memory.
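To make this layout concrete, here is a minimal GLSL sketch of the lookup L(r), assuming an nu x nv camera grid vectorized into z and ignoring the tiling workaround; the uniform and function names are ours, not the paper's source, and border clamping is only hinted at.

    uniform sampler3D lf;   // one light field, stored as a 3D texture
    uniform float nu, nv;   // camera grid resolution on the UV-plane

    // L(r) for a ray r given by its intersections uv and st with the two
    // planes, both in [0,1]^2. The hardware bilinearly filters (s,t) inside
    // a slice; blending the four nearest cameras ourselves makes the
    // overall reconstruction quadrilinear.
    vec4 lookupLF(vec2 uv, vec2 st) {
        vec2 g  = uv * vec2(nu - 1.0, nv - 1.0);           // continuous camera coords
        vec2 g0 = min(floor(g), vec2(nu - 2.0, nv - 2.0)); // clamp at the grid border
        vec2 f  = g - g0;                                  // camera blend weights
        float nz = nu * nv;                                // z slices, one per camera
        vec4 c00 = texture(lf, vec3(st, (g0.x       + g0.y         * nu + 0.5) / nz));
        vec4 c10 = texture(lf, vec3(st, (g0.x + 1.0 + g0.y         * nu + 0.5) / nz));
        vec4 c01 = texture(lf, vec3(st, (g0.x       + (g0.y + 1.0) * nu + 0.5) / nz));
        vec4 c11 = texture(lf, vec3(st, (g0.x + 1.0 + (g0.y + 1.0) * nu + 0.5) / nz));
        return mix(mix(c00, c10, f.x), mix(c01, c11, f.x), f.y);
    }

In this sketch, the warping operator of Equation 4 amounts to transforming (uv, st) by F before calling lookupLF.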
Multiple light fields are each stored as separate 3D textures, each occupying one of the 16 available texture units. Multiple operator expressions can be associated with a single texture unit, allowing editing of multiple instances of a single light field without using additional texture units or memory.

The basic light field operators are represented as functions in a fragment program written in GLSL. To edit and composite light fields during runtime, the user specifies a GLSL expression using the defined basic functions. The user-definable functions for apply and composite are written directly in GLSL as functions mapping one or two colors to another color or scalar. The resulting fragment program is passed the virtual view P_{h,c,r,u} and the UV- and ST-plane locations for each light field. This fragment program is compiled and applied to the textures.

When the fragment program begins, the virtual view's image plane (c, r, u) is discretized into display pixels. As the fragment program is executed per display pixel, the pixel is converted into a ray using the location of the virtual view. This ray is passed into the user-supplied compositing expression to determine its RGBA color. A 2D image is formed when the fragment program has executed for all the display pixels.
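A skeleton of this per-pixel driver might look as follows; it is a sketch under our assumptions (pixel coordinates arriving in [-1,1]^2, plane depths zUV and zST as uniforms, and the lookupLF helper sketched above), with the splice point for the user's expression marked.

    #version 130

    uniform vec3  eye;                     // virtual pinhole position h
    uniform vec3  planeC, planeR, planeU;  // image plane center / right / up
    uniform float zUV, zST;                // depths of the UV- and ST-planes
    in  vec2 pixel;                        // display pixel, assumed in [-1,1]^2
    out vec4 fragColor;

    vec4 lookupLF(vec2 uv, vec2 st);       // the lookup sketched above

    // (x,y) where the ray (o, d) pierces the plane z = zPlane.
    vec2 hitPlane(vec3 o, vec3 d, float zPlane) {
        float t = (zPlane - o.z) / d.z;
        return o.xy + t * d.xy;            // remap to [0,1]^2 texture space omitted
    }

    void main() {
        // Convert the display pixel into a ray through the virtual view.
        vec3 target = planeC + pixel.x * planeR + pixel.y * planeU;
        vec3 dir    = normalize(target - eye);
        vec2 uv = hitPlane(eye, dir, zUV);
        vec2 st = hitPlane(eye, dir, zST);
        // The user-supplied compositing expression is spliced in here, e.g.
        //   over(lfA(uv, st), colorMultiply(lfB(uv, st), lfC(uv, st)));
        fragColor = lookupLF(uv, st);      // simplest expression: one light field
    }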

5 Creating novel editing operators

We now describe four light field editing operators that are combinations of the basic operators described in Section 3. These operators are deformation, synthetic aperture focusing, refraction, and shadow casting.

5.1 Light field deformation

Light field deformation consists of supplying a warping function for each light field and compositing the resulting light fields together. Chen et al. [2005] used free-form deformation to specify the warping function. In our system the user specifies a ray-to-ray warp as a function in the fragment shader. Figure 1a illustrates one example of a composition where two captured light fields were used to create a scene where a giraffe is surrounded by a grove of flowers.

Figure 2: Compositing deformed light fields. (a) is an image from a captured light field of a toy flower. (b) shows an image of the toy flower deformed by the warping operator. (c) shows a simple composited scene using the compositing expression.

To create this scene, let us begin with the captured datasets. Figure 2a is an image from a captured light field of a toy flower, after matting the flower from the background. The flower was acquired in front of a known background and we used a Bayesian matting technique [Chuang et al. 2001] to extract mattes for all images of the light field. The mattes are stored in the alpha component of the light field. Let us call this light field L_flower and place it at the origin of a coordinate system where the xz-plane is the ground plane and the y axis is the up direction in the image.

We now define a twisting function on this light field. Each ray in free space is parameterized by two points, its intersections with the UV- and ST-planes of L_flower. We apply a rotation about the y axis to these two points by multiplying each point with the appropriate y-rotation matrix. The amount of rotation depends on the y (or height) component of each point: points with greater y coordinates are rotated more. This creates a twisting effect on the light field, and we call this function F_twist (sketched at the end of this section). We then apply F_twist to L_flower using the warping operator:

    L_twisted = W(L_flower, F_twist)

Figure 2b is one image from the light field L_twisted. Applying a similar warp to a light field of a toy giraffe, L_giraffe, and compositing it with L_twisted produces Figure 2c, which uses the compositing expression:

    W(L_giraffe, F_twist) over L_twisted

Figure 5 in the results section shows a more complex example for correcting the poses of individuals in a single light field.
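Here is a minimal GLSL sketch of F_twist, under the assumption that a ray is carried as its two 3D plane-intersection points; twistRate is an illustrative parameter, not from the paper.

    uniform float twistRate;  // rotation in radians per unit of height (assumed)

    vec3 rotateY(vec3 p, float angle) {
        float c = cos(angle), s = sin(angle);
        return vec3(c * p.x + s * p.z, p.y, -s * p.x + c * p.z);
    }

    // F_twist: rotate both ray points about the y axis by an angle that
    // grows with each point's height, so taller parts twist further.
    void twistRay(inout vec3 pUV, inout vec3 pST) {
        pUV = rotateY(pUV, twistRate * pUV.y);
        pST = rotateY(pST, twistRate * pST.y);
    }

The warped points are then converted back to (uv, st) coordinates and looked up, realizing W(L_flower, F_twist).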
5.2 Synthetic aperture focusing

We now show how to define a synthetic aperture focusing operator from the basic operators to perform focusing within a light field. This form of focusing from light fields has been investigated by several researchers [Isaksen et al. 2000; Vaish et al. 2004; Levoy and Hanrahan 1996]. Figure 6b illustrates an example where the user has focused on the rear plane of flowers. We first show how this focusing operator can be represented in our framework; then we extend it to create a new form of imaging that creates views containing multiple planes of focus.

In a single-lens optical system, a pixel in the image plane is computed by taking the sum over the colors of all rays that pass through the lens and are incident to that pixel. We transform this image formation process into a summation of pinhole images to utilize the pinhole projection operator in Equation 5. Figure 3 illustrates how this transformation is performed. The pinhole cameras have centers that lie in the lens aperture; their image planes coincide with a common focal plane centered at f_c with right and up vectors f_r and f_u. When an object lies off the focal plane, its pinhole images do not align, and summing them creates blur due to defocus. Therefore, to create an image with a shallow depth of field, we sum over multiple pinhole images, with each pinhole located at a sample point in the lens and each image plane coinciding with the focal plane (f_c, f_r, f_u).

Figure 3: Creating a focused image by summing pinhole images. The lens has been discretized to several sample positions. The focused image is computed by adding images produced by pinhole cameras lying in the lens aperture. Two pinhole positions, i and j, are shown as black points within the lens; their fields of view are shown as black lines, and the dashed lines show the distortion of each field of view through the lens. The image plane of each pinhole camera coincides with the focal plane.

The corresponding compositing expression is:

    Σ_{i∈S} P_{i, f_c, f_r, f_u}(L)    (7)

where S is the set of discrete pinhole positions within the lens aperture, and P_{i,c,r,u} is a projection operator that renders a view of the light field from pinhole position i onto an image plane with center c, right vector r, and up vector u. We call this operator the focusing operator (a sketch follows at the end of this section).

Figure 6b illustrates focusing within a light field. This focusing operator creates an image where objects at a single plane in the scene are in focus. Sometimes a user may want to focus on objects at multiple depths. We call this image a multi-focal plane image because the final image has more than one depth in focus; this has uses in sports photography and movies. In a multi-focal plane image, each object, represented by a separate light field, has its own focal plane. Therefore, for each object we can tune the amount of defocus it has simply by moving its associated focal plane. In our formulation, all the light fields are viewed through a common lens aperture. To create a multi-focal plane image, we step through each sample point on the lens as was done for a single plane of focus. However, for each sample point we create multiple pinhole images, one for each light field; the image plane for each pinhole projection is aligned to the focal plane for that light field, and a 2D image is rendered from that light field. The pinhole images are then composited using the over operator to form a single image per sample point. We then sum all sample-point images to form the multi-focal plane image. For two light fields with two focal planes, the expression is:

    Σ_{j∈S} ( P_{j, c_1, f_r, f_u}(L_1) over P_{j, c_2, f_r, f_u}(L_2) )

where c_1 and c_2 are the locations of the centers of the focal planes for light fields L_1 and L_2, respectively. Figures 6c and d illustrate the multi-focal plane image.
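A GLSL sketch of the focusing operator of Equation 7, reusing the helpers from the Section 4 sketches; the fixed-size sample array and the normalization by |S| (which keeps radiance in display range) are our assumptions.

    vec4 lookupLF(vec2 uv, vec2 st);              // helpers from the
    vec2 hitPlane(vec3 o, vec3 d, float zPlane);  // Section 4 sketches
    uniform float zUV, zST;                       // light field plane depths

    uniform int  nSamples;        // |S|, number of lens samples (<= 64 here)
    uniform vec3 lensSample[64];  // pinhole positions i within the aperture
    uniform vec3 fC, fR, fU;      // focal plane center, right, and up

    vec4 focusedPixel(vec2 pixel) {
        vec3 p   = fC + pixel.x * fR + pixel.y * fU; // pixel's point on the focal plane
        vec4 sum = vec4(0.0);
        for (int i = 0; i < nSamples; ++i) {
            vec3 o = lensSample[i];
            vec3 d = normalize(p - o);    // ray from this lens sample through p
            sum += lookupLF(hitPlane(o, d, zUV), hitPlane(o, d, zST));
        }
        return sum / float(nSamples);     // Sigma_{i in S} P_{i,fc,fr,fu}(L), averaged
    }

For the multi-focal variant, the loop body would render one such sample per light field at its own focal plane and combine them with over before accumulating.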

5.3 Refraction

We now define an operator that simulates objects with transmissive properties. This allows the object to refract other light fields in the scene. Figure 1c illustrates an example of two glass balls refracting two background light fields.

Figure 4: Refraction and reflection. (a) shows an image of L_calibration refracted through a glass ball light field. Notice that the virtual image is flipped, which is the correct effect for a solid glass ball. The artifacts in the virtual image are due to low sampling of the warping function F_refraction; this could be remedied by using a higher sampling or by defining an analytic description of the ray warp. In (b), we add a reflection light field which contains a specular highlight. Both the highlight and the refraction move with the observer.

A refractive object warps the incoming rays to different outgoing directions. This warp can be stored as a function, or it can be precomputed and stored as a 4D lookup table [Heidrich et al. 1999; Yu et al. 2005]. We call this warping function F_refraction. To create the refraction effect, we simply warp any light field behind the object with F_refraction. For example, Figure 4 shows how a calibration light field L_calibration is refracted through a glass ball light field. The compositing expression is:

    W(L_calibration, F_refraction)

F_refraction maps incident rays through the refracting sphere. We precompute F_refraction by ray tracing through a synthetic sphere and storing the ray-to-ray mapping as a 4D lookup table.

Notice that Figure 4a does not appear realistic because it is missing the reflection component. To remedy this problem, we can also warp the background light field by a reflection warp, which maps input rays to reflected rays (as opposed to refracted ones). In this case, the reflection is a specular highlight. The final compositing expression is:

    A(W(L_calibration, F_refraction), W(L_highlight, F_reflection))

Figure 4b shows an image from this composited light field.

5.4 Shadows

We now define a shadowing operator that approximates shadow casting from and onto light fields. Figure 8a illustrates the results of this operator.

To create sharp shadows, we introduce a virtual point light source into the scene. Suppose our scene contains two light fields, L_object and L_occluder, and L_occluder lies between the virtual light and L_object. We now compute which colors L_object(r) should be shadowed by L_occluder. First we warp the occluding light field L_occluder so that its rays originate from the virtual light source and impact L_object; we call this warped light field L_light. L_light can be computed as follows. For each ray r in free space, consider the 3D position of its intersection with L_object's ST-plane. We form a shadow ray which goes from this 3D position to the virtual light source. Let us define a ray-to-ray mapping called F_object,light which maps a ray r to its corresponding shadow ray. Then it follows that L_light = W(L_occluder, F_object,light). Thus, L_light(r)[α] exactly describes how much ray r's intersection with L_object's ST-plane is in shadow. To compute the effect of the shadow on L_object, the color returned by L_object(r) is multiplied by 1 − L_light(r)[α]. This multiplication is the out operator described in Section 3. The final compositing expression that computes a shadow cast from L_occluder onto L_object is (sketched below):

    L_object out W(L_occluder, F_object,light)

Shadows cast from multiple light fields can be simulated by compositing each occluding light field together with the over operator before performing the warping.
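Per ray, the shadow expression reduces to a single attenuation; a minimal GLSL sketch, assuming the warped occluder color W(L_occluder, F_object,light)(r) and the object color L_object(r) have already been looked up for the ray:

    // L_object out L_light: attenuate the object color by how strongly the
    // shadow ray toward the virtual light is blocked, i.e. 1 - L_light(r)[alpha].
    vec4 shadowedColor(vec4 lObject, vec4 lLight) {
        return lObject * (1.0 - lLight.a);
    }

For soft shadows, the masks from several virtual point lights are summed before attenuating (Figure 8b).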
Multiple point light sources can be added by defining an F_object,light warp for each light source, warping the occluding light field, and prepending the warped light fields to the above expression. Figures 1d and 8 illustrate some examples of the shadowing operator.

6 Results

First we demonstrate how the deformation operator is useful for editing light fields. Figure 5 shows an image from a light field taken of two people. Unfortunately, the people were not looking straight ahead during the acquisition. To correct this problem, we define a new function F_look which linearly interpolates two F_twist operators as defined in Section 5.1. One F_twist operator is specified for each head, and is weighted by proximity to that head. One view of the resulting deformed light field is shown in Figure 5b. Notice that the people are now looking straight at the camera. The resulting image shows changes in visibility (cf. the previously hidden ear of the left figure), which would be impossible to achieve with an image editing approach.

Second, we show the results of having multiple focal planes through a light field. This operator lets a user emphasize multiple depths in a composited scene by focusing on them, which is useful when there are several interesting objects at different depths but unwanted objects between them. Figures 6c and d illustrate these multi-focal plane images. The toy giraffe in the middle is defocused, and the defocus blur correctly mixes with the colors in the background light field.

Third, we demonstrate how the refraction operator can be used to simulate refraction through a light field of a glass ball. Figure 7 shows images of the glass ball light field in front of several background ones. Because the backgrounds are also light fields, the refracted image exhibits the proper parallax due to depth, which can be seen in the accompanying video. The refracted image is also upside-down, which is consistent with the optical properties of a glass sphere. In addition, our system allows the refraction and focusing operators to be combined, which lets a user focus in scenes with refractive objects. In Figure 7a, the user focuses on the virtual image of the Buddha light field; notice that the background is defocused. In Figure 7b, the user refocuses on the real image of the Buddha, and the virtual image and specular highlight are defocused.

Figure 5: Using deformation to correct the poses of individuals in a light field. (a) is an image from a captured light field of 12 x 5 cameras, each 512 x 512 in resolution. During acquisition the individuals were looking in the wrong direction; no single image in the light field contains a view where both individuals were looking in the same direction. (b) shows an image from the corrected light field, where both people are now looking in the same direction. Notice the visibility change where the ears now become visible. The bottom row shows various views of the corrected light field.

Finally, we illustrate the use of the shadow operator to cast shadows from light fields. Figures 1d and 8 illustrate this operator. Notice that the toy giraffe's shadow projects a believable silhouette onto the ground. This shadow is also correctly shaped when it interacts with the background flower light field. We simulate soft shadows by inserting multiple point light sources into the scene.

Figure 6: Focusing within a light field. Image (a) shows a pinhole image from a composited light field, where every object is in focus. Both the flower and giraffe light fields were captured using 256 camera images, with each image at 256 x 256 resolution. In image (b), a focal plane is selected near the rear flowers; foreground objects are properly blurred and allow the viewer to see around and through occlusions. In image (c) we simulate the non-photorealistic effect of having multiple focal planes in a scene: both the front and rear flowers are in focus, but the middle region remains blurred. Image (d) is another view of this multi-focal plane light field. Proper parallax between light fields is maintained, as well as the defocus blur for each focal plane.

Figure 7: Simulating light field refraction and focusing. Image (a) shows a view from a light field created from two refractive spheres in front of a flower and Buddha light field. The sphere and Buddha light fields were rendered synthetically with 1024 cameras, at 256 x 256 resolution. The image is created using the focusing operator described in Section 5.2 to focus on the virtual image of the flower in the refractive sphere; this makes the background appear blurry. Notice that the virtual image is correctly distorted by the sphere: it is upside-down and warped. In image (b) the focal plane is shifted to the background, and the virtual images are defocused.

Figure 8: Simulating shadows. In image (a), sharp shadows are cast from the giraffe and flower light fields. In image (b), we create multiple virtual light sources and sum their masks to create soft shadows. Notice that the soft shadow interacts correctly with the flower light fields. The image is also focused on the foreground, so that the flower and shadows are blurred.

7 Conclusions and future work

We have presented a system that allows a user to edit and composite light fields at interactive rates. The system is useful for editing light fields, creating novel imaging effects, or creating complex optical and illumination effects for games. Instead of developing specific operators suited for a single task, we developed a framework that is based on a few basic operators and a compositing language allowing these operators to be combined. We have shown that these combinations can create interesting operations.

We are discovering that these operators can be used in a variety of contexts. For example, our warping operator can be used to describe the aberrometry data of a human eye, produced from a Shack-Hartmann device [Platt and Shack 1971; Barsky et al. 2002]; the warped rays would sample from an acquired light field, presenting photorealistic images of scenes as seen through a human optical system. Finally, we are investigating the use of our system to edit other 4D datasets, like 4D incident illumination [Sen et al. 2005; Chen and Lensch 2005], or general 8D reflectance fields. The ability to composite 8D reflectance fields would enable captured objects in a scene to exhibit realistic inter-reflections, global illumination, and visibility changes. Such effects are currently very difficult to produce but may be possible with further extensions to our system.

References

BARSKY, B. A., BARGTEIL, A. W., GARCIA, D. D., AND KLEIN, S. 2002. Introducing vision-realistic rendering. In Eurographics Rendering Workshop (Poster).

BARSKY, B. A. 2004. Vision-realistic rendering: simulation of the scanned foveal image from wavefront data of human subjects. In APGV '04: Proceedings of the 1st Symposium on Applied Perception in Graphics and Visualization, ACM Press, New York, NY, USA.

CHEN, B., AND LENSCH, H. P. A. 2005. Light source interpolation for sparsely sampled reflectance fields. In Workshop on Vision, Modeling and Visualization.

CHEN, B., OFEK, E., SHUM, H.-Y., AND LEVOY, M. 2005. Interactive deformation of light fields. In Symposium on Interactive 3D Graphics and Games (I3D).

CHUANG, Y.-Y., CURLESS, B., SALESIN, D. H., AND SZELISKI, R. 2001. A Bayesian approach to digital matting. In Proceedings of IEEE CVPR 2001, IEEE Computer Society, vol. 2.

COOK, R. 1984. Shade trees. In SIGGRAPH 1984, Computer Graphics Proceedings.

FIELDING, R. 1972. The Technique of Special Effects Cinematography. Focal/Hastings House, London, third edition.

GORTLER, S. J., GRZESZCZUK, R., SZELISKI, R., AND COHEN, M. F. 1996. The lumigraph. In Proc. SIGGRAPH 96.

HEIDRICH, W., LENSCH, H., COHEN, M. F., AND SEIDEL, H.-P. 1999. Light field techniques for reflections and refractions. In Eurographics Rendering Workshop.

IOURCHA, K., NAYAK, K., AND HONG, Z. 1999. System and method for fixed-rate block-based image compression with inferred pixel values. US Patent 5,956,431.

ISAKSEN, A., MCMILLAN, L., AND GORTLER, S. J. 2000. Dynamically reparameterized light fields. In Proc. SIGGRAPH 2000.

LEVOY, M., AND HANRAHAN, P. 1996. Light field rendering. In Proc. SIGGRAPH 96.

OH, B. M., CHEN, M., DORSEY, J., AND DURAND, F. 2001. Image-based modeling and photo editing. In Proc. SIGGRAPH 2001.

PERLIN, K. 1985. An image synthesizer. In SIGGRAPH 1985, Computer Graphics Proceedings, ACM Press / ACM SIGGRAPH.

PLATT, B., AND SHACK, R. 1971. Lenticular Hartmann-screen. Optical Sciences Center.

PORTER, T., AND DUFF, T. 1984. Compositing digital images. In Computer Graphics, Volume 18, Number 3.

PROUDFOOT, K., MARK, W. R., TZVETKOV, S., AND HANRAHAN, P. 2001. A real-time procedural shading system for programmable graphics hardware. In SIGGRAPH 2001, Computer Graphics Proceedings.

SEITZ, S., AND KUTULAKOS, K. N. 1998. Plenoptic image editing. In Proc. ICCV 98.

SEN, P., CHEN, B., GARG, G., MARSCHNER, S. R., HOROWITZ, M., LEVOY, M., AND LENSCH, H. P. A. 2005. Dual photography. In ACM Transactions on Graphics (Proc. SIGGRAPH 2005).

SHUM, H.-Y., AND SUN, J. 2004. Pop-up light field: An interactive image-based modeling and rendering system. In ACM Transactions on Graphics.

VAISH, V., WILBURN, B., JOSHI, N., AND LEVOY, M. 2004. Using plane + parallax for calibrating dense camera arrays. In Proc. CVPR.

YU, J., YANG, J., AND MCMILLAN, L. 2005. Real-time reflection mapping with parallax. In Symposium on Interactive 3D Graphics and Games (I3D).

ZHANG, Z., WANG, L., GUO, B., AND SHUM, H.-Y. 2002. Feature-based light field morphing. In ACM Transactions on Graphics (Proc. SIGGRAPH 2002).


Real-Time Universal Capture Facial Animation with GPU Skin Rendering Real-Time Universal Capture Facial Animation with GPU Skin Rendering Meng Yang mengyang@seas.upenn.edu PROJECT ABSTRACT The project implements the real-time skin rendering algorithm presented in [1], and

More information

DD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication

DD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication DD2423 Image Analysis and Computer Vision IMAGE FORMATION Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 8, 2013 1 Image formation Goal:

More information

Image Based Rendering

Image Based Rendering Image Based Rendering an overview Photographs We have tools that acquire and tools that display photographs at a convincing quality level 2 1 3 4 2 5 6 3 7 8 4 9 10 5 Photographs We have tools that acquire

More information

Stereo pairs from linear morphing

Stereo pairs from linear morphing Proc. of SPIE Vol. 3295, Stereoscopic Displays and Virtual Reality Systems V, ed. M T Bolas, S S Fisher, J O Merritt (Apr 1998) Copyright SPIE Stereo pairs from linear morphing David F. McAllister Multimedia

More information

Image Based Lighting with Near Light Sources

Image Based Lighting with Near Light Sources Image Based Lighting with Near Light Sources Shiho Furuya, Takayuki Itoh Graduate School of Humanitics and Sciences, Ochanomizu University E-mail: {shiho, itot}@itolab.is.ocha.ac.jp Abstract Recent some

More information

Image Based Lighting with Near Light Sources

Image Based Lighting with Near Light Sources Image Based Lighting with Near Light Sources Shiho Furuya, Takayuki Itoh Graduate School of Humanitics and Sciences, Ochanomizu University E-mail: {shiho, itot}@itolab.is.ocha.ac.jp Abstract Recent some

More information

Image Transfer Methods. Satya Prakash Mallick Jan 28 th, 2003

Image Transfer Methods. Satya Prakash Mallick Jan 28 th, 2003 Image Transfer Methods Satya Prakash Mallick Jan 28 th, 2003 Objective Given two or more images of the same scene, the objective is to synthesize a novel view of the scene from a view point where there

More information

Computer Graphics. Lecture 9 Environment mapping, Mirroring

Computer Graphics. Lecture 9 Environment mapping, Mirroring Computer Graphics Lecture 9 Environment mapping, Mirroring Today Environment Mapping Introduction Cubic mapping Sphere mapping refractive mapping Mirroring Introduction reflection first stencil buffer

More information

Schedule. MIT Monte-Carlo Ray Tracing. Radiosity. Review of last week? Limitations of radiosity. Radiosity

Schedule. MIT Monte-Carlo Ray Tracing. Radiosity. Review of last week? Limitations of radiosity. Radiosity Schedule Review Session: Tuesday November 18 th, 7:30 pm, Room 2-136 bring lots of questions! MIT 6.837 Monte-Carlo Ray Tracing Quiz 2: Thursday November 20 th, in class (one weeks from today) MIT EECS

More information

Topics and things to know about them:

Topics and things to know about them: Practice Final CMSC 427 Distributed Tuesday, December 11, 2007 Review Session, Monday, December 17, 5:00pm, 4424 AV Williams Final: 10:30 AM Wednesday, December 19, 2007 General Guidelines: The final will

More information

CS 684 Fall 2005 Image-based Modeling and Rendering. Ruigang Yang

CS 684 Fall 2005 Image-based Modeling and Rendering. Ruigang Yang CS 684 Fall 2005 Image-based Modeling and Rendering Ruigang Yang Administrivia Classes: Monday and Wednesday, 4:00-5:15 PM Instructor: Ruigang Yang ryang@cs.uky.edu Office Hour: Robotics 514D, MW 1500-1600

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

Light Field Transfer: Global Illumination Between Real and Synthetic Objects

Light Field Transfer: Global Illumination Between Real and Synthetic Objects Transfer: Global Illumination Between and Objects Oliver Cossairt Shree Nayar Columbia University Ravi Ramamoorthi Captured Interface Scene objects objects Projected Rendering Composite Image Glossy reflection

More information

Simpler Soft Shadow Mapping Lee Salzman September 20, 2007

Simpler Soft Shadow Mapping Lee Salzman September 20, 2007 Simpler Soft Shadow Mapping Lee Salzman September 20, 2007 Lightmaps, as do other precomputed lighting methods, provide an efficient and pleasing solution for lighting and shadowing of relatively static

More information

Screen Space Ambient Occlusion TSBK03: Advanced Game Programming

Screen Space Ambient Occlusion TSBK03: Advanced Game Programming Screen Space Ambient Occlusion TSBK03: Advanced Game Programming August Nam-Ki Ek, Oscar Johnson and Ramin Assadi March 5, 2015 This project report discusses our approach of implementing Screen Space Ambient

More information

Volume Rendering. Computer Animation and Visualisation Lecture 9. Taku Komura. Institute for Perception, Action & Behaviour School of Informatics

Volume Rendering. Computer Animation and Visualisation Lecture 9. Taku Komura. Institute for Perception, Action & Behaviour School of Informatics Volume Rendering Computer Animation and Visualisation Lecture 9 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Volume Rendering 1 Volume Data Usually, a data uniformly distributed

More information

Plenoptic camera and its Applications

Plenoptic camera and its Applications Aum Sri Sairam Plenoptic camera and its Applications Agenda: 1. Introduction 2. Single lens stereo design 3. Plenoptic camera design 4. Depth estimation 5. Synthetic refocusing 6. Fourier slicing 7. References

More information

Adding a Transparent Object on Image

Adding a Transparent Object on Image Adding a Transparent Object on Image Liliana, Meliana Luwuk, Djoni Haryadi Setiabudi Informatics Department, Petra Christian University, Surabaya, Indonesia lilian@petra.ac.id, m26409027@john.petra.ac.id,

More information

Computer Graphics. Si Lu. Fall uter_graphics.htm 11/22/2017

Computer Graphics. Si Lu. Fall uter_graphics.htm 11/22/2017 Computer Graphics Si Lu Fall 2017 http://web.cecs.pdx.edu/~lusi/cs447/cs447_547_comp uter_graphics.htm 11/22/2017 Last time o Splines 2 Today o Raytracing o Final Exam: 14:00-15:30, Novermber 29, 2017

More information

Plenoptic Image Editing

Plenoptic Image Editing Plenoptic Image Editing Steven M. Seitz Computer Sciences Department University of Wisconsin Madison Madison, WI53706 seitz@cs.wisc.edu Kiriakos N. Kutulakos Department of Computer Science University of

More information

Overview of Active Vision Techniques

Overview of Active Vision Techniques SIGGRAPH 99 Course on 3D Photography Overview of Active Vision Techniques Brian Curless University of Washington Overview Introduction Active vision techniques Imaging radar Triangulation Moire Active

More information

The Rasterization Pipeline

The Rasterization Pipeline Lecture 5: The Rasterization Pipeline (and its implementation on GPUs) Computer Graphics CMU 15-462/15-662, Fall 2015 What you know how to do (at this point in the course) y y z x (w, h) z x Position objects

More information

Computer Vision Projective Geometry and Calibration. Pinhole cameras

Computer Vision Projective Geometry and Calibration. Pinhole cameras Computer Vision Projective Geometry and Calibration Professor Hager http://www.cs.jhu.edu/~hager Jason Corso http://www.cs.jhu.edu/~jcorso. Pinhole cameras Abstract camera model - box with a small hole

More information

Clipping. CSC 7443: Scientific Information Visualization

Clipping. CSC 7443: Scientific Information Visualization Clipping Clipping to See Inside Obscuring critical information contained in a volume data Contour displays show only exterior visible surfaces Isosurfaces can hide other isosurfaces Other displays can

More information

MIT Monte-Carlo Ray Tracing. MIT EECS 6.837, Cutler and Durand 1

MIT Monte-Carlo Ray Tracing. MIT EECS 6.837, Cutler and Durand 1 MIT 6.837 Monte-Carlo Ray Tracing MIT EECS 6.837, Cutler and Durand 1 Schedule Review Session: Tuesday November 18 th, 7:30 pm bring lots of questions! Quiz 2: Thursday November 20 th, in class (one weeks

More information

COSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor

COSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor COSC579: Scene Geometry Jeremy Bolton, PhD Assistant Teaching Professor Overview Linear Algebra Review Homogeneous vs non-homogeneous representations Projections and Transformations Scene Geometry The

More information

Flexible Calibration of a Portable Structured Light System through Surface Plane

Flexible Calibration of a Portable Structured Light System through Surface Plane Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured

More information

Ray tracing. Computer Graphics COMP 770 (236) Spring Instructor: Brandon Lloyd 3/19/07 1

Ray tracing. Computer Graphics COMP 770 (236) Spring Instructor: Brandon Lloyd 3/19/07 1 Ray tracing Computer Graphics COMP 770 (236) Spring 2007 Instructor: Brandon Lloyd 3/19/07 1 From last time Hidden surface removal Painter s algorithm Clipping algorithms Area subdivision BSP trees Z-Buffer

More information