EUROGRAPHICS 2001 / A. Chalmers and T.-M. Rhyne (Guest Editors), Volume 20 (2001), Number 3

Horizon Map Capture

Holly Rushmeier, Laurent Balmelli, Fausto Bernardini
IBM T. J. Watson Research Center†

† IBM T. J. Watson Research Center, P.O. Box 704, Yorktown Heights, NY. holly@watson.ibm.com, balmelli@us.ibm.com, fausto@watson.ibm.com

Abstract

We present a method for computing horizon maps from captured images of a bumpy surface. Horizon maps encode surface self-shadowing effects, and can be used with bump or normals maps to realistically render surfaces with small height perturbations. The method does not rely on complete surface reconstruction, and requires only eight captured images as input. In this paper we discuss how shadow information is extrapolated from the eight captured images to compute the horizon map. Our implementation accounts for the noise and uncertainties in physically acquired data.

1. Introduction

The numerical representation of the shape and appearance of objects is a fundamental problem in computer graphics. The goal of numerical representation is to characterize the object as faithfully as possible with minimum storage and in a way that can be rapidly rendered into a 2D image. One method for efficiently representing small-scale variations of geometry and appearance is to use images which are mapped onto a larger-scale, coarser geometry. These maps may represent characteristics such as the color or roughness of the surface. In this paper we consider horizon maps [10], which encode how small geometric perturbations on a surface cast shadows on the surface itself as a function of light source direction. In particular, we consider the problem of how horizon maps can be acquired for an existing physical object using a simple hardware set-up of lights and a digital camera.

Our goal in this work is to find a method to compute visually plausible images of the physical surface that we measure, not to produce precise reconstructions of the shadows that would be observed under all lighting conditions. Our method produces horizon maps that are consistent with the data we acquire, and smoothly interpolate and extrapolate this data.

We begin by briefly reviewing the use of image maps in object modelling and methods to acquire maps from physical objects. We consider some possible brute-force strategies for acquiring horizon maps and the drawbacks of these approaches.

1.1. Image Maps for Object Modelling

Mapping images to object geometries has been used for decades, since the introduction of texture mapping to represent variations in object color [5]. Blinn developed bump maps to represent fine-scale variations in surface geometry, such as the bumps on an orange peel [1]. Bump maps consist of a set of scalar values that are used to perturb the surface normals at rendering. Normals maps [2] store the surface normal at each pixel. Numerous other maps have been proposed. Bidirectional texture functions (BTFs) [3] include the effects of fine-scale geometry, self-shadowing and a spatially varying bidirectional reflectance distribution function (BRDF) for pairs of incident light/viewing directions. Eigen-textures [11] or surface light fields (SLFs) [20] encode all of the light leaving an object at each surface point for a given incident light direction, including the effects of BRDF, self-shadowing and surface finish.

Although they are not as complete as BTFs and SLFs, texture and bump or normals maps have the advantage that they are separable. All geometric and color effects are combined in BTFs and SLFs.
The color of an object, for example, cannot be separated out of the information in a BTF or SLF to be used as a texture map on another object.

A bump or normals map can represent small-scale geometry with any texture map to represent color. The disadvantage of using bump or normals maps is that they do not include the effect of the bumpy surface casting shadows on itself. Consider the hemispherical bump viewed from above, lit by a source from the top of the page, shown in Figure 1. Part of the bump will be in an attached shadow, that is, a shadow formed by points on the surface with a normal pointing away from the light direction. Attached shadows can be computed from the normals map. There will also be a cast shadow below the attached shadow, formed by surface points where the view of the light source is blocked by the bump. Cast shadows cannot be computed using the normals only.

Figure 1: Attached and cast shadows generated by a hemispherical bump on a flat surface (top view).

To address this shortcoming, Max developed the horizon map [10]. Horizon maps take advantage of the fact that bump or normals maps represent terrain surfaces, i.e. not surfaces with holes or handles. Whether the object casts a shadow on itself for a particular incident azimuthal light direction can be encoded by a single number: the angle representing the horizon for that point and azimuthal direction. Max proposed using 8 values for the horizon in 8 directions (i.e. the azimuthal directions associated with 8 points of the compass: N, NE, etc.), as illustrated in Figure 2. For each direction the zenith angle (i.e. angle with respect to the plane surface) of the horizon is stored, as illustrated in Figure 2 for a hemispherical bump lit from the North (top of page). Dark areas indicate that the angle to the horizon for this azimuth angle is close to perpendicular to the plane; white areas indicate portions of the surface that will never be in cast shadow.

Figure 2: Horizon map. The 8 light directions (N, NE, E, SE, S, SW, W, NW) for which the horizon angle for each pixel is stored in the horizon map, and an example of horizon angles for the N direction, for a hemispherical bump.

When rendering images for arbitrary light source directions, the horizon values for other azimuthal angles are found by interpolation. Shadows can then be computed by checking whether the current light direction is above or below the horizon value for each pixel. Bump and normals maps can be rendered interactively for varying light and viewing positions using hardware multipass texture mapping [6]. Methods have also been developed for hardware rendering of cast shadows using horizon maps or related data structures [8, 18].
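
As a concrete illustration of the interpolation and shadow test just described, here is a minimal sketch (ours, not code from the paper), assuming the eight horizon angles are stored per pixel as elevation angles above the base plane:

```python
import numpy as np

# Assumed layout (illustrative): horizon_maps[k, y, x] holds the horizon
# elevation angle, in radians above the base plane, for the k-th of the
# 8 compass directions N, NE, E, SE, S, SW, W, NW.
def in_cast_shadow(horizon_maps, x, y, light_azimuth, light_elevation):
    sector = light_azimuth / (np.pi / 4)        # 8 sectors of 45 degrees each
    i0 = int(np.floor(sector)) % 8              # stored direction below the azimuth
    i1 = (i0 + 1) % 8                           # stored direction above the azimuth
    t = sector - np.floor(sector)               # interpolation weight
    horizon = (1 - t) * horizon_maps[i0, y, x] + t * horizon_maps[i1, y, x]
    # The pixel is in cast shadow when the light lies below its horizon.
    return light_elevation < horizon
```
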
1.2. Acquiring Image Maps for Object Modelling

Image maps for object modelling can be generated synthetically, either by a procedural model or using image paint programs. To reproduce the appearance of existing objects, it is also of interest to acquire maps. Capturing the spatially varying reflectance properties of surfaces has been studied in computer vision and computer graphics. Kay and Caelli [9] presented a method for capturing a spatially varying BRDF for simple objects using camera images. Ikeuchi et al. [17] demonstrated capturing a map of BRDF for more complicated objects using a robot positioning system, structured light and a digital camera. Dana et al. [3] developed a system for capturing BTFs for planar surfaces. In addition to display and storage algorithms for eigen-textures and SLFs, hardware set-ups for capturing large numbers of images for known source and view positions have been developed [20].

Rushmeier et al. [15] described a simple photometric lighting system based on Woodham's original photometric stereo algorithm [7] to acquire normals maps. Normals acquired from an object can be separated from the texture map representing the color of the object, and can be applied to other, possibly synthetic, objects [16]. This method was subsequently extended to larger, more complex objects by Rushmeier et al. [13, 14] and by Rocchini et al. [12].

A complete representation of a bumpy surface requires both the normals and information about the shadows the object casts on itself. Given the normals maps, one possibility is to reconstruct the surface from the normals, using shape-from-shading techniques as described in [7]. It is well known that reconstruction from normals fails near surface discontinuities, as diagrammed in Figure 3.

Figure 3: Shortcomings of using shape-from-shading for the estimation of surface height. [Diagram labels: light direction (0.8, -0.6); surface normals (-0.707, 0.707) and (0.707, 0.707) on the true surface, (-0.8, 0.6) and (0.8, 0.6) on the erroneous one; cast shadow.] The method fails near surface discontinuities, causing the surface in the figure to be incorrectly reconstructed as flat. A small error in normal estimation, which results in only a small perceived luminance error, causes a noticeable error in estimating the height of the bump. The diagram at the top shows the true surface. The diagram at the bottom shows the effect of a possible small error in normal estimation. The radiance would be versus 1 for the erroneous surface, a small difference. However, the correctly estimated bump in the top diagram, for the light direction shown, casts a shadow of length . In the case of the incorrect normals below, no cast shadow would be computed.

Because a view of near-vertical surfaces is not obtained, the height of the bump, and so the shadow it casts, will be severely underestimated. Normals maps may also include isolated pixels with substantial error. Individual pixel errors can produce errors over a large area when all of the normals are integrated to reconstruct a surface.

A more subtle shortcoming of reconstructing from normals is the effect of small errors in the normals on the reconstructed height. A small error in normals, as shown in Figure 3, has little effect on the perceived relative contrast in the light reflected by the surfaces, particularly since the eye has a sublinear (approximately power of 1/3) response to radiance. However, the small error in the normal results in a height difference that can change the size of the shadow. At a minimum, the horizon maps that we derive for the measured data should produce nearly the same shadows for the captured light angles as were observed in the original captured images.

An alternative approach to surface reconstruction is to take a large number of images, similar to the eigen-texture and SLF methods. A special rig could be designed to take images densely spaced as a light source traverses 180-degree arcs in each of 4 directions. Besides requiring special hardware design, this approach would take substantial processing to reduce the large number of images to horizon maps.

In this paper we demonstrate a method for computing horizon maps from just eight images of a bumpy surface. In the next section we describe our fundamental method for extrapolating shadow data from a small number of images. Following that, we detail how to account for the noise and uncertainties in physically acquired data in implementing the method.

2. Proposed Method with Ideal Input

We acquire eight images to construct the horizon map for an object, one for each of the azimuthal directions in the horizon map. In this section we describe how to approximate shadows for all zenith angles given one ideal image for a single zenith angle for a particular azimuthal angle. We consider the case of an object that is essentially flat, with surface height variations that are small with respect to the width and height of the surface. Eight images are obtained from the standard azimuthal directions, at a zenith angle of about 45 degrees. If we can extrapolate this single set of shadows for one zenith angle to the full range of 0 to 90 degrees, we have the complete data required to build a horizon map. We will perform this extrapolation by first identifying shadows in the image, and differentiating between attached and cast shadows.
For all cast shadows we then estimate the height of the surface points causing the shadow. Given these sets of raised surface points, or ridges, we can estimate what parts of the surface they will occlude for other light positions.

In the ideal case, shadows for a surface with non-zero albedo are detected by finding the areas with pixel intensity values of zero. Differentiating between cast and attached shadows cannot be performed, though, by simple inspection of a single image. We use all of the captured images, and obtain the normals maps by the method described in [15]. Given the normals map, attached shadows are identified in each image by finding the pixels where the scalar product of surface normal and light direction is negative. The remaining shadow pixels are in cast shadows. Any variation in normals from the flat plane indicates a change in height that would result in a cast shadow, at least for a grazing angle of incidence. However, we restrict ourselves to cast shadows which are apparent at an angle of incidence of 45 degrees.
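
A minimal sketch of this classification, assuming ideal zero-intensity shadows, per-pixel unit normals, and a single light direction (names and array shapes are our assumptions):

```python
import numpy as np

def classify_shadows(image, normals, light_dir, threshold=0.0):
    """Split shadowed pixels into attached and cast shadows.

    image: (H, W) gray-scale intensities; normals: (H, W, 3) unit normals
    from the photometric method of [15]; light_dir: (3,) unit vector toward
    the light. threshold=0 is the ideal case; Section 3.2 replaces it with
    a histogram-derived value.
    """
    shadow = image <= threshold          # ideal shadows have zero intensity
    n_dot_l = normals @ light_dir        # per-pixel N . L
    attached = shadow & (n_dot_l <= 0)   # normal points away from the light
    cast = shadow & ~attached            # remaining shadow pixels are cast
    return attached, cast
```
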

For each cast shadow, we identify the border pixels that are on the edge of the shadow in the direction of the light source (see Figure 4). These pixels are the high points on the bump that are casting the shadow. We identify these pixels by following a line from each shadow boundary pixel in the direction of the light source. We estimate the height of the pixels on the edge of the cast shadow by finding the length of the cast shadow from the edge pixel, following a line in the direction opposite the direction to the light source. The height of the edge pixel is then computed from the triangle shown in Figure 4.

Figure 4: Computation of the height of an occluder from a cast shadow. Walking from a pixel r on the cast shadow boundary in the direction away from the light source to find the length l of the shadow, and computation of the height h of the bump.

By calculating the height at the end of the cast shadow, we are assuming a relatively sharp bump, as shown in Figure 5. For a somewhat gentler bump, the height of the casting surface and its position are somewhat misestimated. However, assuming the casting surface is at the edge of the cast shadow will produce results that are consistent with the observations of no shadows when the light direction is coincident with the observer position, and the captured shadow position for the light direction of capture, with smooth interpolation and extrapolation of the shadow. We can safely assume nearly sharp bumps since we have required the shadows to be apparent at an incident angle of 45 degrees. If the ridge casting the shadow were assumed to be at the edge of the attached shadow, points on the bump above the plane would erroneously be estimated as being in cast shadow for incident light angles close to the normal of the base plane.

Figure 5: We assume that bumps casting shadows are sharp, as in the left figure. In the case of a smooth bump, as shown on the right, we would underestimate the height of the bump. However, the result is consistent with measured data.

Given the computed ridge heights, the horizon maps can be computed for all pixels. For each pixel, a line is followed in the azimuthal direction currently under consideration until a ridge is encountered. The horizon value is computed again from the triangle shown in Figure 4, but in this case with the height h known and the light direction θ being calculated. Non-default horizon map values will be computed for cast shadows only, since the ridges have been placed at the cast shadow boundary.

A case to consider is when a shadow is the result of a groove or indentation, rather than a raised bump, as shown in Figure 6. It would be inappropriate to extend such a shadow indefinitely. However, in such a case there will be a ridge formed by a shadow extending from the light source in the opposing direction. Overextending shadows is avoided by not computing any non-default horizon value when a ridge is encountered for an opposing light direction.

Figure 6: Avoiding overextending shadows in grooved regions: Non-default horizon map values are computed for azimuth direction a only in region 2. For pixels in region 3, the ridge for the opposing azimuth direction b is encountered before ridge a, and so no shadow is computed. Similarly, in computing the horizon maps for direction b, no non-default values would be computed in region 1.

Our approach will give visually plausible shadows, consistent with the acquired data. However, the results will not be exact for all cases. It is assumed that the shadows are cast on the smooth base plane.
For closely spaced bumps of varying heights, our approach will in some cases somewhat overestimate the extent of a shadow.
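
Both the height estimation and the horizon computation use the same right triangle from Figure 4. A minimal sketch of the two calculations (ours), assuming angles are measured as elevation above the base plane and distances are in consistent units:

```python
import numpy as np

def ridge_height(shadow_length, light_elevation):
    # Figure 4, first use: an occluder of height h casts a shadow of
    # length l when tan(elevation) = h / l, so h = l * tan(elevation).
    return shadow_length * np.tan(light_elevation)

def horizon_angle(height, distance_to_ridge):
    # Figure 4, second use: with the ridge height h known, the horizon
    # elevation at a pixel a distance d from the ridge is atan(h / d).
    return np.arctan2(height, distance_to_ridge)
```

At the 45-degree capture elevation, tan(45°) = 1, so the estimated ridge height simply equals the measured shadow length.
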

3. Implementation for Real Data

The implementation of the method for operating on acquired data is made complicated by issues not found in the ideal case: space constraints on the physical hardware set-up, shadows that are not pure black, noisy data, and discrete sampling. We have used a number of methods from signal processing to deal with these issues, and have developed the following processing pipeline:

1. Correct with reference image
2. Detect cast shadows
3. Find ridge locations and heights
4. Use ridges to compute horizon maps

In this section we describe the practical implementation of this pipeline.

3.1. Correcting for Finite Light Locations

In the ideal case, light sources for the input images would be located at a large distance from the scanned surface to simulate directional light sources. Furthermore, the light intensities incident from each direction would be perfectly balanced. In reality, a practical set-up for surfaces of any reasonable size requires closer, positional rather than directional, sources. The physical constraints of the test room may not allow them to be located exactly equally distant from the test surface. Fortunately, the effect of varying light source distance can be accounted for using a planar white diffuse test surface as a reference.

A diagram of our hardware set-up is shown in Figure 7. The hardware consists of 8 light sources (75 watt halogen bulbs) arranged on a frame around the target object, a digital camera (Kodak DC290), and a stand that allows us to mount a target surface perpendicular to the camera line of sight.

Figure 7: Hardware set-up used to acquire the data.

A white diffuse target was imaged with each of the light sources in turn, with the same aperture and exposure. In addition, a flat checkerboard target with known block size was imaged to allow the transformation of pixel locations in the image to a global coordinate system centered on the target and with z-axis in the direction of the camera. The light source locations were measured with respect to the coordinate system. For light source position L, pixel location P, and reference intensity I_lp, the correction factor to apply to an image captured with light source L is

    ((L - P) · ẑ) / (I_lp |L - P|)    (1)

This normalization ensures that if a perfectly flat surface were imaged by the system it would return a normal of (0, 0, 1) for all points on the surface. Given the reference images, each captured image for a given scanned surface can be appropriately adjusted. Once the images have been adjusted for the finite light location, the method described in [15] can be applied to compute surface normals.
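
A per-pixel sketch of applying equation (1), assuming the reference image and the pixel-to-3D mapping are available as arrays (variable names are ours):

```python
import numpy as np

def correct_image(captured, reference, pixel_pos_3d, light_pos):
    """Apply the equation (1) correction per pixel.

    captured, reference: (H, W) intensities for the target and the white
    reference plane under the same light; pixel_pos_3d: (H, W, 3) pixel
    positions in the global coordinate system; light_pos: (3,) measured
    light position.
    """
    to_light = light_pos - pixel_pos_3d           # L - P per pixel
    dist = np.linalg.norm(to_light, axis=-1)      # |L - P|
    cos_term = to_light[..., 2] / dist            # (L - P) . z / |L - P|
    correction = cos_term / reference             # equation (1)
    return captured * correction
```
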
3.2. Cast Shadow Detection

Ideally, shadows are black, with a pixel intensity value of zero. In captured images shadow areas do not have a zero value, and the value recorded for shadow areas varies from image to image. Reasons for these non-zero values are noisy camera black levels, stray light from the environment, and interreflections from the object itself into the shadow areas [4]. In the experiments presented in this paper, we used uniform albedo surfaces. However, for varied albedo objects, the relative albedo could be estimated as in [15], and the images adjusted to a uniform albedo gray scale.

To detect shadow regions in the captured images, we analyze the histograms of the gray-scale images. A histogram for a typical captured image is shown in Figure 8. Overall the histogram is very noisy; it is shown after several smoothing operations in Figure 8. The histogram shows a small peak at the low end, representing the pixels in shadow. The cutoff value of gray levels we consider to be in shadow is found by using a histogram thresholding technique known as histogram minimization [19]. The histogram is traversed from left to right and a threshold value is found as soon as a minimum is attained after passing the initial peak representing pixels in shadow (Figure 8).

Besides identifying shadows with the threshold value found in the histogram, we need to differentiate between cast and attached shadows. Attached shadows within the shadow regions are found where the dot product of the normal with the light source direction is less than or equal to zero.
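
A sketch of the histogram-minimization step under these assumptions (the kernel width and number of smoothing passes are illustrative choices, not the paper's values):

```python
import numpy as np

def shadow_threshold(gray, bins=256, smooth_passes=4):
    # Build the gray-level histogram (assumes values in [0, bins)).
    hist, _ = np.histogram(gray, bins=bins, range=(0, bins))
    hist = hist.astype(float)
    # Low-pass filter the noisy histogram.
    kernel = np.ones(5) / 5.0
    for _ in range(smooth_passes):
        hist = np.convolve(hist, kernel, mode="same")
    # Traverse left to right: climb the initial shadow peak...
    i = 1
    while i < bins - 1 and hist[i] >= hist[i - 1]:
        i += 1
    # ...then stop at the first minimum after it.
    while i < bins - 1 and hist[i + 1] <= hist[i]:
        i += 1
    return i   # gray levels at or below i are considered shadow
```
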

Figure 8: Gray-level histogram for a captured image: the raw cast shadow histogram, and the histogram after low-pass filtering. The vertical line depicts the threshold found by the algorithm.

3.3. Computing Ridge Locations and Heights

For each cast shadow of minimum size, all the boundary pixels are found. For each boundary pixel we step two pixels in the direction of the light source. If both of these pixels are lit, this boundary pixel is a candidate for being on the shadow-casting ridge. Because the physical light sources are positional, rather than directional, the direction to the light source is slightly different for each pixel in the image. The position of each pixel in the image is converted to a three-dimensional coordinate in the global system to compute the light source direction.

A problematic, but common, case is a pixel on a shadow boundary nearly parallel to the light source direction, shown in Figure 9. Dark pixels represent shadow regions and are bounded by lighter pixels showing the ridge found for the light direction. Since pixels are either wholly in or out of shadow, it is not apparent from an isolated pixel whether it is on a boundary parallel to the light source. Consider pixel P at the center of the circle in the figure: we define a triangular region along the walk direction using a user-specified aperture angle β (Figure 9). For a cutoff distance, we can now search for lit pixels in order to correctly classify pixel P. Since lit pixels are indeed within the test region, this edge pixel will not be labelled as part of the ridge for this shadow.

Figure 9: Classification of edge pixels: in order to decide on a ridge pixel P, we define an aperture region along the walk direction using an aperture angle β and a cutoff distance.

The height for each ridge pixel is then computed by following a path from the ridge pixel to the end of the shadow, and computing a height using the length of the shadow and the vector to the light source.

The finite spatial sampling inherent in a digital image results in two types of errors in the ridge calculations. First, there are sometimes small gaps in the ridges because a short segment of the ridge is nearly aligned with the light source direction. Second, the shadow lengths are quantized by the pixel spacing, causing noticeable jumps in height from one ridge pixel to another. We deal with these problems by filling small gaps in ridges and then low-pass filtering the heights along the ridges. Each ridge is traversed until a disconnected ridge pixel is found. Then, the neighborhood of the last visited ridge pixel is searched for another ridge component. When found, reconnection is performed by following the shadow boundary until the new ridge is reached. The search ends when no new ridge pixel is found from the last disconnected pixel. A set of ridges before and after filtering is shown in Figure 10.
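
A sketch of the candidate test described above, assuming boolean shadow and lit masks and, for simplicity, a single image-plane light direction (the paper recomputes the direction per pixel for its positional sources; names are ours):

```python
import numpy as np

def ridge_candidates(cast_shadow, lit, light_dir):
    """Mark cast-shadow boundary pixels that may lie on the casting ridge.

    cast_shadow, lit: (H, W) boolean masks; light_dir: (dx, dy) unit vector
    toward the light in image coordinates.
    """
    H, W = cast_shadow.shape
    dx, dy = light_dir
    ridge = np.zeros_like(cast_shadow)
    for y, x in zip(*np.nonzero(cast_shadow)):
        # Step one and two pixels toward the light source; if both are lit,
        # this shadow pixel sits on the boundary nearest the light and is a
        # candidate ridge pixel.
        x1, y1 = int(round(x + dx)), int(round(y + dy))
        x2, y2 = int(round(x + 2 * dx)), int(round(y + 2 * dy))
        if (0 <= x1 < W and 0 <= y1 < H and 0 <= x2 < W and 0 <= y2 < H
                and lit[y1, x1] and lit[y2, x2]):
            ridge[y, x] = True
    return ridge
```
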

Figure 10: Filtering and connection of the ridges: the curved boundaries of cast shadows yield gaps after the ridge construction. The ridge components are linked by searching in the neighborhood of each disconnected ridge pixel.

3.4. Combining Ridges to Compute Horizon Maps

Given the ridges for each captured image, these can be combined to compute the horizon maps. Three issues need to be accounted for in this combination: 1) physical constraints dictate that the captured directions are not identical to the 8 standard azimuthal directions, 2) image variations and deviations from the sharp bump assumption may result in small variations in the ridge heights for two different captured directions, and 3) bump shadows should not extend over other bumps.

To appropriately combine the ridges, the heights in each set of ridges for light direction L are averaged with ridges in the same neighborhood (e.g., 5 pixel radius) in the two light source directions on either side of direction L. When computing the horizon map for one of the basic directions, the ridges for the closest direction L and the ridges on either side of it are used. For example, when computing the horizon map for light from the "North", ridges obtained from the images captured with the light source above and to the right, above, and above and to the left of the target are used. Ridges from the other 5 light source positions are considered as ridges from opposing directions.

A problem occurs when opposing ridges that should be coincident are slightly offset. To avoid these slight offsets from stopping horizon calculations, opposing ridges are not allowed to stop the calculation if they lie in the region that was in shadow for the images captured from the current three valid (non-opposing) light directions.
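
A sketch of the per-pixel horizon scan with the opposing-ridge test, assuming ridge heights have already been merged across the three valid light directions (names and the default value are ours):

```python
import math

def horizon_scan(ridge_height, opposing_ridge, x, y, direction, pixel_size,
                 max_steps=1000):
    """Horizon value for one pixel and one azimuthal direction.

    ridge_height: (H, W) array of merged ridge heights (0 where no ridge);
    opposing_ridge: (H, W) boolean mask of ridges from the five opposing
    directions. Returns the horizon elevation angle, or the default 0 when
    no ridge is found or an opposing ridge is met first (the groove case
    of Figure 6).
    """
    H, W = ridge_height.shape
    dx, dy = direction
    for step in range(1, max_steps):
        xi, yi = int(round(x + step * dx)), int(round(y + step * dy))
        if not (0 <= xi < W and 0 <= yi < H):
            break
        if opposing_ridge[yi, xi]:
            break                      # groove: do not extend the shadow
        h = ridge_height[yi, xi]
        if h > 0:
            # Figure 4 triangle with h known: elevation = atan(h / d)
            return math.atan2(h, step * pixel_size)
    return 0.0                         # default: never in cast shadow
```
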
4. Results

We tested our method for capturing horizon maps on three test objects, each with a different type of surface variation. The test objects, along with the reference white plane, are shown in Figure 11. The reference plane, a white paper painted with white diffuse paint, is shown on the upper left. On the upper right is a surface with small, short (less than 1 cm) objects: a washer, some string, layers of paper, and plastic cubes, glued to a white paper and painted white. On the lower left is a more continuously bumpy surface formed using a plaster modelling compound bonded to white paper. Finally, on the lower right, some relatively tall (2.5 cm) plastic boxes are glued to a white paper. The surface on the lower right also includes a checkerboard pattern that was used to relate the size of pixels in the captured image to physical length measurements.

Figure 11: The test objects used.

The steps in the computation of a horizon map are illustrated in Figure 12 for one of the test objects. Figure 12(a) shows one of the captured images of the collection of small short objects, lit from the upper right. Figure 12(b) shows the same image after correction with the reference white image. Figure 12(c) shows the scene relit using the normals computed from the 8 captured images; the image shows attached but not cast shadows. Figure 12(d) shows the cast shadows extracted for the upper right light direction. Figure 12(e) shows the horizon map generated for the standard azimuthal direction closest to the upper right light direction. Finally, Figure 12(f) shows the reconstruction of the original image, using the computed normals and horizon maps.

Figures 13(a)-(d) show the variation of shadows computed using the horizon maps for four light directions. For reference, Figures 13(e)-(h) show the surface lit without cast shadows for the same light directions. In particular, Figures 13(d) and (h) compare examples of extreme cases with larger zenith angles where the horizon map interpolation begins to break down. Light source intensity is the same for images (a)-(c) and (e)-(g), and doubled for (d) and (h).

Figure 15 shows the heights reconstructed from the normals for the small object data set using the method described in [7]. The results illustrate the problems discussed in Section 1.2. The small plastic cubes in the lower left hand corner, which are actually 0.75 cm tall, have discontinuities that result in their height being underestimated as 0.36 cm. Isolated errors in the normals that are not evident in the synthetic images computed in Figure 12 result in a number of spikes. Note that these spikes have already been reduced by filtering the initial height results. Small errors in the normals result in the height of the large loop of string, which is actually 0.3 cm, being overestimated as 0.6 cm. These errors are small on the scale of the 20 cm wide surface (1.5% error), but have a noticeable impact on the cast shadows. Figure 15 also compares a captured image (a) with synthetic images reconstructed from the computed heights (b) and with our method (c), highlighting the obvious artifacts produced by the height reconstruction.

Figure 12: Steps in the computation of a horizon map.

Figure 13: Comparison of rendering with cast shadows computed from the horizon map and without, for four light directions.

More results are shown for the two other test targets depicted in Figure 11. Figures 14(a)-(d) show, for the plaster modelling compound surface, one of the captured images after correction (a), and the horizon map computed for the azimuthal direction closest to the light source direction in the previous image (b). Because the surface is completely bumpy, rather than having small isolated height variations, the cast shadows are not as noticeable. The next two figures show reconstructed images for a light direction not in the set of captured directions, one without (c) and one with cast shadows added using the horizon map (d).

Figures 14(e)-(h) show one of the corrected captured images for the tall block data set (e) and the horizon map for the nearest azimuthal direction (f). Clearly the eight-direction horizon map cannot be used to interpolate shadows for other azimuthal directions for the long shadows shown in (e). Figures 14(g) and (h) show the shadows computed for interpolated azimuthal directions for near-normal incident light.
Figure 14: Experimental results for two additional datasets.

Figure 15: Computing the horizon map from heights obtained with the shape-from-shading technique (top image) can lead to artifacts. (a) One of the captured images. (b) Reconstructed cast shadows from shape-from-shading heights. (c) Reconstructed cast shadows computed with our method.

The detail shown in closeup in Figure 16 illustrates a case of an indentation between two surfaces. The ridges extracted for this area are shown in (b). The horizon map (c) for this area shows that the shadow of the upper raised area does not extend into the lower raised area, except at one corner where the two raised areas overlap. The reconstructed image (d), for light from the upper right, shows that the shadow of the upper raised area does not extend over the lower raised area, except for the small corner.

Figure 16: An example where an opposing ridge prevents the shadow from one bump inappropriately extending over another.

We have implemented a software viewer for interactively changing light directions and displaying both the relit surface and cast shadows. The horizon maps we compute could also readily be incorporated in a hardware rendering system such as that described by Sloan and Cohen [18].

5. Conclusions

We have described and illustrated a system for capturing horizon maps that can be used with an interactive viewer to display cast shadows for bumpy surfaces.
Our method requires a small number of images, and does not require the full reconstruction of the scanned surface. Our method assumes relatively sharp bumps, but produces plausible shadows that are consistent with the shadows observed in the input images even when this assumption is violated. Our system has been demonstrated for a flat base surface with uniform albedo. A next step in this area is to extend the method to curved surfaces, and to perform tests on surfaces with varying albedo and surface finish.

References

1. J. Blinn. Simulation of wrinkled surfaces. Computer Graphics, 12(3), 1978. Proceedings of SIGGRAPH 78.
2. J. Cohen, M. Olano, and D. Manocha. Appearance-preserving simplification. In Proceedings of SIGGRAPH 98. ACM, Orlando, FL, 1998.
3. K. Dana, B. van Ginneken, S. Nayar, and J. Koenderink. Reflectance and texture of real-world surfaces. ACM Transactions on Graphics, 18(1):1-34, 1999.
4. D. Forsyth and A. Zisserman. Reflections on shading. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(7), 1991.
5. P. S. Heckbert. Survey of texture mapping. IEEE Computer Graphics & Applications, 6(11):56-67, November 1986.
6. W. Heidrich and H.-P. Seidel. Hardware-accelerated shading and lighting. In Proceedings of SIGGRAPH 99. ACM, Los Angeles, CA, 1999.
7. B. K. P. Horn and M. J. Brooks. Shape from Shading. MIT Press, 1989.
8. J. Kautz, W. Heidrich, and K. Daubert. Bump map shadows for OpenGL rendering. Technical Report MPI-I, Max Planck Institut für Informatik, February 2000.
9. G. Kay and T. Caelli. Inverting an illumination model from range and intensity maps. CVGIP: Image Understanding, 59(2), 1994.
10. N. L. Max. Horizon mapping: shadows for bump-mapped surfaces. The Visual Computer, 4, 1988.
11. K. Nishino, Y. Sato, and K. Ikeuchi. Eigen-texture method: appearance compression based on 3D model. In Proceedings of IEEE Conf. on Computer Vision and Pattern Recognition, June 1999.
12. C. Rocchini, P. Cignoni, C. Montani, and R. Scopigno. Multiple textures stitching and blending on 3D objects. In Rendering Techniques 99. Springer Verlag, Granada, Spain, June 1999.
13. H. Rushmeier and F. Bernardini. Computing consistent normals and colors from photometric data. In Proceedings of the Second Intl. Conf. on 3-D Digital Imaging and Modeling, Ottawa, Canada, October 1999.
14. H. Rushmeier, F. Bernardini, J. Mittleman, and G. Taubin. Acquiring input for rendering at appropriate level of detail: digitizing a Pietà. In Rendering Techniques 98. Springer Verlag, Vienna, Austria, June 1998.
15. H. Rushmeier, G. Taubin, and A. Guéziec. Applying shape from lighting variation to bump map capture. In Rendering Techniques 97. Springer Verlag, St. Etienne, France, June 1997.
16. H. Rushmeier, G. Taubin, and A. Guéziec. Acquiring bump maps from curved objects. US Patent, October. Assigned to IBM Corporation.
17. Y. Sato, M. Wheeler, and K. Ikeuchi. Object shape and reflectance modeling from observation. In Proceedings of SIGGRAPH 97. ACM, Los Angeles, CA, 1997.
18. P. J. Sloan and M. F. Cohen. Interactive horizon mapping. In Rendering Techniques 2000. Springer Verlag, Brno, Czech Republic, June 2000.
19. I. M. Spiliotis, D. van Ormondt, and B. G. Mertzios. Prior knowledge for improved image estimation from raw MRI data. In Proc. 2nd International Workshop on Image and Signal Processing, November.
20. D. Wood, D. Azuma, W. Aldinger, B. Curless, T. Duchamp, D. Salesin, and W. Steutzle. Surface light fields for 3D photography. In Proceedings of SIGGRAPH 00, New Orleans, LA, 2000.


More information

CEng 477 Introduction to Computer Graphics Fall 2007

CEng 477 Introduction to Computer Graphics Fall 2007 Visible Surface Detection CEng 477 Introduction to Computer Graphics Fall 2007 Visible Surface Detection Visible surface detection or hidden surface removal. Realistic scenes: closer objects occludes the

More information

Skeleton Cube for Lighting Environment Estimation

Skeleton Cube for Lighting Environment Estimation (MIRU2004) 2004 7 606 8501 E-mail: {takesi-t,maki,tm}@vision.kuee.kyoto-u.ac.jp 1) 2) Skeleton Cube for Lighting Environment Estimation Takeshi TAKAI, Atsuto MAKI, and Takashi MATSUYAMA Graduate School

More information

COS 116 The Computational Universe Laboratory 10: Computer Graphics

COS 116 The Computational Universe Laboratory 10: Computer Graphics COS 116 The Computational Universe Laboratory 10: Computer Graphics As mentioned in lecture, computer graphics has four major parts: imaging, rendering, modeling, and animation. In this lab you will learn

More information

Image Processing 1 (IP1) Bildverarbeitung 1

Image Processing 1 (IP1) Bildverarbeitung 1 MIN-Fakultät Fachbereich Informatik Arbeitsbereich SAV/BV (KOGS) Image Processing 1 (IP1) Bildverarbeitung 1 Lecture 20: Shape from Shading Winter Semester 2015/16 Slides: Prof. Bernd Neumann Slightly

More information

Texture. Texture Mapping. Texture Mapping. CS 475 / CS 675 Computer Graphics. Lecture 11 : Texture

Texture. Texture Mapping. Texture Mapping. CS 475 / CS 675 Computer Graphics. Lecture 11 : Texture Texture CS 475 / CS 675 Computer Graphics Add surface detail Paste a photograph over a surface to provide detail. Texture can change surface colour or modulate surface colour. Lecture 11 : Texture http://en.wikipedia.org/wiki/uv_mapping

More information

CS 475 / CS 675 Computer Graphics. Lecture 11 : Texture

CS 475 / CS 675 Computer Graphics. Lecture 11 : Texture CS 475 / CS 675 Computer Graphics Lecture 11 : Texture Texture Add surface detail Paste a photograph over a surface to provide detail. Texture can change surface colour or modulate surface colour. http://en.wikipedia.org/wiki/uv_mapping

More information

Displacement Mapping

Displacement Mapping HELSINKI UNIVERSITY OF TECHNOLOGY 16.4.2002 Telecommunications Software and Multimedia Laboratory Tik-111.500 Seminar on computer graphics Spring 2002: Rendering of High-Quality 3-D Graphics Displacement

More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

Analysis of photometric factors based on photometric linearization

Analysis of photometric factors based on photometric linearization 3326 J. Opt. Soc. Am. A/ Vol. 24, No. 10/ October 2007 Mukaigawa et al. Analysis of photometric factors based on photometric linearization Yasuhiro Mukaigawa, 1, * Yasunori Ishii, 2 and Takeshi Shakunaga

More information

CSE 167: Introduction to Computer Graphics Lecture #6: Lights. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2016

CSE 167: Introduction to Computer Graphics Lecture #6: Lights. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2016 CSE 167: Introduction to Computer Graphics Lecture #6: Lights Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2016 Announcements Thursday in class: midterm #1 Closed book Material

More information

COS 116 The Computational Universe Laboratory 10: Computer Graphics

COS 116 The Computational Universe Laboratory 10: Computer Graphics COS 116 The Computational Universe Laboratory 10: Computer Graphics As mentioned in lecture, computer graphics has four major parts: imaging, rendering, modeling, and animation. In this lab you will learn

More information

The Traditional Graphics Pipeline

The Traditional Graphics Pipeline Last Time? The Traditional Graphics Pipeline Reading for Today A Practical Model for Subsurface Light Transport, Jensen, Marschner, Levoy, & Hanrahan, SIGGRAPH 2001 Participating Media Measuring BRDFs

More information

Tutorial Notes for the DAGM 2001 A Framework for the Acquisition, Processing and Interactive Display of High Quality 3D Models

Tutorial Notes for the DAGM 2001 A Framework for the Acquisition, Processing and Interactive Display of High Quality 3D Models χfiχfi k INFORMATIK Tutorial Notes for the DAGM 2001 A Framework for the Acquisition, Processing and Interactive Display of High Quality 3D Models Research Report MPI-I-2001-4-005 September 2001 Hendrik

More information

Efficient View-Dependent Sampling of Visual Hulls

Efficient View-Dependent Sampling of Visual Hulls Efficient View-Dependent Sampling of Visual Hulls Wojciech Matusik Chris Buehler Leonard McMillan Computer Graphics Group MIT Laboratory for Computer Science Cambridge, MA 02141 Abstract In this paper

More information

Simultaneous surface texture classification and illumination tilt angle prediction

Simultaneous surface texture classification and illumination tilt angle prediction Simultaneous surface texture classification and illumination tilt angle prediction X. Lladó, A. Oliver, M. Petrou, J. Freixenet, and J. Martí Computer Vision and Robotics Group - IIiA. University of Girona

More information

CHAPTER 9. Classification Scheme Using Modified Photometric. Stereo and 2D Spectra Comparison

CHAPTER 9. Classification Scheme Using Modified Photometric. Stereo and 2D Spectra Comparison CHAPTER 9 Classification Scheme Using Modified Photometric Stereo and 2D Spectra Comparison 9.1. Introduction In Chapter 8, even we combine more feature spaces and more feature generators, we note that

More information

Fingerprint Classification Using Orientation Field Flow Curves

Fingerprint Classification Using Orientation Field Flow Curves Fingerprint Classification Using Orientation Field Flow Curves Sarat C. Dass Michigan State University sdass@msu.edu Anil K. Jain Michigan State University ain@msu.edu Abstract Manual fingerprint classification

More information

Image-based BRDF Representation

Image-based BRDF Representation JAMSI, 11 (2015), No. 2 47 Image-based BRDF Representation A. MIHÁLIK AND R. ĎURIKOVIČ Abstract: To acquire a certain level of photorealism in computer graphics, it is necessary to analyze, how the materials

More information

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong)

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) References: [1] http://homepages.inf.ed.ac.uk/rbf/hipr2/index.htm [2] http://www.cs.wisc.edu/~dyer/cs540/notes/vision.html

More information

Computer Graphics. Illumination and Shading

Computer Graphics. Illumination and Shading () Illumination and Shading Dr. Ayman Eldeib Lighting So given a 3-D triangle and a 3-D viewpoint, we can set the right pixels But what color should those pixels be? If we re attempting to create a realistic

More information

Improved Radiance Gradient Computation

Improved Radiance Gradient Computation Improved Radiance Gradient Computation Jaroslav Křivánek Pascal Gautron Kadi Bouatouch Sumanta Pattanaik Czech Technical University New gradients Gradients by [Křivánek et al. 2005] Figure 1: Right: The

More information

Complex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors

Complex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors Complex Sensors: Cameras, Visual Sensing The Robotics Primer (Ch. 9) Bring your laptop and robot everyday DO NOT unplug the network cables from the desktop computers or the walls Tuesday s Quiz is on Visual

More information

Recovery of Fingerprints using Photometric Stereo

Recovery of Fingerprints using Photometric Stereo Recovery of Fingerprints using Photometric Stereo G. McGunnigle and M.J. Chantler Department of Computing and Electrical Engineering Heriot Watt University Riccarton Edinburgh EH14 4AS United Kingdom gmg@cee.hw.ac.uk

More information

Orthogonal Projection Matrices. Angel and Shreiner: Interactive Computer Graphics 7E Addison-Wesley 2015

Orthogonal Projection Matrices. Angel and Shreiner: Interactive Computer Graphics 7E Addison-Wesley 2015 Orthogonal Projection Matrices 1 Objectives Derive the projection matrices used for standard orthogonal projections Introduce oblique projections Introduce projection normalization 2 Normalization Rather

More information

Lightcuts. Jeff Hui. Advanced Computer Graphics Rensselaer Polytechnic Institute

Lightcuts. Jeff Hui. Advanced Computer Graphics Rensselaer Polytechnic Institute Lightcuts Jeff Hui Advanced Computer Graphics 2010 Rensselaer Polytechnic Institute Fig 1. Lightcuts version on the left and naïve ray tracer on the right. The lightcuts took 433,580,000 clock ticks and

More information

Reading. 18. Projections and Z-buffers. Required: Watt, Section , 6.3, 6.6 (esp. intro and subsections 1, 4, and 8 10), Further reading:

Reading. 18. Projections and Z-buffers. Required: Watt, Section , 6.3, 6.6 (esp. intro and subsections 1, 4, and 8 10), Further reading: Reading Required: Watt, Section 5.2.2 5.2.4, 6.3, 6.6 (esp. intro and subsections 1, 4, and 8 10), Further reading: 18. Projections and Z-buffers Foley, et al, Chapter 5.6 and Chapter 6 David F. Rogers

More information

cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry

cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry Steven Scher December 2, 2004 Steven Scher SteveScher@alumni.princeton.edu Abstract Three-dimensional

More information