Interreflection Removal for Photometric Stereo by Using Spectrum-dependent Albedo


Miao Liao 1, Xinyu Huang 2, and Ruigang Yang 1
1 Department of Computer Science, University of Kentucky
2 Department of Mathematics and Computer Science, North Carolina Central University

Abstract

We present a novel method that can separate m-bounced light and remove the interreflections in a photometric stereo setup. Under the assumption of a uniformly colored Lambertian surface, the intensity of a point in the scene is the sum of 1-bounced through m-bounced light rays. Ruled by the law of diffuse reflection, whenever a light ray is bounced by the surface, its intensity is attenuated by the factor of the albedo ρ. This implies that the measured intensity value can be written as a polynomial function of ρ, in which the intensity contribution of the m-bounced light rays is expressed by the term ρ^m. Therefore, when we change the surface albedo, the intensity of the m-bounced light changes to the order of m. This non-linearity makes it possible to separate the m-bounced light. In practice, we illuminate the scene with different light colors to effectively simulate different surface albedos, since albedo is spectrum dependent. Once the m-bounced light rays are separated, we can run the photometric stereo algorithm on the 1-bounced light (direct lighting) images to produce the 3D shape without the impact of interreflections. Experiments show that we obtain significantly improved scene reconstructions from as few as two color images.

1. Introduction

Photometric stereo [10, 18, 12] is a popular technique to estimate the shapes of objects. It is able to rapidly obtain dense surface orientation from intensity images, providing an inexpensive yet accurate approach to 3D reconstruction. However, one of the limitations of photometric stereo is that it suffers from light interreflections when concavity exists in the scene. This problem has been studied in [5, 3, 15]; however, it is still not easy to develop an efficient algorithm that works well in practice. The surface intensity recorded by the camera is the sum of the directly reflected light rays from the light source (direct lighting) and the light rays that are bounced from other surface points (indirect lighting). Photometric stereo formulations usually do not account for the existence of indirect lighting, thus producing incorrect shape and reflectance estimates. Nayar et al. [15] observed that the recovered concave shape appears shallower than the actual shape.

Figure 1. An illustration of our lighting separation method. In order to change the spectrum-dependent surface albedo, both blue light (bottom-left image) and green light (bottom-middle image) are used to illuminate an angle painted with a blue diffuse material (top image). The separation of 1-bounced lighting and 2-bounced lighting is composited in the bottom-right image: the 1-bounced lighting is embedded in the red channel and the 2-bounced lighting in the cyan channel. The intensity of the indirect lighting is scaled up by 10 times.

We present a novel method that separates the indirect lighting from the direct lighting for photometric stereo. We assume the target objects have Lambertian surfaces of uniform color. Under diffuse reflection, whenever a light ray is bounced by the Lambertian surface, its intensity is attenuated by the factor of the albedo value ρ, so the m-bounced light rays are attenuated by ρ^m. We employ this property to separate light rays that are bounced different numbers of times.

The lighting separation is achieved using multiple images captured under different surface albedos. Instead of changing the surface material to obtain varying surface albedo, we alter the illumination color to exploit the different albedo values under different spectra of lighting (figure 1). Compared to the previous methods [15, 3, 7, 9] that aim to remove the interreflections in photometric stereo, our method is much simpler while generating comparable results. The following are the advantages of our method over previous ones:

- In most cases only two images are enough to produce an accurate separation, and no structured lighting pattern is needed.
- It requires no prior knowledge of the scene geometry.
- It can handle images with occlusion, or images that cover only a part of a larger scene.
- It has almost no constraint on the light source. As long as the lighting is static, it can be one or more point lights, area lights, or directional lights.

All these attributes make our method particularly attractive for real-time photometric stereo. While our assumption of a uniform Lambertian surface may appear too restrictive, many recently proposed systems [11, 8], designed for high accuracy and real-time performance, are based on the same assumption. Our proposed method can lead to the first system that can ameliorate or remove the effect of interreflections in real time.

2. Related Work

Forsyth et al. [4, 5] studied the effect of interreflections and their impact on shape recovery. Experiments were conducted on concave objects with varying albedo values. It has also been shown that the same object with a smaller albedo value exhibits fewer interreflections; this is due to the non-linearity of the m-bounced light with respect to the surface albedo. However, the authors did not explore this non-linearity further to remove the interreflections. Instead, they claimed that the most productive approach to recover the object shape is to construct a dictionary of the most common generic interreflections, and to use this dictionary to recognize and estimate surface shape.

[2, 3, 7, 9, 6] exploited the color bleeding effect that exists in the light interreflections and used this cue to separate the interreflections from the direct lighting. Their solution is quite limited, since several assumptions have to be made about the scene for their algorithms to be applicable. For example, the scene has to consist of independent convex surfaces of different colors, the illumination needs to be isotropic and diffuse, and there must exist at least one point on each surface where the interreflections are negligible.

Nayar et al. [15] addressed the problem of interreflections by iteratively refining the shape and reflectance. The iteration starts with the erroneous (pseudo) estimates of shape and reflectance in the presence of the interreflections. The interreflections are simulated based on the recovered shape and reflectance, and are compensated for to compute the shape and reflectance in the next iteration. This method implicitly removes the impact of the interreflections. In their follow-up work [14], Nayar et al. extended the algorithm to colored and multi-colored surfaces: observing that the reflectance of a colored surface point depends on the incident light spectrum, the algorithm is applied to the three bands of color images independently. The potential problem of their method is that it cannot handle occlusions. The pseudo shape is only reconstructed from the visible part of the scene. Therefore, those occluded regions, which could contribute to the light interreflections, are not modeled in the pseudo shape. A bad initial guess of the scene geometry could prevent the algorithm from converging.

Other methods have been proposed to explicitly separate the direct and indirect lightings. Seitz et al. [17] proved that there exists an inverse light transport operator that can separate m-bounced light in a scene of arbitrary BRDF; based on the Lambertian surface assumption, they proposed a practical method to estimate this operator. Nayar et al. [16] presented a method to separate direct lighting from global lighting for complex scenes using high-frequency lighting. The scenes can include complex interreflections, subsurface scattering and volumetric scattering. The idea is to occlude a small region of the scene from the light source while keeping the other parts illuminated; the measured intensities of the occluded region are then solely due to the effect of global illumination. When all the points in the scene have been occluded once, the measurements are composed to form a global illumination map, which is subtracted from the image without occlusion to get the direct illumination image. Notice that most separation algorithms require many more images and very controlled illumination, although some of them can deal with arbitrary BRDFs and several variants described in [16] can reduce the number of required images.

3. Our Method

It is known that the intensity of a point in the scene is the sum of m-bounced light rays (m = 1, 2, 3, ...). Ruled by the law of diffuse reflection, whenever a light ray is bounced by a Lambertian surface, its intensity is attenuated by the factor ρ. Based on this observation, the measured intensity value can be written as a polynomial function of ρ, in which the intensity contribution of the m-bounced light rays is expressed by the term ρ^m. With only one measurement of the surface intensity, we cannot separate the m-bounced light intensity from the others.

However, if we change the albedo ρ to, say, half of its value, while keeping all the other factors that could impact the intensity unchanged, the 1-bounced light intensity becomes a half, the 2-bounced light intensity becomes a quarter, and the 3-bounced light intensity is only one eighth of its previous value. Given the non-linearity of the intensity changes, we can separate the m-bounced light. If the final measured surface intensity is composed of up to m-bounced light rays, we need to capture m images under m different albedo values.

Changing the albedo of the object surfaces is not an easy task unless we can alter their color or gray level. Fortunately, we can achieve this by altering the light color instead of the surface color, since the albedo value of a colored surface depends on the incident light color [14]. It should be noted that, when we illuminate the scene with different light colors, we need to make sure that the light source energy is equivalent, so that the cause of the differences in measured surface intensity is the surface albedo change. Once the m-bounced light rays are separated, we can run the photometric stereo algorithm on the 1-bounced light (direct lighting) images to produce the 3D shape without the impact of interreflections.

Unlike the methods described in [16, 17], our method requires a Lambertian surface of uniform color; hence, it is particularly useful for the reconstruction of this kind of surface. In many applications, such as 3D teeth reconstruction, we can easily meet these requirements by spray-painting the object. Recently, Johnson et al. [11] proposed a novel device in which a reflective skin is used to cover an object's surface. This material provides a Lambertian surface whose BRDF matches the requirements of our method. However, the results in that paper show that the device is mainly used to reconstruct planar surfaces; it could be difficult to reconstruct surfaces with a large depth variation, such as the teeth model shown in the experiments section.

In section 3.1, we first briefly introduce the photometric stereo algorithm. We then derive the polynomial function of ρ in section 3.2 and discuss the lighting separation in section 3.3. Finally, we describe the practical issues of changing the surface albedo using different illumination colors in section 3.4.

3.1. Photometric Stereo

Similar to most photometric stereo algorithms, ours is based on the one proposed in [18]. Assume the surface point x is illuminated by three different directional lights, and the recorded intensity values are N_1(x), N_2(x) and N_3(x). Given that the orientation of light i is (n_{i1}, n_{i2}, n_{i3}), we have

\begin{bmatrix} N_1(x) \\ N_2(x) \\ N_3(x) \end{bmatrix} = \rho(x) \begin{bmatrix} n_{11} & n_{12} & n_{13} \\ n_{21} & n_{22} & n_{23} \\ n_{31} & n_{32} & n_{33} \end{bmatrix} n(x)    (1)

We invert the 3x3 light direction matrix to solve for ρ(x)n(x), which is normalized to get the surface normal n(x). Once the surface normal (u, v, w) is defined at each pixel, we reconstruct the depth map z by minimizing the following quadratic error function:

E(z) = \sum_{x,y} \left( \frac{\partial z}{\partial x} w - u \right)^2 + \left( \frac{\partial z}{\partial y} w - v \right)^2    (2)
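Equation (1) is a per-pixel 3x3 linear solve, so the whole normal-recovery step is a few lines of linear algebra. Below is a minimal sketch in Python/NumPy; the function name, array shapes, and variable names are our own assumptions for illustration, not from the paper.

```python
import numpy as np

def photometric_stereo(N_imgs, L):
    """Recover per-pixel albedo and unit normals from three images
    captured under known directional lights, following equation (1).

    N_imgs : (3, H, W) array of intensities N_1, N_2, N_3.
    L      : (3, 3) array; row i is the light direction (n_i1, n_i2, n_i3).
    """
    H, W = N_imgs.shape[1:]
    b = N_imgs.reshape(3, -1)            # one column of intensities per pixel
    g = np.linalg.inv(L) @ b             # g = rho(x) * n(x) for every pixel
    rho = np.linalg.norm(g, axis=0)      # albedo is the magnitude of g
    n = g / np.maximum(rho, 1e-12)       # normalize to get the unit normal
    return rho.reshape(H, W), n.T.reshape(H, W, 3)
```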
3.2. Interreflection Model

We briefly introduce the interreflection model here and refer interested readers to [13] for more details. Let H(x) denote the irradiance from the light source toward the surface point x, and assume the surface has constant reflectance (albedo) ρ. Let the final radiance (including the effects of interreflections) at surface point x be N(x). The interreflection geometry is represented by the kernel K(x, x'):

K(x, x') = \frac{\mathrm{Pos}\left[ n(x) \cdot (x' - x) \right] \; \mathrm{Pos}\left[ n(x') \cdot (x - x') \right]}{\| x' - x \|^{4}}    (3)

where n(x) is the surface normal at point x, and

\mathrm{Pos}[a] = \frac{a + |a|}{2}    (4)

Then the radiance at surface point x is the sum of the direct reflection and the interreflections:

N(x) = \rho H(x) + \rho \int K(x, x')\, N(x')\, dx'    (5)

If we define the iterated kernels K_m in the following way:

K_1 = K    (6)

K_m(x, x') = \int K(x, y)\, K_{m-1}(y, x')\, dy    (7)

the radiance equation can be rewritten as

N(x) = \rho H(x) + \sum_{m=1}^{\infty} \rho^m \int K_m(x, x')\, \rho H(x')\, dx'    (8)

We can think of equation (8) as a polynomial function of ρ; therefore, it can be rewritten as

N(x) = H(x)\,\rho + \sum_{m=2}^{\infty} \left( \int K_{m-1}(x, x')\, H(x')\, dx' \right) \rho^m    (9)

Let us define

C_1(x) = H(x)
C_2(x) = \int K_1(x, x')\, H(x')\, dx'
C_3(x) = \int K_2(x, x')\, H(x')\, dx'
\vdots
C_m(x) = \int K_{m-1}(x, x')\, H(x')\, dx'    (10)

Equation (9) then becomes

N(x) = \sum_{m=1}^{\infty} C_m(x)\, \rho^m    (11)

It is easy to see that C_m(x)ρ^m is the total energy of those light rays that are bounced m times; each time they are reflected by the surface, their energy is decreased by the factor ρ. C_m(x) encodes the surface geometry and the light source energy, and it does not change as long as the light and the surface are unchanged.

3.3. Separation of m-bounced Light

In most cases, the 2-bounced model is accurate enough to approximate the interreflection; thus,

N(x) = C_1(x)\rho + C_2(x)\rho^2    (12)

If we measure N(x) twice with fixed light and surface but varying surface albedo, we get two equations:

N_1(x) = C_1(x)\rho_1 + C_2(x)\rho_1^2
N_2(x) = C_1(x)\rho_2 + C_2(x)\rho_2^2    (13)

These linear equations can be written as a matrix-vector multiplication:

\begin{bmatrix} N_1(x) \\ N_2(x) \end{bmatrix} = \begin{bmatrix} \rho_1 & \rho_1^2 \\ \rho_2 & \rho_2^2 \end{bmatrix} \begin{bmatrix} C_1(x) \\ C_2(x) \end{bmatrix}    (14)

Assuming the two albedo values ρ_1 and ρ_2 are known, we can solve for C_1(x) and C_2(x) easily by inverting this two-variable linear system. Then we can separate the direct lighting C_1(x)ρ_1 from the indirect lighting C_2(x)ρ_1^2. Similarly, if the first m bounces of light are needed to accurately approximate the scene interreflection, we need to measure N(x) m times with m different albedo values (ρ_1, ρ_2, ..., ρ_m) while keeping the light and surface unchanged. The following linear system can be built:

\begin{bmatrix} N_1(x) \\ N_2(x) \\ \vdots \\ N_m(x) \end{bmatrix} = \begin{bmatrix} \rho_1 & \rho_1^2 & \cdots & \rho_1^m \\ \rho_2 & \rho_2^2 & \cdots & \rho_2^m \\ \vdots & \vdots & & \vdots \\ \rho_m & \rho_m^2 & \cdots & \rho_m^m \end{bmatrix} \begin{bmatrix} C_1(x) \\ C_2(x) \\ \vdots \\ C_m(x) \end{bmatrix}    (15)

By inverting this equation, we can not only separate the direct lighting from the indirect lighting but also calculate the energy of the light rays that are bounced different numbers of times.

3.4. Changing Light Color

As mentioned earlier, changing the surface albedo involves changing the color or the gray level of the surface, neither of which is easy to accomplish. In addition, in order to keep the pixel correspondences between captured images, we have to keep the relative position between the camera and the scene objects unchanged during the capture process. This requirement makes it even harder to change the surface color or gray level without moving the object. Alternatively, based on the observation that the albedo of a colored surface depends on the incident light spectrum, we can use different light colors to simulate different albedos. However, equation (15) holds only if the light source energy is consistent over all the images, which is usually not the case when the color of the light is changed. Furthermore, in order to solve equation (15), the absolute values of the albedos (ρ_1, ρ_2, ..., ρ_m) are required. We address these two issues in the following subsections. For simplicity, we use the 2-bounced model (equation 13) in the following explanation; it can be easily generalized to the m-bounced model.

3.4.1. Light Intensity Compensation

As in equation (13), C_1(x) and C_2(x) remain constant only when the light source stays constant during the capturing process. If the light color changes, so do C_1(x) and C_2(x). Thus, equation (13) is rewritten as

N_1(x) = C_{11}(x)\rho_1 + C_{12}(x)\rho_1^2
N_2(x) = C_{21}(x)\rho_2 + C_{22}(x)\rho_2^2    (16)

Let us assume the light source intensity for the second image is α times that for the first image. Then, for every surface point x, the irradiance H_2(x) from the light source for the second image is also α times the irradiance H_1(x) for the first image. That is,

H_2(x) = \alpha H_1(x)    (17)

As defined in equation (10), C_i(x) is linearly dependent on the irradiance H(x). Therefore, when the irradiance becomes α times the original value, the coefficient C_i(x) also becomes α times larger. That means

C_{21}(x) = \alpha C_{11}(x)
C_{22}(x) = \alpha C_{12}(x)    (18)

Equation (16) can then be written as

N_1(x) = C_{11}(x)\rho_1 + C_{12}(x)\rho_1^2
N_2(x) = \alpha C_{11}(x)\rho_2 + \alpha C_{12}(x)\rho_2^2    (19)

Now the coefficients of the 1-bounced light and the 2-bounced light are the same except for a scalar α.
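Referring back to equations (14) and (15): if the absolute albedo values were known, the separation would reduce to one small linear solve shared by all pixels. The sketch below shows that idealized case in Python/NumPy (names and shapes are our own assumptions); equation (20) below and section 3.4.2 then replace the absolute albedos with the measurable ratios α and β.

```python
import numpy as np

def separate_bounces(images, albedos):
    """Separate m-bounced light per pixel by inverting equation (15).

    images  : (m, H, W) array; images[i] is captured under albedo albedos[i],
              with identical lighting and geometry.
    albedos : (m,) sequence of the m known (absolute) albedo values.
    Returns an (m, H, W) array B with B[k] = C_{k+1} * albedos[0]**(k+1),
    i.e. the (k+1)-bounced light under the first albedo.
    """
    rho = np.asarray(albedos, dtype=float)
    m = len(rho)
    # Vandermonde-style matrix with rows (rho_i, rho_i^2, ..., rho_i^m).
    A = rho[:, None] ** np.arange(1, m + 1)[None, :]
    C = np.linalg.solve(A, images.reshape(m, -1))      # per-pixel C_1..C_m
    B = C * (rho[0] ** np.arange(1, m + 1))[:, None]   # k-bounced light at rho_1
    return B.reshape(images.shape)
```

As the text notes, the conditioning of A degrades when the albedo values are close to each other, which is exactly the ill-conditioning issue discussed at the end of section 3.4.2.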

If we move the α to the left side of the equation and remove the first subscript of C, we can build a similar linear system:

N_1(x) = C_1(x)\rho_1 + C_2(x)\rho_1^2
N_2(x)/\alpha = C_1(x)\rho_2 + C_2(x)\rho_2^2    (20)

The ratio of the light source intensities can be easily calibrated using a gray convex object that has constant albedo for any light spectrum. We used a ping-pong ball painted with a gray material.

Figure 2. Left image: a sphere of the same material as the scene object. Middle image: the sphere illuminated by green light (I_2(x)). Right image: the sphere illuminated by blue light (I_1(x)).

3.4.2. Albedo Ratio

Once the ratio of the light source intensities is calibrated, we can solve for the coefficients by inverting equation (20). Here, we have assumed the absolute values of the surface albedos ρ_1 and ρ_2 are known. However, measuring the absolute value of an albedo is much harder than it looks. The following equation computes the albedo of a surface point x, derived from the law of diffuse reflection:

\rho(x) = \frac{I(x)}{H(x) \cos\theta}    (21)

I(x) is the intensity of point x recorded by the camera, H(x) is the irradiance from the light source, and θ is the angle between the incident light and the surface normal. The object used to compute the albedo should be made of the same material as the object under consideration, and it should be convex to avoid any interreflections. Both the direction of the incident light and the direction of the surface normal need to be estimated in order to get the incident angle θ. Moreover, it is very difficult to accurately measure the irradiance, because it involves placing either a light meter or an object of known albedo and surface normal at exactly the same position as the surface point x, given that the irradiance varies across 3D space. Besides the tricks needed to ensure the same position, a light meter is usually designed to measure the irradiance of an area instead of a point, which means the metering result is actually the average over the surrounding area of x. The second option requires prior knowledge of the surface albedo of a second object, which is exactly what we are trying to solve for here: a chicken-and-egg problem.

Fortunately, the ultimate goal is to compute the values of C_1(x)ρ_1 and C_2(x)ρ_1^2 in equation (20); the values of C_1(x) and C_2(x) are just intermediates for this purpose. Therefore, we can take C_1(x)ρ_1 and C_2(x)ρ_1^2 as the unknown variables, and only measure the ratio of the surface albedos under the different lightings. Let us assume

\frac{\rho_2}{\rho_1} = \beta    (22)

If we replace ρ_2 in equation (20) with βρ_1, we get

N_1(x) = C_1(x)\rho_1 + C_2(x)\rho_1^2
N_2(x)/\alpha = \beta C_1(x)\rho_1 + \beta^2 C_2(x)\rho_1^2    (23)

Taking C_1(x)ρ_1 and C_2(x)ρ_1^2 as the unknown variables, which are the 1-bounced light and the 2-bounced light, the linear system is as follows:

\begin{bmatrix} N_1(x) \\ N_2(x)/\alpha \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ \beta & \beta^2 \end{bmatrix} \begin{bmatrix} C_1(x)\rho_1 \\ C_2(x)\rho_1^2 \end{bmatrix}    (24)

Denoting the 1-bounced light intensity C_1(x)ρ_1 as L_1(x) and the 2-bounced light intensity C_2(x)ρ_1^2 as L_2(x), the final equation that separates L_1(x) and L_2(x) is

\begin{bmatrix} L_1(x) \\ L_2(x) \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ \beta & \beta^2 \end{bmatrix}^{-1} \begin{bmatrix} N_1(x) \\ N_2(x)/\alpha \end{bmatrix}    (25)

Since this generalizes easily to the m-bounced cases, we do not write down those equations here.

Measuring the relative surface albedo is much easier than measuring the absolute value. According to equation (21), the ratio between the albedo values at surface point x is

\frac{\rho_2(x)}{\rho_1(x)} = \frac{I_2(x)}{I_1(x)} \cdot \frac{H_1(x)}{H_2(x)}    (26)

H_1(x)/H_2(x) is the ratio of the irradiance from the light source, which can be calibrated using a grayscale sphere as described previously, while I_2(x)/I_1(x) requires a non-concave object made of the same material as the scene objects. As shown in figure 2, we use a blue sphere to measure the ratio of the surface albedo values under blue and green illumination respectively.

So far, our use of the spectrum-dependent albedo rests on the assumption that the surface albedo is not constant across the entire spectrum (e.g., the object is not gray). As long as its spectrum is not flat, we can use narrow-band light to select different albedos (or even paint the object, as we did in figure 10). Since we are inverting a matrix filled with albedo ratios, the matrix becomes ill-conditioned when the ratios are close to 1. We performed a simple experiment on synthetic data with arbitrary albedo ratios to examine this.

Figure 3. Error increases as the ratio of albedo values approaches 1.

The results in figure 3 show that the average intensity error between our computed direct lighting and the ground truth is under 3% when the ratios are outside a small interval around 1. As the ratio approaches 1, the error rises quickly due to noise and quantization. The experiment shows that the ratio should not be too close to 1, but our approach is robust for a wide range of ratios.

4. Experiment

Our algorithm is tested on both synthetic and real data sets. The synthetic data was generated by simulating the light interreflections using the radiosity algorithm provided by 3DS MAX. We set up three directional lights with equivalent energy as the light sources of photometric stereo. Then we built a simple scene with 3 perpendicular walls, as shown in figure 4.

Figure 4. Interreflection synthesis. The left image shows the setup in 3DS MAX. Light is bounced at most twice before entering the camera. In the right images, the albedo value of the right image is half of that of the middle image.

The radiosity engine takes into consideration the geometric complexity of the scene and automatically estimates how many iterations are needed to produce realistic results. The 3-wall scene is so simple that it only needs 2 iterations to generate realistic results, which means that the light is bounced at most twice before entering the camera. Thus, the 2-bounced model will separate the direct and indirect lighting. As in figure 4, we rendered the scene with two different surface albedo values while keeping all other parameters unchanged. The 2-bounced separation equation is applied to these synthetic images, and the separated lightings are shown in figure 5.

Figure 5. The separation of 1-bounced lighting and 2-bounced lighting for the 3-wall scene. The left image is the 1-bounced lighting and the right image is the 2-bounced lighting. Note that the 2-bounced image intensity is scaled up for illustration; the real 2-bounced image is much darker.

We turned off the radiosity engine and generated the image with only direct lighting as the ground truth, and compared the separated direct lighting image with it. The average intensity of the ground truth image is 94, and the average intensity difference between our separation result and the ground truth is 0.76, which is 0.8% of the intensity level. This demonstrates that our algorithm indeed works. The error may result from the round-off error of the image intensity and the imperfection of the radiosity simulation. The photometric stereo algorithm is applied to the images before and after interreflection removal to recover the surface normals, and the results are compared to the ground-truth normal map. The mean orientation error is 7.88 degrees before the interreflection removal; after removing the interreflections, it is reduced to 1.11 degrees.

We also synthesized the interreflections on a more complicated scene, so that more iterations are required to generate the final image (figure 6). For this scene, the 4-bounced model is needed to accurately separate the different lightings. Thus, we generated 4 images under 4 different albedo values.

Figure 6. A more complicated scene. We used the 4-bounced model to separate the lightings. 4 different albedo values are assigned to the scene objects, which are rendered under the same lighting condition. The albedo values for the above images are 0.78, 0.59, 0.39 and 0.2 respectively.

The resulting direct lighting is compared to the ground truth, with a mean error of 1.57, which is 1.4% of the intensity level of 109. We again recovered the surface normals using photometric stereo and compared the results to the ground truth. The mean normal error is reduced to 1.5 degrees after the interreflection removal.
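The mean orientation error used throughout this section is simply the average angle between corresponding estimated and ground-truth normals. A small sketch of one way to compute it, assuming (H, W, 3) unit-normal maps:

```python
import numpy as np

def mean_angular_error_deg(n_est, n_gt):
    """Mean angle, in degrees, between estimated and ground-truth normals.

    n_est, n_gt : (H, W, 3) arrays of unit surface normals.
    """
    dot = np.clip(np.sum(n_est * n_gt, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(dot)).mean()
```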

Figure 7. Comparison of reconstructed shapes. These are different views of the reconstructed angle. The red one is the ground-truth shape obtained by a laser scanner. The green one is constructed from the images after our separation, with a mean orientation error of 2.67 degrees. The cyan one is built from the directly captured images, without the lighting separation.

Figure 8. The hemisphere used for reconstruction. The normal map is recovered only for the circled part (right image).

Real Data Sets

For the real data sets, we set up the photometric stereo experiment with 3 projectors and a DSLR camera. The projectors are positioned 1.5 meters away from the scene to simulate directional lights. The directions of the lightings are calibrated using a mirror sphere. As Funt et al. [6] pointed out, the 2-bounce model of interreflections is a good approximation in many situations; thus, our tests on real objects are all based on the 2-bounced model.

In figure 1, we made a near-perpendicular angle of black paper board and painted it with a blue diffuse material. Since the paint is not 100% opaque, a black board is necessary to guarantee that all the light reflected by the angle is reflected by the paint material. In order to simulate different surface albedo values, both green and blue light are projected onto the angle by the projectors. Using the calibration methods described in the previous sections, we measured the ratio of the illumination intensity and the ratio of the surface albedo between the green and blue lights for the middle light (light 2), which are 3.44 and 0.8 respectively. With these two parameters calculated, we can build the 2-bounced model to accomplish the separation. The surface normals are recovered using photometric stereo and the shape is integrated from the resulting normal map. Figure 7 shows the reconstructed shape of the angle with and without the interreflection removal. It can be seen that after removing the interreflections, the reconstructed shape is well aligned with the ground truth. We also compared the recovered surface normals to the ground truth: the mean orientation error is 2.67 degrees, a large improvement over the error before the interreflection removal, and comparable to the error of 2.5 degrees reported by Nayar's method [15] on a similar angle.

We applied a similar approach to another common shape that produces interreflections: an inner hemisphere, shown in figure 8. The normal map is recovered only for the circled part of the hemisphere (figure 8, right image), as this is the area that can be illuminated by all three lights in our setup. The 2-bounced model is employed to separate the direct and indirect lighting.

Figure 9. The top image shows the reconstructed shape after interreflection removal, with a mean orientation error of 3.2 degrees. The bottom image shows the shape before interreflection removal, with a mean orientation error of 13.6 degrees. The red shape is the ground truth.

Figure 9 compares the reconstructed shapes against the ground truth. The average orientation error is 3.2 degrees, compared to 13.6 degrees before the interreflection removal.

Table 1. Comparison of mean normal errors (degrees).

                        with interreflection    without interreflection
    Synth 1                    7.88                     1.11
    Synth 2                     —                       1.5
    Real: Angle                 —                       2.67
    Real: Hemisphere          13.6                      3.2

The mean orientation errors of all the above test cases are summarized in table 1 for easy reference.

Finally, the lighting separation is applied to a more geometrically complex object. The reconstructed shape is compared to the ground truth in figure 10. It is obvious that the reconstructed shape before interreflection removal is smoother at concave angles. The marked regions (shown in orange ellipses in figure 10) illustrate that the reconstruction with interreflections is shallower than the one from our method. This result is consistent with the observation in [15]. The ICP algorithm [1] reports an alignment error of 0.20 mm after the interreflection removal, compared to 0.39 mm before.

Figure 10. Comparison of teeth reconstructions. The red model on the left is the ground truth from a laser scanner. The green reconstructed surface (middle column) is after interreflection removal, and the cyan one (right column) is before interreflection removal. The marked parts show the impact of the interreflections.
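The shapes above are obtained by integrating the recovered normal maps, i.e., by minimizing equation (2). One standard way to do this is the Frankot-Chellappa method, which solves the least-squares problem in the Fourier domain; below is a compact sketch under the sign convention of equation (2) (z_x = u/w), with periodic boundaries assumed. The paper does not specify its solver, so this is only one possible implementation.

```python
import numpy as np

def integrate_normals(n_map):
    """Recover a depth map z (up to a constant offset) from a unit-normal
    map by least-squares fitting the gradients z_x = u/w, z_y = v/w.

    n_map : (H, W, 3) array of normals (u, v, w).
    """
    u, v, w = n_map[..., 0], n_map[..., 1], n_map[..., 2]
    p, q = u / w, v / w                       # target depth gradients
    H, W = p.shape
    wx = 2 * np.pi * np.fft.fftfreq(W)
    wy = 2 * np.pi * np.fft.fftfreq(H)
    WX, WY = np.meshgrid(wx, wy)
    denom = WX ** 2 + WY ** 2
    denom[0, 0] = 1.0                         # avoid dividing by zero at DC
    Z = (-1j * WX * np.fft.fft2(p) - 1j * WY * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                             # depth is defined up to an offset
    return np.real(np.fft.ifft2(Z))
```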

Regarding the issue of image noise, we typically take many images to suppress its impact. In our experiments, the 1-bounced light is typically at a level of 100 in terms of pixel value and the 2-bounced light at a level of 10, both sufficiently above the image noise level. In addition, only the 1-bounced light is needed for photometric stereo, which is the driving application of this paper. For higher-order bounces, we would probably need HDR techniques to obtain good measurements.

5. Conclusion

We present a novel method to separate the m-bounced light in a photometric stereo setup, thus removing the impact of interreflections on the shape recovery process. The separation is accomplished by using the different surface albedo values under different light colors. Compared to previous methods that address the issue of interreflections, our approach is much more practical in terms of quality and efficiency. We do not require an initial guess of the scene shape, enabling our method to deal with invisible surfaces, and we put almost no constraints on the light source: it can be a single light or multiple lights, point lights or area lights. In most cases, only two images are enough to produce an accurate lighting separation, in contrast to separation algorithms that usually require a large number of images. Although we tested the effectiveness of our method only with photometric stereo, our lighting separation algorithm can be applied in any other shape-from-intensity method that assumes a constant-albedo Lambertian surface.

Acknowledgement

This work is supported in part by the University of Kentucky Research Foundation and US National Science Foundation awards IIS, CPA, and MRI.

References

[1] P. J. Besl and N. D. McKay. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992.
[2] M. S. Drew and B. V. Funt. Calculating surface reflectance using a single-bounce model of mutual reflection. International Conference on Computer Vision, 1990.
[3] M. S. Drew and B. V. Funt. Variational approach to interreflection in color images. Journal of the Optical Society of America A, 1992.
[4] D. Forsyth and A. Zisserman. Mutual illumination. Computer Vision and Pattern Recognition, 1989.
[5] D. Forsyth and A. Zisserman. Shape from shading in the light of mutual illumination. Image and Vision Computing, 1990.
[6] B. V. Funt and M. S. Drew. Color space analysis of mutual illumination. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993.
[7] B. V. Funt, M. S. Drew, and J. Ho. Color constancy from mutual reflection. International Journal of Computer Vision, 1991.
[8] C. Hernández, G. Vogiatzis, G. J. Brostow, B. Stenger, and R. Cipolla. Non-rigid photometric stereo with colored lights. In Proc. of ICCV, pages 1-8, 2007.
[9] J. Ho, B. V. Funt, and M. S. Drew. Separating a color signal into illumination and surface reflectance components: Theory and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990.
[10] B. K. Horn, R. J. Woodham, and W. M. Silver. Determining shape and reflectance using multiple images. MIT AI Memo, 1978.
[11] M. K. Johnson and E. H. Adelson. Retrographic sensing for the measurement of surface texture and shape. Conference on Computer Vision and Pattern Recognition, 2009.
[12] H. Kim, B. Wilburn, and M. Ben-Ezra. Photometric stereo for dynamic surface orientations. European Conference on Computer Vision, 2010.
[13] J. Koenderink and A. van Doorn. Geometrical modes as a general method to treat diffuse interreflections in radiometry. Journal of the Optical Society of America, 73(6), 1983.
[14] S. Nayar and Y. Gong. Colored interreflections and shape recovery. In DARPA Image Understanding Workshop (IUW), 1992.
[15] S. Nayar, K. Ikeuchi, and T. Kanade. Shape from interreflections. International Journal of Computer Vision, 1991.
[16] S. Nayar, G. Krishnan, M. Grossberg, and R. Raskar. Fast separation of direct and global components of a scene using high frequency illumination. In Proceedings of ACM SIGGRAPH, 2006.
[17] S. Seitz, Y. Matsushita, and K. Kutulakos. A theory of inverse light transport. In International Conference on Computer Vision, 2005.
[18] R. Woodham. Photometric method for determining surface orientation from multiple images. Optical Engineering, 1980.


More information

Processing 3D Surface Data

Processing 3D Surface Data Processing 3D Surface Data Computer Animation and Visualisation Lecture 12 Institute for Perception, Action & Behaviour School of Informatics 3D Surfaces 1 3D surface data... where from? Iso-surfacing

More information

Chapter 7. Conclusions and Future Work

Chapter 7. Conclusions and Future Work Chapter 7 Conclusions and Future Work In this dissertation, we have presented a new way of analyzing a basic building block in computer graphics rendering algorithms the computational interaction between

More information

Radiance. Radiance properties. Radiance properties. Computer Graphics (Fall 2008)

Radiance. Radiance properties. Radiance properties. Computer Graphics (Fall 2008) Computer Graphics (Fall 2008) COMS 4160, Lecture 19: Illumination and Shading 2 http://www.cs.columbia.edu/~cs4160 Radiance Power per unit projected area perpendicular to the ray per unit solid angle in

More information

CS5670: Computer Vision

CS5670: Computer Vision CS5670: Computer Vision Noah Snavely Light & Perception Announcements Quiz on Tuesday Project 3 code due Monday, April 17, by 11:59pm artifact due Wednesday, April 19, by 11:59pm Can we determine shape

More information

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA

More information

Other Reconstruction Techniques

Other Reconstruction Techniques Other Reconstruction Techniques Ruigang Yang CS 684 CS 684 Spring 2004 1 Taxonomy of Range Sensing From Brain Curless, SIGGRAPH 00 Lecture notes CS 684 Spring 2004 2 Taxonomy of Range Scanning (cont.)

More information

The Rendering Equation. Computer Graphics CMU /15-662

The Rendering Equation. Computer Graphics CMU /15-662 The Rendering Equation Computer Graphics CMU 15-462/15-662 Review: What is radiance? Radiance at point p in direction N is radiant energy ( #hits ) per unit time, per solid angle, per unit area perpendicular

More information

Stereo and Epipolar geometry

Stereo and Epipolar geometry Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka

More information

Specular Reflection Separation using Dark Channel Prior

Specular Reflection Separation using Dark Channel Prior 2013 IEEE Conference on Computer Vision and Pattern Recognition Specular Reflection Separation using Dark Channel Prior Hyeongwoo Kim KAIST hyeongwoo.kim@kaist.ac.kr Hailin Jin Adobe Research hljin@adobe.com

More information

Integrated three-dimensional reconstruction using reflectance fields

Integrated three-dimensional reconstruction using reflectance fields www.ijcsi.org 32 Integrated three-dimensional reconstruction using reflectance fields Maria-Luisa Rosas 1 and Miguel-Octavio Arias 2 1,2 Computer Science Department, National Institute of Astrophysics,

More information

Simultaneous surface texture classification and illumination tilt angle prediction

Simultaneous surface texture classification and illumination tilt angle prediction Simultaneous surface texture classification and illumination tilt angle prediction X. Lladó, A. Oliver, M. Petrou, J. Freixenet, and J. Martí Computer Vision and Robotics Group - IIiA. University of Girona

More information

A General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras

A General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras A General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras Zhengyou Zhang* ATR Human Information Processing Res. Lab. 2-2 Hikari-dai, Seika-cho, Soraku-gun Kyoto 619-02 Japan

More information

CHAPTER 9. Classification Scheme Using Modified Photometric. Stereo and 2D Spectra Comparison

CHAPTER 9. Classification Scheme Using Modified Photometric. Stereo and 2D Spectra Comparison CHAPTER 9 Classification Scheme Using Modified Photometric Stereo and 2D Spectra Comparison 9.1. Introduction In Chapter 8, even we combine more feature spaces and more feature generators, we note that

More information

Global Illumination and the Rendering Equation

Global Illumination and the Rendering Equation CS294-13: Special Topics Lecture #3 Advanced Computer Graphics University of California, Berkeley Handout Date??? Global Illumination and the Rendering Equation Lecture #3: Wednesday, 9 September 2009

More information

Inverse Light Transport (and next Separation of Global and Direct Illumination)

Inverse Light Transport (and next Separation of Global and Direct Illumination) Inverse ight Transport (and next Separation of Global and Direct Illumination) CS434 Daniel G. Aliaga Department of Computer Science Purdue University Inverse ight Transport ight Transport Model transfer

More information

Chapter 5. Projections and Rendering

Chapter 5. Projections and Rendering Chapter 5 Projections and Rendering Topics: Perspective Projections The rendering pipeline In order to view manipulate and view a graphics object we must find ways of storing it a computer-compatible way.

More information

3D and Appearance Modeling from Images

3D and Appearance Modeling from Images 3D and Appearance Modeling from Images Peter Sturm 1,Amaël Delaunoy 1, Pau Gargallo 2, Emmanuel Prados 1, and Kuk-Jin Yoon 3 1 INRIA and Laboratoire Jean Kuntzmann, Grenoble, France 2 Barcelona Media,

More information

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching

More information

Recovery of Fingerprints using Photometric Stereo

Recovery of Fingerprints using Photometric Stereo Recovery of Fingerprints using Photometric Stereo G. McGunnigle and M.J. Chantler Department of Computing and Electrical Engineering Heriot Watt University Riccarton Edinburgh EH14 4AS United Kingdom gmg@cee.hw.ac.uk

More information

Occlusion Detection of Real Objects using Contour Based Stereo Matching

Occlusion Detection of Real Objects using Contour Based Stereo Matching Occlusion Detection of Real Objects using Contour Based Stereo Matching Kenichi Hayashi, Hirokazu Kato, Shogo Nishida Graduate School of Engineering Science, Osaka University,1-3 Machikaneyama-cho, Toyonaka,

More information

3D Photography: Active Ranging, Structured Light, ICP

3D Photography: Active Ranging, Structured Light, ICP 3D Photography: Active Ranging, Structured Light, ICP Kalin Kolev, Marc Pollefeys Spring 2013 http://cvg.ethz.ch/teaching/2013spring/3dphoto/ Schedule (tentative) Feb 18 Feb 25 Mar 4 Mar 11 Mar 18 Mar

More information

Announcements. Radiometry and Sources, Shadows, and Shading

Announcements. Radiometry and Sources, Shadows, and Shading Announcements Radiometry and Sources, Shadows, and Shading CSE 252A Lecture 6 Instructor office hours This week only: Thursday, 3:45 PM-4:45 PM Tuesdays 6:30 PM-7:30 PM Library (for now) Homework 1 is

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

Passive 3D Photography

Passive 3D Photography SIGGRAPH 99 Course on 3D Photography Passive 3D Photography Steve Seitz Carnegie Mellon University http:// ://www.cs.cmu.edu/~seitz Talk Outline. Visual Cues 2. Classical Vision Algorithms 3. State of

More information