Towards Outdoor Photometric Stereo


Lap-Fai Yu¹  Sai-Kit Yeung²  Yu-Wing Tai³  Demetri Terzopoulos¹  Tony F. Chan⁴
craigyu@ucla.edu  saikit@sutd.edu.sg  yuwing@kaist.ac.kr  dt@cs.ucla.edu  tonyfchan@ust.hk
¹University of California, Los Angeles  ²Singapore University of Technology and Design
³Korea Advanced Institute of Science and Technology  ⁴Hong Kong University of Science and Technology

Abstract

This paper presents a framework for performing outdoor photometric stereo by utilizing environment light. We present the main considerations in designing the framework and the steps of its processing pipeline. Our framework extends existing algorithms to achieve the robustness and flexibility necessary to operate outside a laboratory environment. We verify our environment light photometric stereo framework using both synthetic and real-world examples. Our experimental results are promising, even for objects captured in outdoor environments with very complicated natural lighting.

1. Introduction

Photometric stereo is a useful technique for capturing surface details by estimating pixel-dense surface normals. In the conventional photometric stereo setting, multiple images of an object are captured under different lighting conditions. By analyzing the change of shading across the images, surface normals can be estimated by fitting reflectance models [23, 11, 7, 22, 20, 3, 4, 21, 14, 29, 6] to the observed images. In the majority of previous papers, photometric stereo data was captured in a dark room with a single directional/point light source per input image. Since their focus was on technical challenges such as reflectance model assumptions, self-shadows, specularities, and outliers, experiments were usually conducted in a well-controlled laboratory environment, which provided fair evaluation while minimizing uncertainties as much as possible.
This paper aims to bring photometric stereo out of the laboratory by providing a processing framework that extends the basic Lambertian surface model from a single light source in a laboratory to environment light in outdoor scenes. Although the Lambertian model is the simplest model, previous works [28, 27, 25] have demonstrated high-quality normal estimation even for materials that do not fully satisfy the Lambertian assumption. (Part of this work was done while Lap-Fai Yu was a visiting student at SUTD.)

Figure 1. HORSE (SUNLIGHT). Top row: captured environment light. Middle row: example input images. Bottom row: the left two images are the normal maps N displayed as N·L for two different lighting directions L; the third image shows the color-coded normal map; the fourth and fifth images show two views of the reconstructed surface. Note the variation of the input images under different environment light conditions.

We describe our processing pipeline step by step, including our acquisition system, pre-processing, main algorithm, and post-processing, which together lead to high-quality normal estimation in natural environments. We also describe the implementation considerations behind each step. Figure 1 shows an example of our input images, the estimated normal map, and the reconstructed 3D surface. A mirror sphere is used to measure the environment map of incoming light. By exploiting the variation of the input images under different environment light, we can estimate the surface normals using our environment light photometric stereo model. To handle outliers such as shadows, highlights, and/or small misalignment errors across the input images, we apply low-rank matrix completion [13] to pre-process the input images.
Our normal estimation method is then formulated as an alternating optimization that alternates between estimating the normals and estimating the environment light that contributes to the intensity of each pixel. Finally, total variation regularization [18, 5] is applied to refine the estimated normals using spatial support.
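A minimal sketch of such a TV-based refinement is shown below. It performs gradient descent on a smoothed vectorial-TV energy rather than the exact dual minimization of [5], and the parameter values are illustrative choices of ours:

```python
import numpy as np

def tv_refine(N0, lam=0.5, step=0.02, iters=300, eps=1e-2):
    """Refine a noisy normal map N0 of shape (H, W, 3) by gradient
    descent on ||X - N0||^2 + lam * TV(X), with a smoothed vectorial
    TV term (channels coupled inside the square root)."""
    X = N0.copy()
    for _ in range(iters):
        gx = np.diff(X, axis=1, append=X[:, -1:])   # forward differences
        gy = np.diff(X, axis=0, append=X[-1:, :])
        mag = np.sqrt((gx**2 + gy**2).sum(axis=2, keepdims=True) + eps)
        px, py = gx / mag, gy / mag
        # divergence of the normalized gradient field (backward diffs)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        X -= step * (2.0 * (X - N0) - lam * div)    # descent step
    return X / np.linalg.norm(X, axis=2, keepdims=True)   # unit length
```

The data term keeps the result close to the per-pixel estimates while the TV term suppresses noise without blurring sharp normal discontinuities; the output is re-normalized to unit length.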

Method | Surface Assumption | Calibration Object | Capturing Environment | #Images
Ours | Lambertian; varying albedo | mirror sphere | natural illumination | 6-10 (theoretical: 4)
[12] | Lambertian; uniform albedo (painted to match calibration sphere) | same-material sphere | natural illumination | 1
[4] | Lambertian; varying albedo | none | dark room; general unknown lighting, fixed intensity | (theoretical: 27)
[1] | mixtures of fundamental materials | none | natural illumination | >= 20k over a year
[16] | isotropic BRDF | mirror sphere | natural illumination | 1

Table 1. Comparison between our work and previous works.

We demonstrate the effectiveness of our framework through various experiments: different background scenes; an indoor scene with different combinations of lights; and an outdoor scene with moving sunlight.

2. Related Work

There is a vast literature on photometric stereo, with representative works including [23, 24, 11, 7, 20, 3, 28, 27, 4, 25, 19, 21, 14, 29, 6]. Many of these works are based on the Lambertian surface model [23], which describes the observed intensity by a simple dot product between the surface normal and the lighting direction. Since there are three unknowns in this simplest reflectance model, at least three lighting directions are required to solve for the normal direction [23, 24]. When there are more than three input images, the optimal normal direction for each pixel can be obtained by least-squares fitting [28, 27], by robust low-rank minimization [25], or by subspace clustering [21]. In this work, we also assume the surface reflectance satisfies the Lambertian model. A large body of work attempts to relax the Lambertian surface model assumption.
For instance, Tagare and deFigueiredo [22] extended the Lambertian model into an m-lobed reflectance map; Solomon and Ikeuchi [20] used the Torrance-Sparrow model; Hertzmann and Seitz [10] used a reference object of the same material and known geometry to compute surface normals by analogy; Nayar et al. [15] used a hybrid reflectance model (Torrance-Sparrow and Beckmann-Spizzichino); Goldman et al. [9] alternately optimized the shape and the BRDFs, assuming a set of basis materials modeled by the isotropic Ward model; Yeung et al. [29] applied orientation consistency to estimate normals for transparent objects. To the best of our knowledge, only a handful of previous works consider photometric stereo under general/natural lighting. The work in [4] uses low-order spherical harmonics to model general lighting, akin to the environment mapping representation [17]. Their model contains 27 variables for optimization and thus requires significantly more input images than conventional photometric stereo. In contrast to [4], the work in [12] demonstrates a shape-from-shading algorithm under natural illumination. However, its requirement of a calibration sphere with the same BRDF as the captured object limits its practicality. Recent work in [16] relaxes this restriction by replacing the same-material sphere with a mirror sphere to calibrate the incoming light, while the work in [30] utilizes prior depth information obtained from depth cameras to constrain the problem. Yet, these methods still share a common limitation of shape-from-shading: since there is only one input image, the estimated surface normals can easily be biased by outliers. The work most relevant to ours is [1], which captured over twenty thousand outdoor webcam images throughout the year to perform photometric stereo. Its robustness comes from the large amount of data and a smart data selection process. However, capturing such a large amount of data is not an easy task.
Our work aims to provide a practical framework for outdoor photometric stereo that strikes a balance between requiring a large amount of data [4, 1] and requiring an additional calibration object [12, 16]. By using a mirror sphere to calibrate the incoming light, our method requires only around 6 to 10 input images. Compared with previous works, ours is both convenient and accurate. Table 1 summarizes the comparison between our work and the aforementioned previous works.

3. Environment Light Photometric Stereo

In this section, we describe the basic model of our environment light photometric stereo, followed by a description of our experimental setting for data acquisition.

Basic model. In the Lambertian surface model, the intensity of a pixel depends on the lighting direction L, the surface normal N, and the surface albedo ρ (we assume the camera response function is linear):

I(x) = ρ(x) N(x)·L(x), (1)

where I is the captured image and x is the image coordinate. For a common photometric stereo setting, L is assumed to

Figure 2. Left: our simple setup for data acquisition. A mirror sphere is placed near the object to measure the environment map of incoming light. Middle: from the image of the mirror sphere, we estimate the environment map of incoming light using the method in [8]. Right: the intensity of a pixel is the summation of ρ(x) N(x)·(c_i L_i) over light directions. For each pixel, only a portion of the environment light affects its intensity, depending on the normal orientation; we need to estimate the effective light directions that shine on each pixel.

be a distant, directional light source. Thus, L is spatially invariant and can be easily estimated from the input images [19] or from a calibration object [28]. To solve for the surface normal N in Eqn. 1, we capture multiple images, each taken with a different lighting direction. We then have more observations than unknowns in Eqn. 1, and N can be solved effectively using the methods in [23, 28, 25, 21]. Now, suppose we have multiple directional light sources. We can extend Eqn. 1 by summing the contribution of each light source to the pixel's intensity as follows:

I(x) = (1/K) Σ_{i=1}^{K} ρ(x) N(x)·(c_i L_i), (2)

where K is the number of light sources in the scene and c_i is the strength of the light source from direction L_i. In the extreme case where light arrives from all directions in the 3D world, we can still describe the intensity of an image using Eqn. 2 with K approaching infinity. An interesting observation about Eqn. 2 is that the number of unknowns for N remains the same as in Eqn. 1, and Eqn. 2 is still a linear equation when ρ and the L_i are known. This gives us the basic model for our environment light photometric stereo.

Data acquisition. A major component of our environment light photometric stereo problem is the estimation of the light sources L_i in Eqn. 2. In the conventional photometric stereo setting, data is acquired in a darkroom and the lighting directions can be well calibrated with fixed light sources.
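The per-pixel image formation of Eqn. 2 can be sketched as follows. This is a forward-model illustration with names of our choosing; the clamping of back-facing lights is exactly why the effective light directions must later be estimated per pixel:

```python
import numpy as np

def render_pixel(n, albedo, L, c):
    """Intensity of one pixel under K sampled directional lights (Eqn. 2).

    n: (3,) unit surface normal; albedo: scalar ρ(x);
    L: (K, 3) unit lighting directions; c: (K,) light strengths."""
    dots = np.maximum(L @ n, 0.0)        # cosine terms, back faces clamped
    return albedo * (c * dots).sum() / len(c)
```

For a normal facing the camera, only lights in the upper hemisphere around the normal contribute; a light pointing away is clamped to zero contribution.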
In our problem, although we could use the same setting, with multiple light sources shining on the captured object simultaneously, such a data acquisition method would add nothing over the conventional photometric stereo setting. Hence, one of our main contributions is to bring the photometric stereo setting outside the laboratory by using environment light. We use a mirror sphere to capture the strength of the incoming light from all directions in the form of an environment map [8]. Environment maps are commonly used in rendering to simulate the effect of distant light sources shining on object surfaces; we adopt them in our environment light photometric stereo problem for surface normal estimation. Figure 2 shows our experimental setup for data acquisition. After acquiring the environment map, we uniformly sample the lighting directions in the 3D world using an icosahedron with subdivision [2] to obtain densely calibrated directional light sources. We take a local average of the environment map around each lighting direction as the strength of the light source coming from that direction. In our implementation, we uniformly sample 2562 lighting directions on the environment map to approximate the effects of incoming light from all directions. Note that when the dot product between N(x) and L_i is negative, L_i does not contribute to the intensity of I(x) in Eqn. 2. Hence, we also need to estimate the effective lighting directions (Figure 2, right) that contribute to the intensity of each pixel. Details of this process are given in the next section.

4. Normal Estimation Algorithm

In this section, we describe our normal estimation algorithm. We first present our pre-processing based on low-rank matrix completion. Then, we present our method for normal and light contribution refinement.
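The icosahedron subdivision used for direction sampling can be sketched as follows (names are ours). Subdivision level 1 yields 42 near-uniform unit directions and level 4 yields the 2562 directions mentioned above, since each level quadruples the faces and the vertex count follows V = 10·4^k + 2:

```python
import numpy as np

def icosphere(levels):
    """Near-uniform unit directions from a subdivided icosahedron.
    Level 0 has 12 vertices; level 1 has 42, level 4 has 2562."""
    t = (1.0 + 5.0 ** 0.5) / 2.0          # golden-ratio construction
    verts = [np.array(p, float) / np.linalg.norm(p) for p in
             [(-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
              (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
              (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    for _ in range(levels):
        cache, new_faces = {}, []
        def mid(a, b):
            key = (min(a, b), max(a, b))
            if key not in cache:          # each edge midpoint created once
                m = verts[a] + verts[b]
                verts.append(m / np.linalg.norm(m))
                cache[key] = len(verts) - 1
            return cache[key]
        for a, b, c in faces:
            ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
            new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
        faces = new_faces
    return np.array(verts), faces
```

Each returned vertex is a unit direction; the light strength for that direction is then obtained by locally averaging the environment map around it.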
Finally, we describe how total variation regularization [18, 5] is used to post-process the surface normals using spatial support.

Pre-processing via low-rank matrix completion. Our input data contains different sources of errors, e.g., shadows, highlights, and even pixel misalignment across different captures. These errors can easily affect the performance of our algorithm. To handle these outliers, we adopt the low-rank matrix completion technique [13, 25]. We stack our input images as follows:

D = [vec(I_1) ... vec(I_n)], (3)

where vec(I_j) = [I_j(1), ..., I_j(m)]^T for j = 1, ..., n is the vectorized input image, m is the number of pixels in the object mask, and n is the number of input images. Since our environment light model in Eqn. 2 is linear and the span of N is 3, the rank of the matrix D is at most 3. However, due to the various errors, we observe that the rank of D is larger than 3 in our input data. As studied in [25], these errors in photometric stereo are usually sparse. Hence, we can isolate the sparse errors by transforming the problem into matrix rank minimization:

min_{A,E} ||A||_* + λ||E||_1  s.t.  D = A + E, (4)

where A is a rank-3 matrix, E contains the error residuals, ||·||_* and ||·||_1 are the nuclear norm and l1-norm, respectively, and λ > 0 is a weighting parameter. We use the Accelerated

Proximal Gradient method [13] to solve Eqn. 4. The clean low-rank matrix A is solved for each color channel separately and is used as the input data in the subsequent steps.

The use of low-rank matrix completion for pre-processing offers three major advantages. First, by isolating errors such as specular highlights and shadows, our normal estimation algorithm becomes robust. Second, although we physically fix the captured object and camera, small misalignment errors during data capture are inevitable; this pre-processing step makes our method robust to small misalignment errors by repairing mis-aligned pixels with proximal values, which preserves the rank-3 property of matrix A. Third, low-rank matrix completion allows us to relax the strict Lambertian surface requirement compared with the method in [12], as long as the non-Lambertian observations can be factorized into the sparse residual matrix E. Figure 3 compares our results without and with low-rank matrix completion.

Normal refinement using a least-squares solution. In this sub-section, we present our method for normal estimation. We first assume that we know the lighting directions that contribute to the intensity of a pixel; in the next sub-section, we describe how to refine the contribution of each lighting direction given the surface normal. The two steps, normal refinement and lighting contribution refinement, are performed in an alternating optimization fashion. To deal with surface albedo, we follow the procedure in [28]: we choose a denominator image and estimate the surface normals from ratio images:

I_j(x) / I_d(x) = (N · Σ_{i=1}^{K} c_i^j L_i^j) / (N · Σ_{i=1}^{K} c_i^d L_i^d), (5)

where I_d is the denominator image and I_j, j = 1, ..., n−1, are the inputs after low-rank matrix completion. From Eqn. 5, we can rewrite Eqn. 2 as:

A N = 0, (6)

where A = [I_j Σ_{i=1}^{K} c_i^d L_i^d − I_d Σ_{i=1}^{K} c_i^j L_i^j].
The least-squares solution for N can be obtained by singular value decomposition (SVD), which explicitly enforces ||N|| = 1.

Light contribution refinement. Given the estimated normal direction, we now refine the contribution of the light that affects the intensity of each pixel. Without self-occlusion, this can be achieved by fitting the hemisphere of light for which the dot product between the pixel normal and the lighting direction is positive (Figure 2). We propose a simple heuristic to evaluate self-occlusion by comparing the normal directions of neighboring pixels. If the normals of neighboring pixels form a concave shape and the current normal direction is closer to (0, 0, 1)^T, the incoming light from the neighboring directions is likely occluded. Hence, we give a smaller weight to the light from that projected lighting direction. We evaluate this self-occlusion for all directions within a local neighborhood and finally obtain a weighted mask for the contribution of each lighting direction. Note that this weighted mask is a relative contribution of the light from different directions.

Figure 3. Self-comparison of our results without and with low-rank matrix completion.

Figure 4. Self-comparison of our results without and with TV regularization.

We initialize the normal direction and the corresponding hemisphere of environment light using an exhaustive search that minimizes the errors from the input images. The exhaustive search generally gives a good initialization, but it is slow if the search space is large. Therefore, we sample only 42 different normal directions for the exhaustive search initialization, i.e., an icosahedron of subdivision 1. This gives a good balance between accuracy and efficiency in our alternating optimization framework.

Spatial refinement using TV regularization. Up to this point, our normal estimation method processes each pixel individually.
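The per-pixel least-squares solve of Eqn. 6 can be sketched as follows, assuming the per-image effective light sums S_j = Σ_i c_i^j L_i^j are given (in the full algorithm they come from the light contribution refinement above; function and variable names are ours):

```python
import numpy as np

def solve_normal(I, S, d=0):
    """Per-pixel normal from ratio images (Eqns. 5-6).

    I: (n,) pixel intensities after low-rank completion, one per image.
    S: (n, 3) effective light sums, S[j] = sum_i c_i^j L_i^j.
    d: index of the denominator image."""
    rows = [I[j] * S[d] - I[d] * S[j]          # Eqn. 6: each row is
            for j in range(len(I)) if j != d]  # orthogonal to the normal
    _, _, Vt = np.linalg.svd(np.stack(rows))
    n = Vt[-1]                                 # null-space direction
    return n if n[2] >= 0 else -n              # orient toward the camera
```

Because Eqn. 6 is homogeneous, the solution is the right singular vector of the smallest singular value, which enforces ||N|| = 1, and the albedo cancels in the ratio so it never needs to be known.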
As demonstrated in many previous works [28, 9, 19], spatial regularization is useful for error correction as well as for improving the overall accuracy of the estimated surface normals. In this section, we adopt the L1-norm vectorial total variation [5] to refine the surface normals estimated in the previous section. The energy function we minimize is:

N* = argmin_N ||N − N̂||² + λ ∫_Ω |∇N|, (7)

where ∫_Ω |∇N| is the vectorial total variation of N defined over a local neighborhood in Ω, N̂ is the solution from

Figure 5. Input environments and images for the synthetic examples SPHERE and BUNNY.

the previous sub-section, and λ = 0.1 is the regularization weight. We refer readers to [5] for more details. Figure 7 shows intermediate results during the AO iterations, and Figure 4 compares results without and with spatial refinement by TV regularization. After the TV regularization, we obtain our final normal estimates. We use the technique from [26] to reconstruct the 3D surface from the estimated normals.

5. Experimental Results

Figure 6. Comparison between ground truth ((a), (c)) and our normal maps ((b), (d)) obtained using nine environments. The results are obtained after four iterations of the AO process.

We verify the efficacy of our proposed method using both synthetic and real examples. Our first experiment tests our approach on synthetic input images for which ground truth normal maps are available. We analyze the effect of the number of input images on the solution, and the convergence behavior of our AO, using the synthetic examples. We then validate our method qualitatively on real-world examples.

Quantitative evaluation with synthetic images. Two synthetic examples, SPHERE and BUNNY, were used for quantitative evaluation. The BUNNY dataset is available online. We use the environment maps from [8] to render the synthetic input images, as shown in Figure 5. We show the color-coded ground truth normal maps and the estimated normal maps in Figure 6 for qualitative comparison. Our approach faithfully estimates surface normals that closely resemble the ground truth; the corresponding RMS errors for SPHERE and BUNNY are plotted in Figure 7. To evaluate the robustness of our method, we plot the RMS error of the estimated normals for different numbers of input images in Figure 7.
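The surfaces shown in these results are integrated from the estimated normal maps using the method of [26]. As a self-contained illustration of gradient-field integration in general, the classic Frankot-Chellappa Fourier projection can be sketched as follows (an alternative technique, not the one the paper uses; names are ours):

```python
import numpy as np

def frankot_chellappa(p, q):
    """Integrate a gradient field (p = dz/dx, q = dz/dy) into a depth
    map z by projecting onto the integrable subspace in Fourier space."""
    h, w = p.shape
    u = np.fft.fftfreq(w)[None, :] * 2 * np.pi   # radians/sample along x
    v = np.fft.fftfreq(h)[:, None] * 2 * np.pi   # radians/sample along y
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u**2 + v**2
    denom[0, 0] = 1.0                            # avoid dividing by zero
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0                                # depth known up to a constant
    return np.real(np.fft.ifft2(Z))
```

Given a unit normal map N, the gradients are p = −N_x/N_z and q = −N_y/N_z, and the recovered depth is defined up to an additive constant.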
As expected, the RMS error decreases as the number of images increases, and the decrease becomes less significant beyond 5 input images. We also analyze the convergence behavior of our approach

Figure 7. Analysis of our alternating optimization framework. Top: RMS error of the estimated normals, for different numbers of input images, against the number of iterations; our optimization process converges. Bottom: qualitative illustration of our intermediate results on BUNNY (Iteration 0 to Iteration 3); Iteration 0 is the result after the exhaustive search initialization.

empirically, by plotting the RMS error against the number of iterations (Figure 7). Our approach converges to a stable state within 4-5 iterations for both examples.

Qualitative evaluation with real images. After validating our algorithm on synthetic cases, we tested it on various real-world examples under different lighting conditions: different background scenes; an indoor scene with different indirect light sources; and an outdoor scene with the sunlight direction changing over a day. All input

Table 2. Running times (number of images, image size, and time in seconds) for SPHERE, BUNNY, COUPLE, MOTHER&BABY, HORSE HEAD (INDOOR), CHEF (INDOOR), SHOE (INDOOR), HORSE (SUNLIGHT), and CHEF (SUNLIGHT), measured on a 3.33 GHz Intel Xeon PC; our implementation is in Matlab R2009b.

Figure 9. Input environments and images for the real examples SHOE (INDOOR), HORSE HEAD (INDOOR), and CHEF (INDOOR).

Figure 10. Input environments and images for the real example CHEF (SUNLIGHT).

Figure 8. Input environments and images for the real examples COUPLE and MOTHER&BABY.

images and results are provided in the supplementary material. The respective running times are given in Table 2.

Different background scenes. Building on the success of the synthetic cases, we verified our approach in real-world scenarios step by step. Our first experiment closely mimics the synthetic experiment by capturing the object under different background scenes. We used ten input images for the examples COUPLE and MOTHER&BABY. The input images are shown in Figure 8, and Figure 11 depicts the results. Our reconstructed normal maps and surfaces look faithful. We show the N·L images under two different lighting conditions to illustrate the shading. We also show images of the real objects alongside the reconstructed surfaces, captured from similar viewpoints. The corresponding zoom-in views illustrate the details preserved in our reconstructed object surfaces. For example, the arms and legs of the COUPLE are clearly estimated, and we can clearly see the mother holding her baby in MOTHER&BABY.

Indoor scene with different combinations of light. Next, we evaluate our results in an indoor scene under different lighting conditions, obtained by turning different light sources in a room on and off. Note the presence of ambient light and the use of indirect light sources (e.g., a table lamp and a floor lamp) in the scene. We captured the results for SHOE (INDOOR), HORSE HEAD (INDOOR), and CHEF (INDOOR) under this setting.
Some of our input environments and input images are shown in Figure 9, and Figure 11 shows the results. As can be seen from the zoom-ins, our results are very good under this setting: subtle details, such as the textures on the shoe, are faithfully reconstructed.

Outdoor scene with moving sunlight. Our final experiments were conducted in an outdoor environment, directly using sunlight for reconstruction; this is the main goal of this project. We captured images of the objects every hour from 10am to 5pm, obtaining eight input images. We chose HORSE (SUNLIGHT) and CHEF (SUNLIGHT) as the test objects. The input images and results are shown in Figure 1, Figure 10, and Figure 11. The results of CHEF (INDOOR) are shown alongside to facilitate comparison. We find that the results in the outdoor setting are not as good as those in the indoor setting. Part of the reason is that the sunlight varies less than the indoor lighting: the sun moves along a single trajectory, whereas the light sources in the indoor scene are well distributed across directions. Nevertheless, as shown in Figure 11, our results are still reasonable.

6. Conclusion and Future Work

We have presented a framework that uses environment light to perform photometric stereo, an important step towards outdoor photometric stereo. While our environment light photometric stereo model is simple, it is effective in modeling the effect of complex environment light on the intensity of a pixel. Combining our simple system setup for data capture with our optimization framework for normal estimation, we demonstrate high-quality normal estimation even

COUPLE, MOTHER&BABY, SHOE (INDOOR), HORSE HEAD (INDOOR), CHEF (INDOOR), CHEF (SUNLIGHT)

Figure 11. Results of the real examples. (a) Color-coded normal map. (b-c) Normal maps shaded by N·L for two different lighting directions L. (d) Novel view of the reconstructed surface. (e-h) Zoom-in views comparing the reconstructed surface with the real object at a similar viewpoint.

under complicated indoor and outdoor scenes. Using low-rank matrix completion and total variation regularization, our framework is robust to small object misalignments, shadows, and highlights. We believe that our framework effectively relaxes the limitation of conventional photometric stereo algorithms, which require a controlled environment for data capture. In the future, we plan to extend our framework to deal with non-Lambertian surfaces. We also aim to integrate our framework with multi-view structure-from-motion algorithms to reconstruct high-quality, full-view 3D models.

7. Acknowledgements

This research was partially supported by Singapore University of Technology and Design (SUTD) StartUp Grant ISTD, SUTD-MIT International Design Centre (IDC) Research Grant IDSF OH, and the National Research Foundation (NRF) of Korea, funded by the Ministry of Education, Science and Technology. We thank Ka-Keung Lau for his kind help with equipment, and Teresa Wan for narrating the video.

References

[1] J. Ackermann, F. Langguth, S. Fuhrmann, and M. Goesele. Photometric stereo for outdoor webcams. In CVPR, 2012.
[2] D. Ballard and C. Brown. Computer Vision. Prentice Hall, 1982.
[3] S. Barsky and M. Petrou. The 4-source photometric stereo technique for three-dimensional surfaces in the presence of highlights and shadows. IEEE Trans. on PAMI, 25(10), October 2003.
[4] R. Basri, D. W. Jacobs, and I. Kemelmacher. Photometric stereo with general, unknown lighting. IJCV, 72(3), 2007.
[5] X. Bresson and T. F. Chan. Fast dual minimization of the vectorial total variation norm and applications to color image processing. Technical report, UCLA CAM Report.
[6] M. Chandraker, J. Bai, and R. Ramamoorthi. A theory of differential photometric stereo for unknown BRDFs. In CVPR, 2011.
[7] E. Coleman, Jr. and R. Jain.
Obtaining 3-dimensional shape of textured and specular surfaces using four-source photometry. CGIP, 18(4), April 1982.
[8] P. E. Debevec. Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography. In ACM SIGGRAPH, 1998.
[9] D. B. Goldman, B. Curless, A. Hertzmann, and S. M. Seitz. Shape and spatially-varying BRDFs from photometric stereo. IEEE Trans. on PAMI, 32(6), 2010.
[10] A. Hertzmann and S. Seitz. Shape and materials by example: A photometric stereo approach. In CVPR, 2003.
[11] B. Horn. Robot Vision. McGraw-Hill, 1986.
[12] M. K. Johnson and E. H. Adelson. Shape estimation in natural illumination. In CVPR, 2011.
[13] Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen, and Y. Ma. Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix. Technical Report UILU-ENG-09-2214, UIUC, 2009.
[14] Z. Lu, Y.-W. Tai, M. Ben-Ezra, and M. S. Brown. A framework for ultra high resolution 3D imaging. In CVPR, 2010.
[15] S. Nayar, K. Ikeuchi, and T. Kanade. Determining shape and reflectance of hybrid surfaces by photometric sampling. IEEE Trans. on Robotics and Automation, 6(4), 1990.
[16] G. Oxholm and K. Nishino. Shape and reflectance from natural illumination. In ECCV, 2012.
[17] R. Ramamoorthi and P. Hanrahan. An efficient representation for irradiance environment maps. ACM Transactions on Graphics (Proc. SIGGRAPH 2001), 20(3), 2001.
[18] L. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60, 1992.
[19] B. Shi, Y. Matsushita, Y. Wei, C. Xu, and P. Tan. Self-calibrating photometric stereo. In CVPR, 2010.
[20] F. Solomon and K. Ikeuchi. Extracting the shape and roughness of specular lobe objects using four light photometric stereo. IEEE Trans. on PAMI, 18(4), April 1996.
[21] K. Sunkavalli, T. Zickler, and H. Pfister. Visibility subspaces: Uncalibrated photometric stereo with shadows. In ECCV, 2010.
[22] H. Tagare and R. deFigueiredo.
A theory of photometric stereo for a class of diffuse non-Lambertian surfaces. IEEE Trans. on PAMI, 13(2), February 1991.
[23] R. Woodham. Photometric method for determining surface orientation from multiple images. Optical Engineering, 19(1), January 1980.
[24] R. Woodham. Gradient and curvature from the photometric-stereo method, including local confidence estimation. JOSA A, 11(11), November 1994.
[25] L. Wu, A. Ganesh, B. Shi, Y. Matsushita, Y. Wang, and Y. Ma. Robust photometric stereo via low-rank matrix completion and recovery. In ACCV, 2010.
[26] T.-P. Wu, J. Sun, C.-K. Tang, and H.-Y. Shum. Interactive normal reconstruction from a single image. ACM Trans. Graph., 27(5), 2008.
[27] T.-P. Wu and C.-K. Tang. Photometric stereo via expectation maximization. IEEE Trans. on PAMI, 32(3), 2010.
[28] T.-P. Wu, K.-L. Tang, C.-K. Tang, and T.-T. Wong. Dense photometric stereo: A Markov random field approach. IEEE Trans. on PAMI, 28(11), 2006.
[29] S.-K. Yeung, T.-P. Wu, C.-K. Tang, T. F. Chan, and S. Osher. Adequate reconstruction of transparent objects on a shoestring budget. In CVPR, 2011.
[30] L.-F. Yu, S.-K. Yeung, Y.-W. Tai, and S. Lin. Shading-based shape refinement of RGB-D images. In CVPR, 2013.


More information

Recovering illumination and texture using ratio images

Recovering illumination and texture using ratio images Recovering illumination and texture using ratio images Alejandro Troccoli atroccol@cscolumbiaedu Peter K Allen allen@cscolumbiaedu Department of Computer Science Columbia University, New York, NY Abstract

More information

Visibility Subspaces: Uncalibrated Photometric Stereo with Shadows

Visibility Subspaces: Uncalibrated Photometric Stereo with Shadows Visibility Subspaces: Uncalibrated Photometric Stereo with Shadows Kalyan Sunkavalli, Todd Zickler, and Hanspeter Pfister Harvard University 33 Oxford St., Cambridge, MA, USA, 02138 {kalyans,zickler,pfister}@seas.harvard.edu

More information

Photometric stereo. Recovering the surface f(x,y) Three Source Photometric stereo: Step1. Reflectance Map of Lambertian Surface

Photometric stereo. Recovering the surface f(x,y) Three Source Photometric stereo: Step1. Reflectance Map of Lambertian Surface Photometric stereo Illumination Cones and Uncalibrated Photometric Stereo Single viewpoint, multiple images under different lighting. 1. Arbitrary known BRDF, known lighting 2. Lambertian BRDF, known lighting

More information

Passive 3D Photography

Passive 3D Photography SIGGRAPH 99 Course on 3D Photography Passive 3D Photography Steve Seitz Carnegie Mellon University http:// ://www.cs.cmu.edu/~seitz Talk Outline. Visual Cues 2. Classical Vision Algorithms 3. State of

More information

High Quality Shape from a Single RGB-D Image under Uncalibrated Natural Illumination

High Quality Shape from a Single RGB-D Image under Uncalibrated Natural Illumination High Quality Shape from a Single RGB-D Image under Uncalibrated Natural Illumination Yudeog Han Joon-Young Lee In So Kweon Robotics and Computer Vision Lab., KAIST ydhan@rcv.kaist.ac.kr jylee@rcv.kaist.ac.kr

More information

Photometric stereo , , Computational Photography Fall 2018, Lecture 17

Photometric stereo , , Computational Photography Fall 2018, Lecture 17 Photometric stereo http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 17 Course announcements Homework 4 is still ongoing - Any questions? Feedback

More information

Ligh%ng and Reflectance

Ligh%ng and Reflectance Ligh%ng and Reflectance 2 3 4 Ligh%ng Ligh%ng can have a big effect on how an object looks. Modeling the effect of ligh%ng can be used for: Recogni%on par%cularly face recogni%on Shape reconstruc%on Mo%on

More information

Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference

Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference Minh Dao 1, Xiang Xiang 1, Bulent Ayhan 2, Chiman Kwan 2, Trac D. Tran 1 Johns Hopkins Univeristy, 3400

More information

AdequateReconstructionofTransparentObjectsonaShoestringBudget

AdequateReconstructionofTransparentObjectsonaShoestringBudget AdequateReconstructionofTransparentObjectsonaShoestringBudget Sai-Kit Yeung 1,2 Tai-Pang Wu 2 Chi-Keung Tang 2 Tony F. Chan 1,2 Stanley Osher 1 saikit@math.ucla.edu pang@cse.ust.hk cktang@cse.ust.hk tonyfchan@ust.hk

More information

Chapter 7. Conclusions and Future Work

Chapter 7. Conclusions and Future Work Chapter 7 Conclusions and Future Work In this dissertation, we have presented a new way of analyzing a basic building block in computer graphics rendering algorithms the computational interaction between

More information

Re-rendering from a Dense/Sparse Set of Images

Re-rendering from a Dense/Sparse Set of Images Re-rendering from a Dense/Sparse Set of Images Ko Nishino Institute of Industrial Science The Univ. of Tokyo (Japan Science and Technology) kon@cvl.iis.u-tokyo.ac.jp Virtual/Augmented/Mixed Reality Three

More information

Visible Surface Reconstruction from Normals with Discontinuity Consideration

Visible Surface Reconstruction from Normals with Discontinuity Consideration Visible Surface Reconstruction from Normals with Discontinuity Consideration Tai-Pang Wu and Chi-Keung Tang Vision and Graphics Group The Hong Kong University of Science and Technology Clear Water Bay,

More information

Using a Raster Display for Photometric Stereo

Using a Raster Display for Photometric Stereo Using a Raster Display for Photometric Stereo Nathan Funk Singular Systems Edmonton, Canada nathan.funk@singularsys.com Yee-Hong Yang Computing Science University of Alberta Edmonton, Canada yang@cs.ualberta.ca

More information

Radiance. Pixels measure radiance. This pixel Measures radiance along this ray

Radiance. Pixels measure radiance. This pixel Measures radiance along this ray Photometric stereo Radiance Pixels measure radiance This pixel Measures radiance along this ray Where do the rays come from? Rays from the light source reflect off a surface and reach camera Reflection:

More information

Relighting for an Arbitrary Shape Object Under Unknown Illumination Environment

Relighting for an Arbitrary Shape Object Under Unknown Illumination Environment Relighting for an Arbitrary Shape Object Under Unknown Illumination Environment Yohei Ogura (B) and Hideo Saito Keio University, 3-14-1 Hiyoshi, Kohoku, Yokohama, Kanagawa 223-8522, Japan {y.ogura,saito}@hvrl.ics.keio.ac.jp

More information

Assignment #2. (Due date: 11/6/2012)

Assignment #2. (Due date: 11/6/2012) Computer Vision I CSE 252a, Fall 2012 David Kriegman Assignment #2 (Due date: 11/6/2012) Name: Student ID: Email: Problem 1 [1 pts] Calculate the number of steradians contained in a spherical wedge with

More information

Announcement. Lighting and Photometric Stereo. Computer Vision I. Surface Reflectance Models. Lambertian (Diffuse) Surface.

Announcement. Lighting and Photometric Stereo. Computer Vision I. Surface Reflectance Models. Lambertian (Diffuse) Surface. Lighting and Photometric Stereo CSE252A Lecture 7 Announcement Read Chapter 2 of Forsyth & Ponce Might find section 12.1.3 of Forsyth & Ponce useful. HW Problem Emitted radiance in direction f r for incident

More information

Photometric Stereo. Lighting and Photometric Stereo. Computer Vision I. Last lecture in a nutshell BRDF. CSE252A Lecture 7

Photometric Stereo. Lighting and Photometric Stereo. Computer Vision I. Last lecture in a nutshell BRDF. CSE252A Lecture 7 Lighting and Photometric Stereo Photometric Stereo HW will be on web later today CSE5A Lecture 7 Radiometry of thin lenses δa Last lecture in a nutshell δa δa'cosα δacos β δω = = ( z' / cosα ) ( z / cosα

More information

Stereo and Epipolar geometry

Stereo and Epipolar geometry Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka

More information

And if that 120MP Camera was cool

And if that 120MP Camera was cool Reflectance, Lights and on to photometric stereo CSE 252A Lecture 7 And if that 120MP Camera was cool Large Synoptic Survey Telescope 3.2Gigapixel camera 189 CCD s, each with 16 megapixels Pixels are 10µm

More information

Image Based Lighting with Near Light Sources

Image Based Lighting with Near Light Sources Image Based Lighting with Near Light Sources Shiho Furuya, Takayuki Itoh Graduate School of Humanitics and Sciences, Ochanomizu University E-mail: {shiho, itot}@itolab.is.ocha.ac.jp Abstract Recent some

More information

Image Based Lighting with Near Light Sources

Image Based Lighting with Near Light Sources Image Based Lighting with Near Light Sources Shiho Furuya, Takayuki Itoh Graduate School of Humanitics and Sciences, Ochanomizu University E-mail: {shiho, itot}@itolab.is.ocha.ac.jp Abstract Recent some

More information

Analysis of photometric factors based on photometric linearization

Analysis of photometric factors based on photometric linearization 3326 J. Opt. Soc. Am. A/ Vol. 24, No. 10/ October 2007 Mukaigawa et al. Analysis of photometric factors based on photometric linearization Yasuhiro Mukaigawa, 1, * Yasunori Ishii, 2 and Takeshi Shakunaga

More information

Photometric Stereo. Photometric Stereo. Shading reveals 3-D surface geometry BRDF. HW3 is assigned. An example of photometric stereo

Photometric Stereo. Photometric Stereo. Shading reveals 3-D surface geometry BRDF. HW3 is assigned. An example of photometric stereo Photometric Stereo Photometric Stereo HW3 is assigned Introduction to Computer Vision CSE5 Lecture 6 Shading reveals 3-D surface geometry Shape-from-shading: Use just one image to recover shape. Requires

More information

Photometric Stereo With Non-Parametric and Spatially-Varying Reflectance

Photometric Stereo With Non-Parametric and Spatially-Varying Reflectance Photometric Stereo With Non-Parametric and Spatially-Varying Reflectance Neil Alldrin Todd Zickler David Kriegman nalldrin@cs.ucsd.edu zickler@eecs.harvard.edu kriegman@cs.ucsd.edu University of California,

More information

Prof. Trevor Darrell Lecture 18: Multiview and Photometric Stereo

Prof. Trevor Darrell Lecture 18: Multiview and Photometric Stereo C280, Computer Vision Prof. Trevor Darrell trevor@eecs.berkeley.edu Lecture 18: Multiview and Photometric Stereo Today Multiview stereo revisited Shape from large image collections Voxel Coloring Digital

More information

Comment on Numerical shape from shading and occluding boundaries

Comment on Numerical shape from shading and occluding boundaries Artificial Intelligence 59 (1993) 89-94 Elsevier 89 ARTINT 1001 Comment on Numerical shape from shading and occluding boundaries K. Ikeuchi School of Compurer Science. Carnegie Mellon dniversity. Pirrsburgh.

More information

Highlight detection with application to sweet pepper localization

Highlight detection with application to sweet pepper localization Ref: C0168 Highlight detection with application to sweet pepper localization Rotem Mairon and Ohad Ben-Shahar, the interdisciplinary Computational Vision Laboratory (icvl), Computer Science Dept., Ben-Gurion

More information

A Factorization Method for Structure from Planar Motion

A Factorization Method for Structure from Planar Motion A Factorization Method for Structure from Planar Motion Jian Li and Rama Chellappa Center for Automation Research (CfAR) and Department of Electrical and Computer Engineering University of Maryland, College

More information

Separating Reflection Components in Images under Multispectral and Multidirectional Light Sources

Separating Reflection Components in Images under Multispectral and Multidirectional Light Sources 2016 23rd International Conference on Pattern Recognition (ICPR) Cancún Center, Cancún, México, December 4-8, 2016 Separating Reflection Components in Images under Multispectral and Multidirectional Light

More information

Lambertian model of reflectance II: harmonic analysis. Ronen Basri Weizmann Institute of Science

Lambertian model of reflectance II: harmonic analysis. Ronen Basri Weizmann Institute of Science Lambertian model of reflectance II: harmonic analysis Ronen Basri Weizmann Institute of Science Illumination cone What is the set of images of an object under different lighting, with any number of sources?

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

Announcements. Photometric Stereo. Shading reveals 3-D surface geometry. Photometric Stereo Rigs: One viewpoint, changing lighting

Announcements. Photometric Stereo. Shading reveals 3-D surface geometry. Photometric Stereo Rigs: One viewpoint, changing lighting Announcements Today Photometric Stereo, next lecture return to stereo Photometric Stereo Introduction to Computer Vision CSE152 Lecture 16 Shading reveals 3-D surface geometry Two shape-from-x methods

More information

Robust Model-Free Tracking of Non-Rigid Shape. Abstract

Robust Model-Free Tracking of Non-Rigid Shape. Abstract Robust Model-Free Tracking of Non-Rigid Shape Lorenzo Torresani Stanford University ltorresa@cs.stanford.edu Christoph Bregler New York University chris.bregler@nyu.edu New York University CS TR2003-840

More information

Interreflection Removal for Photometric Stereo by Using Spectrum-dependent Albedo

Interreflection Removal for Photometric Stereo by Using Spectrum-dependent Albedo Interreflection Removal for Photometric Stereo by Using Spectrum-dependent Albedo Miao Liao 1, Xinyu Huang, and Ruigang Yang 1 1 Department of Computer Science, University of Kentucky Department of Mathematics

More information

Using a Raster Display Device for Photometric Stereo

Using a Raster Display Device for Photometric Stereo DEPARTMEN T OF COMP UTING SC IENC E Using a Raster Display Device for Photometric Stereo Nathan Funk & Yee-Hong Yang CRV 2007 May 30, 2007 Overview 2 MODEL 3 EXPERIMENTS 4 CONCLUSIONS 5 QUESTIONS 1. Background

More information

Physics-based Vision: an Introduction

Physics-based Vision: an Introduction Physics-based Vision: an Introduction Robby Tan ANU/NICTA (Vision Science, Technology and Applications) PhD from The University of Tokyo, 2004 1 What is Physics-based? An approach that is principally concerned

More information

Hybrid Textons: Modeling Surfaces with Reflectance and Geometry

Hybrid Textons: Modeling Surfaces with Reflectance and Geometry Hybrid Textons: Modeling Surfaces with Reflectance and Geometry Jing Wang and Kristin J. Dana Electrical and Computer Engineering Department Rutgers University Piscataway, NJ, USA {jingwang,kdana}@caip.rutgers.edu

More information

Estimation of Multiple Illuminants from a Single Image of Arbitrary Known Geometry*

Estimation of Multiple Illuminants from a Single Image of Arbitrary Known Geometry* Estimation of Multiple Illuminants from a Single Image of Arbitrary Known Geometry* Yang Wang, Dimitris Samaras Computer Science Department, SUNY-Stony Stony Brook *Support for this research was provided

More information

Uncalibrated Photometric Stereo for Unknown Isotropic Reflectances

Uncalibrated Photometric Stereo for Unknown Isotropic Reflectances 213 IEEE Conference on Computer Vision and Pattern Recognition Uncalibrated Photometric Stereo for Unknown Isotropic Reflectances Feng Lu 1 Yasuyuki Matsushita 2 Imari Sato 3 Takahiro Okabe 1 Yoichi Sato

More information

Global Illumination CS334. Daniel G. Aliaga Department of Computer Science Purdue University

Global Illumination CS334. Daniel G. Aliaga Department of Computer Science Purdue University Global Illumination CS334 Daniel G. Aliaga Department of Computer Science Purdue University Recall: Lighting and Shading Light sources Point light Models an omnidirectional light source (e.g., a bulb)

More information

Exploiting Shading Cues in Kinect IR Images for Geometry Refinement

Exploiting Shading Cues in Kinect IR Images for Geometry Refinement Exploiting Shading Cues in Kinect IR Images for Geometry Refinement Gyeongmin Choe Jaesik Park Yu-Wing Tai In So Kweon Korea Advanced Institute of Science and Technology, Republic of Korea [gmchoe,jspark]@rcv.kaist.ac.kr,yuwing@kaist.ac.kr,iskweon77@kaist.ac.kr

More information

BIL Computer Vision Apr 16, 2014

BIL Computer Vision Apr 16, 2014 BIL 719 - Computer Vision Apr 16, 2014 Binocular Stereo (cont d.), Structure from Motion Aykut Erdem Dept. of Computer Engineering Hacettepe University Slide credit: S. Lazebnik Basic stereo matching algorithm

More information

Direct Matrix Factorization and Alignment Refinement: Application to Defect Detection

Direct Matrix Factorization and Alignment Refinement: Application to Defect Detection Direct Matrix Factorization and Alignment Refinement: Application to Defect Detection Zhen Qin (University of California, Riverside) Peter van Beek & Xu Chen (SHARP Labs of America, Camas, WA) 2015/8/30

More information

Photometric Stereo.

Photometric Stereo. Photometric Stereo Photometric Stereo v.s.. Structure from Shading [1] Photometric stereo is a technique in computer vision for estimating the surface normals of objects by observing that object under

More information

Lambertian model of reflectance I: shape from shading and photometric stereo. Ronen Basri Weizmann Institute of Science

Lambertian model of reflectance I: shape from shading and photometric stereo. Ronen Basri Weizmann Institute of Science Lambertian model of reflectance I: shape from shading and photometric stereo Ronen Basri Weizmann Institute of Science Variations due to lighting (and pose) Relief Dumitru Verdianu Flying Pregnant Woman

More information

Recovering light directions and camera poses from a single sphere.

Recovering light directions and camera poses from a single sphere. Title Recovering light directions and camera poses from a single sphere Author(s) Wong, KYK; Schnieders, D; Li, S Citation The 10th European Conference on Computer Vision (ECCV 2008), Marseille, France,

More information

An ICA based Approach for Complex Color Scene Text Binarization

An ICA based Approach for Complex Color Scene Text Binarization An ICA based Approach for Complex Color Scene Text Binarization Siddharth Kherada IIIT-Hyderabad, India siddharth.kherada@research.iiit.ac.in Anoop M. Namboodiri IIIT-Hyderabad, India anoop@iiit.ac.in

More information

A Shape from Shading Approach for the Reconstruction of Polyhedral Objects using Genetic Algorithm

A Shape from Shading Approach for the Reconstruction of Polyhedral Objects using Genetic Algorithm A Shape from Shading Approach for the Reconstruction of Polyhedral Objects using Genetic Algorithm MANOJ KUMAR RAMA BHARGAVA R. BALASUBRAMANIAN Indian Institute of Technology Roorkee, Roorkee P.O. Box

More information

Photometric Stereo with General, Unknown Lighting

Photometric Stereo with General, Unknown Lighting Photometric Stereo with General, Unknown Lighting Ronen Basri Λ David Jacobs Dept. of Computer Science NEC Research Institute The Weizmann Institute of Science 4 Independence Way Rehovot, 76100 Israel

More information

ShadowCuts: Photometric Stereo with Shadows

ShadowCuts: Photometric Stereo with Shadows ShadowCuts: Photometric Stereo with Shadows Manmohan Chandraker 1 mkchandraker@cs.ucsd.edu Sameer Agarwal 2 sagarwal@cs.washington.edu David Kriegman 1 kriegman@cs.ucsd.edu 1 Computer Science and Engineering

More information

EFFICIENT REPRESENTATION OF LIGHTING PATTERNS FOR IMAGE-BASED RELIGHTING

EFFICIENT REPRESENTATION OF LIGHTING PATTERNS FOR IMAGE-BASED RELIGHTING EFFICIENT REPRESENTATION OF LIGHTING PATTERNS FOR IMAGE-BASED RELIGHTING Hyunjung Shim Tsuhan Chen {hjs,tsuhan}@andrew.cmu.edu Department of Electrical and Computer Engineering Carnegie Mellon University

More information

Robust Photometric Stereo via Low-Rank Matrix Completion and Recovery

Robust Photometric Stereo via Low-Rank Matrix Completion and Recovery Robust Photometric Stereo via Low-Rank Matrix Completion and Recovery Lun Wu, Arvind Ganesh, Boxin Shi, Yasuyuki Matsushita, Yongtian Wang and Yi Ma, School of Optics and Electronics, Beijing Institute

More information

Heliometric Stereo: Shape from Sun Position

Heliometric Stereo: Shape from Sun Position Heliometric Stereo: Shape from Sun Position Austin Abrams, Christopher Hawley, and Robert Pless Washington University in St. Louis St. Louis, USA Abstract. In this work, we present a method to uncover

More information

Recovery of Fingerprints using Photometric Stereo

Recovery of Fingerprints using Photometric Stereo Recovery of Fingerprints using Photometric Stereo G. McGunnigle and M.J. Chantler Department of Computing and Electrical Engineering Heriot Watt University Riccarton Edinburgh EH14 4AS United Kingdom gmg@cee.hw.ac.uk

More information

Self-calibrating Photometric Stereo

Self-calibrating Photometric Stereo Self-calibrating Photometric Stereo Boxin Shi 1 Yasuyuki Matsushita 2 Yichen Wei 2 Chao Xu 1 Ping Tan 3 1 Key Lab of Machine Perception (MOE), Peking University 2 Microsoft Research Asia 3 Department of

More information

3D Computer Vision. Dense 3D Reconstruction II. Prof. Didier Stricker. Christiano Gava

3D Computer Vision. Dense 3D Reconstruction II. Prof. Didier Stricker. Christiano Gava 3D Computer Vision Dense 3D Reconstruction II Prof. Didier Stricker Christiano Gava Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de

More information

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation ÖGAI Journal 24/1 11 Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation Michael Bleyer, Margrit Gelautz, Christoph Rhemann Vienna University of Technology

More information

Dynamic Shape Tracking via Region Matching

Dynamic Shape Tracking via Region Matching Dynamic Shape Tracking via Region Matching Ganesh Sundaramoorthi Asst. Professor of EE and AMCS KAUST (Joint work with Yanchao Yang) The Problem: Shape Tracking Given: exact object segmentation in frame1

More information

Face Re-Lighting from a Single Image under Harsh Lighting Conditions

Face Re-Lighting from a Single Image under Harsh Lighting Conditions Face Re-Lighting from a Single Image under Harsh Lighting Conditions Yang Wang 1, Zicheng Liu 2, Gang Hua 3, Zhen Wen 4, Zhengyou Zhang 2, Dimitris Samaras 5 1 The Robotics Institute, Carnegie Mellon University,

More information

Optimizing Monocular Cues for Depth Estimation from Indoor Images

Optimizing Monocular Cues for Depth Estimation from Indoor Images Optimizing Monocular Cues for Depth Estimation from Indoor Images Aditya Venkatraman 1, Sheetal Mahadik 2 1, 2 Department of Electronics and Telecommunication, ST Francis Institute of Technology, Mumbai,

More information

Image Formation: Light and Shading. Introduction to Computer Vision CSE 152 Lecture 3

Image Formation: Light and Shading. Introduction to Computer Vision CSE 152 Lecture 3 Image Formation: Light and Shading CSE 152 Lecture 3 Announcements Homework 1 is due Apr 11, 11:59 PM Homework 2 will be assigned on Apr 11 Reading: Chapter 2: Light and Shading Geometric image formation

More information

Robust Energy Minimization for BRDF-Invariant Shape from Light Fields

Robust Energy Minimization for BRDF-Invariant Shape from Light Fields Robust Energy Minimization for BRDF-Invariant Shape from Light Fields Zhengqin Li Zexiang Xu Ravi Ramamoorthi Manmohan Chandraker University of California, San Diego {zhl378, zex04, ravir, mkchandraker}@eng.ucsd.edu

More information

A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction

A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction Jaemin Lee and Ergun Akleman Visualization Sciences Program Texas A&M University Abstract In this paper we present a practical

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

COMPUTER AND ROBOT VISION

COMPUTER AND ROBOT VISION VOLUME COMPUTER AND ROBOT VISION Robert M. Haralick University of Washington Linda G. Shapiro University of Washington T V ADDISON-WESLEY PUBLISHING COMPANY Reading, Massachusetts Menlo Park, California

More information

Shading and Recognition OR The first Mrs Rochester. D.A. Forsyth, UIUC

Shading and Recognition OR The first Mrs Rochester. D.A. Forsyth, UIUC Shading and Recognition OR The first Mrs Rochester D.A. Forsyth, UIUC Structure Argument: History why shading why shading analysis died reasons for hope Classical SFS+Critiques Primitives Reconstructions

More information

Photometric Stereo with Near Point Lighting: A Solution by Mesh Deformation

Photometric Stereo with Near Point Lighting: A Solution by Mesh Deformation Photometric Stereo with Near Point Lighting: A Solution by Mesh Deformation Wuyuan Xie, Chengkai Dai, and Charlie C. L. Wang Department of Mechanical and Automation Engineering, The Chinese University

More information

A Statistical Consistency Check for the Space Carving Algorithm.

A Statistical Consistency Check for the Space Carving Algorithm. A Statistical Consistency Check for the Space Carving Algorithm. A. Broadhurst and R. Cipolla Dept. of Engineering, Univ. of Cambridge, Cambridge, CB2 1PZ aeb29 cipolla @eng.cam.ac.uk Abstract This paper

More information

PHOTOMETRIC STEREO FOR NON-LAMBERTIAN SURFACES USING COLOR INFORMATION

PHOTOMETRIC STEREO FOR NON-LAMBERTIAN SURFACES USING COLOR INFORMATION PHOTOMETRIC STEREO FOR NON-LAMBERTIAN SURFACES USING COLOR INFORMATION KARSTEN SCHLÜNS Fachgebiet Computer Vision, Institut für Technische Informatik Technische Universität Berlin, Franklinstr. 28/29,

More information

Multi-View 3D Reconstruction of Highly-Specular Objects

Multi-View 3D Reconstruction of Highly-Specular Objects Multi-View 3D Reconstruction of Highly-Specular Objects Master Thesis Author: Aljoša Ošep Mentor: Michael Weinmann Motivation Goal: faithful reconstruction of full 3D shape of an object Current techniques:

More information

Motion Estimation. There are three main types (or applications) of motion estimation:

Motion Estimation. There are three main types (or applications) of motion estimation: Members: D91922016 朱威達 R93922010 林聖凱 R93922044 謝俊瑋 Motion Estimation There are three main types (or applications) of motion estimation: Parametric motion (image alignment) The main idea of parametric motion

More information

Supplementary Material: Specular Highlight Removal in Facial Images

Supplementary Material: Specular Highlight Removal in Facial Images Supplementary Material: Specular Highlight Removal in Facial Images Chen Li 1 Stephen Lin 2 Kun Zhou 1 Katsushi Ikeuchi 2 1 State Key Lab of CAD&CG, Zhejiang University 2 Microsoft Research 1. Computation

More information

Face Relighting with Radiance Environment Maps

Face Relighting with Radiance Environment Maps Face Relighting with Radiance Environment Maps Zhen Wen Zicheng Liu Thomas S. Huang University of Illinois Microsoft Research University of Illinois Urbana, IL 61801 Redmond, WA 98052 Urbana, IL 61801

More information

Robust Principal Component Analysis (RPCA)

Robust Principal Component Analysis (RPCA) Robust Principal Component Analysis (RPCA) & Matrix decomposition: into low-rank and sparse components Zhenfang Hu 2010.4.1 reference [1] Chandrasekharan, V., Sanghavi, S., Parillo, P., Wilsky, A.: Ranksparsity

More information

Shading Models for Illumination and Reflectance Invariant Shape Detectors

Shading Models for Illumination and Reflectance Invariant Shape Detectors Shading Models for Illumination and Reflectance Invariant Shape Detectors Peter Nillius Department of Physics Royal Institute of Technology (KTH) SE-106 91 Stockholm, Sweden nillius@mi.physics.kth.se Josephine

More information

LightSlice: Matrix Slice Sampling for the Many-Lights Problem

LightSlice: Matrix Slice Sampling for the Many-Lights Problem LightSlice: Matrix Slice Sampling for the Many-Lights Problem SIGGRAPH Asia 2011 Yu-Ting Wu Authors Jiawei Ou ( 歐嘉蔚 ) PhD Student Dartmouth College Fabio Pellacini Associate Prof. 2 Rendering L o ( p,

More information
