Parallel Lighting and Reflectance Estimation based on Inverse Rendering

Tomohiro Mashita (mashita@ime.cmc.osaka-u.ac.jp), Hiroyuki Yasuhara (yasuhara@lab.ime.cmc.osaka-u.ac.jp), Alexander Plopski (alexander.plopski@lab.ime.cmc.osaka-u.ac.jp), Kiyoshi Kiyokawa (kiyo@ime.cmc.osaka-u.ac.jp), Haruo Takemura (takemura@ime.cmc.osaka-u.ac.jp)

Figure 1: Virtual object overlay examples. The upper row shows the original images; the highlight varies depending on the viewpoint. The middle row is an overlay rendered with the estimated reflectance and lighting; the variation of the highlight on the virtual object behaves similarly to that on the real object. The lower row is an overlay with a virtual sphere added; our system renders appropriate AR scenery in terms of the lighting environment because of the adequate highlight and attached shadow.

ABSTRACT
Photometric registration is one of the more challenging problems related to augmented reality (AR) because the simultaneous estimation of both lighting and reflectance is especially difficult due to the large number of parameters and the ill-posed nature of the problem. As a result, most currently used lighting and reflectance estimation methods employ light probes such as mirror spheres or omnidirectional cameras, or require preliminary scanning of the target object. However, these light probe types are not fully suitable for AR systems. In this paper, we introduce an in-situ lighting and reflectance estimation method that does not require specific light probes or preliminary scanning. Our method uses images taken from multiple viewpoints, while data accumulation and the lighting and reflectance estimation run in the background of the primary AR system. As a result, our method requires little manipulation for image collection. We tested our method in a simulated environment and in simple real environments.

Index Terms: H.5.1 [Information Interfaces and Presentation]: Artificial, Augmented, and Virtual Realities; I.4.8 [Image Processing and Computer Vision]: Photometric registration

1 INTRODUCTION
Photometric registration is one of the more important issues that must be handled by augmented reality (AR) systems intended to display photorealistic virtual objects. To achieve photometric registration, various data, including the geometric and photometric environment of the real scene, are required. Furthermore, in the case of an AR system that shows virtual objects generated from real objects, the reflection properties should also be obtained.

One of the more practical methods used to obtain lighting environment information requires the use of an omnidirectional camera or a spherical mirror [1, 2, 3]. However, such devices are not commonly available, which makes them unsuitable for mobile AR systems. It is also possible to estimate lighting and reflectance from an image [4]. However, one of the problematic points of this type of method is its ill-posedness and the nonlinear optimization of the lighting and reflectance parameters. Consequently, there are few AR systems that can estimate lighting and reflectance in real time.

In order to achieve an efficient in-situ and photorealistic AR system, it is therefore necessary to solve the problems related to lighting and reflectance estimation. To accomplish this, we developed a system that estimates lighting and reflectance from input images and geometric information while running in the background of an AR system. Since one of our purposes is to separate lighting and reflectance from multiple images, we adopted an approach that minimizes the differences between a real object and its synthesized images. In this paper, we use a dichromatic reflectance model that incorporates the Phong specular reflection model. The other assumptions used for the estimation are static, white light sources and homogeneous reflectance parameters of the object.

In an AR system, the time from the start of the system to the time the user begins the AR experience should be minimized. Therefore, initial pre-processing and object scanning by the user should be minimized as well. Our system minimizes these initial user operations because it begins estimating from data obtained as soon as the user starts experiencing the AR scene. Furthermore, our system shows the current results of the estimation process, which runs in the background, and renews the input images used for the optimization at every iteration.

Contributions and limitations

Contribution 1: No specific light probe or preliminary scanning of the target object. In our method, users do not have to prepare specific light probes such as spherical mirrors or omnidirectional cameras, and it is not necessary to make a preliminary scan of the target. Instead, the system selects key frames and estimates the lighting and reflectance parameters in a separate thread. As a result, there are few requirements between system startup and the time when the user begins the AR experience.

Contribution 2: Simultaneous estimation of lighting and of reflectance consisting of specular and diffuse components. Our method separates lighting and reflectance from the observed intensity. Therefore, it is applicable to AR systems that control lighting and reflectance in the environment, such as a system for relighting with an estimated object's reflectance, or a system for editing an object's color and highlights under a real lighting environment.

Limitation 1: Inapplicable to textured objects.
Our proposed method assumes homogeneous surface material properties, i.e., constant surface color and specularity.

Limitation 2: Low expressiveness of lighting and specular reflectance. The lighting model used in this study incorporates the Phong reflectance model and multiple point light sources. While these models are simple and require few parameters, the resulting expressiveness of lighting and reflection is somewhat low.

2 RELATED WORKS
A number of approaches to achieving photometric registration have been studied previously. These can be grouped into methods that use reflectance information from a specified object [7, 1, 8], methods that use shading from a specified object [9, 10], and methods that do not use specified objects [2, 3, 11, 12, 14, 15, 16]. Our method is categorized as one that does not require a specified object. Therefore, in this section, we discuss methods that do not use specific objects as light probes.

Lalonde et al. [11] proposed a method for estimating lighting environments from a single outdoor image. This method estimates the sun position and visibility by determining the ground, vertical surfaces, and their relation to a convex object. Additionally, Liu and Granier [16] proposed a method to track outdoor lighting variations in which two types of sequentially variable light sources (the sun and the sky) are assumed, and the relative intensities of sunlight and sky light are estimated using a sparse set of planar feature points extracted from each frame.

Knecht et al. [13] proposed a reflectance property estimation method that runs at an interactive frame rate and does not need preprocessing. This method estimates the Bidirectional Reflectance Distribution Function (BRDF) using known geometry and a known lighting environment. Neverova et al. [12] proposed a method for estimating the position and color of multiple light sources using an RGB-D camera. Their method is based on the assumptions that the surface is dichromatic, the reflectance is homogeneous, and small surface light sources are used. The method decomposes an original image into specular shading, diffuse shading, and albedo, and then identifies the lighting condition that minimizes the differences between the original and rendered images.

Gruber et al. [14] proposed an AR system that renders shadows in both directions, real-to-virtual and virtual-to-real. This system estimates a distant light field from an RGB-D input image by determining the relationships between the various normal vectors present in a real scene and their intensities. The system assumes that the observed surface is Lambertian. Furthermore, to reduce computational costs, the distant light field is approximated by spherical harmonics. Consequently, this system is capable of handling dynamic scenes in real time. Jachnik et al. [15] proposed an AR system that renders realistic reflections by capturing surface light fields [17]. A surface light field is a hemispherical lighting and reflectance representation that covers a particular plane. Their system extracts the specular component from the captured surface light field, which is then used to create an environment map estimate. However, to obtain a surface light field, the user is required to scan a target object hemispherically, which restricts the use of this system to static environments.
3 LIGHTING AND REFLECTANCE ESTIMATION
3.1 Problem Setting
Lighting: Previous AR systems used for estimating lighting environments assume distant light source models because of reduced computational costs and/or an assumption of outdoor use. Under a distant light assumption, such systems only estimate the direction of the light sources. However, in an indoor environment, light sources (such as desktop or ceiling lights) are normally close to the subject of the estimation, which means that the light source model should also have a distance property.
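To make the distance property concrete, the following minimal sketch (ours, not the authors' implementation) evaluates a Phong-style dichromatic model under a near point light: the light direction is recomputed per surface point from the light position, and an inverse-square falloff is assumed (the paper does not state its attenuation term). Parameter names follow Table 1.

```python
# Minimal sketch (not the authors' code): Phong shading under a near point
# light, where the light position (direction plus distance) matters, unlike
# a distant-light model that keeps only a direction.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong_point_light(p, n, cam_pos, light_pos, k_a, k_d, k_s, shininess, o_d):
    o_s = np.ones(3)                 # specular color fixed to white (1, 1, 1)
    to_light = light_pos - p
    d = np.linalg.norm(to_light)     # distance to the near light source
    l = to_light / d                 # light direction varies per surface point
    v = normalize(cam_pos - p)       # direction toward the camera
    r = normalize(2.0 * np.dot(n, l) * n - l)   # mirror reflection of l
    # Assumed inverse-square falloff; the paper does not specify attenuation.
    atten = 1.0 / (d * d)
    diffuse = k_d * max(np.dot(n, l), 0.0) * o_d
    specular = k_s * max(np.dot(r, v), 0.0) ** shininess * o_s
    return k_a * o_d + atten * (diffuse + specular)

# The same surface point lit by the same lamp at two different heights: both
# the shading geometry and the intensity change, which a distant-light model
# (direction only) cannot express.
p, n = np.zeros(3), np.array([0.0, 0.0, 1.0])
cam = np.array([0.5, 0.0, 1.0])
for height in (1.0, 3.0):
    c = phong_point_light(p, n, cam, np.array([0.3, 0.0, height]),
                          k_a=0.1, k_d=0.5, k_s=0.5, shininess=32,
                          o_d=np.array([0.8, 0.2, 0.2]))
    print(f"light height {height} m -> rgb {np.round(c, 3)}")
```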

Geometric registration: Some methods obtain geometric information in real time using a common or an RGB-D camera [5, 6, 20]. In our method, we assume that such geometric information, including the positions and orientations of a moving camera and the normal vectors on the surface of an object, is known.

Reflectance: A dichromatic reflectance model, specifically the Phong model [21], is used as the reflectance model in our AR system because of its simplicity and small number of parameters. Other dichromatic reflectance models used in AR systems include the Torrance-Sparrow model [22] and the Oren-Nayar model [23].

3.2 Parameter estimation
The lighting estimation is basically achieved by applying the steepest descent method to an error function defined as the squared difference between the real and synthesized images. The acquisition speed of the initial values and the minimal number of parameters required for the optimization are critical to the accuracy of our system because it is designed to optimize numerous parameters simultaneously. As a result, it was necessary to introduce a new method for determining the initial values.

3.2.1 Initial value estimation
Lighting direction estimation: The position of a light source consists of a distance d_L from a particular point in the real environment and a direction (φ_L, θ_L) relative to the normal of a plane, where φ_L and θ_L are the azimuth and zenith angles, respectively. The ranges of φ_L and θ_L are [0, 2π) and [0, π/2], respectively. To estimate the lighting directions, we introduce an intensity map that shows the distribution of reflected intensity obtained from multiple views. This intensity map is generated by identifying areas with high levels of reflected light, which are then used to plot light directions. The lighting directions are estimated by clustering the high-intensity areas; we use leader-based clustering [18] to produce the estimated directions. Before clustering is performed, the intensity map is transformed into a grayscale image and smoothed with a Gaussian filter. Finally, the reflected direction of the representative point of each cluster is taken as a lighting direction.

Object color: An initial value of the object color is also estimated, from the object's diffuse reflection. There are two common approaches to estimating the diffuse color: Nishino et al. [19] used the minimum of the observed intensities as the diffuse color, while Wood et al. [17] used the median. We adopt the median as the initial object color because Jachnik et al. [15] showed the robustness of median filtering against tracking errors in an AR system.

Other parameters: For parameters whose initial values are difficult to estimate, values are assigned heuristically. In practice, the light distance is defined based on the height of the ceiling, while the ambient, diffuse, and specular reflection coefficients and the shininess parameter of the target material are set to the center of their respective ranges.
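Below is a minimal sketch (our reading of Sec. 3.2.1, not the authors' code) of the two initial-value estimators: candidate light directions obtained by clustering the reflected directions of bright pixels, and the initial diffuse color taken as a per-channel median over views. The leader-based clustering here is a generic one-pass stand-in for [18], and the Gaussian smoothing and grayscale conversion of the intensity map are omitted.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def reflect(direction, normal):
    # Mirror a unit direction about the surface normal; for a bright specular
    # pixel, reflecting the view direction yields the direction to the light.
    return 2.0 * np.dot(normal, direction) * normal - direction

def to_angles(d):
    # Unit direction -> (azimuth phi in [0, 2*pi), zenith theta).
    phi = np.arctan2(d[1], d[0]) % (2.0 * np.pi)
    theta = np.arccos(np.clip(d[2], -1.0, 1.0))
    return np.array([phi, theta])

def leader_cluster(points, radius):
    # One-pass leader-based clustering: a point joins the first leader found
    # within `radius`, otherwise it founds a new cluster.
    leaders, members = [], []
    for pt in points:
        for i, leader in enumerate(leaders):
            if np.linalg.norm(pt - leader) < radius:
                members[i].append(pt)
                break
        else:
            leaders.append(pt)
            members.append([pt])
    return [np.mean(m, axis=0) for m in members]   # representative points

rng = np.random.default_rng(0)
normal = np.array([0.0, 0.0, 1.0])

# Synthesize bright observations for two true lights: perturb each light
# direction slightly, take the camera (view) direction that sees the specular
# peak (the mirror direction), then reflect it back as the estimator would.
true_lights = [normalize(np.array([0.4, 0.2, 1.0])),
               normalize(np.array([-0.5, 0.4, 0.8]))]
samples = []
for light in true_lights:
    for _ in range(150):
        l = normalize(light + 0.05 * rng.normal(size=3))
        view = reflect(l, normal)                 # camera sees the highlight
        samples.append(to_angles(reflect(view, normal)))   # back toward light
for rep in leader_cluster(np.array(samples), radius=0.3):
    print("candidate light direction (phi, theta):", np.round(rep, 2))

# Initial diffuse color: per-channel median of a point's observed intensities
# across views (median rather than minimum, following [15]).
observations = rng.uniform(0.2, 0.9, size=(26, 3))   # 26 views x RGB
print("initial diffuse color:", np.round(np.median(observations, axis=0), 3))
```

Note that leader-based clustering is order-dependent and radius-sensitive; the smoothing step in the paper exists precisely to stabilize the intensity peaks before clustering.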
3.2.2 Optimization
The number of parameters in the case of m light sources is (10 + 3m), as shown in Table 1. The directions of the light sources are not included in the non-linear optimization because we assume that the estimation described in Sec. 3.2.1 produces a satisfactory level of accuracy. However, if an AR system is to be used in a dynamic lighting environment, the lighting direction estimation should also be included in the optimization process.

In this paper, we assume the following lighting and reflectance conditions:

Uniform diffuse color and fixed specular color: Under this assumption, we can disregard the diffuse and specular reflectance differences resulting from geometric differences. The color of the specular reflection is fixed as (O_sR, O_sG, O_sB) = (1, 1, 1).

White point light sources: We assume white point light sources. A white light source assumption is common in lighting environment estimation because light color estimation is fundamentally ill-posed. Under this assumption, the geometry-dependent diffuse color and the light source color parameters are excluded from the non-linear optimization parameters. Additionally, the specular term of the Phong reflection model used in our system does not consider the color of the object.

With these assumptions reducing the number of parameters, our system optimizes the remaining (7 + m) parameters. The error function for the optimization is

E = \frac{\sum_{v \in \mathrm{views}} \sum_{(x,y) \in \mathrm{pixels}} \sum_{\lambda \in \{R,G,B\}} \left( R_v(x,y,\lambda) - S_v(x,y,\lambda) \right)^2}{\sum_{v \in \mathrm{views}} \sum_{(x,y) \in \mathrm{pixels}} \sum_{\lambda \in \{R,G,B\}} 1},  (1)

where R_v(x,y,λ) is the pixel value at pixel coordinates (x, y) of the v-th frame and S_v(x,y,λ) is the pixel value of an image rendered with the same geometric information and the current parameters.

Table 1: Number of parameters

Parameter | Range | Description
φ_i | [0, 2π) | Direction (azimuth) of the m light sources L_i
θ_i | [0, π] | Direction (zenith) of the m light sources L_i
d_Li | [0, ∞) | Distance of the m light sources
O_dλ | [0, 1] | Diffuse reflection color (λ ∈ {R, G, B})
O_sλ | [0, 1] | Specular reflection color (λ ∈ {R, G, B})
k_a | [0, 1] | Ambient reflection coefficient
k_d | [0, 1] | Diffuse reflection coefficient
k_s | [0, 1] | Specular reflection coefficient
n | [0, 128] | Shininess parameter of the material
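The following is a minimal sketch (ours; the toy renderer stands in for the Phong forward rendering of Sec. 3) of the mechanics of Eq. (1) and the steepest descent step of Sec. 3.2: the error is the mean squared difference between real and rendered pixels over all views and channels, and the gradient is taken numerically over the free parameter vector, which in the paper holds the (7 + m) values (k_a, k_d, k_s, n, the diffuse RGB color, and the m light distances d_Li).

```python
import numpy as np

def error(params, real_images, render):
    # Eq. (1): squared differences summed over views, pixels, and RGB,
    # normalized by the total number of terms.
    num, den = 0.0, 0
    for v, real in enumerate(real_images):
        synth = render(params, v)        # rendered with the same geometry
        num += np.sum((real - synth) ** 2)
        den += real.size
    return num / den

def steepest_descent(params, real_images, render, lr=0.5, eps=1e-4, iters=200):
    params = np.asarray(params, dtype=float).copy()
    for _ in range(iters):
        grad = np.zeros_like(params)
        for i in range(params.size):     # numerical partial derivatives
            step = np.zeros_like(params)
            step[i] = eps
            grad[i] = (error(params + step, real_images, render)
                       - error(params - step, real_images, render)) / (2 * eps)
        params -= lr * grad              # steepest descent update
    return params

# Toy stand-in renderer: two 'pixels' depending linearly on two parameters.
def toy_render(params, view):
    a, b = params
    return np.array([0.3 * a + 0.7 * b, 0.8 * a + 0.2 * b])

truth = np.array([0.6, 0.4])
real_images = [toy_render(truth, 0)]     # one view of 'ground truth'
est = steepest_descent(np.array([0.1, 0.9]), real_images, toy_render)
print("estimate:", np.round(est, 3),
      "residual:", error(est, real_images, toy_render))
```

Each gradient evaluation requires re-rendering every key frame, which is consistent with the observation in Sec. 6 that the optimization time grows with the number of viewpoints.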

4 IMPLEMENTATION
Our system consists of two processes: lighting and reflectance estimation, and scene rendering. Figure 2 shows the flow of our system. Since these processes are handled independently in multiple threads, the AR scene is rendered using the current best parameters obtained from the estimation thread. Our system uses a desktop personal computer (PC) (3.40 GHz Intel Core i7-2600, NVIDIA GeForce GTX 460, 8 GB of memory) and a camera (Point Grey Flea3, 640 × 480 pixels, 30 fps).

Figure 2: Flow of the lighting and reflectance estimation. The estimation thread takes the image and the camera position and orientation as input, performs the initial value estimation if the parameters are not yet initialized, and otherwise runs the optimization; the AR scene is rendered with the current lighting and reflectance parameters.

Our system uses Parallel Tracking and Mapping (PTAM) [20] for the geometric registration. PTAM also runs in a separate thread. To gather images for the estimation, our system captures key frames at 10-degree movement intervals. Since the system can begin estimating from a single key frame, a user can start using the system without scanning an object or waiting for images to accumulate.

5 RESULTS
5.1 Simulation environment
A simulation environment is used for trials and evaluation because the estimated parameters can be compared with the parameters used for rendering. Figure 3 shows the models used in our simulation.

Figure 3: Simulation models. (a) Model 1; (b) Model 2.

5.1.1 Light source estimation
The relationship between the number of viewpoints and the lighting direction estimation error is shown in Fig. 4. These results show that the lighting direction estimation error decreases as the number of viewpoints increases.

Figure 4: Relationship between the lighting direction estimation error [deg] and the number of viewpoints, for Model 1 and Model 2.

The variation of the residual error for 10 viewpoints is shown in Fig. 5, where Model 2 of Fig. 3 is used for this evaluation. The residual error decreases significantly until the 50th iteration. The images synthesized with the parameters obtained during the estimation process are shown in Fig. 6, where Fig. 6(a) is the real image and Figs. 6(b) to (f) are images synthesized using the parameters produced during the optimization. The average processing time per iteration is … sec.

Figure 5: Variation of the residual error in the simulation environment (residual error vs. number of iterations).

5.1.2 Estimation of multiple light sources
An evaluation of the method used to estimate the number of light sources and their directions was performed in a simulated environment with eight point light sources. The results of the estimations produced from 10 and 20 viewpoints are shown in Figs. 7 and 8, where the green and yellow lines and dots indicate the ground truth and the estimated directions, respectively. These results show that while an incorrect estimate was produced when 10 viewpoints were used, the correct number of light sources could be estimated from 20 viewpoints. The accuracy of the light source direction estimation also improved as the number of viewpoints increased. Close-ups of Figs. 7(b) and 8(b) are shown in Fig. 9. Figure 9(a), the case of 10 viewpoints, shows that there are no data around some of the correct light directions, and only two light sources were estimated correctly. In contrast, Fig. 9(b) shows that the light source directions are estimated correctly.

5.2 Real environment
We then demonstrated our method in a real environment. In this test, the system assumes the target object used for the estimation to be square because the proposed system has not yet been combined with an object shape detection system. The target was an expanded polystyrene board to which color and shininess were added with spray paint. The user is required to assign four feature points to define the target plane.

5.2.1 Lighting direction estimation in a real environment
An example of the intensity map is shown in Fig. 10. This intensity map was generated from 26 viewpoints, smoothed with a Gaussian filter, and then transformed into a grayscale image.
The yellow cross in the figure is the centroid of a cluster of high-intensity pixels.

5.2.2 Optimization
Figure 11 shows the variation of the residual error using 16 input images. As can be seen in the figure, the residual error decreases greatly until the tenth iteration, and there are few differences in the visual appearance after that point. Figure 12 shows the variation of the synthesized images during the optimization. The processing time up to the tenth iteration is 7.46 sec, and the average time per iteration is … sec.

5.2.3 Overlaying a virtual object
We then demonstrated the process of overlaying a virtual object. Our system does not prevent the user from experiencing AR because the estimation process runs in the background; generally speaking, the lighting and reflectance converge to a low-difference state within a few seconds. Figure 1 shows the result of a virtual sphere overlay using the estimated parameters. In this figure, we can see that the highlight on each virtual object varies depending on the viewpoint. The position and spread of the highlight in the middle row of Fig. 1 indicate that the lighting direction estimation is valid. The lower row of Fig. 1 shows that our system renders appropriate AR scenery in terms of the lighting environment because of the adequate highlight and attached shadow.
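The non-blocking behavior described here reduces to a simple producer/consumer pattern: the estimation thread publishes its current best parameters after every iteration, and the render loop always reads the newest published values. A minimal sketch of that pattern follows (ours; class and method names are illustrative, not from the paper's implementation):

```python
import threading
import time

class SharedParams:
    # Latest best lighting/reflectance estimate, shared between threads.
    def __init__(self, initial):
        self._lock = threading.Lock()
        self._best = initial

    def publish(self, params):
        with self._lock:
            self._best = params

    def current_best(self):
        with self._lock:
            return self._best

def estimation_thread(shared, iterations=50):
    params = shared.current_best()
    for _ in range(iterations):
        time.sleep(0.01)             # stand-in for one optimization iteration
        params = 0.9 * params        # pretend the estimate keeps improving
        shared.publish(params)       # publish after every iteration

shared = SharedParams(initial=1.0)
worker = threading.Thread(target=estimation_thread, args=(shared,))
worker.start()
for frame in range(5):               # render loop: never blocks on the worker
    print(f"frame {frame}: rendering with params = {shared.current_best():.3f}")
    time.sleep(0.02)
worker.join()
```

Because the renderer only ever reads the most recent estimate, the virtual object's shading simply improves over the first few seconds instead of delaying the start of the AR experience.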

Figure 6: Appearance variation over iterations (real image; initial values; 5, 10, 50, and 228 iterations).

Figure 7: Example of light source direction estimation from 10 viewpoints. (a) Light source directions; (b) intensity map.

Figure 8: Example of light source direction estimation from 20 viewpoints. (a) Light source directions; (b) intensity map.

Figure 9: Close-ups of Figs. 7(b) and 8(b).

6 DISCUSSION
The simulation results discussed above confirmed that the intensity maps can be used to estimate the directions of light sources in some cases, but the estimated result was insufficient to express a real lighting environment in other cases because of the point light source assumption. There were also cases of unstable light source number estimates and inaccurate lighting directions when numerous light sources were present in a real scene. To address these issues, it will be necessary to improve the clustering algorithm.

The processing time for the optimization is about seven seconds, where convergence is defined as the point at which the user would be unaware of the difference between iterations. Although the processing time required for the optimization increases with the number of viewpoints and light sources, we believe that the convergence time is practical because the main AR system shows the virtual object using the current estimate, and users do not have to wait for the optimization to complete. To achieve faster and/or more accurate optimization, other reflectance and lighting models or optimization methods will be necessary. However, the appropriate processing time and estimation accuracy depend on how the method is applied, because processing time and accuracy are fundamentally in a trade-off relationship.

The photorealism achieved using the proposed method is currently limited because the lighting environment is restricted to multiple point light sources. To improve the level of realism, a distant light field should be adopted and the restriction to homogeneous reflectance parameters should be relaxed. Furthermore, the ability to handle complicated texture reflections needs to be introduced.

7 CONCLUSION
In this paper, we presented an online lighting and reflectance estimation method for AR systems. The method estimates the parameters of a lighting environment consisting of multiple point light sources, and reflectance via the Phong reflection model. The estimation consists of an initial value estimation and a non-linear parameter optimization that minimizes the differences between real and synthesized images. The estimation process runs in the background of an AR system, which uses the current best parameters when rendering virtual objects. We evaluated the performance of our method in a synthesized environment. The results show that the accuracy of the lighting direction estimation and of the estimated number of light sources increases with the number of input images. Additionally, we implemented our proposal in an AR system to demonstrate its utility in a real environment. The system shows that lighting and reflectance are estimated online and that virtual objects are rendered appropriately using the current best estimation results.
Our plans for future work include handling more complicated lighting environments, relaxing the limitations related to uniform diffuse and specular reflectance, and implementing a system combined with a geometric registration method that can ascertain detailed shapes in real time.

ACKNOWLEDGEMENTS
This research was funded in part by Grant-in-Aid for Scientific Research (B) #… from the Japan Society for the Promotion of Science (JSPS), Japan.

REFERENCES
[1] M. Kanbara and N. Yokoya: Real-time estimation of light source environment for photorealistic augmented reality. In Proceedings of the 17th International Conference on Pattern Recognition (ICPR), volume 2, pages ….
[2] T. Kakuta, T. Oishi, and K. Ikeuchi: Virtual Kawaradera: Fast shadow texture for augmented reality. In Proceedings of the 10th International Society on Virtual Systems and MultiMedia (VSMM), pages ….
[3] B. T. Nikodým: Global illumination computation for augmented reality. Master's thesis, Czech Technical University in Prague.

Figure 10: Intensity map example.

Figure 11: Variation of the residual error in a real environment (residual error vs. number of iterations).

Figure 12: Synthesized image variations (real image; initial value; 1, 5, 10, and 100 iterations).

[4] K. Hara and K. Nishino: Variational estimation of inhomogeneous specular reflectance and illumination from a single view. Journal of the Optical Society of America A, 28(2), pages ….
[5] R. A. Newcombe and A. J. Davison: Live dense reconstruction with a single moving camera. In Proceedings of the 23rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages ….
[6] R. A. Newcombe, A. J. Davison, S. Izadi, P. Kohli, O. Hilliges, J. Shotton, D. Molyneaux, S. Hodges, D. Kim, and A. Fitzgibbon: KinectFusion: Real-time dense surface mapping and tracking. In Proceedings of the 10th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pages ….
[7] P. Debevec: Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pages ….
[8] T. Aoto, T. Taketomi, T. Sato, Y. Mukaigawa, and N. Yokoya: Position estimation of near point light sources using a clear hollow sphere. In Proceedings of the 21st IAPR International Conference on Pattern Recognition (ICPR), pages ….
[9] I. Sato, Y. Sato, and K. Ikeuchi: Illumination distribution from brightness in shadows: Adaptive estimation of illumination distribution with unknown reflectance properties in shadow regions. In Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV), volume 2, pages ….
[10] T. Takai, S. Iino, A. Maki, and T. Matsuyama: 3-D lighting environment estimation with shading and shadows. In Image and Geometry Processing for 3-D Cinematography (R. Ronfard and G. Taubin, Eds.), Geometry and Computing 5, Springer.
[11] J.-F. Lalonde, A. A. Efros, and S. G. Narasimhan: Estimating the natural illumination conditions from a single outdoor image. International Journal of Computer Vision, pages 1-23.
[12] N. Neverova, D. Muselet, and A. Trémeau: Lighting estimation in indoor environments from low-quality images. In Proceedings of the 12th European Conference on Computer Vision (ECCV), pages ….
[13] M. Knecht, G. Tanzmeister, C. Traxler, and M. Wimmer: Interactive BRDF estimation for mixed-reality applications. Journal of the International Conference on Computer Graphics, Visualization and Computer Vision (WSCG), pages ….
[14] L. Gruber, T. Richter-Trummer, and D. Schmalstieg: Real-time photometric registration from arbitrary geometry. In Proceedings of the 11th IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
[15] J. Jachnik, R. A. Newcombe, and A. J. Davison: Real-time surface light-field capture for augmentation of planar specular surfaces. In Proceedings of the 11th IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
[16] Y. Liu and X. Granier: Online tracking of outdoor lighting variations for augmented reality with moving cameras. IEEE Transactions on Visualization and Computer Graphics (TVCG), 18(4), pages ….
[17] D. N. Wood, D. I. Azuma, K. Aldinger, B. Curless, T. Duchamp, D. H. Salesin, and W. Stuetzle: Surface light fields for 3D photography. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pages ….
[18] A. Kirmse, T. Udeshi, P. Bellver, and J. Shuma: Extracting patterns from location history. In Proceedings of the 19th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (GIS), pages ….
[19] K. Nishino, Z. Zhang, and K. Ikeuchi: Determining reflectance parameters and illumination distribution from a sparse set of images for view-dependent image synthesis. In Proceedings of the 8th IEEE International Conference on Computer Vision (ICCV), volume 1, pages ….
[20] G. Klein and D. Murray: Parallel tracking and mapping for small AR workspaces. In Proceedings of the 6th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pages ….
[21] B. T. Phong: Illumination for computer generated pictures. Communications of the ACM, 18(6), pages ….
[22] K. E. Torrance and E. M. Sparrow: Theory for off-specular reflection from roughened surfaces. Journal of the Optical Society of America, 57(9), pages ….
[23] M. Oren and S. K. Nayar: Generalization of Lambert's reflectance model. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pages ….

Skeleton Cube for Lighting Environment Estimation

Skeleton Cube for Lighting Environment Estimation (MIRU2004) 2004 7 606 8501 E-mail: {takesi-t,maki,tm}@vision.kuee.kyoto-u.ac.jp 1) 2) Skeleton Cube for Lighting Environment Estimation Takeshi TAKAI, Atsuto MAKI, and Takashi MATSUYAMA Graduate School

More information

Light source estimation using feature points from specular highlights and cast shadows

Light source estimation using feature points from specular highlights and cast shadows Vol. 11(13), pp. 168-177, 16 July, 2016 DOI: 10.5897/IJPS2015.4274 Article Number: F492B6D59616 ISSN 1992-1950 Copyright 2016 Author(s) retain the copyright of this article http://www.academicjournals.org/ijps

More information

Re-rendering from a Dense/Sparse Set of Images

Re-rendering from a Dense/Sparse Set of Images Re-rendering from a Dense/Sparse Set of Images Ko Nishino Institute of Industrial Science The Univ. of Tokyo (Japan Science and Technology) kon@cvl.iis.u-tokyo.ac.jp Virtual/Augmented/Mixed Reality Three

More information

Determining Reflectance Parameters and Illumination Distribution from a Sparse Set of Images for View-dependent Image Synthesis

Determining Reflectance Parameters and Illumination Distribution from a Sparse Set of Images for View-dependent Image Synthesis Determining Reflectance Parameters and Illumination Distribution from a Sparse Set of Images for View-dependent Image Synthesis Ko Nishino, Zhengyou Zhang and Katsushi Ikeuchi Dept. of Info. Science, Grad.

More information

Real-Time Surface Light-field Capture for Augmentation of Planar Specular Surfaces

Real-Time Surface Light-field Capture for Augmentation of Planar Specular Surfaces Real-Time Surface Light-field Capture for Augmentation of Planar Specular Surfaces Jan Jachnik Imperial College London Richard A. Newcombe Imperial College London Andrew J. Davison Imperial College London

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

Relighting for an Arbitrary Shape Object Under Unknown Illumination Environment

Relighting for an Arbitrary Shape Object Under Unknown Illumination Environment Relighting for an Arbitrary Shape Object Under Unknown Illumination Environment Yohei Ogura (B) and Hideo Saito Keio University, 3-14-1 Hiyoshi, Kohoku, Yokohama, Kanagawa 223-8522, Japan {y.ogura,saito}@hvrl.ics.keio.ac.jp

More information

Image Based Lighting with Near Light Sources

Image Based Lighting with Near Light Sources Image Based Lighting with Near Light Sources Shiho Furuya, Takayuki Itoh Graduate School of Humanitics and Sciences, Ochanomizu University E-mail: {shiho, itot}@itolab.is.ocha.ac.jp Abstract Recent some

More information

Image Based Lighting with Near Light Sources

Image Based Lighting with Near Light Sources Image Based Lighting with Near Light Sources Shiho Furuya, Takayuki Itoh Graduate School of Humanitics and Sciences, Ochanomizu University E-mail: {shiho, itot}@itolab.is.ocha.ac.jp Abstract Recent some

More information

Recovering light directions and camera poses from a single sphere.

Recovering light directions and camera poses from a single sphere. Title Recovering light directions and camera poses from a single sphere Author(s) Wong, KYK; Schnieders, D; Li, S Citation The 10th European Conference on Computer Vision (ECCV 2008), Marseille, France,

More information

Hybrid Textons: Modeling Surfaces with Reflectance and Geometry

Hybrid Textons: Modeling Surfaces with Reflectance and Geometry Hybrid Textons: Modeling Surfaces with Reflectance and Geometry Jing Wang and Kristin J. Dana Electrical and Computer Engineering Department Rutgers University Piscataway, NJ, USA {jingwang,kdana}@caip.rutgers.edu

More information

Announcement. Lighting and Photometric Stereo. Computer Vision I. Surface Reflectance Models. Lambertian (Diffuse) Surface.

Announcement. Lighting and Photometric Stereo. Computer Vision I. Surface Reflectance Models. Lambertian (Diffuse) Surface. Lighting and Photometric Stereo CSE252A Lecture 7 Announcement Read Chapter 2 of Forsyth & Ponce Might find section 12.1.3 of Forsyth & Ponce useful. HW Problem Emitted radiance in direction f r for incident

More information

Physics-based Vision: an Introduction

Physics-based Vision: an Introduction Physics-based Vision: an Introduction Robby Tan ANU/NICTA (Vision Science, Technology and Applications) PhD from The University of Tokyo, 2004 1 What is Physics-based? An approach that is principally concerned

More information

Lighting affects appearance

Lighting affects appearance Lighting affects appearance 1 Source emits photons Light And then some reach the eye/camera. Photons travel in a straight line When they hit an object they: bounce off in a new direction or are absorbed

More information

Chapter 1 Introduction

Chapter 1 Introduction Chapter 1 Introduction The central problem in computer graphics is creating, or rendering, realistic computergenerated images that are indistinguishable from real photographs, a goal referred to as photorealism.

More information

Analysis of photometric factors based on photometric linearization

Analysis of photometric factors based on photometric linearization 3326 J. Opt. Soc. Am. A/ Vol. 24, No. 10/ October 2007 Mukaigawa et al. Analysis of photometric factors based on photometric linearization Yasuhiro Mukaigawa, 1, * Yasunori Ishii, 2 and Takeshi Shakunaga

More information

Point Light Source Estimation based on Scenes Recorded by a RGB-D camera

Point Light Source Estimation based on Scenes Recorded by a RGB-D camera BOOM et al.: POINT LIGHT SOURCE ESTIMATION USING A RGB-D CAMERA 1 Point Light Source Estimation based on Scenes Recorded by a RGB-D camera Bastiaan J. Boom 1 http://homepages.inf.ed.ac.uk/bboom/ Sergio

More information

Understanding Variability

Understanding Variability Understanding Variability Why so different? Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic aberration, radial distortion

More information

Rendering Light Reflection Models

Rendering Light Reflection Models Rendering Light Reflection Models Visual Imaging in the Electronic Age Donald P. Greenberg October 3, 2017 Lecture #13 Program of Computer Graphics, Cornell University General Electric - 167 Cornell in

More information

Ligh%ng and Reflectance

Ligh%ng and Reflectance Ligh%ng and Reflectance 2 3 4 Ligh%ng Ligh%ng can have a big effect on how an object looks. Modeling the effect of ligh%ng can be used for: Recogni%on par%cularly face recogni%on Shape reconstruc%on Mo%on

More information

Learning Lightprobes for Mixed Reality Illumination

Learning Lightprobes for Mixed Reality Illumination Learning Lightprobes for Mixed Reality Illumination David Mandl 1 Kwang Moo Yi 2 Peter Mohr 1 Peter Roth 1 Pascal Fua 2 Vincent Lepetit 3 Dieter Schmalstieg 1 Denis Kalkofen 1 1 Graz University of Technology

More information

Chapter 7. Conclusions and Future Work

Chapter 7. Conclusions and Future Work Chapter 7 Conclusions and Future Work In this dissertation, we have presented a new way of analyzing a basic building block in computer graphics rendering algorithms the computational interaction between

More information

AR Cultural Heritage Reconstruction Based on Feature Landmark Database Constructed by Using Omnidirectional Range Sensor

AR Cultural Heritage Reconstruction Based on Feature Landmark Database Constructed by Using Omnidirectional Range Sensor AR Cultural Heritage Reconstruction Based on Feature Landmark Database Constructed by Using Omnidirectional Range Sensor Takafumi Taketomi, Tomokazu Sato, and Naokazu Yokoya Graduate School of Information

More information

Re-rendering from a Sparse Set of Images

Re-rendering from a Sparse Set of Images Re-rendering from a Sparse Set of Images Ko Nishino, Drexel University Katsushi Ikeuchi, The University of Tokyo and Zhengyou Zhang, Microsoft Research Technical Report DU-CS-05-12 Department of Computer

More information

3D Editing System for Captured Real Scenes

3D Editing System for Captured Real Scenes 3D Editing System for Captured Real Scenes Inwoo Ha, Yong Beom Lee and James D.K. Kim Samsung Advanced Institute of Technology, Youngin, South Korea E-mail: {iw.ha, leey, jamesdk.kim}@samsung.com Tel:

More information

And if that 120MP Camera was cool

And if that 120MP Camera was cool Reflectance, Lights and on to photometric stereo CSE 252A Lecture 7 And if that 120MP Camera was cool Large Synoptic Survey Telescope 3.2Gigapixel camera 189 CCD s, each with 16 megapixels Pixels are 10µm

More information

Acquiring 4D Light Fields of Self-Luminous Light Sources Using Programmable Filter

Acquiring 4D Light Fields of Self-Luminous Light Sources Using Programmable Filter Acquiring 4D Light Fields of Self-Luminous Light Sources Using Programmable Filter Motohiro Nakamura 1, Takahiro Okabe 1, and Hendrik P. A. Lensch 2 1 Kyushu Institute of Technology 2 Tübingen University

More information

Illumination Estimation from Shadow Borders

Illumination Estimation from Shadow Borders Illumination Estimation from Shadow Borders Alexandros Panagopoulos, Tomás F. Yago Vicente, Dimitris Samaras Stony Brook University Stony Brook, NY, USA {apanagop, tyagovicente, samaras}@cs.stonybrook.edu

More information

Photometric Stereo.

Photometric Stereo. Photometric Stereo Photometric Stereo v.s.. Structure from Shading [1] Photometric stereo is a technique in computer vision for estimating the surface normals of objects by observing that object under

More information

High-speed Three-dimensional Mapping by Direct Estimation of a Small Motion Using Range Images

High-speed Three-dimensional Mapping by Direct Estimation of a Small Motion Using Range Images MECATRONICS - REM 2016 June 15-17, 2016 High-speed Three-dimensional Mapping by Direct Estimation of a Small Motion Using Range Images Shinta Nozaki and Masashi Kimura School of Science and Engineering

More information

Specular Reflection Separation using Dark Channel Prior

Specular Reflection Separation using Dark Channel Prior 2013 IEEE Conference on Computer Vision and Pattern Recognition Specular Reflection Separation using Dark Channel Prior Hyeongwoo Kim KAIST hyeongwoo.kim@kaist.ac.kr Hailin Jin Adobe Research hljin@adobe.com

More information

Color Photometric Stereo and Virtual Image Rendering Using Neural Networks

Color Photometric Stereo and Virtual Image Rendering Using Neural Networks Electronics and Communications in Japan, Part 2, Vol. 90, No. 12, 2007 Translated from Denshi Joho Tsushin Gakkai Ronbunshi, Vol. J89-D, No. 2, February 2006, pp. 381 392 Color Photometric Stereo and Virtual

More information

Photometric Stereo. Lighting and Photometric Stereo. Computer Vision I. Last lecture in a nutshell BRDF. CSE252A Lecture 7

Photometric Stereo. Lighting and Photometric Stereo. Computer Vision I. Last lecture in a nutshell BRDF. CSE252A Lecture 7 Lighting and Photometric Stereo Photometric Stereo HW will be on web later today CSE5A Lecture 7 Radiometry of thin lenses δa Last lecture in a nutshell δa δa'cosα δacos β δω = = ( z' / cosα ) ( z / cosα

More information

Photometric Stereo with Auto-Radiometric Calibration

Photometric Stereo with Auto-Radiometric Calibration Photometric Stereo with Auto-Radiometric Calibration Wiennat Mongkulmann Takahiro Okabe Yoichi Sato Institute of Industrial Science, The University of Tokyo {wiennat,takahiro,ysato} @iis.u-tokyo.ac.jp

More information

Shading / Light. Thanks to Srinivas Narasimhan, Langer-Zucker, Henrik Wann Jensen, Ravi Ramamoorthi, Hanrahan, Preetham

Shading / Light. Thanks to Srinivas Narasimhan, Langer-Zucker, Henrik Wann Jensen, Ravi Ramamoorthi, Hanrahan, Preetham Shading / Light Thanks to Srinivas Narasimhan, Langer-Zucker, Henrik Wann Jensen, Ravi Ramamoorthi, Hanrahan, Preetham Phong Illumination Model See Shirley, Ch 10 and http://en.wikipedia.org/wiki/phong_shading

More information

Image Formation: Light and Shading. Introduction to Computer Vision CSE 152 Lecture 3

Image Formation: Light and Shading. Introduction to Computer Vision CSE 152 Lecture 3 Image Formation: Light and Shading CSE 152 Lecture 3 Announcements Homework 1 is due Apr 11, 11:59 PM Homework 2 will be assigned on Apr 11 Reading: Chapter 2: Light and Shading Geometric image formation

More information

Occlusion Detection of Real Objects using Contour Based Stereo Matching

Occlusion Detection of Real Objects using Contour Based Stereo Matching Occlusion Detection of Real Objects using Contour Based Stereo Matching Kenichi Hayashi, Hirokazu Kato, Shogo Nishida Graduate School of Engineering Science, Osaka University,1-3 Machikaneyama-cho, Toyonaka,

More information

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA Tomoki Hayashi 1, Francois de Sorbier 1 and Hideo Saito 1 1 Graduate School of Science and Technology, Keio University, 3-14-1 Hiyoshi,

More information

Rendering Light Reflection Models

Rendering Light Reflection Models Rendering Light Reflection Models Visual Imaging in the Electronic Age Donald P. Greenberg October 27, 2015 Lecture #18 Goal of Realistic Imaging The resulting images should be physically accurate and

More information

Assignment #2. (Due date: 11/6/2012)

Assignment #2. (Due date: 11/6/2012) Computer Vision I CSE 252a, Fall 2012 David Kriegman Assignment #2 (Due date: 11/6/2012) Name: Student ID: Email: Problem 1 [1 pts] Calculate the number of steradians contained in a spherical wedge with

More information

Face Relighting with Radiance Environment Maps

Face Relighting with Radiance Environment Maps Face Relighting with Radiance Environment Maps Zhen Wen Zicheng Liu Thomas S. Huang University of Illinois Microsoft Research University of Illinois Urbana, IL 61801 Redmond, WA 98052 Urbana, IL 61801

More information

Instant Mixed Reality Lighting from Casual Scanning

Instant Mixed Reality Lighting from Casual Scanning Instant Mixed Reality Lighting from Casual Scanning Thomas Richter-Trummer 1,2 Denis Kalkofen 1 Jinwoo Park 3 Dieter Schmalstieg 1 1 Graz University of Technology 2 Bongfish GmbH 3 Korea Advanced Institute

More information

Shape and Appearance from Images and Range Data

Shape and Appearance from Images and Range Data SIGGRAPH 2000 Course on 3D Photography Shape and Appearance from Images and Range Data Brian Curless University of Washington Overview Range images vs. point clouds Registration Reconstruction from point

More information

Single image based illumination estimation for lighting virtual object in real scene

Single image based illumination estimation for lighting virtual object in real scene 2011 12th International Conference on Computer-Aided Design and Computer Graphics Single image based illumination estimation for lighting virtual object in real scene Xiaowu Chen, Ke Wang and Xin Jin State

More information

Module 5: Video Modeling Lecture 28: Illumination model. The Lecture Contains: Diffuse and Specular Reflection. Objectives_template

Module 5: Video Modeling Lecture 28: Illumination model. The Lecture Contains: Diffuse and Specular Reflection. Objectives_template The Lecture Contains: Diffuse and Specular Reflection file:///d /...0(Ganesh%20Rana)/MY%20COURSE_Ganesh%20Rana/Prof.%20Sumana%20Gupta/FINAL%20DVSP/lecture%2028/28_1.htm[12/30/2015 4:22:29 PM] Diffuse and

More information

Capturing light. Source: A. Efros

Capturing light. Source: A. Efros Capturing light Source: A. Efros Review Pinhole projection models What are vanishing points and vanishing lines? What is orthographic projection? How can we approximate orthographic projection? Lenses

More information

Estimation of Multiple Illuminants from a Single Image of Arbitrary Known Geometry*

Estimation of Multiple Illuminants from a Single Image of Arbitrary Known Geometry* Estimation of Multiple Illuminants from a Single Image of Arbitrary Known Geometry* Yang Wang, Dimitris Samaras Computer Science Department, SUNY-Stony Stony Brook *Support for this research was provided

More information

Shading. Brian Curless CSE 557 Autumn 2017

Shading. Brian Curless CSE 557 Autumn 2017 Shading Brian Curless CSE 557 Autumn 2017 1 Reading Optional: Angel and Shreiner: chapter 5. Marschner and Shirley: chapter 10, chapter 17. Further reading: OpenGL red book, chapter 5. 2 Basic 3D graphics

More information

Light Reflection Models

Light Reflection Models Light Reflection Models Visual Imaging in the Electronic Age Donald P. Greenberg October 21, 2014 Lecture #15 Goal of Realistic Imaging From Strobel, Photographic Materials and Processes Focal Press, 186.

More information

Other approaches to obtaining 3D structure

Other approaches to obtaining 3D structure Other approaches to obtaining 3D structure Active stereo with structured light Project structured light patterns onto the object simplifies the correspondence problem Allows us to use only one camera camera

More information

Self-similarity Based Editing of 3D Surface Textures

Self-similarity Based Editing of 3D Surface Textures J. Dong et al.: Self-similarity based editing of 3D surface textures. In Texture 2005: Proceedings of the 4th International Workshop on Texture Analysis and Synthesis, pp. 71 76, 2005. Self-similarity

More information

Model-based Enhancement of Lighting Conditions in Image Sequences

Model-based Enhancement of Lighting Conditions in Image Sequences Model-based Enhancement of Lighting Conditions in Image Sequences Peter Eisert and Bernd Girod Information Systems Laboratory Stanford University {eisert,bgirod}@stanford.edu http://www.stanford.edu/ eisert

More information

Face Re-Lighting from a Single Image under Harsh Lighting Conditions

Face Re-Lighting from a Single Image under Harsh Lighting Conditions Face Re-Lighting from a Single Image under Harsh Lighting Conditions Yang Wang 1, Zicheng Liu 2, Gang Hua 3, Zhen Wen 4, Zhengyou Zhang 2, Dimitris Samaras 5 1 The Robotics Institute, Carnegie Mellon University,

More information

Dual Back-to-Back Kinects for 3-D Reconstruction

Dual Back-to-Back Kinects for 3-D Reconstruction Ho Chuen Kam, Kin Hong Wong and Baiwu Zhang, Dual Back-to-Back Kinects for 3-D Reconstruction, ISVC'16 12th International Symposium on Visual Computing December 12-14, 2016, Las Vegas, Nevada, USA. Dual

More information

Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera

Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Tomokazu Sato, Masayuki Kanbara and Naokazu Yokoya Graduate School of Information Science, Nara Institute

More information

CS4670/5760: Computer Vision Kavita Bala Scott Wehrwein. Lecture 23: Photometric Stereo

CS4670/5760: Computer Vision Kavita Bala Scott Wehrwein. Lecture 23: Photometric Stereo CS4670/5760: Computer Vision Kavita Bala Scott Wehrwein Lecture 23: Photometric Stereo Announcements PA3 Artifact due tonight PA3 Demos Thursday Signups close at 4:30 today No lecture on Friday Last Time:

More information

Published in: Proceedings: International Conference on Graphics Theory and Applications, Setubal, Portugal

Published in: Proceedings: International Conference on Graphics Theory and Applications, Setubal, Portugal Aalborg Universitet Real-Time Image-Based Lighting for Outdoor Augmented Reality under Dynamically Changing Illumination Conditions Madsen, Claus B.; Jensen, Tommy; Andersen, Mikkel S. Published in: Proceedings:

More information

Virtual Photometric Environment using Projector

Virtual Photometric Environment using Projector Virtual Photometric Environment using Projector Yasuhiro Mukaigawa 1 Masashi Nishiyama 2 Takeshi Shakunaga Okayama University, Tsushima naka 3-1-1, Okayama, 700-8530, Japan mukaigaw@ieee.org Abstract In

More information

Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting

Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting R. Maier 1,2, K. Kim 1, D. Cremers 2, J. Kautz 1, M. Nießner 2,3 Fusion Ours 1

More information

Lights, Surfaces, and Cameras. Light sources emit photons Surfaces reflect & absorb photons Cameras measure photons

Lights, Surfaces, and Cameras. Light sources emit photons Surfaces reflect & absorb photons Cameras measure photons Reflectance 1 Lights, Surfaces, and Cameras Light sources emit photons Surfaces reflect & absorb photons Cameras measure photons 2 Light at Surfaces Many effects when light strikes a surface -- could be:

More information

Face Relighting with Radiance Environment Maps

Face Relighting with Radiance Environment Maps Face Relighting with Radiance Environment Maps Zhen Wen University of Illinois Urbana Champaign zhenwen@ifp.uiuc.edu Zicheng Liu Microsoft Research zliu@microsoft.com Tomas Huang University of Illinois

More information

w Foley, Section16.1 Reading

w Foley, Section16.1 Reading Shading w Foley, Section16.1 Reading Introduction So far, we ve talked exclusively about geometry. w What is the shape of an object? w How do I place it in a virtual 3D space? w How do I know which pixels

More information

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA Tomoki Hayashi, Francois de Sorbier and Hideo Saito Graduate School of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kohoku-ku,

More information

DIFFUSE-SPECULAR SEPARATION OF MULTI-VIEW IMAGES UNDER VARYING ILLUMINATION. Department of Artificial Intelligence Kyushu Institute of Technology

DIFFUSE-SPECULAR SEPARATION OF MULTI-VIEW IMAGES UNDER VARYING ILLUMINATION. Department of Artificial Intelligence Kyushu Institute of Technology DIFFUSE-SPECULAR SEPARATION OF MULTI-VIEW IMAGES UNDER VARYING ILLUMINATION Kouki Takechi Takahiro Okabe Department of Artificial Intelligence Kyushu Institute of Technology ABSTRACT Separating diffuse

More information

Comp 410/510 Computer Graphics. Spring Shading

Comp 410/510 Computer Graphics. Spring Shading Comp 410/510 Computer Graphics Spring 2017 Shading Why we need shading Suppose we build a model of a sphere using many polygons and then color it using a fixed color. We get something like But we rather

More information

Illumination from Shadows

Illumination from Shadows IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. Y, NO. Y, MON 2002 0 Illumination from Shadows Imari Sato, Yoichi Sato, and Katsushi Ikeuchi Abstract In this paper, we introduce a

More information

Simple Lighting/Illumination Models

Simple Lighting/Illumination Models Simple Lighting/Illumination Models Scene rendered using direct lighting only Photograph Scene rendered using a physically-based global illumination model with manual tuning of colors (Frederic Drago and

More information

Color Alignment in Texture Mapping of Images under Point Light Source and General Lighting Condition

Color Alignment in Texture Mapping of Images under Point Light Source and General Lighting Condition Color Alignment in Texture Mapping of Images under Point Light Source and General Lighting Condition Hiroki Unten Graduate School of Information Science and Technology The University of Tokyo unten@cvliisu-tokyoacjp

More information

Using a Raster Display Device for Photometric Stereo

Using a Raster Display Device for Photometric Stereo DEPARTMEN T OF COMP UTING SC IENC E Using a Raster Display Device for Photometric Stereo Nathan Funk & Yee-Hong Yang CRV 2007 May 30, 2007 Overview 2 MODEL 3 EXPERIMENTS 4 CONCLUSIONS 5 QUESTIONS 1. Background

More information

CSE 167: Lecture #7: Color and Shading. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2011

CSE 167: Lecture #7: Color and Shading. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2011 CSE 167: Introduction to Computer Graphics Lecture #7: Color and Shading Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2011 Announcements Homework project #3 due this Friday,

More information

A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction

A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction Jaemin Lee and Ergun Akleman Visualization Sciences Program Texas A&M University Abstract In this paper we present a practical

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

CENG 477 Introduction to Computer Graphics. Ray Tracing: Shading

CENG 477 Introduction to Computer Graphics. Ray Tracing: Shading CENG 477 Introduction to Computer Graphics Ray Tracing: Shading Last Week Until now we learned: How to create the primary rays from the given camera and image plane parameters How to intersect these rays

More information

Recovering illumination and texture using ratio images

Recovering illumination and texture using ratio images Recovering illumination and texture using ratio images Alejandro Troccoli atroccol@cscolumbiaedu Peter K Allen allen@cscolumbiaedu Department of Computer Science Columbia University, New York, NY Abstract

More information

Computer Graphics. Illumination and Shading

Computer Graphics. Illumination and Shading () Illumination and Shading Dr. Ayman Eldeib Lighting So given a 3-D triangle and a 3-D viewpoint, we can set the right pixels But what color should those pixels be? If we re attempting to create a realistic

More information

Eigen-Texture Method : Appearance Compression based on 3D Model

Eigen-Texture Method : Appearance Compression based on 3D Model Eigen-Texture Method : Appearance Compression based on 3D Model Ko Nishino Yoichi Sato Katsushi Ikeuchi Institute of Industrial Science, The University of Tokyo 7-22-1 Roppongi, Minato-ku, Tokyo 106-8558,

More information

Camera Parameters Estimation from Hand-labelled Sun Sositions in Image Sequences

Camera Parameters Estimation from Hand-labelled Sun Sositions in Image Sequences Camera Parameters Estimation from Hand-labelled Sun Sositions in Image Sequences Jean-François Lalonde, Srinivasa G. Narasimhan and Alexei A. Efros {jlalonde,srinivas,efros}@cs.cmu.edu CMU-RI-TR-8-32 July

More information

Detection of 3D Points on Moving Objects from Point Cloud Data for 3D Modeling of Outdoor Environments. Tsunetake Kanatani, Hideyuki Kume, Takafumi Taketomi, Tomokazu Sato, and Naokazu Yokoya.

Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera. Tomokazu Sato, Masayuki Kanbara, Naokazu Yokoya, and Haruo Takemura.

Lecture 15: Shading-I. CITS3003 Graphics & Animation. Based on E. Angel and D. Shreiner, Interactive Computer Graphics, 6E, Addison-Wesley, 2012.

Image Formation: Light and Shading. CSE 252A, Lecture 3. Topics: photometric and geometric image formation.

CSE 167: Introduction to Computer Graphics, Lecture #6: Lights. Jürgen P. Schulze, Ph.D., University of California, San Diego, Fall Quarter 2016.

Visual Information Processing (視覚情報処理論). Offered by the Graduate School of Interdisciplinary Information Studies; Wednesdays, 5th period [16:50-18:35]. Computer vision: designing algorithms that implement the function of human vision, e.g. 3D reconstruction from 2D (retinal) images.

Measurement of Pedestrian Groups Using Subtraction Stereo. Kenji Terabayashi, Yuki Hashimoto, and Kazunori Umeda. Chuo University / CREST, JST, Tokyo, Japan.

Computer Graphics (CS 4731), Lecture 16: Lighting, Shading and Materials (Part 1). Prof. Emmanuel Agu. Computer Science Dept., Worcester Polytechnic Institute (WPI).

Mapping Textures on 3D Geometric Model Using Reflectance Image. Ryo Kurazume, M. D. Wheeler, and Katsushi Ikeuchi. The University of Tokyo; Cyra Technologies, Inc.

Illumination Decomposition for Photograph with Multiple Light Sources. Ling Zhang, Qingan Yan, Zheng Liu, Hua Zou, and Chunxia Xiao. IEEE Transactions on Image Processing, 2017.

Illumination & Shading. Introduces the types of light-material interactions and builds a simple reflection model, the Phong model, that can be used with real-time graphics hardware.
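Since the Phong model recurs in several entries above, a minimal illustrative sketch may help; it is not taken from any of the listed documents, the function name phong_intensity is hypothetical, and the coefficient values are made up for the example:

import numpy as np

def phong_intensity(n, l, v, kd, ks, shininess):
    # n, l, v: unit surface normal, light direction, and view direction.
    # kd, ks: diffuse and specular reflectance; shininess: Phong exponent.
    n, l, v = (np.asarray(x, dtype=float) for x in (n, l, v))
    diffuse = kd * max(np.dot(n, l), 0.0)
    r = 2.0 * np.dot(n, l) * n - l          # light direction mirrored about n
    specular = ks * max(np.dot(r, v), 0.0) ** shininess
    return diffuse + specular

# Light and viewer both along the normal: intensity = kd + ks = 0.9
print(phong_intensity([0, 0, 1], [0, 0, 1], [0, 0, 1], kd=0.6, ks=0.3, shininess=32))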

Shading & Material Appearance. MIT EECS 6.837, Matusik.

Lightcuts. Jeff Hui. Advanced Computer Graphics 2010, Rensselaer Polytechnic Institute.

Estimation of Surface Spectral Reflectance on 3D Painted Objects Using a Gonio-Spectral Imaging System. Akira Kimachi, Shogo Nishi, and Shoji Tominaga. Osaka Electro-Communication University; Chiba University.

Lighting and Shading. 15-462 Computer Graphics I, Lecture 7. Frank Pfenning, Carnegie Mellon University, February 12, 2002. Topics: light sources, the Phong illumination model, and normal vectors.

The Shading Probe: Fast Appearance Acquisition for Mobile AR. Dan Andrei Calian, Kenny Mitchell, Derek Nowrouzezahrai, and Jan Kautz. University College London; Disney Research Zürich; University of Montreal.

Object Shape and Reflectance Modeling from Observation. Yoichi Sato, Mark D. Wheeler, and Katsushi Ikeuchi. Institute of Industrial Science, University of Tokyo.

Lecture 24: More on Reflectance. CAP 5415. Recaps photometric stereo for diffuse surfaces, where surface normals and albedo can be calculated, and asks what happens when the surface is not diffuse.

CMSC427 Shading Intro. Credit: slides from Dr. Zwicker. Topics: radiometry & BRDFs, local shading models, light sources, shading strategies.

Shading I. Computer Graphics I, Fall 2008. Objectives: shade objects so images appear three-dimensional; introduce types of light-material interactions; build the simple Phong reflection model.

Photometric Stereo (lecture slides). Pixels measure radiance along a ray; rays from the light source reflect off a surface and reach the camera.
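To make the photometric stereo idea in this entry concrete, here is a minimal sketch, assuming a Lambertian surface and known distant light directions; it is not from the slides themselves, and the synthetic data and variable names are made up for the example:

import numpy as np

# Three known distant light directions (rows, normalized to unit length).
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L /= np.linalg.norm(L, axis=1, keepdims=True)

# Synthetic Lambertian measurements for one pixel: I = albedo * max(L n, 0).
true_normal = np.array([0.0, 0.0, 1.0])
I = 0.8 * np.clip(L @ true_normal, 0.0, None)

# Least-squares solve L g = I; then albedo = |g| and normal = g / |g|.
g, *_ = np.linalg.lstsq(L, I, rcond=None)
albedo = np.linalg.norm(g)
normal = g / albedo
print(albedo, normal)   # recovers ~0.8 and ~[0, 0, 1]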

Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model. Tae In Seol, Sun-Tae Chung, Sunho Ki, Seongwon Cho, and Yun-Kwang Hong.

Efficient Representation of Lighting Patterns for Image-Based Relighting. Hyunjung Shim and Tsuhan Chen. Department of Electrical and Computer Engineering, Carnegie Mellon University.