Estimating the Cook-Torrance BRDF Parameters In-Vivo from Laparoscopic Images
Abed Malti and Adrien Bartoli
ALCoV-ISIT, UMR 6284 CNRS/Université d'Auvergne, 28 place Henri Dunant, Clermont-Ferrand, France

Abstract. SfS (Shape-from-Shading) and view synthesis systems generally assume a diffuse reflection model of the in-vivo tissues, where the light is reflected equally in all directions. In other words, they approximate the tissue's BRDF (Bidirectional Reflectance Distribution Function) by the Lambertian model. This is however a coarse assumption, since most tissues cast specularities. We propose a method to estimate the reflectance properties of tissues from in-vivo laparoscopic images. We use the Cook-Torrance BRDF model in order to take into account both the diffuse and specular properties of the tissues. Our method estimates online both the BRDF parameters of the observed organ and the light model of the laparoscope. Such an estimation requires knowledge of the 3D shape and some geometric priors on the light source. For these reasons, our estimation method relies on two assumptions: firstly, that the tissues undergo rigid motion while the surgeon only explores them, and secondly, that the laparoscope's light is collinear with the viewing direction in a neighborhood ring around the specular regions. The first assumption allows us to estimate the 3D shape of the organ using, for instance, classic RSfM (Rigid Structure-from-Motion). The second assumption allows us to estimate the BRDF parameters. Determining the 3D shape and the BRDF parameters then allows us to assign a light direction to each pixel of the image. Experimental results compare the performance of our joint BRDF-and-light estimation method with the widely used Lambertian model. This validation uses both ex-vivo and real in-vivo datasets. It reveals a substantial improvement in SfS 3D reconstruction.
1 Introduction

Over the last decade, computer-aided laparosurgery has attracted extensive research and interest. It consists in improving the practitioner's perception of the intra-operative environment [9]. In the context of Augmented Reality for Computer Assisted Intervention (AE-CAI), 3D sensing offers a synthetic, controllable viewpoint and is one of the major possible improvements to the current technology. In order to supplement standard laparoscopes with this type of facility, it is of critical importance to recover depth accurately from images. Laparoscopic images exhibit strong variability conditioned on the optical properties and pose of the laparoscope. The reflection of the incident light by the tissues, and its appearance according to the viewpoint, is described by the so-called BRDF. Estimating this function is likely to improve higher-level tasks such as new view synthesis from SfS. This can help the surgeon to see the organs from different points of view. These views can be substantially augmented with surgery tools. The estimation of a BRDF in-vivo has, however, not yet been addressed in the context of laparosurgery. Most of the rendering and SfS methods in computer-aided surgery
use a simple Lambertian reflectance. This model suits diffuse matte surfaces without specularities. However, on the one hand, living tissues tend to be more specular than diffuse and, on the other hand, the BRDF parameters may not be shared by different organs. It is unfortunately not possible to pre-estimate BRDF parameters ex-vivo and use them in-vivo, since the reflectance properties change due, for instance, to moisture. We propose an online estimation method for the BRDF of a specific organ. The estimated BRDF follows the analytical model proposed by Cook and Torrance [4]. This BRDF is better adapted than the Lambertian model since it models the surface as a distribution of specular microfacets, accounting for color constancy (the amount of variation in chromaticity among a batch of similar tissues). In order to have a better representation of the light-tissue interaction, we propose a light model adapted to the context of laparoscopy: our light model uses one direction vector per pixel. The overall light vector flow is fitted to a cubic B-spline parameterized in the 2D pixel space of the image. Experimental comparisons between our proposed method and a Lambertian model, using ex-vivo and in-vivo organs, reveal a substantial improvement in SfS 3D reconstructions.

Paper organization. Section 2 presents background and related work. Section 3 reviews the Cook-Torrance reflectance model. Section 4 presents our method for BRDF estimation. Section 5 presents our method for light calibration. Section 6 describes the implementation steps of our method. Section 7 reports experimental results. Finally, Section 8 concludes. Our notation is introduced throughout the paper.

Fig. 1.
Stages of our method for online joint BRDF and light estimation: (i) 3D shape reconstruction of the organ based on RSfM; (ii) assuming the light direction is collinear with the viewing direction in regions neighboring the specular parts, we use a set of images to obtain a global estimate of the BRDF parameters; and (iii) using the BRDF parameters, we estimate the light direction for each pixel of the laparoscope's image. The specific organ here is a uterus. The Cook-Torrance BRDF model is described in Section 3.
2 Background and Related Work

Describing and modeling surface reflection has been a field of investigation in computer vision and computer graphics for decades. The BRDF describes how the incident light is modulated by a surface patch. It is a positive function of four angular dimensions that can be written f(ω_i, ω_o), where ω_i and ω_o are unit vectors in the hemisphere centered about the patch normal: respectively, the direction of the incident light and the viewing direction (more precisely, the direction of the reflected light). The BRDF provides a complete description of appearance for optically-thick surfaces for which mutual illumination and sub-surface scattering are negligible [8]. We assume that these conditions hold in our context. One can measure the BRDF of a planar material by sampling the double hemisphere of input and output (ω_i and ω_o) directions with a gonioreflectometer. Since this is extremely slow, and since a slight loss of accuracy is often acceptable for vision and graphics applications, a number of camera-based alternatives have been proposed. Phong [10] proposed a reflectance model for computer graphics that was a linear combination of specular and diffuse reflection. The specular component was spread out around the specular direction by using a cosine function raised to a power. Subsequently, Blinn [2] used similar ideas together with a specular reflection which accounts for the off-specular peaks that occur when the incident light is quasi-orthogonal to the surface normal. Other analytical models have been popular for their compact representation [13, 4]. Recent approaches represent BRDFs as combinations of a set of basis functions [1, 11, 12]. However, the number of basis functions needs to be kept large to account for viewing and lighting variability and to maintain high-frequency details. In the medical context, few works have addressed the problem of BRDF estimation for tissues. A patient-specific BRDF estimation method for bronchoscopy was proposed in [3].
This method is not well-adapted to laparoscopy for two main reasons: (i) it assumes that the light direction is collinear with the viewing direction, which cannot be realistic at every pixel of a laparoscopic image, as we demonstrate experimentally, and (ii) its BRDF model is not adapted to surfaces with a non-negligible specular component. The main contribution of our paper is a new method for joint BRDF-light estimation in endoscopy. Our method features (a) the Cook-Torrance BRDF model, (b) light calibration, and (c) online operation, relying on 3D organ shape reconstruction using RSfM. Our working assumptions are as follows: (a) the tissues undergo rigid motion while the surgeon explores them, (b) the light direction is collinear with the viewing direction only in regions around specularities. Our proposed method has three main stages, illustrated in figure 1: (i) 3D shape reconstruction of the organ based on RSfM, (ii) a global estimation of the BRDF parameters, and (iii) light direction estimation for each pixel of the laparoscope's image.

3 BRDF Modeling: The Cook-Torrance Model

The Cook-Torrance model [4] was developed from geometrical optics and is considered one of the most physically plausible models. Its basis is a reflectance definition that relates the brightness of an object to the intensity and size of each visible light source.

Fig. 2. Left: for a Lambertian model, the image intensity at a surface point depends on the laparoscope's light source direction and the surface normal at that point. Right: local geometry of reflection as described by Cook and Torrance [4]. In this case, the image intensity at a surface point also depends on the viewing direction V. N is the surface normal at the incident point, L is the light direction and H is the bisector of L and V.

Thus, at a given image pixel q, the predicted image intensity Î depends on three vectors: the surface normal N, the viewing direction V and the light direction L (see figure 2 for the local geometry of the reflection). It is given by:

    Î = (ρ/π)(N·L) + F·D / (π(N·L)(N·V))    (1)

where the first term is the diffuse reflectance and the second term is the specular reflectance. The diffuse reflectance is assumed to be Lambertian and ρ is the diffuse albedo. The constant Fresnel coefficient F represents the refractive index of the tissue. The facet slope distribution function D represents the fraction of the facets that are oriented in the direction of H, the bisector of L and V. Cook and Torrance [4] used the Beckmann distribution function:

    D = (1/(σ² cos⁴α)) exp(−tan²α/σ²)    (2)

where α is the angle between N and H. The parameter σ is the root mean square slope of the microfacets and represents the surface roughness. Some surfaces have two or more scales of roughness, and can be modeled by using more than one distribution function. In these cases, D is a weighted sum of distribution functions:

    D = Σ_p w_p D(σ_p)    (3)

where σ_p is the surface roughness of the p-th distribution and the weights sum to one [4].

4 Estimating the Cook-Torrance BRDF's Parameters In-Vivo

For estimating a BRDF, reflections are first measured under various viewing and illumination angles. The data are then usually fitted to an analytical model using least-squares non-linear minimization [7].
Nonlinear BRDFs that include multiple Gaussian-like functions, such as the Cook-Torrance model, generally induce a large number of local minima in the cost function.
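The prediction of equations (1)-(2) can be sketched numerically as follows (a minimal numpy sketch restricted to a single specular lobe; the function name is ours, not the paper's):

```python
import numpy as np

def cook_torrance_intensity(N, L, V, rho, F, sigma):
    """Predicted intensity (Eqs. 1-2): Lambertian diffuse term plus one
    Beckmann-distributed specular lobe. N, L, V are unit 3-vectors."""
    H = L + V
    H = H / np.linalg.norm(H)                 # bisector of L and V
    cos_a = np.clip(N @ H, 1e-6, 1.0)         # cos(alpha), alpha = angle(N, H)
    tan2_a = (1.0 - cos_a**2) / cos_a**2
    D = np.exp(-tan2_a / sigma**2) / (sigma**2 * cos_a**4)   # Beckmann (Eq. 2)
    diffuse = (rho / np.pi) * (N @ L)
    specular = F * D / (np.pi * (N @ L) * (N @ V))
    return diffuse + specular
```

When N, L and V coincide (a frontal specularity), the specular term reduces to F/(πσ²), which makes explicit how the roughness σ controls the highlight strength.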
Given an organ of known shape and a light of known direction, we want to estimate the parameters ρ, F and σ of the Cook-Torrance model from a set of laparoscopic images. With known object shape, we have N. For a rigid laparoscope where the light is rigidly mounted on the tip, we cannot consider the light direction L as known at every pixel of the image. However, we can assume that the light direction is approximately collinear with the surface normal in the vicinity of specularities. For convenience, we denote by S the set of specular pixels in the image, by S̃ the set of non-specular pixels that are close neighbors of pixels in S, and by S̄ the set of all non-specular pixels. The viewing direction is constant and coincides with the camera view axis. In this case, we can solve for the BRDF parameters by writing the parametric prediction equation of the Cook-Torrance model at a given pixel q of the image:

    Î(ρ̄, F̄, σ) = a·ρ̄ + b·F̄·(1/σ²)·exp(−c/σ²)    (4)

where ρ̄ = I_s·ρ, F̄ = I_s·F, a = (N·L)/π, b = 1/(π(N·L)(N·V)cos⁴α) and c = tan²α. I_s is the light intensity; if it is unknown, ρ and F are estimated up to a scale factor. For notational convenience, the dependence on q is not explicitly displayed in the equations. Thus, a first approach to estimate the BRDF parameters ρ̄, F̄ and σ would be to minimize the RGB error between the predicted intensity Î and the measured intensity I:

    (ρ̄, F̄, σ) = argmin_{ρ̄, F̄, σ} Σ_{S̃} ( I − (a·ρ̄ + b·F̄·(1/σ²)·exp(−c/σ²)) )²    (5)

Problem (5) is non-convex and non-linear with respect to σ. Moreover, it is difficult to find a decent initialization of the triplet (ρ̄, F̄, σ) from which a non-linear iterative optimization method would reach the global minimum. However, if we assume σ as known, the problem of finding (ρ̄, F̄) becomes convex, and the optimal values of ρ̄ and F̄ can be found by applying Second Order Cone Programming [14] to the restricted problem:

    min t   s.t.
    | I − (a·ρ̄ + b·F̄·(1/σ²)·exp(−c/σ²)) | ≤ t  for all q in S̃,  with ρ̄, F̄ ≥ 0    (6)

In order to obtain a global estimate of σ, we embed problem (6) in a global BnB (Branch-and-Bound) estimator as follows: (a) a bounded interval of admissible σ values is subdivided into several non-overlapping sub-intervals (except at the boundaries); (b) at the center of each sub-interval, we test the feasibility of problem (6) and discard all infeasible sub-intervals; (c) the restricted convex problem (6) is solved at the center value of each remaining sub-interval; (d) the sub-interval whose solution of problem (6) gives the smallest RGB error (5) is kept. We repeat steps (a)-(d) until the length of the interval becomes small enough; we then keep the last center σ and the corresponding solutions ρ̄ and F̄ of (6). In our implementation, we experimentally set the initial interval to [10⁻⁵, 1] and the threshold interval-length to 10⁻³. The SOCP problem (6) is solved with the YALMIP toolbox [6], which allows us to detect infeasible
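The interval search can be sketched as follows. This is a simplification of the paper's procedure: for illustration we replace the SOCP of Eq. (6) by a non-negative least-squares fit at each candidate σ (the feasibility test is folded into the fit, and ρ̄, F̄ ≥ 0 is enforced by crude projection); the function name and the subdivision count are ours:

```python
import numpy as np

def fit_brdf_branch_and_bound(I, a, b, c, lo=1e-5, hi=1.0, tol=1e-3, n_sub=10):
    """Global search over sigma (Section 4). At each candidate sigma the model
    of Eq. 4 is linear in (rho_bar, F_bar), so those are fitted in closed form;
    the sub-interval with the smallest RGB error is kept and subdivided again."""
    best = lo
    while hi - lo > tol:
        width = (hi - lo) / n_sub
        centers = lo + (np.arange(n_sub) + 0.5) * width
        errs = []
        for s in centers:
            A = np.column_stack([a, b * np.exp(-c / s**2) / s**2])
            x = np.maximum(np.linalg.lstsq(A, I, rcond=None)[0], 0.0)
            errs.append(np.sum((A @ x - I) ** 2))    # RGB error of Eq. 5
        k = int(np.argmin(errs))
        lo, hi = lo + k * width, lo + (k + 1) * width   # keep best sub-interval
        best = centers[k]
    A = np.column_stack([a, b * np.exp(-c / best**2) / best**2])
    rho_bar, F_bar = np.maximum(np.linalg.lstsq(A, I, rcond=None)[0], 0.0)
    return rho_bar, F_bar, best
```

With the defaults above, the interval width shrinks by a factor of ten per pass, so the search terminates in three passes.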
sub-intervals and solve the feasible ones. Once the BRDF parameters are determined, we take advantage of the fact that the laparoscope's light source is rigidly attached to the lens tip to estimate a slant-and-tilt light direction 3-vector at each pixel of the laparoscopic image.

5 Calibrating Light In-Vivo

We assume that the light direction is collinear with the laparoscope's view axis in S̃ to estimate the reflection parameters of the tissues. However, this assumption does not hold for all pixels in S̄. In order to compensate for the error that may accumulate in the RGB error of the image intensity, we re-estimate the light direction in S̄. In the specular region S, the light direction is assumed to be perfectly collinear with the viewing direction. In our model, the light direction is parameterized with respect to the image pixel q:

    L = ( sin(φ)cos(θ), sin(φ)sin(θ), cos(φ) )ᵀ  for q ∈ S̄
    L = ( 0, 0, 1 )ᵀ  for q ∈ S    (7)

where θ ∈ [−π, π] is the tilt angle of the light direction in the image plane with respect to the view axis, and φ ∈ [0, π/2] is its slant. Again, the dependence on q is not explicitly written for notational convenience. The light direction is estimated by minimizing the RGB error with respect to θ and φ:

    L = argmin_L Σ_{S̄} ( I − ( ρ̄·(N·L)/π + F̄·(1/σ²)·exp(−c/σ²) / (π(N·L)(N·V)cos⁴α) ) )²    (8)

Problem (8) is non-linear in the light parameters and is solved using Levenberg-Marquardt. To obtain a decent initialization, the light direction is propagated from the boundary condition L = (0, 0, 1)ᵀ on the specular set S toward its close neighbors. Thus, from close neighbors to close neighbors, we determine the light direction at each pixel of the image where surface normal information is available. Finally, the estimated sample direction angles θ and φ are fitted to a cubic B-spline [5].
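The per-pixel minimization of Eq. (8) can be sketched as follows. This is a single-pixel sketch under assumptions: we use scipy's bounded trust-region least squares in place of a hand-rolled Levenberg-Marquardt, the initialization x0 stands for the estimate propagated from an already-solved neighbor, and the final B-spline smoothing step is omitted:

```python
import numpy as np
from scipy.optimize import least_squares

def light_direction(theta, phi):
    """Eq. 7: tilt theta in [-pi, pi], slant phi in [0, pi/2]."""
    return np.array([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)])

def fit_pixel_light(I, N, V, rho, F, sigma, x0=(0.0, 0.0)):
    """Estimate (theta, phi) at one pixel by minimizing the RGB residual of
    Eq. 8, starting from the neighbor-propagated estimate x0.
    I, rho, F are per-channel RGB 3-vectors; N, V are unit 3-vectors."""
    I, rho, F = np.asarray(I), np.asarray(rho), np.asarray(F)
    def residual(x):
        L = light_direction(*x)
        H = (L + V) / np.linalg.norm(L + V)
        ca = np.clip(N @ H, 1e-6, 1.0)
        D = np.exp(-(1.0 - ca**2) / (ca**2 * sigma**2)) / (sigma**2 * ca**4)
        pred = (rho / np.pi) * (N @ L) + F * D / (np.pi * (N @ L) * (N @ V))
        return pred - I                      # 3 residuals, one per RGB channel
    fit = least_squares(residual, x0,
                        bounds=([-np.pi, 0.0], [np.pi, np.pi / 2]))
    return fit.x
```

Note that when N and V are aligned with the view axis the intensity no longer depends on θ, which is why the propagation from the specular boundary condition is needed to keep the estimate well-posed.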
6 Implementation and Calibration Steps

We use a set of M laparoscopic images from the exploration of an organ's tissues. In summary, the BRDF and light estimation steps are as follows:

Step 1 A geometric shape of the organ's tissues is reconstructed from the M views with RSfM.

Step 2 For each view, specularities on the reconstructed shape are detected via combined saturation and lightness thresholding (we use thresholds of 0.95 on saturation and 0.95 on lightness to detect specular pixels), and correspondences between specular pixels and shape points are established. All the specular pixels are gathered into the set S and all the corresponding shape points are labelled with their surface normals N.
Step 3 For each view, we determine the set of pixels which are close neighbors of S. In our implementation, we choose the pixels which belong to a ring centered on the specular pixels, at most 3 pixels away from the specular frontiers. All these neighbor-to-specular pixels are gathered into the set S̃ and all the corresponding shape points are labelled with their surface normals N.

Step 4 Using the set S̃, we estimate the BRDF parameters σ, F and ρ as described in Section 4. The Fresnel parameter F and the diffuse parameter ρ are estimated up to scale, since we assume that the light intensity is unknown.

Step 5 We calibrate the light direction over the whole image plane at pixel resolution, as described in Section 5.

7 Experimental Results

To validate our BRDF-light estimation method, we compare it to the classical Lambertian model on three criteria: (i) the RGB error on the M calibration images, (ii) the RGB error on a set of test images, and (iii) the 3D reconstruction using SfS. The RGB error is computed as the difference between the image intensity I measured by the laparoscope and the image intensity predicted by the considered model (either Cook-Torrance or Lambertian). The SfS 3D reconstruction using the Lambertian model uses the algorithm of Tsai and Shah [15]. In the absence of an SfS algorithm using the Cook-Torrance model, we use the normals estimated by the Tsai and Shah algorithm as initial estimates, which we iteratively refine as:

    N = argmin_N ( I − ( ρ̄·(N·L)/π + F̄·(1/σ²)·exp(−c/σ²) / (π(N·L)(N·V)cos⁴α) ) )²    (9)

In order to upgrade the SfS reconstruction to metric scale, we use the known scale of the shape. This allows us to compute the 3D reconstruction error as the norm of the difference between the depths of the SfS reconstruction and the ground-truth.
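Steps 2 and 3 of the pipeline above can be sketched as follows. This is a sketch under assumptions: we read the thresholds as HSL lightness above 0.95 with saturation below 0.95 (near-white pixels), since the paper does not state the inequality directions, and we build the neighbor ring S̃ by binary dilation of the specular mask:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def specular_mask(rgb, s_thresh=0.95, l_thresh=0.95):
    """Step 2 sketch: flag highlight pixels. rgb is (H, W, 3) in [0, 1].
    HSL lightness = (max+min)/2; saturation = (max-min)/(1-|2*lightness-1|)."""
    mx, mn = rgb.max(axis=2), rgb.min(axis=2)
    light = (mx + mn) / 2.0
    denom = 1.0 - np.abs(2.0 * light - 1.0)
    sat = np.where(denom > 1e-9, (mx - mn) / np.maximum(denom, 1e-9), 0.0)
    return (light > l_thresh) & (sat < s_thresh)

def ring_neighbors(mask, radius=3):
    """Step 3 sketch: the set S-tilde of non-specular pixels at most `radius`
    pixels away from the specular set S, obtained by binary dilation."""
    return binary_dilation(mask, iterations=radius) & ~mask
```

The same two masks then drive the rest of the pipeline: the BRDF fit runs on the ring pixels, and the light propagation starts from the specular mask.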
7.1 Ex-Vivo Datasets with Ground-Truth

In order to acquire a real ex-vivo dataset, we set up an appropriate framework (see figure 3) where two laparoscopes are fixed through two trocars mounted on a pelvitrainer. The two laparoscopes are attached to two PointGrey Flea2 color cameras with two C-mounts. The two cameras are synchronized at 15 fps. A light source is mounted on one of the laparoscopes; this laparoscope is used to evaluate our method for online BRDF and light calibration. Thanks to this setup, we can build accurate ground-truth 3D models of ex-vivo organs from stereo views. A set of 5 images of a lamb's lungs is acquired with our setup. We use 3 images to estimate the BRDF and the light model as described in Section 6. We use the 2 remaining views to compare our method with the Lambertian model in terms of RGB error and SfS 3D reconstruction (see figure 4). As can be seen, our estimation method fits the measured image intensities more precisely and has lower 3D reconstruction error. Figure 5 highlights these quantitative results by showing qualitative improvements in 3D reconstruction when using our method. As can be seen, it better recovers the surface in the presence of specularities.
Fig. 3. Experimental setup to acquire real ex-vivo datasets: two laparoscopes and a surgery tool inserted into a pelvitrainer, each laparoscope carrying a camera with its adapter, and a single light source. The two PointGrey cameras are synchronized to obtain reference ground-truth data from stereo views. During the experiment, the laparoscope's light source is the unique light source in the setup.

Fig. 4. Ex-vivo lungs dataset. From left to right: RGB error (over the [0-255] range) on the calibration images, RGB error on the test images, and 3D error [mm] of the SfS reconstructions, for the Lambertian and Cook-Torrance models. The bounding box surrounding the lungs is used to recover the scale of the SfS reconstructions. As can be seen, our estimation method fits the measured image intensities more precisely and has lower reconstruction error.

Fig. 5. Ex-vivo lungs dataset. SfS reconstructions (from left to right: image, ground-truth, SfS with the Lambertian model, SfS with the Cook-Torrance model). As can be seen, the estimated Cook-Torrance model fits the surface better in the presence of specularities.
7.2 In-Vivo Datasets

The experiment we propose on real data is the 3D reconstruction of a uterus from an in-vivo sequence of 3 images acquired using a monocular Karl Storz laparoscope running at 25 fps. The 3D shape of the uterus is generated during the exploration step of the laparosurgery. We use a set of 2 images to estimate the BRDF and the light model as described in Section 6. We use the remaining images to evaluate our method against the classic Lambertian model in terms of RGB error and SfS 3D reconstruction (see figure 6). It can be observed that our method stays within a shift of 2 RGB levels (over the 255-level range), while the Lambertian model can reach up to 6 RGB levels of error. In terms of 3D reconstruction error, it can be seen that taking into account the specular component of the BRDF brings a clear performance gain over the Lambertian model. Figure 7 highlights these quantitative results by showing qualitative improvements in 3D reconstruction when using our method. As can be seen, it better recovers the surface in the presence of specularities.

Fig. 6. In-vivo uterus dataset. From left to right: RGB error on the calibration images, RGB error on the test images, and 3D error [mm] of the SfS reconstructions, for the Lambertian and Cook-Torrance models. The bounding box surrounding the uterus, consistent with an average human uterus, is used to recover the scale of the SfS reconstructions. As can be seen, our estimation method fits the measured image intensities better and has lower reconstruction error (here the SfS reconstructions were compared to RSfM 3D reconstructions).

8 Conclusion

In this paper, we have presented a method for the in-vivo estimation of the Cook-Torrance BRDF parameters and the calibration of the direction of the light source rigidly mounted on the tip of a laparoscope.
We experimentally showed, quantitatively and qualitatively, that this model suits the physical reflectance of tissues better than the Lambertian model. SfS 3D reconstruction gives promising results as an application of our method in the context of 3D laparoscopy. In future work, we plan to reduce the computation time so as to meet real-time 3D requirements.

References

1. R. Basri and D. W. Jacobs. Lambertian reflectance and linear subspaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(2):218-233, February 2003.
2. J. F. Blinn. Models of light reflection for computer synthesized pictures. In SIGGRAPH, 1977.
3. A. J. Chung, F. Deligianni, P. Shah, A. Wells, and G. Yang. Patient-specific bronchoscopy visualization through BRDF estimation and disocclusion correction. IEEE TMI, 25(4):503-513, 2006.
4. R. L. Cook and K. E. Torrance. A reflectance model for computer graphics. In SIGGRAPH, 1981.
5. P. Dierckx. Curve and Surface Fitting with Splines. Oxford University Press, 1993.
6. J. Lofberg. YALMIP: A toolbox for modeling and optimization in MATLAB. In Proceedings of the CACSD Conference, 2004.
7. A. Ngan, F. Durand, and W. Matusik. Experimental analysis of BRDF models. In Proceedings of the Eurographics Symposium on Rendering, 2005.
8. F. Nicodemus, J. Richmond, J. Hsia, I. Ginsberg, and T. Limperis. Geometrical considerations and nomenclature for reflectance. In Radiometry. Jones and Bartlett Publishers, Inc., 1992.
9. S. Nicolau, X. Pennec, L. Soler, and N. Ayache. A complete augmented reality guidance system for liver punctures: First clinical evaluation. In MICCAI, 2005.
10. B. T. Phong. Illumination for computer generated pictures. Commun. ACM, 18(6):311-317, 1975.
11. R. Ramamoorthi and P. Hanrahan. Frequency space environment map rendering. In SIGGRAPH, 2002.
12. I. Sato, T. Okabe, Y. Sato, and K. Ikeuchi. Appearance sampling for obtaining a set of basis images for variable illumination. In ICCV, 2003.
13. H. Xiao, K. Torrance, F. Sillion, and D. Greenberg. A comprehensive physical model for light reflection. In SIGGRAPH, 1991.
14. C. Yu, Y. Seo, and S. W. Lee. Global optimization for estimating a BRDF with multiple specular lobes. In CVPR, 2010.
15. R. Zhang, P.-S. Tsai, J. E. Cryer, and M. Shah. Shape from shading: A survey. PAMI, 21(8):690-706, 1999.

Fig. 7. In-vivo uterus dataset. SfS reconstructions (from left to right: image, rigid SfM, SfS with the Lambertian model, SfS with the Cook-Torrance model). As can be seen, our estimation method better reconstructs the surface in regions close to specular areas.
More informationComplex Shading Algorithms
Complex Shading Algorithms CPSC 414 Overview So far Rendering Pipeline including recent developments Today Shading algorithms based on the Rendering Pipeline Arbitrary reflection models (BRDFs) Bump mapping
More informationImage Based Lighting with Near Light Sources
Image Based Lighting with Near Light Sources Shiho Furuya, Takayuki Itoh Graduate School of Humanitics and Sciences, Ochanomizu University E-mail: {shiho, itot}@itolab.is.ocha.ac.jp Abstract Recent some
More informationImage Based Lighting with Near Light Sources
Image Based Lighting with Near Light Sources Shiho Furuya, Takayuki Itoh Graduate School of Humanitics and Sciences, Ochanomizu University E-mail: {shiho, itot}@itolab.is.ocha.ac.jp Abstract Recent some
More informationStructured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov
Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter
More informationAnalysis of Planar Light Fields from Homogeneous Convex Curved Surfaces Under Distant Illumination
Analysis of Planar Light Fields from Homogeneous Convex Curved Surfaces Under Distant Illumination Ravi Ramamoorthi and Pat Hanrahan {ravir,hanrahan}@graphics.stanford.edu http://graphics.stanford.edu/papers/planarlf/
More informationPhotometric Stereo with Auto-Radiometric Calibration
Photometric Stereo with Auto-Radiometric Calibration Wiennat Mongkulmann Takahiro Okabe Yoichi Sato Institute of Industrial Science, The University of Tokyo {wiennat,takahiro,ysato} @iis.u-tokyo.ac.jp
More informationarxiv: v1 [cs.cv] 28 Sep 2018
Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,
More informationCSE 167: Lecture #7: Color and Shading. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2011
CSE 167: Introduction to Computer Graphics Lecture #7: Color and Shading Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2011 Announcements Homework project #3 due this Friday,
More informationShading / Light. Thanks to Srinivas Narasimhan, Langer-Zucker, Henrik Wann Jensen, Ravi Ramamoorthi, Hanrahan, Preetham
Shading / Light Thanks to Srinivas Narasimhan, Langer-Zucker, Henrik Wann Jensen, Ravi Ramamoorthi, Hanrahan, Preetham Phong Illumination Model See Shirley, Ch 10 and http://en.wikipedia.org/wiki/phong_shading
More informationEstimation of Surface Spectral Reflectance on 3D. Painted Objects Using a Gonio-Spectral Imaging System
Estimation of Surface Spectral Reflectance on 3D Painted Objects Using a Gonio-Spectral Imaging System Akira Kimachi, Shogo Nishi and Shoji Tominaga Osaka Electro-Communication University, Chiba University
More informationPhotometric stereo. Recovering the surface f(x,y) Three Source Photometric stereo: Step1. Reflectance Map of Lambertian Surface
Photometric stereo Illumination Cones and Uncalibrated Photometric Stereo Single viewpoint, multiple images under different lighting. 1. Arbitrary known BRDF, known lighting 2. Lambertian BRDF, known lighting
More informationLecture 22: Basic Image Formation CAP 5415
Lecture 22: Basic Image Formation CAP 5415 Today We've talked about the geometry of scenes and how that affects the image We haven't talked about light yet Today, we will talk about image formation and
More informationHybrid Textons: Modeling Surfaces with Reflectance and Geometry
Hybrid Textons: Modeling Surfaces with Reflectance and Geometry Jing Wang and Kristin J. Dana Electrical and Computer Engineering Department Rutgers University Piscataway, NJ, USA {jingwang,kdana}@caip.rutgers.edu
More informationFace Relighting with Radiance Environment Maps
Face Relighting with Radiance Environment Maps Zhen Wen Zicheng Liu Thomas S. Huang University of Illinois Microsoft Research University of Illinois Urbana, IL 61801 Redmond, WA 98052 Urbana, IL 61801
More informationSpecular Reflection Separation using Dark Channel Prior
2013 IEEE Conference on Computer Vision and Pattern Recognition Specular Reflection Separation using Dark Channel Prior Hyeongwoo Kim KAIST hyeongwoo.kim@kaist.ac.kr Hailin Jin Adobe Research hljin@adobe.com
More informationImage-based BRDF Representation
JAMSI, 11 (2015), No. 2 47 Image-based BRDF Representation A. MIHÁLIK AND R. ĎURIKOVIČ Abstract: To acquire a certain level of photorealism in computer graphics, it is necessary to analyze, how the materials
More informationLocal Reflection Models
Local Reflection Models Illumination Thus Far Simple Illumination Models Ambient + Diffuse + Attenuation + Specular Additions Texture, Shadows, Used in global algs! (Ray tracing) Problem: Different materials
More informationReflection models and radiometry Advanced Graphics
Reflection models and radiometry Advanced Graphics Rafał Mantiuk Computer Laboratory, University of Cambridge Applications To render realistic looking materials Applications also in computer vision, optical
More informationCENG 477 Introduction to Computer Graphics. Ray Tracing: Shading
CENG 477 Introduction to Computer Graphics Ray Tracing: Shading Last Week Until now we learned: How to create the primary rays from the given camera and image plane parameters How to intersect these rays
More informationLight Reflection Models
Light Reflection Models Visual Imaging in the Electronic Age Donald P. Greenberg October 21, 2014 Lecture #15 Goal of Realistic Imaging From Strobel, Photographic Materials and Processes Focal Press, 186.
More information3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.
3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction
More informationGeneral Principles of 3D Image Analysis
General Principles of 3D Image Analysis high-level interpretations objects scene elements Extraction of 3D information from an image (sequence) is important for - vision in general (= scene reconstruction)
More informationComputer-Aided Surgery of the Uterus by Augmenting the Live Laparoscopy Stream with Preoperative MRI Data
Computer-Aided Surgery of the Uterus by Augmenting the Live Laparoscopy Stream with Preoperative MRI Data Adrien Bartoli, Nicolas Bourdel, Michel Canis, Pauline Chauvet, Toby Collins, Benoît Magnin, Daniel
More informationIllumination from Shadows
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. Y, NO. Y, MON 2002 0 Illumination from Shadows Imari Sato, Yoichi Sato, and Katsushi Ikeuchi Abstract In this paper, we introduce a
More informationLocal Illumination. CMPT 361 Introduction to Computer Graphics Torsten Möller. Machiraju/Zhang/Möller
Local Illumination CMPT 361 Introduction to Computer Graphics Torsten Möller Graphics Pipeline Hardware Modelling Transform Visibility Illumination + Shading Perception, Interaction Color Texture/ Realism
More informationSegmentation and Tracking of Partial Planar Templates
Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract
More informationMonocular Template-based Reconstruction of Inextensible Surfaces
Monocular Template-based Reconstruction of Inextensible Surfaces Mathieu Perriollat 1 Richard Hartley 2 Adrien Bartoli 1 1 LASMEA, CNRS / UBP, Clermont-Ferrand, France 2 RSISE, ANU, Canberra, Australia
More informationLighting affects appearance
Lighting affects appearance 1 Source emits photons Light And then some reach the eye/camera. Photons travel in a straight line When they hit an object they: bounce off in a new direction or are absorbed
More informationOther Reconstruction Techniques
Other Reconstruction Techniques Ruigang Yang CS 684 CS 684 Spring 2004 1 Taxonomy of Range Sensing From Brain Curless, SIGGRAPH 00 Lecture notes CS 684 Spring 2004 2 Taxonomy of Range Scanning (cont.)
More informationImage Processing 1 (IP1) Bildverarbeitung 1
MIN-Fakultät Fachbereich Informatik Arbeitsbereich SAV/BV (KOGS) Image Processing 1 (IP1) Bildverarbeitung 1 Lecture 20: Shape from Shading Winter Semester 2015/16 Slides: Prof. Bernd Neumann Slightly
More informationRadiometry. Reflectance & Lighting. Solid Angle. Radiance. Radiance Power is energy per unit time
Radiometry Reflectance & Lighting Computer Vision I CSE5A Lecture 6 Read Chapter 4 of Ponce & Forsyth Homework 1 Assigned Outline Solid Angle Irradiance Radiance BRDF Lambertian/Phong BRDF By analogy with
More informationGlobal Illumination. CMPT 361 Introduction to Computer Graphics Torsten Möller. Machiraju/Zhang/Möller
Global Illumination CMPT 361 Introduction to Computer Graphics Torsten Möller Reading Foley, van Dam (better): Chapter 16.7-13 Angel: Chapter 5.11, 11.1-11.5 2 Limitation of local illumination A concrete
More informationToday. Global illumination. Shading. Interactive applications. Rendering pipeline. Computergrafik. Shading Introduction Local shading models
Computergrafik Matthias Zwicker Universität Bern Herbst 2009 Today Introduction Local shading models Light sources strategies Compute interaction of light with surfaces Requires simulation of physics Global
More informationExperimental Validation of Analytical BRDF Models
Experimental Validation of Analytical BRDF Models Addy Ngan, Frédo Durand, Wojciech Matusik Massachusetts Institute of Technology Goal Evaluate and analyze the performance of analytical reflectance models
More informationFully Automatic Endoscope Calibration for Intraoperative Use
Fully Automatic Endoscope Calibration for Intraoperative Use Christian Wengert, Mireille Reeff, Philippe C. Cattin, Gábor Székely Computer Vision Laboratory, ETH Zurich, 8092 Zurich, Switzerland {wengert,
More informationAssignment #2. (Due date: 11/6/2012)
Computer Vision I CSE 252a, Fall 2012 David Kriegman Assignment #2 (Due date: 11/6/2012) Name: Student ID: Email: Problem 1 [1 pts] Calculate the number of steradians contained in a spherical wedge with
More informationLight. Properties of light. What is light? Today What is light? How do we measure it? How does light propagate? How does light interact with matter?
Light Properties of light Today What is light? How do we measure it? How does light propagate? How does light interact with matter? by Ted Adelson Readings Andrew Glassner, Principles of Digital Image
More informationSurface Reflection Models
Surface Reflection Models Frank Losasso (flosasso@nvidia.com) Introduction One of the fundamental topics in lighting is how the light interacts with the environment. The academic community has researched
More informationThin Plate Spline Feature Point Matching for Organ Surfaces in Minimally Invasive Surgery Imaging
Thin Plate Spline Feature Point Matching for Organ Surfaces in Minimally Invasive Surgery Imaging Bingxiong Lin, Yu Sun and Xiaoning Qian University of South Florida, Tampa, FL., U.S.A. ABSTRACT Robust
More informationShading. Brian Curless CSE 557 Autumn 2017
Shading Brian Curless CSE 557 Autumn 2017 1 Reading Optional: Angel and Shreiner: chapter 5. Marschner and Shirley: chapter 10, chapter 17. Further reading: OpenGL red book, chapter 5. 2 Basic 3D graphics
More informationGlobal Illumination CS334. Daniel G. Aliaga Department of Computer Science Purdue University
Global Illumination CS334 Daniel G. Aliaga Department of Computer Science Purdue University Recall: Lighting and Shading Light sources Point light Models an omnidirectional light source (e.g., a bulb)
More informationComparison of BRDF-Predicted and Observed Light Curves of GEO Satellites. Angelica Ceniceros, David E. Gaylor University of Arizona
Comparison of BRDF-Predicted and Observed Light Curves of GEO Satellites Angelica Ceniceros, David E. Gaylor University of Arizona Jessica Anderson, Elfego Pinon III Emergent Space Technologies, Inc. Phan
More informationOverview. Radiometry and Photometry. Foundations of Computer Graphics (Spring 2012)
Foundations of Computer Graphics (Spring 2012) CS 184, Lecture 21: Radiometry http://inst.eecs.berkeley.edu/~cs184 Overview Lighting and shading key in computer graphics HW 2 etc. ad-hoc shading models,
More informationCS5620 Intro to Computer Graphics
So Far wireframe hidden surfaces Next step 1 2 Light! Need to understand: How lighting works Types of lights Types of surfaces How shading works Shading algorithms What s Missing? Lighting vs. Shading
More informationRendering Light Reflection Models
Rendering Light Reflection Models Visual Imaging in the Electronic Age Donald P. Greenberg October 27, 2015 Lecture #18 Goal of Realistic Imaging The resulting images should be physically accurate and
More informationPhysics-based Vision: an Introduction
Physics-based Vision: an Introduction Robby Tan ANU/NICTA (Vision Science, Technology and Applications) PhD from The University of Tokyo, 2004 1 What is Physics-based? An approach that is principally concerned
More informationStereo Wrap + Motion. Computer Vision I. CSE252A Lecture 17
Stereo Wrap + Motion CSE252A Lecture 17 Some Issues Ambiguity Window size Window shape Lighting Half occluded regions Problem of Occlusion Stereo Constraints CONSTRAINT BRIEF DESCRIPTION 1-D Epipolar Search
More informationw Foley, Section16.1 Reading
Shading w Foley, Section16.1 Reading Introduction So far, we ve talked exclusively about geometry. w What is the shape of an object? w How do I place it in a virtual 3D space? w How do I know which pixels
More informationToday. Global illumination. Shading. Interactive applications. Rendering pipeline. Computergrafik. Shading Introduction Local shading models
Computergrafik Thomas Buchberger, Matthias Zwicker Universität Bern Herbst 2008 Today Introduction Local shading models Light sources strategies Compute interaction of light with surfaces Requires simulation
More informationRepresenting the World
Table of Contents Representing the World...1 Sensory Transducers...1 The Lateral Geniculate Nucleus (LGN)... 2 Areas V1 to V5 the Visual Cortex... 2 Computer Vision... 3 Intensity Images... 3 Image Focusing...
More informationInverting the Reflectance Map with Binary Search
Inverting the Reflectance Map with Binary Search François Faure To cite this version: François Faure. Inverting the Reflectance Map with Binary Search. Lecture Notes in Computer Science, Springer, 1995,
More informationExperiments with Edge Detection using One-dimensional Surface Fitting
Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,
More informationRecovering illumination and texture using ratio images
Recovering illumination and texture using ratio images Alejandro Troccoli atroccol@cscolumbiaedu Peter K Allen allen@cscolumbiaedu Department of Computer Science Columbia University, New York, NY Abstract
More informationEpipolar geometry contd.
Epipolar geometry contd. Estimating F 8-point algorithm The fundamental matrix F is defined by x' T Fx = 0 for any pair of matches x and x in two images. Let x=(u,v,1) T and x =(u,v,1) T, each match gives
More informationCHAPTER 3. From Surface to Image
CHAPTER 3 From Surface to Image 3.1. Introduction Given a light source, a surface, and an observer, a reflectance model describes the intensity and spectral composition of the reflected light reaching
More informationAnnouncements. Light. Properties of light. Light. Project status reports on Wednesday. Readings. Today. Readings Szeliski, 2.2, 2.3.
Announcements Project status reports on Wednesday prepare 5 minute ppt presentation should contain: problem statement (1 slide) description of approach (1 slide) some images (1 slide) current status +
More informationTHREE DIMENSIONAL ACQUISITION OF COLORED OBJECTS N. Schön, P. Gall, G. Häusler
ISSN 143-3346, pp. 63 70, ZBS e. V. Ilmenau, October 00. THREE DIMENSIONAL ACQUISITION OF COLORED OBJECTS N. Schön, P. Gall, G. Häusler Chair for Optics Friedrich Alexander University Erlangen-Nuremberg
More informationOccluded Facial Expression Tracking
Occluded Facial Expression Tracking Hugo Mercier 1, Julien Peyras 2, and Patrice Dalle 1 1 Institut de Recherche en Informatique de Toulouse 118, route de Narbonne, F-31062 Toulouse Cedex 9 2 Dipartimento
More information2D-3D Registration using Gradient-based MI for Image Guided Surgery Systems
2D-3D Registration using Gradient-based MI for Image Guided Surgery Systems Yeny Yim 1*, Xuanyi Chen 1, Mike Wakid 1, Steve Bielamowicz 2, James Hahn 1 1 Department of Computer Science, The George Washington
More informationAccurate 3D Face and Body Modeling from a Single Fixed Kinect
Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this
More informationEfficient Rendering of Glossy Reflection Using Graphics Hardware
Efficient Rendering of Glossy Reflection Using Graphics Hardware Yoshinori Dobashi Yuki Yamada Tsuyoshi Yamamoto Hokkaido University Kita-ku Kita 14, Nishi 9, Sapporo 060-0814, Japan Phone: +81.11.706.6530,
More informationCSE 167: Introduction to Computer Graphics Lecture #6: Lights. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2016
CSE 167: Introduction to Computer Graphics Lecture #6: Lights Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2016 Announcements Thursday in class: midterm #1 Closed book Material
More informationDraft from Graphical Models and Image Processing, vol. 58, no. 5, September Reflectance Analysis for 3D Computer Graphics Model Generation
page 1 Draft from Graphical Models and Image Processing, vol. 58, no. 5, September 1996 Reflectance Analysis for 3D Computer Graphics Model Generation Running head: Reflectance Analysis for 3D CG Model
More informationImage Formation: Light and Shading. Introduction to Computer Vision CSE 152 Lecture 3
Image Formation: Light and Shading CSE 152 Lecture 3 Announcements Homework 1 is due Apr 11, 11:59 PM Homework 2 will be assigned on Apr 11 Reading: Chapter 2: Light and Shading Geometric image formation
More informationLocal vs. Global Illumination & Radiosity
Last Time? Local vs. Global Illumination & Radiosity Ray Casting & Ray-Object Intersection Recursive Ray Tracing Distributed Ray Tracing An early application of radiative heat transfer in stables. Reading
More informationRecollection. Models Pixels. Model transformation Viewport transformation Clipping Rasterization Texturing + Lights & shadows
Recollection Models Pixels Model transformation Viewport transformation Clipping Rasterization Texturing + Lights & shadows Can be computed in different stages 1 So far we came to Geometry model 3 Surface
More informationLambertian model of reflectance I: shape from shading and photometric stereo. Ronen Basri Weizmann Institute of Science
Lambertian model of reflectance I: shape from shading and photometric stereo Ronen Basri Weizmann Institute of Science Variations due to lighting (and pose) Relief Dumitru Verdianu Flying Pregnant Woman
More information13 Distribution Ray Tracing
13 In (hereafter abbreviated as DRT ), our goal is to render a scene as accurately as possible. Whereas Basic Ray Tracing computed a very crude approximation to radiance at a point, in DRT we will attempt
More information