Learning Face Appearance under Different Lighting Conditions


Brendan Moore, Marshall Tappen, Hassan Foroosh
Computational Imaging Laboratory, University of Central Florida, Orlando, FL

Abstract- We propose a machine learning approach for estimating intrinsic faces, and hence for de-illuminating and re-illuminating faces directly in the image domain. The most challenging step is de-illumination: unlike existing methods that require either 3D geometry or expensive capture setups, we show that the problem can be solved with relatively simple kernel regression models. For this purpose, the problem of decomposing an observed image into its intrinsic components, i.e. reflectance and illumination, is formulated as a nonlinear regression problem. An intrinsic component is then estimated by estimating local linear constraints on images, in terms of derivatives, using multi-scale patches of the observed images extracted from a three-level Laplacian pyramid. We have evaluated our method on the "Extended Yale Face Database B" and shown that, despite its simplicity, the method is able to produce realistic results using images taken from only four different lighting orientations.

I. INTRODUCTION

The illumination of objects strongly affects their appearance in images. In particular, variability in illumination leads to large variability in appearance. Modeling this variability accurately is a fundamental problem that occurs in many areas of computer vision and computer graphics. Applications include retouching pictures, video editing, post-cinematographic effects, and improved recognition of objects (e.g. faces and people). In graphics applications, of course, this same variability must be handled from both an analysis and a synthesis point of view: systems must be able to both remove existing illumination effects and introduce new illuminations.
Challenges from varying illumination also arise when attempting to accurately model the three-dimensional relationships in an image [12] or estimate image gradient information [5], [6]. In this paper, we present a relatively simple, yet very efficient method of modeling illumination variations. Our approach makes two main contributions to this domain of study. First, it does not require explicit assumptions about the illuminated object model, e.g. Lambertian or specular. The key idea is that, for a given class of objects (e.g. faces), the input-output relationship is implicitly captured by data, and hence a training set can be used to learn how to both de-illuminate and re-illuminate both diffuse and specular regions. Second, it is relatively simple, both conceptually and in terms of implementation. In particular, it does not require 3D data or a large set of input images of objects illuminated from all possible directions, e.g. as in light field rendering or the light stage (see below). Hence, given the realistic results produced by the method, the approach may be viewed as a tool for affordable computational photography on a desktop for ordinary users. (This work was in part supported by a grant from Electronic Arts - Tiburon. Brendan Moore, Marshall Tappen, and Hassan Foroosh are with the School of EECS at the University of Central Florida, {tmoore,mtappen,foroosh}@cs.ucf.edu.)

II. RELATED WORK

In face relighting, existing techniques fall into two general categories: 2D image-based techniques and 3D geometry-based techniques. In [13] an image-based re-rendering technique using the idea of a "Quotient Image" is presented. Given two objects, the quotient image is defined as the ratio of their albedo functions. This representation depends only on relative surface texture information and is therefore independent of illumination.
Linear combinations of this low-dimensional representation can then be used to generate new illuminations. A limitation of this approach is that the reflectance properties of the objects under consideration are all assumed to adhere to the Lambertian reflectance model. In general, the reflectance of a point on an object can be described by a 4D function called the bidirectional reflectance distribution function (BRDF). The Lambertian model condenses the 4D BRDF into a constant that is used to scale the inner product between the surface normal and the light vector. It is shown that images generated by varying the lighting on a collection of Lambertian objects that all have the same shape but differ in their surface albedo can be analytically described using at least three "bootstrap" images of a prototype object and an illumination-invariant "signature" image, i.e. the Quotient Image. The prototype objects consist of images of an object from the same class taken under three linearly independent light sources. This data is used to define a subspace, or basis. A new illumination is then generated by taking the pixelwise product of a weighted sum of the basis and the Quotient Image. Of course, the parameters of the albedo functions that make up the ratio defining the quotient image are not known directly. Therefore, the quotient image is estimated by finding the correct set of coefficients via the minimization of a defined energy function and least squares. In addition to the Lambertian assumption, one limitation of the quotient image approach is that it can only be applied to faces that have the same view, or pose, as the face used in the

creation of the quotient image. This limitation is addressed in [14], where an image morphing step is introduced into the synthesis process to accommodate arbitrary views of the face. In [4], a geometric approach using three-dimensional laser scans of human heads is presented. The three-dimensional scans are used to create a morphable face model that is then fit to the input image. The parameters of the morphable face model can then be used for re-rendering. The major deficit of this approach is the need to build a dense point-to-point correspondence between the 3D model and the training faces. This is computationally expensive and requires manual intervention. In [12] a geometric approach utilizing a frequency-domain view of reflection and illumination is considered via the use of spherical harmonics. Since spherical harmonics form a complete orthonormal basis for functions on the unit sphere, the goal is to parameterize the BRDF as a function on the unit sphere. By assuming that the illumination field is independent of surface position, the authors reparameterize the problem in terms of the surface normal. Lighting is expressed in global coordinates, since it is constant over the object surface when viewed with respect to a global reference frame; it is therefore necessary to relate the local parameters to global coordinates. This is accomplished through a set of rotations that operate on the local angles. A similar process is used to expand the local representation and the cosine transfer function. The illumination integral is then shown to be a simple product in terms of spherical harmonic coefficients. Therefore, the illumination of a convex Lambertian object can be estimated by solving for the lighting coefficients in this product. The authors report that the reflected light field from a convex Lambertian object can be well approximated using spherical harmonics up to order 2 (a 9-term representation), i.e.
as a quadratic polynomial of the Cartesian coordinates of the surface normal vector. There are several deficits in this approach. First, spherical harmonics do not have compact support, meaning that the energy contained in the function is not concentrated in one area. Therefore, the method would fail at capturing specular reflections, and truncating the basis would cause ringing in the higher-frequency specular components. Next, since the basis representation must be converted from local to global coordinates (or vice versa), a rotation matrix that represents the necessary rotations of each of the basis vectors must be constructed. This matrix could become prohibitively large given a large number of basis vectors (such as the basis needed to represent high-energy signals). Finally, for the inverse illumination problem, the lighting coefficients are found by dividing the inverse of the normalization constant by the transfer function. This division could cause computational problems, for instance when the incident light source is roughly aligned with the surface normal. In [7] an image-based technique for capturing the reflectance field of the human face is presented. The cornerstone of this approach is the use of a device called a light stage. This device illuminates a subject by rotating the illumination source along two axes, the azimuth and the inclination. While the illumination source is rotated, two calibrated video cameras capture video at 30 frames per second. This yields 64 subdivisions along the azimuth and 32 divisions along the inclination, for a total of 2048 direction samples. From this dense set of samples a reflectance function for each pixel is defined. Essentially, the main idea is that sampling the face under a dense set of illuminations effectively encodes the effects of diffuse reflection, specular reflection, self-shadowing, translucency, mutual illumination, and subsurface scattering.
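With such a dense sampling, relighting a pixel reduces to weighting its sampled reflectance function by a new lighting environment. A minimal numpy sketch of that idea follows; all array sizes and variable names are illustrative, not taken from [7]:

```python
import numpy as np

# Illustrative sketch of relighting with a sampled reflectance field.
rng = np.random.default_rng(0)

n_pixels = 6          # tiny "image" for illustration
n_directions = 2048   # 64 azimuth x 32 inclination samples

# R[p, d]: observed intensity of pixel p when lit from direction d.
R = rng.random((n_pixels, n_directions))

# A novel lighting environment: one weight per sampled direction.
L = rng.random(n_directions)

# Relit image: each pixel's reflectance function weighted by the new
# lighting, i.e. a dot product over the sampled directions.
relit = R @ L
print(relit.shape)  # (6,)
```

The aliasing concern raised below corresponds to the fixed resolution of `R` along the direction axis.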
The technique is further extended to novel viewpoints by a re-synthesis technique that decomposes the reflectance function into its specular and diffuse components. This process assumes that the specular and diffuse colors are known: the specular color comes directly from the color of the incident light, while the diffuse color comes from the estimation of a diffuse chromaticity ramp. The obvious shortcoming of this approach is its inflexibility for many practical applications and the cost of building such an apparatus, affordable only to the targeted movie industry. Moreover, the quality of the synthesized illumination is directly proportional to the number of images generated in the capture process. Even at coarse increments, this data set gets rather large, which would make extending this approach to real-time applications difficult. Second, since the resolution of the reflectance function is equal to the number of sampled directions, aliasing could occur in places where there are large changes in pixel values from one illumination to another - for example, the shadow of the nose onto the face, i.e. self-shadowing. Finally, since this technique defines the reflectance solely in terms of directional illumination, dappled lighting or partial shadows could be problematic. In [9] an image-based approach using the idea of Eigen Light-Fields is presented. The Eigen Light-Fields technique employs Principal Component Analysis (PCA) to generate a basis, which is then used in a least-squares setting. Given a collection of light fields (plenoptic functions) of objects such as faces, an eigen-decomposition is performed via PCA. This generates an eigen-space of light fields. The approximation of a new light field in this eigen-space is then used for relighting. One of the main limitations of the light field approach is that a huge number of images of the object are needed to capture the complete light field.
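The eigen-decomposition step can be sketched in a few lines of numpy; the sizes, variable names, and the SVD-based PCA below are illustrative assumptions, not the implementation of [9]:

```python
import numpy as np

# Minimal PCA sketch: stack vectorized images, find principal
# components, and approximate a new image in that basis.
rng = np.random.default_rng(1)
n_images, n_pixels = 20, 50
X = rng.random((n_images, n_pixels))

mean = X.mean(axis=0)
# PCA via SVD of the centered data matrix; rows of Vt are components.
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 5
basis = Vt[:k]                      # top-k eigen "light fields"

# Least-squares approximation of a new image in the eigen-space
# (the basis rows are orthonormal, so projection is a matrix product).
new_image = rng.random(n_pixels)
coeffs = basis @ (new_image - mean)
approx = mean + coeffs @ basis
print(approx.shape)  # (50,)
```

The projection can only reduce the distance to the data mean, which is why a richer basis (more captured images) yields better approximations.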
In most computer vision applications it is unreasonable to expect more than a few images of the object. In [11], a hybrid image-based/geometric approach is presented for estimating the principal lighting direction in a set of frontal face images. The technique employs a least-squares formulation that minimizes the difference between image pixel data and a so-called shape-albedo matrix. The shape-albedo matrix consists of the Hadamard (element-wise) product of a vectorized albedo map and a matrix of surface normals. To obtain the normal vector information used in the shape-albedo matrix, a generic shape model was created using the average 3D shape of 138 head models captured via a 3D scanner. The albedo data was generated by averaging the facial texture information from the Yale data set. Given the estimated lighting direction of an input image, a new illumination is applied by "undoing" the existing illumination and combining this specific albedo data with the generic 3D face model data. The main limitation of this approach is of course the assumption that one can always obtain an accurate 3D model. Also, the algorithm is limited to re-lighting faces under a fixed pose, in this case a frontal view. The approach proposed herein is an image-based method. It should be noted that illumination for face recognition is not a topic that we are concerned with in this paper. Instead, our focus is twofold: (i) generating convincing and plausible illuminations; and (ii) using little input data and no special equipment such as the light stage.

III. OUR APPROACH

Our approach builds upon the work by Tappen et al. [16], [15], who developed methods for estimating intrinsic images. The basic idea is that it is possible to estimate the derivatives of the image of an object from a given class as it would appear under uniform illumination, which we refer to as the de-illuminated image. The uniformly illuminated image can then be reconstructed from these derivatives. In this context, our method can be viewed as learning to estimate the derivatives of intrinsic images for specific types of objects. For this purpose, the problem of decomposing an observed image into its intrinsic components (i.e. reflectance and illumination) is formulated as a nonlinear regression problem. An intrinsic component is estimated by first estimating a set of local linear constraints, in terms of derivatives, from multi-scale patches of the observed image. In our case, a multi-scale patch is comprised of 3x3 pixel data from a three-level Laplacian pyramid.
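To make the multi-scale patch idea concrete, the following is a rough numpy sketch of building a three-level pyramid and stacking 3x3 patches from each level into one feature vector. The block-average blur and nearest-neighbor upsampling are crude stand-ins for proper pyramid filters, and the sizes are illustrative:

```python
import numpy as np

def downsample(img):
    # 2x2 block average (assumes even dimensions) - a crude blur+decimate.
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # Nearest-neighbor expansion back to double size.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels=3):
    pyr, cur = [], img
    for _ in range(levels - 1):
        small = downsample(cur)
        pyr.append(cur - upsample(small))  # band-pass residual
        cur = small
    pyr.append(cur)                        # low-pass top level
    return pyr

def multiscale_patch(pyr, r, c, size=3):
    # 3x3 patch around (r, c), with coordinates halved at each level.
    feats = []
    for lvl, band in enumerate(pyr):
        rr, cc = r >> lvl, c >> lvl
        h = size // 2
        feats.append(band[rr - h:rr + h + 1, cc - h:cc + h + 1].ravel())
    return np.concatenate(feats)

rng = np.random.default_rng(2)
img = rng.random((80, 100))
pyr = laplacian_pyramid(img)
x = multiscale_patch(pyr, 40, 48)
print(x.shape)  # 3 levels x 9 values = (27,)
```

A 27-dimensional feature per location is what keeps the regression tractable, as discussed next.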
The multi-scale representation effectively allows larger derivative context to be considered with only a small increase in dimensionality. By operating over multi-scale patches rather than the raw image, the system effectively overcomes the curse of dimensionality [2]. For example, given a relatively small image of 320 by 240 pixels, we would otherwise end up with a 76,800-dimensional regression problem - too large for standard regression techniques to handle. Operating on patches also allows us to tolerate misalignments. Once the image derivatives are estimated, the final image must be computed. Each of the estimated derivatives can be thought of as a constraint that must be met in the estimation of the new component image. This image is found by solving for the image that best satisfies these constraints. The problem is thus reduced to estimating a weight matrix for a set of basis functions. These weights are found by minimizing the squared error between ground-truth images and the estimated image.

IV. OVERVIEW

We divide the relighting process into two main phases. The first phase focuses on recovering the face as it would appear under uniform illumination. We will refer to this step as "de-illuminating" the face; this is the same type of image as an intrinsic image from [15], [16]. In the second phase, new illuminations are synthesized from the de-illuminated face image. As we will show in Section VII, these can be combined with the de-illuminated face to produce new images of the face under various illuminations. Throughout the rest of this paper, we will refer to this second phase as the re-illumination phase. We first describe in Section V our approach for de-illuminating faces. Following that, Section VI describes how these faces can be re-illuminated.

V. DE-ILLUMINATING FACES

We treat the de-illumination step as an estimation problem.
Given an image of a face under some known illumination, we use non-linear regression to estimate the derivatives of the de-illuminated face image. In other words, we estimate what the result of filtering the de-illuminated face with derivative filters would look like. Once the derivative estimates have been computed, the de-illuminated face is reconstructed from them using systems of equations designed for solving the Poisson equation [1].

A. Basic Formulation for Derivative Estimators

The non-linear regression system works by modeling the derivative estimates as a linear combination of non-linear basis functions. The goal of regression is to predict the value of one or more continuous target variables t, given the value of a d-dimensional vector x of input variables [3]. This can be accomplished by finding the linear combination of basis functions that produces the correct derivatives, t, given an input point x from the source feature space. Formally, this can be expressed by defining a function y(x; w) such that

y(x; w) = \sum_{j=0}^{M-1} w_j \phi_j(x) = w^T \phi(x)    (1)

where w are weight parameters that control the overall contribution of each of the M basis functions. There are many possible choices of basis functions. For our application, we choose \phi to be the Gaussian radial basis function (RBF) due to its well-studied analytic behavior across multiple scales:

\phi_j(x) = \exp\left\{ -\frac{\|x - \mu_j\|^2}{2 s^2} \right\}    (2)

where \mu_j is the mean, which controls the location of each basis function, and s^2 is the variance, which controls the spatial scale. It should be noted that the classical Gaussian kernel typically includes a normalization coefficient. For our purposes, this coefficient is unnecessary because each basis function is multiplied by a corresponding weight parameter, w_j.
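Equations 1 and 2 can be sketched as follows with synthetic data; the dimensions, the isotropic scale s^2, and the direct least-squares fit are illustrative choices, not the paper's exact configuration:

```python
import numpy as np

# Sketch of the RBF regression model: targets as a weighted sum of
# Gaussian basis functions, with weights fit by least squares.
rng = np.random.default_rng(3)

def rbf_features(X, centers, s2):
    # Phi[n, j] = exp(-||x_n - mu_j||^2 / (2 s^2))   (Eq. 2, isotropic case)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * s2))

N, d, M = 200, 9, 12            # samples, patch dimension, basis count
X = rng.random((N, d))          # vectorized 3x3 patches (synthetic)
centers = rng.random((M, d))    # basis locations mu_j (illustrative)
t = rng.random(N)               # e.g. one derivative value per patch

Phi = rbf_features(X, centers, s2=0.5)   # N x M design matrix
# Normal-equations solution w = (Phi^T Phi)^-1 Phi^T t, computed
# stably with a least-squares solver.
w, *_ = np.linalg.lstsq(Phi, t, rcond=None)

pred = Phi @ w                  # y(x; w) = w^T phi(x)  (Eq. 1)
print(w.shape)  # (12,)
```

Using `lstsq` rather than an explicit matrix inverse is a standard numerical choice; it computes the same minimizer.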

Fig. 1. Example de-illumination training data for the Small Faces. Each column represents a source training set for a particular illumination model - in this case: illumination from the right; illumination from the top; illumination from the left; illumination from the bottom. The far right column is the uniformly illuminated target training data from which the derivatives are generated.

B. Taking Advantage of Regularities in Facial Appearance

Because we have focused on faces, we can take advantage of the statistically regular structure in faces. The face images are only roughly aligned, so that key facial features, such as the eyes, are approximately at known locations. Images are then divided into patches. In our current implementation, they are divided into twenty rows and twenty columns, for a total of 400 patches. We will refer to these large patches as face patches. For each face patch, two estimators are trained: one predicts the horizontal derivatives of the patch and the other predicts the vertical derivatives. We break up the face image into patches so that each estimator can specialize on a particular part of the face. While we do not use an explicit 3D model of the faces, this approach implicitly assumes that faces have a regular structure. When the estimated derivatives are re-integrated, the patches are forced to blend together seamlessly. We operate on patches, rather than pixels, to make our system more robust to noise and misalignment errors, while implicitly incorporating structure.

C. Learning to Estimate De-Illuminated Derivatives

We find the weights w_j in Equation 1 by training the regressors on images of faces taken under multiple illuminations, such as images from the Yale Face Database [8]. When an image of the face under uniform illumination is not available to serve as the target image, we have found that a suitable substitute can be created by averaging different illuminations.
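The averaging substitute mentioned above can be sketched directly; the image size and the assumption of exactly four directional images are illustrative:

```python
import numpy as np

# Pseudo-uniform target: average the same face under several
# illumination directions when no uniformly lit image exists.
rng = np.random.default_rng(4)
# Four images of one face: lit from right, top, left, bottom.
lit = rng.random((4, 80, 100))
pseudo_uniform = lit.mean(axis=0)   # approximate uniform-illumination target
print(pseudo_uniform.shape)  # (80, 100)
```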
The training data set is divided into different illumination sets based on the location of the principal light source, i.e. left, top, right, bottom. We learn one de-illumination model for each illumination location. As mentioned above, we learn two estimators for each face patch in the image. The inputs x to the estimators are 3x3 image patches contained in one of the larger face patches. Each of these 3x3 image patches is pre-processed by subtracting the mean value of all image patches in that particular face patch. We will denote the set of all 3x3 patches associated with a face patch as X. Since each basis function in Equation 1 is a function of \mu and s, we need to find appropriate values for these parameters. The problem of finding suitable values for \mu is formulated as a k-means optimization problem: given the set of n data points in X and an integer k, we would like to find the set of k points, called centers, that minimizes the mean squared distance from each data point to its nearest center [10]. Once suitable cluster means are found, the scaling parameter s is calculated. For our problem the parameter s is actually a matrix, i.e. a covariance matrix \Sigma. For each cluster, represented by a mean and its associated data points, the within-cluster covariance is calculated. This makes the basis functions multivariate Gaussians of the form

\phi_j(x) = \exp\left\{ -\frac{1}{2} (x - \mu_j)^T \Sigma_j^{-1} (x - \mu_j) \right\}    (3)

In this form, special care must be taken when calculating \Sigma^{-1}. In general, \Sigma may not be invertible. This is typically due to an under-determined system caused by too few points being assigned to a particular cluster; for these cases a regularized pseudo-inverse is used. With our inputs defined as X, our targets defined as T, and our basis functions defined as \phi(x) as in Equation 3, we can now find the value for w in Equation 1.
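A rough sketch of this parameter-fitting stage follows, with a plain-numpy k-means and a ridge-regularized pseudo-inverse standing in for whichever k-means implementation [10] and regularization scheme the system actually uses:

```python
import numpy as np

# Fit basis parameters: mu_j via k-means, Sigma_j as the within-cluster
# covariance, with a regularized (pseudo-)inverse for safety.
rng = np.random.default_rng(5)
X = rng.random((300, 9))        # 3x3 patches, vectorized
k = 8

centers = X[rng.choice(len(X), k, replace=False)].copy()
for _ in range(20):             # Lloyd iterations
    d2 = ((X[:, None] - centers[None]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    for j in range(k):
        pts = X[labels == j]
        if len(pts):
            centers[j] = pts.mean(axis=0)

inv_covs = []
for j in range(k):
    pts = X[labels == j]
    # Within-cluster covariance; the ridge term keeps it well-behaved
    # even for under-populated clusters (the regularized pseudo-inverse case).
    cov = np.cov(pts.T) if len(pts) > 1 else np.eye(X.shape[1])
    inv_covs.append(np.linalg.pinv(cov + 1e-6 * np.eye(X.shape[1])))

# Multivariate Gaussian basis (Eq. 3) evaluated at one patch x.
x = X[0]
phi = np.array([np.exp(-0.5 * (x - mu) @ S_inv @ (x - mu))
                for mu, S_inv in zip(centers, inv_covs)])
print(phi.shape)  # (8,)
```

Each `phi[j]` lies in (0, 1], with values near 1 for patches close to a cluster center under that cluster's metric.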
The solution for w can be defined as the value that minimizes the sum-of-squares error function

E(w) = \frac{1}{2} \sum_{n=1}^{N} \{ t_n - w^T \phi(x_n) \}^2    (4)

where t_n is an instance of a target from T and x_n is an instance of a sample from X. Taking the derivative of Equation 4 with respect to w yields

\frac{dE(w)}{dw} = -\sum_{n=1}^{N} \{ t_n - w^T \phi(x_n) \} \phi(x_n)^T    (5)

Setting Equation 5 equal to zero and rearranging,

\sum_{n=1}^{N} t_n \phi(x_n)^T - w^T \sum_{n=1}^{N} \phi(x_n) \phi(x_n)^T = 0    (6)

Solving for w gives

w = (\Phi^T \Phi)^{-1} \Phi^T t    (7)

where w is the solution that minimizes the difference between the targets (the observed derivative values) and the weighted sum of the basis functions evaluated at a particular x value, and t is the vector of targets. \Phi is an N x M matrix that contains every basis function evaluated at every sample point:

\Phi = \begin{pmatrix}
\phi_0(x_1) & \phi_1(x_1) & \cdots & \phi_{M-1}(x_1) \\
\phi_0(x_2) & \phi_1(x_2) & \cdots & \phi_{M-1}(x_2) \\
\vdots & \vdots & & \vdots \\
\phi_0(x_N) & \phi_1(x_N) & \cdots & \phi_{M-1}(x_N)
\end{pmatrix}    (8)

After the construction of the training source feature space, the training target features are calculated. As stated above, we model the changes in illumination in terms of the changes in the image derivatives. Therefore, we calculate the horizontal and vertical derivatives for each pixel, in each subsection, over each uniformly illuminated image in the training set. The resulting derivatives are then placed in a matrix that represents the training target feature space, which we define as T. Using non-linear regression, we estimate the diffuse illumination model described by the relationships between the training source features and the training target features; the process of non-linear regression is detailed above. The output is a vector of weighting values that control the individual contributions of a set of non-linear basis functions. To generate a new uniformly illuminated image from an input picture (i.e. one that is not part of the training set, but contains one of the learned illumination models: left, right, top, bottom), we segment the new image, generate the image patches, and vectorize the patches. We can think of each vectorized patch as a point in d-dimensional space. For each point, the vertical and horizontal derivatives are then readily estimated by calculating the inner product between the weight vector and the vector of basis-function values at that point. Once this is done, the final image is generated by Poisson integration of the estimated vertical and horizontal derivatives [1].
VI. RE-ILLUMINATION

The re-illumination phase is nearly identical to the de-illumination stage. The main difference is that the goal has changed from calculating the de-illuminated face to calculating new illuminations. In a simplified image formation model, images are the product of the de-illuminated face and an illumination image. This makes the illumination image similar to the shading images from [16], [15]; this image has also been referred to as the quotient image [13]. The only other difference from the de-illumination stage is that the input images are de-illuminated faces. Besides these two differences, the illumination estimation involves the same basic steps of estimating derivative values and integrating them to form re-illuminated images.

Fig. 2. Example re-illumination training data for the Small Faces. The far left column is the uniformly illuminated source training data. Each remaining column represents the quotient image source training set for a particular illumination model - in this case: illumination from the right; illumination from the top; illumination from the left; illumination from the bottom.

VII. RESULTS

Extensive experiments were run on images of faces from the "Extended Yale Face Database B" [8], with only some depicted herein due to limited space; additional results are shown in the supplementary material. Images from this database were cropped and only roughly aligned to create two data sets: Small Faces and Large Faces. The Small Face data consisted of images of faces that were scaled to 80 x 100 pixels in size and included hair and some residual background information. The Large Face data consisted of images of faces that were cropped to contain just the center portion of the face; these cropped images were 136 x 70. Having small and large faces enabled us to see how scale affects the approach.
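The simplified image formation model of Section VI (image = de-illuminated face x illumination image) can be sketched as a pixelwise multiply; the sizes, value ranges, and guard constant below are illustrative:

```python
import numpy as np

# Re-illumination under the simplified model: multiply the
# de-illuminated face by an (estimated) illumination image.
rng = np.random.default_rng(6)
deilluminated = 0.1 + 0.9 * rng.random((80, 100))  # bounded away from zero
illumination = 0.5 + rng.random((80, 100))          # quotient-style image

relit = deilluminated * illumination
# Conversely, an illumination image can be recovered from a lit/unlit
# pair by a guarded pixelwise division.
recovered = relit / np.maximum(deilluminated, 1e-8)
print(np.allclose(recovered, illumination))  # True
```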
Both sets of image data were then partitioned into four illumination sets: illumination from the right; illumination from the top; illumination from the left; illumination from the bottom. Figure 1 shows an example of the source and target training data for the de-illumination of the Small Faces, and Figure 2 shows the corresponding data for re-illumination. The data for both Small and Large Faces was separated into a training set and a testing set. The training set for the Small Faces consisted of fifteen faces; the remaining images in the database were used as the testing set. Similarly, the training set for the Large Faces consisted of twenty-one faces, with the remaining images in the database used as the testing set. The results presented show two de-illumination/re-illumination scenarios. The first is the situation where de-illumination is not required. This can be considered the optimal scenario and would apply to images that are already illuminated by a relatively uniform light source, such as those taken outdoors. These results are shown in Figure 4. In the second scenario, shown in Figures 3 (Large images) and 5 (Small images), the harder task of de-illumination must be performed prior to re-illumination. The results presented for this scenario show how the method performs despite being given harsh illuminations in the test images. These harsh conditions make the image recovery process far more challenging, because large portions of the input image to be de-illuminated are unknown due to shadows.

VIII. OBSERVATIONS AND CONCLUDING REMARKS

By inspecting the results in Figures 4 and 5, we can make the following observations and remarks.

Re-Illuminating is Easier than De-Illuminating: The images that were only re-illuminated, rather than being de-illuminated and then re-illuminated, are in general of higher quality. This indicates that much of the degradation in the de-illumination/re-illumination pipeline enters during the de-illumination step, which almost all existing methods avoid by either using special, overly expensive equipment (e.g. the light stage) or using a huge number of images of the same object illuminated from many possible directions.

The System Must Hallucinate Data: Examining the left side of the faces, it is clear that the method is hallucinating details that are in shadow. While the method does a good job, more research is needed on the types of cost functions that would allow the method to better represent these details. Again, note that this problem is not tackled by other existing methods, since they either assume perfect alignment and the same pose, or use special equipment and setups.
To conclude, we have presented a machine learning approach that takes advantage of the statistical regularities of a class of illuminated objects in images to both de-illuminate and re-illuminate. The underlying idea is that by breaking the image down into small patches it is possible to learn estimators for specific parts of the face. This leads to a system that produces realistic results and yet is straightforward to implement on a desktop, without special equipment or a huge number of illumination directions per input training object.

REFERENCES

[1] A. Agrawal, R. Raskar, and R. Chellappa. What is the range of surface reconstructions from a gradient field? In European Conference on Computer Vision.
[2] R. E. Bellman. Adaptive Control Processes. Princeton University Press.
[3] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, New York, NY.
[4] V. Blanz, S. Romdhani, and T. Vetter. Face identification across different poses and illuminations with a 3D morphable model. In Proc. Fifth IEEE International Conference on Automatic Face and Gesture Recognition.
[5] R. Brunelli and T. Poggio. Face recognition: features versus templates. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(10).
[6] H. F. Chen, P. N. Belhumeur, and D. Jacobs. In search of illumination invariants. In IEEE Conference on Computer Vision and Pattern Recognition.
[7] P. Debevec, T. Hawkins, C. Tchou, H.-P. Duiker, W. Sarokin, and M. Sagar. Acquiring the reflectance field of a human face. In SIGGRAPH 2000, Computer Graphics Proceedings. ACM Press / ACM SIGGRAPH / Addison Wesley Longman.
[8] A. Georghiades, P. N. Belhumeur, and D. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6).
[9] R. Gross, S. Baker, I. Matthews, and T. Kanade.
Face recognition across pose and illumination. In S. Z. Li and A. K. Jain, editors, Handbook of Face Recognition. Springer-Verlag.
[10] T. Kanungo, D. Mount, N. Netanyahu, C. Piatko, R. Silverman, and A. Wu. An efficient k-means clustering algorithm: analysis and implementation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7).
[11] K. Lee and B. Moghaddam. A practical face relighting method for directional lighting normalization. In IEEE International Workshop on Analysis and Modeling of Faces and Gestures.
[12] R. Ramamoorthi. Modeling illumination variation with spherical harmonics. Academic Press, Burlington, MA.
[13] A. Shashua and T. Riklin-Raviv. The quotient image: class-based re-rendering and recognition with varying illuminations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2).
[14] A. Stoschek. Image-based re-rendering of faces for continuous pose and illumination directions. In IEEE Conference on Computer Vision and Pattern Recognition.
[15] M. F. Tappen, E. H. Adelson, and W. T. Freeman. Estimating intrinsic component images using non-linear regression. In IEEE Conference on Computer Vision and Pattern Recognition.
[16] M. F. Tappen, W. T. Freeman, and E. H. Adelson. Recovering intrinsic images from a single image. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(9).

Fig. 3. Example of the De-illumination and Re-illumination process for the Large Dataset. (Columns: source; de-illuminated face; re-illuminated faces.)

Fig. 4. Example of the re-illumination process (panels: source, re-illuminated faces).

Fig. 5. Example of the de-illumination and re-illumination process (panels: source, de-illuminated face, re-illuminated faces). The source image on the left is used to first produce an image of the face under uniform illumination; we refer to this as the de-illuminated face. The de-illuminated face is then combined with estimated illuminations to synthesize images of the face under a number of different illuminations.
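The patch-based estimation idea summarized above can be illustrated with a minimal sketch: a three-level Laplacian pyramid supplies multi-scale features for each pixel, and a Nadaraya-Watson kernel regressor maps those features to the target intrinsic quantity. This is an illustrative sketch under assumed details (a non-downsampled pyramid, a Gaussian kernel, a hand-picked bandwidth), not the authors' implementation.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian filter with reflect padding (output shape == input shape)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, r, mode="reflect")
    # Convolve rows, then columns; "valid" mode undoes the padding exactly.
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="valid"), 0, tmp)

def laplacian_pyramid(img, levels=3):
    """Non-downsampled Laplacian pyramid: band-pass levels plus a low-pass residual.
    Summing all levels reconstructs the input exactly."""
    pyr, cur = [], img
    for _ in range(levels - 1):
        low = gaussian_blur(cur)
        pyr.append(cur - low)  # band-pass detail at this scale
        cur = low
    pyr.append(cur)  # coarsest low-pass residual
    return pyr

def patch_features(pyr, i, j, half=1):
    """Stack a (2*half+1)^2 patch centered at (i, j) from every pyramid level
    into one multi-scale feature vector."""
    return np.concatenate(
        [lvl[i - half:i + half + 1, j - half:j + half + 1].ravel() for lvl in pyr]
    )

def kernel_regress(X_train, y_train, x, bandwidth=1.0):
    """Nadaraya-Watson estimate: a Gaussian-weighted average of training targets."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    w /= w.sum() + 1e-12  # epsilon guards against an all-zero weight vector
    return w @ y_train
```

In this sketch, `X_train` would hold multi-scale patch features from the observed training faces and `y_train` the corresponding intrinsic-component derivatives; at test time, each pixel's feature vector is regressed independently, so estimators naturally specialize to local face regions.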


More information

A Morphable Model for the Synthesis of 3D Faces

A Morphable Model for the Synthesis of 3D Faces A Morphable Model for the Synthesis of 3D Faces Marco Nef Volker Blanz, Thomas Vetter SIGGRAPH 99, Los Angeles Presentation overview Motivation Introduction Database Morphable 3D Face Model Matching a

More information

Outline 7/2/201011/6/

Outline 7/2/201011/6/ Outline Pattern recognition in computer vision Background on the development of SIFT SIFT algorithm and some of its variations Computational considerations (SURF) Potential improvement Summary 01 2 Pattern

More information

3D Morphable Model Parameter Estimation

3D Morphable Model Parameter Estimation 3D Morphable Model Parameter Estimation Nathan Faggian 1, Andrew P. Paplinski 1, and Jamie Sherrah 2 1 Monash University, Australia, Faculty of Information Technology, Clayton 2 Clarity Visual Intelligence,

More information

Real-Time Illumination Estimation from Faces for Coherent Rendering

Real-Time Illumination Estimation from Faces for Coherent Rendering Real-Time Illumination Estimation from Faces for Coherent Rendering Sebastian B. Knorr Daniel Kurz Metaio GmbH Figure 1: Our method enables coherent rendering of virtual augmentations (a,b) based on illumination

More information

Ligh%ng and Reflectance

Ligh%ng and Reflectance Ligh%ng and Reflectance 2 3 4 Ligh%ng Ligh%ng can have a big effect on how an object looks. Modeling the effect of ligh%ng can be used for: Recogni%on par%cularly face recogni%on Shape reconstruc%on Mo%on

More information

Simultaneous surface texture classification and illumination tilt angle prediction

Simultaneous surface texture classification and illumination tilt angle prediction Simultaneous surface texture classification and illumination tilt angle prediction X. Lladó, A. Oliver, M. Petrou, J. Freixenet, and J. Martí Computer Vision and Robotics Group - IIiA. University of Girona

More information

What is Computer Vision? Introduction. We all make mistakes. Why is this hard? What was happening. What do you see? Intro Computer Vision

What is Computer Vision? Introduction. We all make mistakes. Why is this hard? What was happening. What do you see? Intro Computer Vision What is Computer Vision? Trucco and Verri (Text): Computing properties of the 3-D world from one or more digital images Introduction Introduction to Computer Vision CSE 152 Lecture 1 Sockman and Shapiro:

More information