Supplementary Material: Specular Highlight Removal in Facial Images


Chen Li¹, Stephen Lin², Kun Zhou¹, Katsushi Ikeuchi²
¹ State Key Lab of CAD&CG, Zhejiang University    ² Microsoft Research

1. Computation of skin relative absorbance vectors σ_m and σ_h

Skin albedo A can be expressed as a combination of two pigments, melanin and hemoglobin:

    A(p) = σ_m^{ρ_m(p)} · σ_h^{ρ_h(p)},    (1)

where ρ_m(p), ρ_h(p) are the pigment densities and σ_m, σ_h are the relative absorbance vectors of melanin and hemoglobin, respectively. To solve for the relative absorbance vectors σ_m and σ_h, we first take the logarithm of Eq. (1):

    log A(p) = ρ_m(p) log σ_m + ρ_h(p) log σ_h.    (2)

Values of log A with different pigment densities thus lie in a plane P in log-RGB space spanned by the two bias vectors log σ_m and log σ_h, as shown in Fig. 1(a). To compute these two bias vectors, which are assumed to be invariant from person to person, we use the face dataset of [12]. Diffuse skin patches under a neutral illumination color are first extracted using the method in [1]. We then apply the intrinsic image decomposition of [12] to obtain the skin albedo without diffuse shading. According to Eq. (2), all of the decomposed skin albedo values should ideally lie in the plane P formed by log σ_m and log σ_h in log-RGB space. However, since slight shading variations likely remain in the decomposed albedo samples, the values will not lie in a perfect plane. We therefore apply Principal Components Analysis to extract the plane P, and project all skin samples onto P along the shading direction 1 = (1, 1, 1)^T in log-RGB space to remove the shading effects. After this projection onto P, Independent Components Analysis is applied to the projected skin albedo samples to extract the two independent bias vectors. We compute the bias vectors for each skin patch and use their average as the values of log σ_m and log σ_h.
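The estimation procedure above (log transform, PCA plane fit, projection along the shading direction (1, 1, 1)^T, then ICA) can be sketched in a few lines of NumPy/scikit-learn. This is a hypothetical re-implementation under our own conventions, not the authors' code; `estimate_bias_vectors` and its interface are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

def estimate_bias_vectors(albedo_rgb, random_state=0):
    """Given an (N, 3) array of diffuse skin albedo samples, recover two
    bias vectors interpreted as log sigma_m and log sigma_h (up to the
    sign/scale ambiguity inherent to ICA)."""
    log_a = np.log(albedo_rgb)                 # move to log-RGB space
    centered = log_a - log_a.mean(axis=0)

    # PCA via SVD: the plane P is spanned by the two leading right
    # singular vectors; the third is the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis, normal = vt[:2], vt[2]

    # Project each sample onto P along the shading direction
    # 1 = (1, 1, 1)^T, i.e. x' = x - t*1 with (x - t*1) . normal = 0,
    # which cancels residual shading in the decomposed albedo.
    ones = np.ones(3)
    t = centered @ normal / (ones @ normal)
    in_plane = centered - np.outer(t, ones)

    # ICA on the 2-D in-plane coordinates separates the two independent
    # pigment directions (melanin and hemoglobin).
    coords = in_plane @ basis.T
    ica = FastICA(n_components=2, random_state=random_state)
    ica.fit(coords)
    return ica.mixing_.T @ basis               # two (3,) vectors in log-RGB
```

On synthetic samples generated exactly from Eq. (1), the two returned vectors lie in the plane spanned by the true log σ_m and log σ_h, which is a useful sanity check before applying the routine to real skin patches.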
Figure 1(b) shows varying skin albedos with different melanin and hemoglobin densities, generated with the computed melanin absorbance vector σ_m and hemoglobin absorbance vector σ_h. (This work was done while Chen Li was an intern at Microsoft Research.) Among all of the examined skin patches, the (average, standard deviation) of the angular differences between the estimated bias vectors and the averaged log σ_m and log σ_h are (3.70°, 2.33°) and (9.61°, 8.68°), respectively. Based on this empirical study and the claims in [10], σ_m and σ_h can be treated as approximately the same among different people, with variations in skin color mainly caused by differences in the pigment densities ρ.

2. Uniform Illumination Chromaticity Estimation

Although our method is formulated to estimate directionally variant illumination, we can compare it with techniques that estimate a uniform illumination chromaticity. We specifically compare against IIC [9], SpePL [2], FacePL [6], and FaceGM [1]. IIC [9] assumes a direct correlation between illumination chromaticity and image chromaticity in inverse-intensity chromaticity space. SpePL [2] combines the dichromatic model with a hard Planckian locus constraint. FacePL [6] uses a soft Planckian locus constraint together with a statistical skin albedo distribution. FaceGM [1] applies gamut mapping with a generic statistical model of skin color to estimate the illumination color. All of the comparison methods handle only a uniform illumination chromaticity.

For a quantitative comparison, we use the reprocessed version of Gehler's ColorChecker dataset [3] provided by [7], in which the images have linear camera responses and calibrated illumination color. From this dataset, we manually select all 37 of the facial images that contain specular reflection and use them for evaluation.
We use the error metric suggested by Hordley and Finlayson [4]:

    e = arccos( Γ_e^T Γ_m / (‖Γ_e‖ ‖Γ_m‖) ),    (3)

which measures the angle between the RGB triplets of the estimated illumination Γ_e and the ground-truth illumination Γ_m. The measured angular errors of the proposed method and of the comparison methods are listed in Tab. 1, including the minimum, median, mean, maximum, and standard deviation of the errors.

Figure 1. Melanin-hemoglobin based skin model in log-RGB space. (a) In log-RGB space, the logs of the melanin absorbance vector σ_m and the hemoglobin absorbance vector σ_h form a plane. (b) Varying skin albedos with different combinations of melanin and hemoglobin density.

Table 1. Angular error (degrees) for illumination chromaticity estimation

    Algorithm    Min   Median  Mean  Max   Std
    IIC [9]      1.90  7.12    8.12  15.1  3.63
    SpePL [2]    0.17  2.85    2.89  6.25  1.50
    FacePL [6]   3.08  4.49    4.98  9.70  1.75
    FaceGM [1]   0.02  3.80    3.96  8.21  1.84
    Ours         0.02  0.76    0.88  3.43  0.77

The overall performance of our approach surpasses the others, with greater robustness as indicated by the standard deviation of the errors. A few estimation results are shown in Fig. 2, and the rest are provided in Figs. 3–8; the ground truth is given in the last column of these figures. Because of their Planckian locus constraints, SpePL and FacePL have larger estimation errors when the lighting deviates from the Planckian assumption, such as in the first row of Fig. 2. FaceGM is relatively less stable, as reflected by its higher standard deviation in Tab. 1, since its gamut mapping approach may admit a large range of feasible illumination chromaticities, especially for dark skin colors, such as in the second row of Fig. 2, which have a relatively narrow gamut.

Figure 2. Some results for illumination chromaticity estimation. (a) Input images. (b–g) Illumination-corrected images with the chromaticity estimated using (b) IIC [9], (c) SpePL [2], (d) FacePL [6], (e) FaceGM [1], (f) our method, and (g) the calibrated ground truth from a ColorChecker. Measured errors are listed in the lower-right corners.
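As a concrete reference, Eq. (3) and the summary statistics of Tab. 1 amount to the following. These are hypothetical helpers written for this note, not the authors' evaluation code, and the sample values are illustrative rather than the paper's measurements.

```python
import numpy as np

def angular_error_deg(gamma_e, gamma_m):
    """Eq. (3): angle in degrees between the estimated illumination RGB
    triplet gamma_e and the ground-truth triplet gamma_m."""
    gamma_e = np.asarray(gamma_e, float)
    gamma_m = np.asarray(gamma_m, float)
    cos = gamma_e @ gamma_m / (np.linalg.norm(gamma_e) * np.linalg.norm(gamma_m))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))  # clip guards rounding

def error_summary(errors):
    """Min/median/mean/max/std summary of per-image errors, as in Tab. 1."""
    e = np.asarray(errors, float)
    return e.min(), np.median(e), e.mean(), e.max(), e.std(ddof=1)

# e.g. angular_error_deg([0.5, 0.4, 0.3], [1.0, 0.8, 0.6]) is ~0:
# scaling an illuminant changes its intensity but not its chromaticity.
```

The metric is scale-invariant by construction, which is why it is preferred over RGB distance for illumination estimates that are only defined up to brightness.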
The performance of IIC is limited by the accuracy of specular-diffuse segmentation, which is often challenging for human skin, especially where highlights are subtle, such as in the third row of Fig. 2. Although our method also utilizes specular highlights to estimate the illumination chromaticity, it remains effective even when the highlights are weak, owing to the melanin-hemoglobin based skin model.

3. Additional Results for Specular Highlight Removal

In Figs. 9–14, we present additional results for specular highlight removal on all 37 facial images in Gehler's ColorChecker dataset [3, 7] that contain visible specular highlights. In each figure, column (a) is the input image; columns (b–f) are the diffuse images separated by (b) ILD [8], (c) MDCBF [11], (d) DarkP [5], (e) FacePL [6], and (f) our method.

References

[1] S. Bianco and R. Schettini. Adaptive color constancy using faces. IEEE Trans. Patt. Anal. and Mach. Intel., 36(8):1505–1518, 2014.
[2] G. D. Finlayson and G. Schaefer. Solving for colour constancy using a constrained dichromatic reflection model. Int. Journal of Computer Vision, 42:127–144, 2001.

[3] P. V. Gehler, C. Rother, A. Blake, T. Minka, and T. Sharp. Bayesian color constancy revisited. In CVPR, pages 1–8, 2008.
[4] S. D. Hordley and G. D. Finlayson. Re-evaluating colour constancy algorithms. In ICCV, pages 76–79, 2004.
[5] H. Kim, H. Jin, S. Hadap, and I. Kweon. Specular reflection separation using dark channel prior. In CVPR, pages 1460–1467, 2013.
[6] C. Li, K. Zhou, and S. Lin. Intrinsic face image decomposition with human face priors. In ECCV, pages 218–233, 2014.
[7] S. Lynch, M. Drew, and G. Finlayson. Colour constancy from both sides of the shadow edge. In ICCV Workshops, pages 899–906, 2013.
[8] R. Tan and K. Ikeuchi. Separating reflection components of textured surfaces using a single image. IEEE Trans. Patt. Anal. and Mach. Intel., 27(2):178–193, 2005.
[9] R. T. Tan, K. Nishino, and K. Ikeuchi. Color constancy through inverse-intensity chromaticity space. J. Opt. Soc. Am. A, 21(3):321–334, 2004.
[10] N. Tsumura, N. Ojima, K. Sato, M. Shiraishi, H. Shimizu, H. Nabeshima, S. Akazaki, K. Hori, and Y. Miyake. Image-based skin color and texture analysis/synthesis by extracting hemoglobin and melanin information in the skin. ACM Trans. Graph., 22(3):770–779, 2003.
[11] Q. Yang, S. Wang, and N. Ahuja. Real-time specular highlight removal using bilateral filtering. In ECCV, pages 87–100, 2010.
[12] Q. Zhao, P. Tan, Q. Dai, L. Shen, E. Wu, and S. Lin. A closed-form solution to retinex with nonlocal texture constraints. IEEE Trans. Patt. Anal. and Mach. Intel., 34(7):1437–1444, 2012.

Figure 3. Additional results for illumination chromaticity estimation. (a) Input images. (b–g) Illumination-corrected images with the chromaticity estimated using (b) IIC [9], (c) SpePL [2], (d) FacePL [6], (e) FaceGM [1], (f) our method, and (g) the calibrated ground truth from a ColorChecker. Below each corrected image is the measured error of the estimated illumination chromaticity.

Figure 4. Additional results for illumination chromaticity estimation. (a) Input images. (b–g) Illumination-corrected images with the chromaticity estimated using (b) IIC [9], (c) SpePL [2], (d) FacePL [6], (e) FaceGM [1], (f) our method, and (g) the calibrated ground truth from a ColorChecker. Below each corrected image is the measured error of the estimated illumination chromaticity.

Figure 5. Additional results for illumination chromaticity estimation. (a) Input images. (b–g) Illumination-corrected images with the chromaticity estimated using (b) IIC [9], (c) SpePL [2], (d) FacePL [6], (e) FaceGM [1], (f) our method, and (g) the calibrated ground truth from a ColorChecker. Below each corrected image is the measured error of the estimated illumination chromaticity.

Figure 6. Additional results for illumination chromaticity estimation. (a) Input images. (b–g) Illumination-corrected images with the chromaticity estimated using (b) IIC [9], (c) SpePL [2], (d) FacePL [6], (e) FaceGM [1], (f) our method, and (g) the calibrated ground truth from a ColorChecker. Below each corrected image is the measured error of the estimated illumination chromaticity.

Figure 7. Additional results for illumination chromaticity estimation. (a) Input images. (b–g) Illumination-corrected images with the chromaticity estimated using (b) IIC [9], (c) SpePL [2], (d) FacePL [6], (e) FaceGM [1], (f) our method, and (g) the calibrated ground truth from a ColorChecker. Below each corrected image is the measured error of the estimated illumination chromaticity.

Figure 8. Additional results for illumination chromaticity estimation. (a) Input images. (b–g) Illumination-corrected images with the chromaticity estimated using (b) IIC [9], (c) SpePL [2], (d) FacePL [6], (e) FaceGM [1], (f) our method, and (g) the calibrated ground truth from a ColorChecker. Below each corrected image is the measured error of the estimated illumination chromaticity.

Figure 9. Additional results for specular highlight removal. (a) Input images. (b–f) Separated diffuse images by (b) ILD [8], (c) MDCBF [11], (d) DarkP [5], (e) FacePL [6], and (f) our method.

Figure 10. Additional results for specular highlight removal. (a) Input images. (b–f) Separated diffuse images by (b) ILD [8], (c) MDCBF [11], (d) DarkP [5], (e) FacePL [6], and (f) our method.

Figure 11. Additional results for specular highlight removal. (a) Input images. (b–f) Separated diffuse images by (b) ILD [8], (c) MDCBF [11], (d) DarkP [5], (e) FacePL [6], and (f) our method.

Figure 12. Additional results for specular highlight removal. (a) Input images. (b–f) Separated diffuse images by (b) ILD [8], (c) MDCBF [11], (d) DarkP [5], (e) FacePL [6], and (f) our method.

Figure 13. Additional results for specular highlight removal. (a) Input images. (b–f) Separated diffuse images by (b) ILD [8], (c) MDCBF [11], (d) DarkP [5], (e) FacePL [6], and (f) our method.

Figure 14. Additional results for specular highlight removal. (a) Input images. (b–f) Separated diffuse images by (b) ILD [8], (c) MDCBF [11], (d) DarkP [5], (e) FacePL [6], and (f) our method.