Combining Strategies for White Balance

Simone Bianco, Francesca Gasparini and Raimondo Schettini
DISCo, Università degli Studi di Milano-Bicocca, Via Bicocca degli Arcimboldi 8, 20126 Milano, Italy

ABSTRACT

In this work we consider six methods for automatic white balance available in the literature. The idea investigated does not rely on a single method, but instead considers a consensus decision that takes into account the compendium of the responses of all the considered algorithms. Combining strategies are then proposed and tested on both synthetic and multispectral images, extracted from well-known databases. The multispectral images are processed using a digital camera simulator developed by Stanford University. All the results are evaluated using the Wilcoxon Sign Test.

Keywords: White balance, Gray World, White Point, Color by Correlation, Color in Perspective, Self Tunable Color Balancing, combining methods

1. INTRODUCTION

White balance is the process of removing unrealistic color casts from digital images, mostly due to the acquisition conditions (in particular lighting geometry and illuminant color). While the human visual system is able to render the perceived colors of objects almost independent of the illumination (a phenomenon known as Color Constancy), digital cameras often have great difficulty with Auto-White Balance (AWB), which in the worst cases can introduce unrealistic, damaging color casts. Several strategies for AWB have been proposed in the literature. In general these require some information about the camera being used, and/or are based on assumptions about the statistical properties of the expected illuminants and surface reflectances. In this work six AWB methods were considered: Gray World (GW),[1] White Point (WP),[2,3] an Iterative White Balance procedure (IWB),[4] Color by Correlation (CbC),[5,6] Color in Perspective (CiP)[7] and our Self Tunable Color Balancing (STCB).[8]
The idea investigated in this work does not rely on a single method, but instead considers a consensus decision that takes into account the compendium of the responses of all the considered algorithms. In this paper some combining strategies are proposed to improve the results in terms of RMS error in the estimated illuminant chromaticity, and are compared with the six considered algorithms. In Section 2 the analyzed white balance algorithms are briefly described; the suggested combining strategies are also reported and exemplified there. Section 3 is then divided into two sub-sections, covering respectively the experimental results on the synthetic data and on the multispectral images. For the filter transmittances we consider the SONY DXC-930 curves available on the web.[9] The synthetic data are processed ideally with the color formation equations, while the multispectral images are processed using a digital camera simulator[10] developed by Stanford University. This software simulates the whole acquisition pipeline of a digital camera, in particular taking into account the different kinds of noise involved. All the results are evaluated using the Wilcoxon Sign Test,[11] which determines a score ranking of the considered white balance procedures.

Further author information: (Send correspondence to R.S.)
F.G.: E-mail: gasparini@disco.unimib.it, Telephone: +39 (0)2 64 48 78 56
R.S.: E-mail: schettini@disco.unimib.it, Telephone: +39 (0)2 64 48 78 40

2. ALGORITHMS

Automatic white balance is an under-determined problem and thus impossible to solve in the most general case.[2] Several strategies have been proposed in the literature. In general these require some information about the camera being used, and are based on assumptions about the statistical properties of the expected illuminants and surface reflectances. From a computational perspective, white balance is a two-stage process: the illuminant is estimated, and the image colors are then corrected on the basis of this estimate. The correction generates a new image of the scene as if it were taken under a known standard illuminant. The color correction step is usually based on a diagonal model of illumination change, derived from the von Kries hypothesis that white balance is an independent gain regulation of the three cone signals, through three different gain coefficients.[12] This diagonal model is generally a good approximation of a change in illumination, as demonstrated by Finlayson et al.[13] Where the model leads to large errors, its performance can still be improved with sensor sharpening.[14,15] In formula, the von Kries hypothesis can be written as:

\begin{pmatrix} L' \\ M' \\ S' \end{pmatrix} = \begin{pmatrix} k_L & 0 & 0 \\ 0 & k_M & 0 \\ 0 & 0 & k_S \end{pmatrix} \begin{pmatrix} L \\ M \\ S \end{pmatrix} \qquad (1)

where L, M, S and L', M', S' are the initial and post-adaptation cone signals and k_L, k_M, k_S are the scaling coefficients.[12] The scaling coefficients can be expressed as the ratio between the cone responses to a white under the reference illuminant and those under the current one. A typical reference illuminant, which is also the one we have used here, is the CIE D65 standard illuminant.[16] In practical situations the L, M, S retinal wavebands are transformed into CIE XYZ tristimulus values by a linear transformation, or approximated by image RGB values.[17] Whatever the features used to describe the colors, we must have some criteria for estimating the illuminant, and thus the scaling coefficients in Equation (1).

2.1. Gray World

The gray world algorithm assumes that, given an image of sufficiently varied colors, the average surface color in the scene is gray.[1] This means that the shift from gray of the measured averages on the three channels corresponds to the color of the illuminant. Three scaling coefficients, one for each color channel, are therefore set to compensate for this shift.

2.2. White Point

Assuming that there is always some white in the scene, the white point algorithm looks for it in the image; its chromaticity will then be the chromaticity of the illuminant. The white point algorithm determines this white as the maximum R, maximum G and maximum B found in the image.[2,3] The scaling coefficients are then obtained by comparing these maxima with the values of the white of the reference illuminant.

2.3. Iterative White Balance

The Iterative White Balance algorithm proposed by Huo et al.[4] extracts gray color points in images for color temperature estimation. A gray color point is a point where the R, G and B components are equivalent under the canonical light source. The slight color deviation of gray color points from gray under a different color temperature is used to estimate the color temperature of the light source, and thus the scaling coefficients.

2.4. Color by Correlation

Color by correlation was introduced by Finlayson et al.[5,6] The basic idea is to pre-compute a correlation matrix which describes the extent to which the proposed illuminants are compatible with the occurrence of image chromaticities. Each row in the matrix corresponds to a different training illuminant. The matrix columns correspond to possible chromaticity ranges resulting from a discretization of (r, g) space, ordered in any convenient manner. In a further refinement, the correlation matrix has been set up to compute the probability that the observed chromaticities are due to each of the training illuminants. The best illuminant can then be chosen,[18,19] using a maximum likelihood estimate for example, or other methods described in the literature. There are different versions of Color by Correlation; in this paper we have implemented the one described by Finlayson et al.[5]
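The estimation rules of Sections 2.1 and 2.2, together with the diagonal correction of Equation (1) applied to RGB values, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names and the toy image are ours:

```python
import numpy as np

def gray_world_gains(img):
    """Gray World: assume the scene average is gray; the per-channel
    gains rescale each channel mean to the global mean."""
    means = img.reshape(-1, 3).mean(axis=0)   # average R, G, B
    return means.mean() / means               # k_R, k_G, k_B

def white_point_gains(img, white=(1.0, 1.0, 1.0)):
    """White Point: take the per-channel maxima as the scene white and
    map them onto the white of the reference illuminant."""
    maxima = img.reshape(-1, 3).max(axis=0)
    return np.asarray(white) / maxima

def von_kries_correct(img, gains):
    """Diagonal correction of Equation (1): one independent gain per channel."""
    return np.clip(img * gains, 0.0, 1.0)

# Toy usage: a random scene with a simulated reddish cast.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3)) * np.array([1.0, 0.8, 0.6])
balanced = von_kries_correct(img, gray_world_gains(img))
```

After correction with the gray-world gains the three channel means coincide, which is exactly the assumption the method enforces.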

2.5. Color in Perspective

We have considered here the Color in Perspective algorithm, developed by Finlayson,[7] based on Forsyth's gamut-mapping approach.[20] The gamut-mapping algorithm considers the set of all possible (R, G, B) due to surfaces in the world under the known, canonical illuminant. This set is convex and is represented by its convex hull. The set of all possible (R, G, B) under the unknown illuminant is similarly represented by its convex hull. Under the diagonal assumption of illumination change, these two hulls are a unique diagonal mapping (a simple 3D stretch) of each other. Because the observed set is normally a proper subset, the mapping to the canonical illuminant is not unique, and Forsyth[20] provides a method for effectively computing the set of possible diagonal maps, which is a convex set in the space of mapping coefficients. Finlayson's Color in Perspective algorithm adds two ideas.[7] First, the gamut-mapping method can be used in the chromaticity space (R/B, G/B). Second, the diagonal maps can be further constrained by restricting them to those corresponding to expected illuminants.

2.6. Self Tunable Color Balancing

Most methods of color balancing do not discriminate between a true cast (i.e. a superimposed dominant color) and predominant colors, but are applied in the same way to all images. This may result in an undesirable distortion of the chromatic content with respect to the original scene. Self Tunable Color Balancing (STCB)[8] can be used to avoid this. Before removing the color cast, it classifies the image without requiring any a priori knowledge of its semantic content. The STCB method is a weighted mixture of the white patch and gray world procedures, and permits the solution of cases where one, or both, of the two assumptions are not valid.
First, a multi-step algorithm classifies the input images as: i) no-cast images; ii) evident cast images; iii) ambiguous cast images (images with a feeble cast, or those where whether the cast exists or not is a subjective opinion); iv) images with a predominant color that must be preserved; v) unclassifiable images. This classification makes it possible to discriminate between images requiring color correction and those in which the chromaticity must, instead, be preserved. Color correction is then applied only to those images classified as having either an evident or an ambiguous cast. The gain coefficients are estimated by setting to white what has been identified as the white balance region, on the basis of the type of cast detected.

2.7. Combining Methods

In this paper we analyze different combining schemes of the white balance algorithms to improve the results in terms of RMS error in the estimated illuminant chromaticity. These schemes can be better understood with the aid of Figure 1, where six points simulate the possible different estimations of the illuminant chromaticity obtained from the six uncombined algorithms. The investigated combining methods are listed in the following, where the involved points are written explicitly for better comprehension:

A. Mean: mean value of the results given by the uncombined methods (e.g. points 1-2-3-4-5-6 of Figure 1);
B. Nearest2: mean value of the closest 2 results of the uncombined methods (e.g. points 5-6 of Figure 1);
C. Nearest10pc: mean value of the results of the uncombined methods with relative distances below 110% of the distance of the closest 2 (e.g. points 5-6 of Figure 1);
D. Nearest30pc: mean value of the results of the uncombined methods with relative distances below 130% of the distance of the closest 2 (e.g. points 2-3-5-6 of Figure 1);
E. NoMax: mean value of the results of the uncombined methods excluding the one with the highest distance from the others (e.g. points 1-2-3-5-6 of Figure 1);
F. No2Max: mean value of the results of the uncombined methods excluding the 2 with the highest distance from the others (e.g. points 2-3-5-6 of Figure 1);
G. No3Max: mean value of the results of the uncombined methods excluding the 3 with the highest distance from the others (e.g. points 2-3-5 of Figure 1).
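The combining schemes A-G can be sketched as follows. This is an illustrative implementation under our own assumptions, not the authors' code: chromaticity estimates are taken as 2D (r, g) points, and "distance from the others" is interpreted as the sum of distances to all other estimates:

```python
import numpy as np

def combine(estimates, scheme="Mean"):
    """Combine several (r, g) illuminant-chromaticity estimates with one
    of the schemes A-G; returns the combined chromaticity."""
    pts = np.asarray(estimates, dtype=float)          # shape (n, 2)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    if scheme == "Mean":
        keep = np.arange(len(pts))
    elif scheme == "Nearest2":
        # the pair with the smallest mutual distance (diagonal masked out)
        i, j = np.unravel_index(np.argmin(d + np.eye(len(pts)) * 1e9), d.shape)
        keep = np.array([i, j])
    elif scheme in ("Nearest10pc", "Nearest30pc"):
        thr = d[d > 0].min() * (1.10 if scheme == "Nearest10pc" else 1.30)
        # keep every estimate involved in a pair closer than the threshold
        keep = np.unique(np.argwhere((d <= thr) & (d > 0)))
    elif scheme in ("NoMax", "No2Max", "No3Max"):
        k = {"NoMax": 1, "No2Max": 2, "No3Max": 3}[scheme]
        keep = np.argsort(d.sum(axis=1))[:-k]         # drop the k most isolated
    else:
        raise ValueError(scheme)
    return pts[keep].mean(axis=0)
```

With two mutually close estimates and one outlier, for instance, both Nearest2 and NoMax average only the close pair, as in the Figure 1 example.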

Note that, starting from the six points depicted in Figure 1, some combining schemes use the same points to evaluate the illuminant chromaticity, e.g. Nearest2 and Nearest10pc. Obviously, this is due to the particular distribution depicted and is not always the case.

Figure 1. A typical distribution of illuminant chromaticity estimations obtained using the uncombined algorithms considered. Given these estimations, the combined algorithms use the chromaticity points listed in the following. A: Mean (1-2-3-4-5-6); B: Nearest2 (5-6); C: Nearest10pc (5-6); D: Nearest30pc (2-3-5-6); E: NoMax (1-2-3-5-6); F: No2Max (2-3-5-6); G: No3Max (2-3-5).

3. COLOR CONSTANCY EXPERIMENTS

We have analyzed the performance of the six uncombined algorithms and of the seven combined algorithms on a set of synthetic and real image experiments following Barnard et al.[18,19] The algorithms tested are Gray World (GW), White Point (WP), Iterative White Balance (IWB), a version of Color by Correlation (CbC), a version of Color in Perspective (CiP) and Self Tunable Color Balancing (STCB). For comparison we have also added a Do Nothing (DN) baseline. Algorithm accuracy is measured using the RMS error in the chromaticity space. To establish a ranking score that compares all 14 procedures (6 uncombined, 1 DN and 7 combined), the error distributions of the algorithms are compared.[21] Since the underlying error distributions cannot be well modeled by a standard distribution, a test that requires no assumptions about the distributions is needed. An appropriate test for this case is the Wilcoxon Sign Test.[11,21] Here is a brief description of the test, as already given by Hordley and Finlayson:[21] let X and Y be random variables representing the RMS error of the illuminant chromaticity estimation of algorithms X and Y. The Wilcoxon test is used to test the hypothesis that the random variables X and Y are such that p = P(X > Y) = 0.5.
That is, we hypothesize that algorithms X and Y have the same performance. To test the hypothesis H0: p = 0.5 we consider independent pairs (X1, Y1), ..., (XN, YN) of errors for N different images. We denote by W the number of images for which Xi > Yi. When H0 is true, W is binomially distributed (b(N, 0.5)), and the Wilcoxon test is based on this statistic. We can define an alternative hypothesis H1: p < 0.5 which, if true, implies that the errors for algorithm X are lower than those for algorithm Y. We accept or reject the null hypothesis at a given significance level α if the probability of observing the results

we observe is less than or equal to α. The value of α we choose defines the error rate we accept when we reject the null hypothesis. For the experiments reported in this paper we choose α = 0.01, i.e. we accept an error rate of 1% (we wrongly reject the null hypothesis in 1% of cases).

Figure 2. SONY DXC-930 filter transmittances.

3.1. Synthetic images experiments

We have computed the performance of the algorithms and of the combining strategies on a test set of 6000 synthetic images, composed of six collections of 1000 scenes each, containing respectively 2, 4, 8, 16, 32 and 64 surfaces. The reflectances of these surfaces have been randomly selected from a database of more than 40000 measured reflectances representative of the world.[22] For each generated image the illuminant has also been randomly chosen, from a set of 287 measured illuminants.[9] To generate synthetic sensor responses we have adopted the spectral sensitivities of the SONY DXC-930 digital video camera (depicted in Figure 2). The tristimulus values corresponding to each surface can thus be evaluated using Equation (2):

R = \sum_{\lambda=400\,nm}^{700\,nm} S_1(\lambda) I(\lambda) R(\lambda), \quad G = \sum_{\lambda=400\,nm}^{700\,nm} S_2(\lambda) I(\lambda) R(\lambda), \quad B = \sum_{\lambda=400\,nm}^{700\,nm} S_3(\lambda) I(\lambda) R(\lambda) \qquad (2)

where I(λ) is the spectral power distribution of the considered illuminant, R(λ) the considered reflectance, and S1(λ), S2(λ), S3(λ) the spectral sensitivities of the sensor device. For the 6000 scenes, the RMS error in the estimated illuminant chromaticity is evaluated for each of the 14 strategies. The Wilcoxon Sign Test applied to the 14 RMS error distributions gives a 14-by-14 matrix whose entries are +, - or 0. A plus sign in the i-th row and j-th column of the matrix means that algorithm i is statistically better than algorithm j when judged according to the Wilcoxon test. A minus implies that it is worse, while a zero implies that the two algorithms are statistically equivalent.
Counting the number of plus signs in every row of the matrix gives us a score, representing the number of algorithms with respect to which the considered algorithm is statistically better.
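The pairwise ranking just described can be sketched as follows. This is a minimal stdlib illustration, not the authors' code; it uses the exact binomial tail of the one-sided sign test, with ties discarded (an assumption of this sketch):

```python
import math

def sign_test_better(errors_x, errors_y, alpha=0.01):
    """One-sided sign test: is algorithm X (errors_x) statistically
    better, i.e. lower-error, than algorithm Y at level alpha?"""
    wins = sum(x > y for x, y in zip(errors_x, errors_y))   # W of the text
    n = sum(x != y for x, y in zip(errors_x, errors_y))     # ties discarded
    # P(W <= wins) under W ~ b(n, 0.5); small W supports H1: p < 0.5
    p_val = sum(math.comb(n, k) for k in range(wins + 1)) / 2 ** n
    return p_val <= alpha

def wilcoxon_scores(error_table):
    """error_table: dict name -> list of per-image RMS errors.
    Returns dict name -> number of algorithms it beats (the '+' count)."""
    names = list(error_table)
    return {a: sum(sign_test_better(error_table[a], error_table[b])
                   for b in names if b != a)
            for a in names}
```

For example, with per-image errors for three algorithms, `wilcoxon_scores` returns the row-wise plus-sign counts used as scores in the tables below.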

Scores for the algorithms on the whole set of synthetic images are reported in Table 1.

Table 1. Scores using the Wilcoxon Sign Test of the algorithms tested on the set of synthetic images; the score represents the number of algorithms with respect to which the considered algorithm is statistically better.

Uncombined Methods   Score      Combined Methods   Score
DN                     0        Mean                  4
GW                     3        Nearest2              8
WP                     5        Nearest10pc           6
IWB                    1        Nearest30pc           7
CbC                    9        NoMax                11
CiP                    9        No2Max               12
STCB                   2        No3Max               13

From the analysis of Table 1 it can be noticed that the best algorithms are the last three combined proposals, while among the uncombined methods CbC and CiP show the highest performance, in accordance with the results reported by Hordley and Finlayson.[21] Finally, our STCB algorithm shows a very low performance, but this is not surprising, because it was specifically designed to work with real images rather than ideal data.

3.2. Real images experiments

For the second experiment we used the 23 multispectral images acquired by the University of East Anglia.[23] Their RGB versions rendered under the uniform white illuminant are reported in Figure 3. The 11 illuminants adopted to generate the test images correspond to those used in other papers concerning color constancy,[24] i.e. D48, D55, D65, D75, D100, D200, A, B, C, a 2000 K Planckian black body radiator, and uniform white. Their spectral power distributions are shown in Figure 4. The test set has been generated by acquiring the 23 multispectral images under the 11 illuminants with the Image Systems Evaluation Toolkit (ISET)[10] developed at Stanford University. This software makes it possible to simulate the entire image processing pipeline of a digital camera, combining optical modeling and sensor technology simulation. In particular, ISET takes into account the different kinds of noise involved in the acquisition process. As sensor spectral sensitivities we have adopted the same ones from the experiment on synthetic images, i.e.
the ones of the SONY DXC-930 (Figure 2). The matrices for the CbC and CiP algorithms are the same ones used in the synthetic images experiments. The data processed by the white balance algorithms are the RGB sensor values. All the 23 spectral images have been acquired twice: the first time with the ISET auto-exposure function on, in order to avoid saturation clipping, and the second time with an exposure time set manually at 120% of the auto-exposure time, in order to have clipping. Thus the experiments have been conducted on a set of 506 real images. The results have been handled in the same way as in the synthetic experiment, using the Wilcoxon Sign Test. The final results are reported in three different score tables: Table 2 for the unclipped images, Table 3 for the clipped images, and Table 4 for the summary of the results for the unclipped and clipped images together. From the analysis of Table 2 it can be noticed that the best algorithm of all is the white point. This is due to the particular set of images considered: in fact, looking at Figure 3, it is easy to find a white area in each image, and thus a white point that in the unclipped situation will be found by the white point algorithm. When the images are clipped, as probably occurs in most amateur photographs, the best algorithm of all is a combined one (Table 3), while among the uncombined methods our STCB shows the best performance.

Figure 3. The database of multispectral images used for the real images experiments, rendered under the uniform white illuminant.

Figure 4. Spectral power distributions of the 11 illuminants used (normalized to 1): D48, D55, D65, D75, D100, D200, A, B, C, 2000 K Planckian black body radiator and uniform white.

Table 2. Scores using the Wilcoxon Sign Test of the algorithms tested on the set of real images without clipping; the score represents the number of algorithms with respect to which the considered algorithm is statistically better.

Uncombined Methods   Score      Combined Methods   Score
DN                     1        Mean                  3
GW                     3        Nearest2              7
WP                    13        Nearest10pc           6
IWB                    0        Nearest30pc           8
CbC                    2        NoMax                11
CiP                    5        No2Max               12
STCB                   8        No3Max               10

Table 3. Scores using the Wilcoxon Sign Test of the algorithms tested on the set of real images with clipping; the score represents the number of algorithms with respect to which the considered algorithm is statistically better.

Uncombined Methods   Score      Combined Methods   Score
DN                     1        Mean                  3
GW                     3        Nearest2              6
WP                     3        Nearest10pc           9
IWB                    0        Nearest30pc           8
CbC                    2        NoMax                12
CiP                    6        No2Max               13
STCB                  11        No3Max               10

4. CONCLUSIONS

In this paper we have proposed combining strategies for the white balance algorithms available in the literature, to improve the illuminant chromaticity estimation and correction for digital images. We have tested and compared the original algorithms and the combining strategies using both synthetic and real images. In all the situations considered, the combined strategies perform globally better, while the best method among the uncombined algorithms changes depending on the experiment.

REFERENCES

1. G. Buchsbaum, A spatial processor model for object colour perception, Journal of the Franklin Institute, 310, pp. 1-26, 1980.
2. B. Funt, K. Barnard, and L. Martin, Is machine colour constancy good enough?, in Proc. 5th European Conference on Computer Vision, Freiburg, Germany, pp. 445-459, 1998.
3. V. Cardei, B. Funt, and K. Barnard, White point estimation for uncalibrated images, in Proceedings of the IS&T/SID Seventh Color Imaging Conference, Scottsdale, USA, pp. 97-100, 1999.
4. J. Huo, Y. Chang, J. Wang, and X. Wei, Robust automatic white balance algorithm using gray color points in images, IEEE Transactions on Consumer Electronics, 52, pp. 541-546, 2006.
5. G. Finlayson, S. Hordley, and P. Hubel, Color by correlation: A simple, unifying framework for color constancy, IEEE Transactions on Pattern Analysis and Machine Intelligence, 23, pp. 1209-1221, 2001.
6. G. Finlayson, P. Hubel, and S.
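Among the illuminants of Figure 4, the 2000 K Planckian radiator can be generated directly from Planck's law. A minimal sketch follows; the function name and the normalization to a unit peak (matching the figure's normalized SPDs) are our choices:

```python
import math

def planck_spd(temp_k, wavelengths_nm):
    """Relative spectral power of a black body at temp_k via Planck's law,
    normalized to a unit maximum over the sampled wavelengths."""
    h = 6.62607015e-34   # Planck constant, J*s
    c = 2.99792458e8     # speed of light, m/s
    kb = 1.380649e-23    # Boltzmann constant, J/K
    spd = []
    for nm in wavelengths_nm:
        lam = nm * 1e-9
        # spectral radiance, up to a constant factor
        spd.append(1.0 / (lam**5 * (math.exp(h * c / (lam * kb * temp_k)) - 1.0)))
    peak = max(spd)
    return [v / peak for v in spd]

spd = planck_spd(2000, range(400, 701, 10))
```

At 2000 K the Wien peak lies in the infrared, so over the visible range the SPD rises monotonically toward the red, giving the strongly warm cast visible in Figure 4.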
Hordley, Color by correlation, in Proc. IS&T/SID 5th Color Imaging Conference: Color Science, Systems and Applications, pp. 6-11, 1997.
7. G. Finlayson, Color in perspective, IEEE Transactions on Pattern Analysis and Machine Intelligence, 18, pp. 1034-1038, 1996.
8. F. Gasparini and R. Schettini, Color balancing of digital photos using simple image statistics, Pattern Recognition, 37(6), pp. 1201-1217, 2004.

Table 4. Scores using the Wilcoxon Sign Test of the algorithms tested on the whole set of real images, i.e. both with and without clipping; the score represents the number of algorithms with respect to which the considered algorithm is statistically better.

Uncombined Methods   Score      Combined Methods   Score
DN                     1        Mean                  3
GW                     3        Nearest2              6
WP                     8        Nearest10pc           7
IWB                    0        Nearest30pc           8
CbC                    2        NoMax                12
CiP                    5        No2Max               13
STCB                  10        No3Max               11

9. Simon Fraser University, Computational Vision Lab Data, www.cs.sfu.ca/colour/data.
10. J. Farrell, F. Xiao, P. Catrysse, and B. Wandell, A simulation tool for evaluating digital camera image quality, in Image Quality and System Performance, 5294, pp. 124-131, 2003.
11. R. Hogg and E. Tanis, Probability and Statistical Inference, Prentice Hall, 2001.
12. M. Fairchild, Color Appearance Models, Addison Wesley, 1997.
13. G. Finlayson, M. Drew, and B. Funt, Diagonal transforms suffice for color constancy, in Proc. IEEE International Conference on Computer Vision, Berlin, pp. 164-171, 1993.
14. G. Finlayson, M. Drew, and B. Funt, Spectral sharpening: Sensor transformations for improved color constancy, J. Opt. Soc. Amer. A, 11, pp. 1553-1563, 1994.
15. K. Barnard, F. Ciurea, and B. Funt, Sensor sharpening for computational color constancy, J. Opt. Soc. Amer. A, 18, pp. 2728-2743, 2001.
16. www.cie.co.at/cie/.
17. E. Land and J. McCann, Lightness and retinex theory, J. Opt. Soc. Amer., 61(1), pp. 1-11, 1971.
18. K. Barnard, V. Cardei, and B. Funt, A comparison of computational color constancy algorithms; Part one: Methodology and experiments with synthesized data, IEEE Transactions on Image Processing, 11(9), pp. 972-984, 2002.
19. K. Barnard, V. Cardei, and B. Funt, A comparison of computational color constancy algorithms; Part two: Experiments with image data, IEEE Transactions on Image Processing, 11(9), pp. 985-996, 2002.
20. D. Forsyth, A novel algorithm for color constancy, Int. J. Comput. Vis., 5, pp. 5-36, 1990.
21. S. Hordley and G. Finlayson, Re-evaluating color constancy algorithms, in Proceedings of the 17th International Conference on Pattern Recognition, pp. 76-79, 2004.
22. Graphic technology - Standard object colour spectra database for colour reproduction evaluation (SOCS), Technical Report ISO/TR 16066:2003(E).
23. University of East Anglia, http://www2.cmp.uea.ac.uk/ pm/chromagenic/msdb2/.
24. G. Finlayson, Color constancy in diagonal chromaticity space, in Proc. Fifth IEEE International Conference on Computer Vision, 1995.