3612 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 21, NO. 8, AUGUST 2012

Chromaticity Space for Illuminant Invariant Recognition

Sivalogeswaran Ratnasingam, Member, IEEE, and T. Martin McGinnity, Member, IEEE

Abstract—In this paper, an algorithm is proposed to extract two illuminant invariant chromaticity features from three image sensor responses. The algorithm extracts these chromaticity features at the pixel level and can therefore perform well in scenes illuminated by a nonuniform illuminant. An approach is also proposed for using the algorithm with cameras of unknown sensitivity. The algorithm was tested for separability of perceptually similar colors under the International Commission on Illumination (CIE) standard illuminants and obtained good performance. It was also tested for color-based object recognition by illuminating objects with typical indoor illuminants, and obtained better performance than the other existing algorithms investigated in this paper. Finally, the algorithm was tested for skin detection invariant to illuminant, ethnic background, and imaging device. In this investigation, daylight scenes under different weather conditions and scenes illuminated by typical indoor illuminants were used. The proposed algorithm gives better skin detection performance than widely used standard color spaces. Based on the results presented, the proposed illuminant invariant chromaticity space can be used for machine vision applications including illuminant invariant color-based object recognition and skin detection.

Index Terms—Chromaticity constancy, color constancy, color-based object recognition, illuminant invariant color space.

I. INTRODUCTION

THE human visual system perceives the color of an object largely independent of the viewing environment, including the illuminant and scene geometry. However, an imaging device records the color of an object depending on the illuminant, the scene geometry, and the spectral characteristics of the imaging device.
The dependency of the image sensor responses on the viewing environment (viewing illuminant and scene geometry) makes it difficult to use the recorded color as a reliable cue in machine vision applications. The challenge in recognizing the spectral reflectance of an object's surface is to eliminate the effects of the light source that illuminates the object. This is often referred to as the problem of color constancy. In the past, researchers have proposed a number of different approaches to solving the color constancy problem by making assumptions about the scene, the illuminant, the imaging device, or a combination of these. In this paper, we propose an algorithm that extracts two illuminant invariant chromaticity features from three sensor responses.

Manuscript received June 21, 2011; revised January 5, 2012; accepted March 7; date of publication April 3, 2012; date of current version July 18. This work was supported by the Centre of Excellence in Intelligent Systems (CoEIS) Project, funded by the Northern Ireland Integrated Development Fund and InvestNI. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. David S. Taubman. S. Ratnasingam was with the Intelligent Systems Research Centre, School of Computing and Intelligent Systems, Magee Campus, University of Ulster, Londonderry BT48 7JL, U.K. He is currently with NICTA, Canberra Research Laboratory, Canberra ACT 2601, Australia, and also with the College of Engineering and Computer Science, Canberra 0200, Australia (e-mail: sivaloges@yahoo.co.in). T. M. McGinnity is with the Intelligent Systems Research Centre, School of Computing and Intelligent Systems, Magee Campus, University of Ulster, Londonderry BT48 7JL, U.K. (e-mail: tm.mcginnity@ulster.ac.uk). Color versions of one or more of the figures in this paper are available online.
Our objective is to create a 2-D chromaticity space that is similar to the chromaticity spaces rg or xy but, unlike these spaces, is invariant to changes in the viewing illuminant. The performance of the proposed algorithm is significantly better than that of the algorithms compared in this paper, in that the proposed chromaticity space gives the same values for surfaces with the same reflectance regardless of the illuminant. This 2-D illuminant invariant space could be used for illuminant invariant recognition in machine vision applications. The structure of this paper is as follows. In Section II, we briefly review related work and illustrate how this contribution extends existing research. In Section III, we present the mathematical formulation of the proposed algorithm and show the 2-D illuminant invariant space it forms. The performance of the algorithm in separating perceptually similar colors is investigated in Section IV. In Section V, the algorithm is investigated for color-based recognition and skin detection invariant to the illuminant, and compared with alternative methods. Finally, conclusions are drawn in Section VI.

II. RELATED WORK

Von Kries [1] first proposed a chromatic adaptation model for studying the visual adaptation of the human visual system, assuming that the spectral sensitivities of the sensors do not overlap each other. This adaptation model has been used by several researchers either to estimate the illuminant effect on the image or to extract illuminant invariant features [2]. Based on the assumptions of both a blackbody model of the illuminant power spectrum and infinitely narrowband image sensors, Marchant and Onyango [3] proposed an algorithm to extract shadow invariant features. These authors developed a transformation for color images that is invariant to shading and shadow in daylight scenes.
In particular, they process three image sensor responses in the linear scale to extract a reflectance feature that is approximately invariant to shading and shadow. Land and McCann [4] proposed a computational model (the retinex algorithm) for estimating the illuminant

effect on the scene by processing the logarithm of the image sensor responses. The advantage of using the logarithm of the image sensor responses for solving color constancy is that it allows the reflectance components to be easily separated from the illuminant dependent components. This separation of the illuminant and reflectance components in the logarithm scale can be understood by considering the following basic image equation:

R^{x,e} = (\mathbf{a}^x \cdot \mathbf{n}^x) \, I^x \int_{\omega} S^x(\lambda) E^x(\lambda) F(\lambda) \, d\lambda \qquad (1)

where R^{x,e} is the response of an image sensor with spectral sensitivity F(\lambda), and the dot product of the unit vectors \mathbf{a}^x \cdot \mathbf{n}^x represents the geometric factor of the scene. The unit vectors \mathbf{a}^x and \mathbf{n}^x represent the direction of the light source and the direction of the surface normal, respectively. The quantity S^x(\lambda) is the surface reflectance of the scene at point x, E^x(\lambda) is the spectral power distribution of the light source, and I^x is the intensity of the illuminant at point x on the scene. The integration is over the visible wavelength range \omega (400 to 700 nm). To simplify the basic image equation, assuming the spectral sensitivity of the image sensor is infinitely narrow and applying the sifting property of the Dirac delta function gives

R^{x,e} = (\mathbf{a}^x \cdot \mathbf{n}^x) \, I^x S^x(\lambda_i) E^x(\lambda_i). \qquad (2)

Taking the logarithm of both sides of (2) separates the different components [4]-[6]:

\log(R^{x,e}) = \log\{G^x I^x\} + \log\{E^x(\lambda_i)\} + \log\{S^x(\lambda_i)\} \qquad (3)

where G^x (= \mathbf{a}^x \cdot \mathbf{n}^x) is the geometry factor. In (3), the first term depends on the scene geometry and the intensity of the illuminant. The second and third terms depend on the chromaticity of the illuminant and the surface reflectance of the object, respectively. Taking the logarithm of both sides of (2) has transformed the product of terms into a simple summation. From (3), it can be seen that a linear combination of sensor responses may be used to remove the illuminant effect.
This is relatively easy compared with working with the linear form as in (1). Based on the infinitely narrowband sensor assumption and the blackbody assumption, Finlayson and Hordley [5] proposed an algorithm to extract an illuminant invariant reflectance feature. The authors first calculate the differences of the logarithms of the sensor responses with respect to one of the sensor responses. This normalization results in a 2-D space, and an eigenvector decomposition was subsequently applied to find the two eigenvectors. The 2-D space is then projected in the direction of the larger eigenvector to obtain an illuminant invariant feature. The authors refer to this illuminant invariant feature as 1-D color constancy. However, Ebner [2] showed that in 1-D color constancy many perceptually discriminable reflectances are indiscriminable (i.e., confused). Finlayson and Drew [7] extended the 1-D color constancy approach by applying it to four image sensor responses to obtain two illuminant invariant features. Recently, Ratnasingam and Collins [6] proposed a relatively simple algorithm that processes four image sensor responses. Ratnasingam et al. [8] showed that the four sensor algorithms proposed by Finlayson and Drew [7] and Ratnasingam and Collins [6] are comparable in performance. However, most consumer cameras have only three sensor responses. Berwick and Lee [9] proposed a 2-D illuminant invariant chromaticity space obtained by processing three sensor responses. In deriving this space, the authors used a diagonal transform to model the sensor response and a translation to model the effect of variations in the illuminant spectrum. The main disadvantage of this approach is that it works only in scenes illuminated by a single uniform illuminant. Apart from these algorithms, there are color constancy algorithms based on making assumptions about the scene. One of the best known is the grey world algorithm proposed by Buchsbaum [10].
This algorithm assumes that the average reflectance of any scene is achromatic, and therefore that the average sensor response of a scene provides an estimate of the illuminant. The algorithm requires a large number of colors to be present in the scene. A weakness of the algorithm is that if the scene has a dominant color (for example, a large region of the image occupied by blue sky, grassland, or trees), it produces an incorrect estimate of the illuminant. Similarly, the white patch algorithm is a simplified version of Land and McCann's [4] retinex algorithm [11], [12]. This algorithm relies on the assumption that there is a bright patch that reflects all of the light falling on it, and the image sensor responses of this bright patch are used as an estimate of the illuminant. A drawback of this algorithm is that a single bright pixel can lead to a bad estimate of the illuminant. Another well-known color constancy algorithm is the gamut mapping algorithm proposed by Forsyth [13]. As this algorithm imposes constraints on the illuminant and the reflectances of the scene, it is also referred to as a constraint-based approach. The gamut mapping algorithm has two steps. In the first step, two gamuts are obtained: the image gamut and the canonical gamut. The canonical gamut is created by taking all the surfaces present in the scene under the canonical illuminant. In the second step, under the von Kries [1] chromatic adaptation model, the scene illuminant is estimated using the image gamut and the canonical gamut. The main drawbacks of this algorithm are that it is difficult to implement and that it sometimes gives an estimate that is physically unrealizable or, in the worst case, gives no estimate at all. The algorithm also assumes that many surfaces with different colors are present in the scene. All three algorithms (white patch, grey world, and gamut mapping) assume that the scene is illuminated uniformly by a single illuminant.
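The two scene-assumption estimators just described can be sketched in a few lines. This is a minimal illustration, not code from the paper; the pixel data and function names are hypothetical:

```python
def grey_world_estimate(pixels):
    """Grey world: the average scene reflectance is assumed achromatic,
    so the per-channel mean of the sensor responses estimates the illuminant."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def white_patch_estimate(pixels):
    """White patch: a bright patch is assumed to reflect all incident light,
    so the per-channel maxima estimate the illuminant."""
    return tuple(max(p[c] for p in pixels) for c in range(3))

# Toy two-pixel scene under a reddish illuminant (illustrative values)
scene = [(0.6, 0.3, 0.2), (0.8, 0.5, 0.2)]
grey = grey_world_estimate(scene)    # per-channel means
white = white_patch_estimate(scene)  # per-channel maxima
```

Both estimators fail in exactly the ways described above: a dominant scene color biases the channel means, and a single saturated pixel biases the channel maxima.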
This assumption does not often hold in real world scenes. Color constancy algorithms based on Bayesian approaches have also been proposed, modelling the variability of the reflectance and the illuminant as random variables. The illuminant is estimated from the posterior distribution conditioned on the given image [14], [15]. Despite the different approaches for extracting illuminant invariant chromaticity to solve color constancy [5], [6], [9], [16], [17], Hordley [18] and Funt et al. [19] concluded that the performance of the existing color constancy algorithms is not good enough for machine vision applications. More recently, Foster [20] investigated the color constancy problem

extensively and concluded that this is a challenging problem to solve and that more research is required. In the next section, we present the mathematical formulation of the proposed algorithm and show the 2-D illuminant invariant space it forms.

III. ALGORITHM

We propose an empirical method for extracting illuminant invariant chromaticity features based on the approach proposed by Marchant and Onyango [3]. In particular, Marchant and Onyango [3] processed the sensor responses in the linear scale and extracted a single feature that is invariant to shadow and shading. In our proposed method, however, we use the logarithm of the sensor responses to easily separate the illuminant component from the reflectance components. In particular, we extract two illuminant invariant features from three sensor responses as follows:

F_1 = \log(R_2) - \{\alpha \log(R_1) + \beta \log(R_3)\} \qquad (4)

F_2 = \log(R_3) - \{\gamma \log(R_1) + \delta \log(R_2)\} \qquad (5)

where R_1, R_2, and R_3 are the image sensor responses, with the sensors numbered starting from the shortest wavelength. The quantities \alpha, \beta, \gamma, and \delta are the channel coefficients. The first illuminant invariant feature (F_1) is formed by estimating the illuminant effect on sensor response 2 using responses 1 and 3. Similarly, the second illuminant invariant feature (F_2) is formed by estimating the illuminant effect on sensor response 3 using responses 1 and 2. It is also possible to form a third feature as a combination of the first two, but we do not exploit this in the current approach. The unwanted variations in the sensor responses caused by the scene geometry, illuminant intensity, and power spectrum can be removed if the channel coefficients (\alpha, \beta, \gamma, and \delta) satisfy the following equations:

\frac{1}{\lambda_2} = \frac{\alpha}{\lambda_1} + \frac{\beta}{\lambda_3} \qquad (6)

\frac{1}{\lambda_3} = \frac{\gamma}{\lambda_1} + \frac{\delta}{\lambda_2} \qquad (7)

where \lambda_1, \lambda_2, and \lambda_3 are the wavelengths at which the three image sensors have their effective peak sensitivity.
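As a concrete illustration, the two features in (4) and (5) can be computed directly from three sensor responses. The following is a minimal sketch; the responses, peak wavelengths, and coefficient values are chosen arbitrarily for illustration and are not the paper's optimized values:

```python
import math

def invariant_features(r1, r2, r3, alpha, beta, gamma, delta):
    """Eqs. (4)-(5): estimate the illuminant effect on one log response
    from the other two and subtract it. r1, r2, r3 are linear sensor
    responses ordered from shortest to longest peak wavelength."""
    f1 = math.log(r2) - (alpha * math.log(r1) + beta * math.log(r3))
    f2 = math.log(r3) - (gamma * math.log(r1) + delta * math.log(r2))
    return f1, f2

def peak_constraint_residuals(alpha, beta, gamma, delta, lam1, lam2, lam3):
    """How far a coefficient set is from satisfying (6) and (7) for
    sensors with effective peak wavelengths lam1 < lam2 < lam3 (in nm)."""
    r6 = 1.0 / lam2 - (alpha / lam1 + beta / lam3)
    r7 = 1.0 / lam3 - (gamma / lam1 + delta / lam2)
    return r6, r7

# Illustrative call with toy responses and coefficients
f1, f2 = invariant_features(1.0, 2.0, 4.0, 0.5, 0.5, 0.5, 0.5)
```

With the toy values above, f1 = log 2 - 0.5 log 1 - 0.5 log 4 = 0. The paper does not solve (6) and (7) in closed form; as described next, the four coefficients are found empirically by optimization.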
To estimate the channel coefficients, we use an optimization based on the steepest descent algorithm, such that the separability of perceptually similar colors is maximized in the 2-D illuminant invariant chromaticity space formed by the two extracted features. Here, we empirically find the four channel coefficients that provide the best separation of perceptually similar reflectances under different illuminants. To generalize the algorithm to standard illuminants and to obtain better color constancy performance with real world illuminants, we optimize the channel coefficients with the International Commission on Illumination (CIE) standard illuminants instead of blackbody illuminants. As we are interested in recognizing perceptually similar colors under different illuminants, we chose 100 pairs of normalized Munsell reflectances with pairwise member separation of three CIELab units, as used by Ratnasingam et al. [8].

[Fig. 1. Illuminant invariant chromaticity feature space (F_1, F_2) formed with normalized sensor responses of the DXC930 camera and the parameters listed in Table I. In this space, illuminant invariant chromaticity features extracted from 202 Munsell reflectances are shown when illuminated by (a) the CIE standard illuminant with correlated color temperature (CCT) 6500 K and (b) ten spectra of CIE standard illuminants. The color of the points shows the pseudo color of the Munsell test surfaces.]

Ratnasingam et al. [8] used a steepest descent algorithm to optimize the sensitivity functions, and we use the same optimization method to optimize the channel coefficients. In this optimization, the Mahalanobis distance metric was used to determine the number of correctly identifiable color pairs, and the channel coefficients were chosen such that the number of correctly identified pairs is maximized [8].
In this optimization, Sony DXC930 camera responses were used as the image sensors, and 20 spectra of CIE standard daylight with CCT between 4000 and K were used [8]. Fig. 1 shows the illuminant invariant chromaticity feature space formed when using the sensitivity functions of the Sony DXC930 camera. Each point in this space shows the two features extracted from one of the 202 Munsell samples, shown in pseudo color, when illuminated by (a) the CIE standard daylight illuminant with CCT 6500 K and (b) ten spectra of CIE standard illuminants. From this figure, it can be seen that the perceptually similar reflectances are nonoverlapping, and therefore the space has the basic ability to identify

[Fig. 2. Mahalanobis distance boundaries for a pair of Munsell test reflectance samples from the 6 CIELab unit pair set when illuminated by 20 spectra of CIE standard illuminants. Noise was modelled as 100 samples of Gaussian noise representing a 40 dB signal-to-noise ratio. The cluster points denoted by + and o show the locations of the features extracted from the two Munsell samples, respectively.]

TABLE I
OPTIMUM CHANNEL COEFFICIENTS FOR DXC930 CAMERA
Parameter | Optimum value
Alpha |
Beta |
Gamma |
Delta |

perceptually similar reflectances. In particular, when the illuminant changes, the extracted reflectance features for these surfaces do not overlap and therefore should not be confused with one another, being discriminable in the proposed feature space. However, using only one of the features F_1 or F_2 leads to confusion of some perceptually different colors. For example, projecting the 2-D space shown in Fig. 1 onto the x-axis is equivalent to using only F_2, and it can be seen that this projection would not be able to separate blues from greens. Similarly, projecting the space onto the y-axis is equivalent to using only F_1, but in this case the same blues are confused with orange-reds. The colors confused when using either F_1 or F_2 alone are different, and therefore the two proposed features capture different illuminant invariant information about a scene. Using both features (F_1 and F_2) together eliminates this confusion between perceptually different colors. In fact, the conclusion that a 2-D space is needed to separate quite different colors has been reached previously by Ebner [2].
Section V investigates the advantages of using two features instead of one for illuminant invariant color-based recognition.

IV. SEPARABILITY OF PERCEPTUALLY SIMILAR COLORS

The performance of the proposed algorithm was investigated for its ability to recognize perceptually similar colors under the CIE standard illuminants and compared with the four sensor algorithm proposed by Finlayson and Drew [7].

[Fig. 3. Separability results of the proposed algorithm and Finlayson and Drew's [7] algorithm when tested with perceptually similar Munsell reflectances and the CIE standard illuminants, as a function of sensor full width at half maximum (nm). The responses were multiplied by 40 dB Gaussian noise and the resultant responses were quantized to 10 bits.]

Finlayson and Drew's [7] algorithm starts with four sensor responses; in the first step, the sensor responses are normalized with one of the responses to remove the intensity and scene geometry components. The resultant normalized responses are projected in the direction of the illuminant-induced variation in the sensor responses. This projection results in a 2-D space. As there are no consumer cameras that have four sensors, Gaussian sensitivity functions were used to model the image sensors in this investigation. The proposed algorithm and the Finlayson and Drew [7] algorithm were investigated using three and four evenly spread Gaussian sensors in the visible region (400 to 700 nm), respectively. As described by Ratnasingam et al. [8], two sets of 100 pairs of normalized Munsell samples with member separations of six and ten CIELab units were chosen. In the CIELab color space, six and ten unit distances are described as good and acceptable matches of colors, respectively, in color reproduction [21].
This means that, in color reproduction, if the distance between the actual and reproduced color of a surface in the CIELab space is six units or below, it is considered a colorimetrically good reproduction, and if the distance is between six and ten CIELab units, it is considered a colorimetrically acceptable reproduction. Two sets of 20 spectra of CIE standard daylight were chosen, one for fitting the Mahalanobis distance boundary and the other for assessing the correct recognition of pairs, as described by Ratnasingam and Collins [8]. To define the cluster boundary occupied by a pair of Munsell samples, the pair was illuminated by the first illuminant set and the extracted features were projected onto the 2-D space. The two Munsell samples form two clusters when illuminated by the 20 illuminants and multiplied by 100 samples of Gaussian noise [8]. Effectively, each Munsell sample results in a cluster of 2000 points in the feature space (see Fig. 2). In the first step, the means of the two clusters were determined, and a Mahalanobis distance boundary was drawn at equal distance from the two cluster centers such that the two boundaries formed by the pair of samples touch each other (see Fig. 2). Once the cluster boundaries had been determined, the same Munsell pair was
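The core of this evaluation, Mahalanobis distances to the two cluster centers, can be sketched as follows. This is a simplified 2-D illustration with made-up cluster data; it assigns a test point to the nearer cluster rather than fitting the touching boundaries used in the paper:

```python
def mean2(pts):
    """Mean of a list of 2-D points."""
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def cov2(pts, mu):
    """2x2 sample covariance of 2-D points about mean mu."""
    n = len(pts)
    sxx = sum((p[0] - mu[0]) ** 2 for p in pts) / n
    syy = sum((p[1] - mu[1]) ** 2 for p in pts) / n
    sxy = sum((p[0] - mu[0]) * (p[1] - mu[1]) for p in pts) / n
    return ((sxx, sxy), (sxy, syy))

def inv2(m):
    """Inverse of a 2x2 matrix."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return ((m[1][1] / det, -m[0][1] / det),
            (-m[1][0] / det, m[0][0] / det))

def mahalanobis_sq(p, mu, ic):
    """Squared Mahalanobis distance of point p from mean mu
    with inverse covariance ic."""
    dx, dy = p[0] - mu[0], p[1] - mu[1]
    return dx * (ic[0][0] * dx + ic[0][1] * dy) + dy * (ic[1][0] * dx + ic[1][1] * dy)

def classify(p, stats_a, stats_b):
    """Assign feature point p to the cluster ('a' or 'b') whose
    Mahalanobis distance is smaller; stats = (mean, inverse covariance)."""
    da = mahalanobis_sq(p, *stats_a)
    db = mahalanobis_sq(p, *stats_b)
    return 'a' if da <= db else 'b'
```

In the paper, each cluster instead contains 2000 (F_1, F_2) points per Munsell sample, and the fraction of second-set points falling inside the correct boundary gives the percentage separability.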

illuminated by the second set of illuminants and projected onto the feature space. The number of feature points that fell within the correct Mahalanobis distance boundary was counted; the same procedure was repeated for all the pairs in a test reflectance set, and the percentage of correct classification was calculated.

TABLE II
OPTIMUM CHANNEL COEFFICIENTS FOR 80-nm FWHM GAUSSIAN SENSORS
Parameter | Optimum value
Alpha |
Beta |
Gamma |
Delta |

TABLE III
PERFORMANCE OF THE PROPOSED ALGORITHM WHEN TESTED WITH MUNSELL AND MEASURED FLORAL REFLECTANCES, ILLUMINATED BY CIE STANDARD DAYLIGHT ILLUMINANTS. IN THIS TEST, DXC930 CAMERA RESPONSES WERE APPLIED AS INPUT TO THE PROPOSED ALGORITHM
Pairwise distance in CIELab space | Percentage separability (%): Munsell, CIE daylight | Floral, CIE daylight

The derivations of the proposed algorithm and of Finlayson and Drew's [7] algorithm assume infinitely narrowband sensors. To study the impact of breaking this assumption on the performance of the algorithms, the full width at half maximum (FWHM) of the Gaussian sensors was varied between 20 and 200 nm. As described in Section III, the channel coefficients were optimized for the sensor spectral widths of interest using Gaussian sensitivity functions. The optimum coefficients for 80-nm FWHM are listed in Table II. To perform a realistic investigation of the algorithms, the sensor noise was modelled such that the signal-to-noise ratio of the responses is 40 dB, and the resultant responses were quantized to 10 bits [8]. From the results presented in Fig. 3, it can be seen that Finlayson and Drew's [7] four sensor algorithm gives slightly better performance. However, real world consumer cameras, including the DXC930, have sensor widths (FWHM) in the range of nm. Comparing the performance of the algorithms, the difference in this range of sensor widths is small.
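One plausible reading of this noise model, multiplicative Gaussian noise at a 40 dB signal-to-noise ratio followed by 10-bit quantization of a response assumed to lie in [0, 1], can be sketched as below; the exact noise injection used in [8] may differ:

```python
import random

def add_noise_and_quantize(response, snr_db=40.0, bits=10, rng=None):
    """Multiply a normalized sensor response by (1 + n), where
    n ~ N(0, sigma) with sigma set by the SNR, then quantize the
    clipped result to 2**bits - 1 uniform levels."""
    rng = rng if rng is not None else random
    sigma = 10.0 ** (-snr_db / 20.0)   # 40 dB -> sigma = 0.01
    noisy = response * (1.0 + rng.gauss(0.0, sigma))
    noisy = min(1.0, max(0.0, noisy))  # clip to the sensor's range
    levels = 2 ** bits - 1             # 10 bits -> 1023 steps
    return round(noisy * levels) / levels
```

At 40 dB the perturbation is about 1% of the signal, so the quantized response stays close to the noiseless value while still exercising the robustness of the feature space.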
In particular, the proposed algorithm gives better performance with sensor widths between 60 and 80 nm than with sensor widths between 20 and 40 nm. This is because, in the proposed approach, we have optimized the four channel coefficients with the actual sensitivity functions of the image sensors used. Treating these four channel coefficients as independent variables results in better performance even when the initial assumptions do not hold.

[Fig. 4. Performance of the proposed algorithm with Munsell test reflectance spectra (six unit and ten unit pairs). In this test, the surfaces were illuminated using the 20 spectra of CIE standard illuminants at the lowest and the highest intensity of the six decade range of illuminance variation. The sensor sensitivity functions were modelled using Gaussian functions of different FWHM. The responses were multiplied by 100 samples of sensor noise equivalent to 40 dB and the resultant responses were quantized to 10 bits.]

However, Finlayson and Drew's [7] four sensor algorithm imposes an additional constraint that the sum of the channel coefficients is equal to one. This constraint degrades the performance of Finlayson and Drew's [7] algorithm as the sensor width increases. Table III lists the performance of the proposed algorithm using DXC930 camera responses in recognizing perceptually similar colors under CIE standard illuminants. From these results, we can see that the proposed algorithm performs well in identifying perceptually similar colors with three image sensors of realizable spectral width.
To investigate the performance of the algorithm under real world high dynamic range (HDR) variations of illuminance, we tested our approach with 20 spectra of CIE standard illuminants with illuminance varying over six decades. We chose a six decade illuminance variation based on the fact that shadow, skylight, and direct sunlight cause illuminance variations in natural scenes of the order of six decades [22]. To investigate the recognition of perceptually similar reflectance surfaces in HDR scenes, the 20 spectra of CIE standard test illuminants were scaled such that the illuminance varies over six decades. The performance of the algorithm was investigated by illuminating the Munsell reflectance sets (six- and ten-unit pairs) with the illuminants at the lowest and highest intensities of the six decade range. The responses were multiplied by 40-dB Gaussian noise and the resultant responses were quantized to 10 bits. The performance of the algorithm was evaluated as described above by fitting Mahalanobis distance boundaries around the clusters formed by each Munsell sample. Test results are shown in Fig. 4. From these results, it can be seen that the proposed algorithm gives almost identical performance with the lowest and highest intensity illuminants of the six decade illuminance range. These results suggest that the algorithm can be used in HDR scenes to extract useful illuminant invariant chromaticity features for recognition of perceptually similar colors.

[Fig. 5. Typical objects in the group of minimal specularities [19].]

[Fig. 6. Spectral power distribution of the illuminants used to illuminate the scenes.]

TABLE IV
PERCENTAGE OF CORRECT RECOGNITION OF OBJECTS FOR THE PROPOSED AND FINLAYSON AND HORDLEY'S [5] ALGORITHMS
Scene group (No. of objects) | Proposed algorithm (%) | Finlayson and Hordley [5] algorithm (%) | White patch algorithm [11] | Grey world algorithm [10]
Minimal specularities | | | |
Dielectric specularities | | | |
Metallic specularities | | | |
Fluorescent surfaces | | | |

V. ILLUMINANT INVARIANT RECOGNITION

A. Illuminant Invariant Object Recognition

In this section, the performance of the proposed algorithm is investigated for illuminant invariant object recognition using the features F_1 and F_2. The advantage of using two features instead of one for color-based recognition is also investigated. Four different scenarios of light reflection from a surface are considered. The data used in this test are classified into four groups: nonnegligible dielectric specularities (nine scenes), minimal specularities (15 scenes), metallic specularities (ten scenes), and scenes with at least one fluorescent surface (five scenes) [19]. Some of the typical objects in the minimal specularity group are shown in Fig. 5 [19]. In this experiment, each scene was captured using a Sony DXC930 camera under 11 different light sources. The spectra of the light sources used in the experiment are shown in Fig. 6. These light sources include typical indoor illuminants such as fluorescent tubes and daylight simulators. The data set used has 39 different objects in total (see typical objects in Fig. 5), and each object was imaged under the 11 different light sources. Accordingly, 11 different images of each object, and 429 images in total, were used in the test. As described by Funt et al. [19], color histograms were used to test the algorithm for color-based object recognition. Color/chromaticity histograms have been applied by several researchers, including Swain and Ballard [23], Funt and Finlayson [24], and Funt et al. [19].
In the first recognition step, a database of illuminant invariant histograms was formed using the illuminant invariant features extracted from the images in each group. In the test phase, the image of the test object was applied to the proposed algorithm and the two illuminant invariant features were obtained. Using these two features, a histogram was formed; this histogram was intersected with the database of histograms, and the object with the highest correlation with the query histogram was declared the best matching object. As Funt et al. [19] suggest that a small bin size gives good performance, a small-bin histogram was used in this test. To investigate the ability of the proposed algorithm to recognize objects based on color, the number of objects used was increased by five in each step. We compare the performance of the proposed algorithm with well-known color constancy algorithms: white patch [11], [12], grey world [10], and the algorithm proposed by Finlayson and Hordley [5]. Object recognition results for the proposed algorithm, Finlayson and Hordley's [5] algorithm, the white patch algorithm, and the grey world algorithm are given in Table IV. It can be seen that significantly better performance was obtained with the proposed algorithm than with all the other algorithms in all cases. The reason for the degradation in performance of Finlayson
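The histogram matching step can be sketched as follows. This is a minimal Python illustration; the bin count, feature range, and object names are hypothetical, as the paper does not specify them here:

```python
def chroma_histogram(features, bins=32, lo=-2.0, hi=2.0):
    """Build a normalized 2-D histogram over (F1, F2) feature pairs."""
    h = [[0.0] * bins for _ in range(bins)]
    step = (hi - lo) / bins
    for f1, f2 in features:
        i = min(bins - 1, max(0, int((f1 - lo) / step)))
        j = min(bins - 1, max(0, int((f2 - lo) / step)))
        h[i][j] += 1.0
    total = sum(sum(row) for row in h)
    return [[v / total for v in row] for row in h]

def histogram_intersection(h1, h2):
    """Swain-Ballard style intersection: sum of bin-wise minima;
    1.0 for identical normalized histograms, 0.0 for disjoint ones."""
    return sum(min(a, b) for r1, r2 in zip(h1, h2) for a, b in zip(r1, r2))

def best_match(query, database):
    """Return the name of the database histogram with the largest
    intersection with the query histogram."""
    return max(database, key=lambda k: histogram_intersection(query, database[k]))
```

With normalized histograms, the intersection score is bounded in [0, 1], so the best match is simply the database entry with the highest score.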

[Fig. 7. CIE standard RGB color matching functions (10° observer).]

TABLE V
OPTIMUM CHANNEL COEFFICIENTS FOR ANY CONSUMER CAMERA, CALCULATED USING THE CIE STANDARD RGB COLOR MATCHING FUNCTIONS
Parameter | Optimum value
Alpha |
Beta |
Gamma |
Delta |

and Hordley's [5] algorithm is that it uses only a single feature. As discussed in Section III, some perceptually different colors are difficult to separate with only one feature. It can also be noticed that the proposed algorithm performs well under both matt and specular reflection of light.

B. Illuminant Invariant Skin Detection

In this section, the 2-D space formed by the proposed algorithm is tested for illuminant invariant skin detection under daylight and indoor illuminants. In particular, we test the space for detecting the skin of people from different ethnic backgrounds. The detection of different ethnic skin types is possible because the skin reflectance functions of different ethnic groups typically differ only by a scale factor. The performance of the 2-D space is compared with color spaces widely used for skin detection, including RGB, HSV, and YCbCr. As the chroma component of a color has been reported not to vary significantly with illuminant variation [25], the chroma components of these color spaces (CbCr, HS, and rg) were used for skin detection. In the YCbCr color space, Cb and Cr are used to form a 2-D space for skin detection. The RGB space was normalized by the sum of the three components (R+G+B) to remove any dependency on intensity and scene geometry, and two of the normalized components (r and g) were then used to form a 2-D space for skin detection. The HSV color space uses the properties of a color to represent it, and this space is quite similar to the way in which the human visual system perceives color.
In this color space, the HS sub-space was used for skin detection because the coordinate axis V varies substantially compared to the H and S components when the illuminant changes [25].

TABLE VI: Performance of the proposed algorithm when tested with Munsell and measured floral reflectances illuminated by the CIE standard illuminants, using DXC930 camera responses with the channel coefficients listed in Tables I and V. For each data set (Munsell under CIE daylight; floral under CIE daylight), the table reports the pairwise distance in CIELab space and the percentage separability (%) for both sets of coefficients.

In the discussion so far, the DXC930 camera with known sensitivity functions has been used. However, it is difficult to measure the sensitivity functions of a camera. Therefore, it would be useful to adapt the proposed algorithm so that it can be used with sensor responses generated by a consumer camera of unknown sensitivity functions. To find the channel coefficients of a camera with unknown sensitivity functions, a new approach is proposed. Here, instead of the sensitivity functions of the camera, the CIE standard RGB color matching functions (shown in Fig. 7) were used. This is because imaging devices transform the camera-dependent sensor responses to the CIE RGB color space; these RGB responses are effectively the responses of the CIE standard RGB color matching functions. Therefore, the CIE standard RGB color matching functions were applied as the imaging sensors when finding the optimum channel coefficients that form the two illuminant independent features. The optimum channel coefficients were obtained as described in Section III, using the RGB color matching functions instead of the camera sensitivity functions. The optimum channel coefficients are listed in Table V.
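Once the channel coefficients are fixed, applying them is a cheap per-pixel operation. The exact feature definitions are given in Section III and are not reproduced in this excerpt; the sketch below is a hedged illustration that assumes the two features are weighted differences of log sensor responses, a common construction for pixel-level illuminant invariants (cf. Finlayson and Hordley [5]). The coefficient values shown are placeholders, not the published optima from Table V.

```python
import numpy as np

# Placeholder coefficient values -- illustrative only, not the values
# from Table V. In each feature the weights sum to 1, which is what
# makes a common intensity scaling (shading, scene geometry) cancel.
ALPHA, BETA = 0.6, 0.4
GAMMA, DELTA = 0.45, 0.55

def invariant_features(r, g, b, eps=1e-6):
    """Two chromaticity features as weighted differences of log channel
    responses. `eps` guards against log(0) for dark pixels."""
    lr, lg, lb = (np.log(c + eps) for c in (r, g, b))
    f1 = lg - (ALPHA * lr + BETA * lb)    # weights sum to 1
    f2 = lb - (GAMMA * lr + DELTA * lg)   # weights sum to 1
    return f1, f2
```

Because the weights in each feature sum to one, multiplying all three channels by a common factor k adds log k to every log response and the additions cancel, so the features are unchanged by intensity and shading; full illuminant invariance additionally requires the optimized coefficients of Section III.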
We investigated the variation in the performance of the proposed algorithm when using the generic channel coefficients (listed in Table V) instead of the camera-specific coefficients (listed in Table I). Here we tested the algorithm with the generic channel coefficients, using DXC930 camera sensor responses, for separability of perceptually similar reflectances. The results obtained with the channel coefficients listed in Tables I and V are given in Table VI. From these results one can see that, when tested with the Munsell reflectance data, the algorithm gives better performance with the coefficients listed in Table I, whereas when tested with the measured floral reflectance data it gives better performance with the coefficients listed in Table V. The reason could be that the channel coefficients were optimized using the Munsell reflectance data set, which does not contain saturated colors, whereas the measured floral data set does [26].
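The "pairwise distance in CIELab space" reported in Table VI is, in the usual convention, the Euclidean CIE 1976 color difference ΔE*ab between two colors; perceptually similar pairs are those with small ΔE. The sketch below shows that distance and one illustrative way to score percentage separability of such pairs; the thresholding rule and the `feature_fn` argument are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """Euclidean CIE 1976 color difference between two Lab colors."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

def percentage_separability(pairs, feature_fn, threshold):
    """pairs: list of (color_a, color_b) perceptually similar colors.
    A pair counts as separated when the distance between the two colors'
    feature vectors exceeds `threshold` (an illustrative criterion)."""
    separated = sum(
        np.linalg.norm(np.subtract(feature_fn(a), feature_fn(b))) > threshold
        for a, b in pairs
    )
    return 100.0 * separated / len(pairs)
```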

Fig. 8. Skin recognition test results for the proposed algorithm with three consumer cameras (Sony DSC-T10, Samsung P1200, Canon 450D) under shadow, skylight, and direct sunlight. The subjects were chosen so that they are from different ethnic backgrounds. (a) Sony DSC-T10 camera. (i) Shadow. (ii) Skylight. (iii) Direct sunlight. (b) Samsung P1200 camera. (i) Shadow. (ii) Skylight. (iii) Direct sunlight. (c) Canon 450D camera. (i) Shadow. (ii) Skylight. (iii) Direct sunlight.

TABLE VII: Skin recognition results (true positive and false positive) for the proposed algorithm with the three test cameras, using the generic channel coefficients derived from the CIE standard color matching functions (listed in Table V).

The important conclusion from the results listed in Table VI is that the performance of the algorithm does not vary significantly. This means that the proposed algorithm can be used with any consumer camera whose responses are approximately a linear transform of the CIE standard RGB color matching functions. To investigate the proposed algorithm for skin detection under daylight using images captured with unknown camera sensitivity functions, three consumer cameras (Samsung P1200, Sony DSC-T10, Canon 450D) were used to capture the scenes. As the daylight spectrum varies with weather conditions, images were taken under different weather conditions. To test the ability of the proposed algorithm to detect skin tone regardless of ethnic background, subjects of different ethnic backgrounds were present in the scene. To define the skin boundary, the images captured with the Samsung P1200 camera were used.
From these images, skin regions were manually extracted and applied as input to the algorithm to obtain illuminant invariant features. These features were projected onto the 2-D space, and a Mahalanobis distance boundary was drawn such that approximately 90% of the skin pixels were enclosed by it. Only 90% of the training skin pixels were enclosed in order to account for any noise in the manually extracted skin regions (training skin patches). Once the skin region was defined, test images were applied as input to the algorithm and the extracted features were projected onto the 2-D space; pixels that fell inside the defined boundary were colored red. Similarly, for the other color spaces, the training skin regions were used to define the skin region in

the 2-D spaces. For a fair comparison, a Mahalanobis distance boundary was fitted enclosing approximately 90% of the skin pixels in all three color spaces. In the test phase, any pixel that fell inside the defined boundary was colored red.

Fig. 9. Skin detection results for outdoor lighting under different weather conditions. (a) Original images of people with different ethnic backgrounds: (i) shadow (Samsung P1200), (ii) skylight (Sony DSC-T10), (iii) direct sunlight (Canon 450D). (b) Skin detection results for the proposed 2-D space. (c) Skin detection results for the HS space. (d) Skin detection results for the CbCr space. (e) Skin detection results for the normalized rg space. In (b)-(e): (i) shadow, (ii) skylight, (iii) direct sunlight.
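The boundary construction described above can be sketched directly: estimate the mean and covariance of the training skin features, then take the squared Mahalanobis distance threshold as the empirical 90th percentile of the training distances. Function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def fit_skin_boundary(features, coverage=0.90):
    """features: (N, 2) array of training skin-pixel features.
    Returns (mean, inverse covariance, squared-distance threshold) such
    that approximately `coverage` of the training pixels fall inside."""
    mean = features.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(features, rowvar=False))
    diff = features - mean
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)  # squared Mahalanobis
    return mean, cov_inv, np.quantile(d2, coverage)

def is_skin(pixels, mean, cov_inv, threshold):
    """Classify (N, 2) feature vectors: True where inside the boundary."""
    diff = pixels - mean
    return np.einsum('ij,jk,ik->i', diff, cov_inv, diff) <= threshold
```

Using the empirical quantile rather than a fixed chi-square cut means the boundary encloses 90% of the training pixels by construction, which matches the noise-tolerance rationale given above.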

Fig. 10. Typical skin detection results for different ethnic people under indoor illuminants. (a) Test results for the proposed 2-D space. (b) Test results for the HS space. (c) Test results for the CbCr space. (d) Test results for the rg space.

Experimental results for the proposed 2-D space with the images captured by the three cameras are given in Fig. 8. In this test, we used the generic channel coefficients (listed in Table V) calculated using the CIE standard color matching functions. In particular, we show typical results for the three cameras under shadow, skylight, and direct sunlight. For quantitative comparison, we have also listed the true positive and false positive rates for each camera in Table VII. From these results it can be seen that the Sony DSC-T10 camera gives the highest true positive rate and the Canon 450D camera gives the lowest false positive rate. The performance differences might arise from differences in the sensitivity functions of the image sensors used in these cameras, as well as from other factors, including the gain, white balancing, and other post-processing applied by each camera. Experimental results for the proposed 2-D feature space and the other color spaces (HS, CbCr, and rg) are shown in Fig. 9, and a quantitative comparison of these spaces is given in Table VIII. From the results shown in Fig. 9 and Table VIII, it can be seen that the normalized rg space gives the worst performance. Comparing the CbCr and HS spaces, the HS space performs slightly better. The proposed 2-D space gives the highest true positive and lowest false positive rates for skin detection under the different illumination conditions. To investigate the performance of the proposed 2-D space in detecting skin under indoor illuminants, the space was tested with images captured under indoor illuminants [27].
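The quantitative comparison rests on true positive and false positive rates computed against the manually labelled skin masks; a minimal sketch (names are illustrative):

```python
import numpy as np

def detection_rates(predicted, ground_truth):
    """predicted, ground_truth: boolean pixel masks (True = skin).
    Returns (true positive rate, false positive rate)."""
    predicted = np.asarray(predicted, bool)
    ground_truth = np.asarray(ground_truth, bool)
    # TPR: fraction of labelled skin pixels detected as skin
    tpr = (predicted & ground_truth).sum() / max(ground_truth.sum(), 1)
    # FPR: fraction of labelled non-skin pixels wrongly detected as skin
    fpr = (predicted & ~ground_truth).sum() / max((~ground_truth).sum(), 1)
    return tpr, fpr
```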
These images were acquired over three years under different lighting conditions using unknown consumer cameras. In the

skin detection test, we used the generic parameters listed in Table V. As described above, a Mahalanobis distance boundary was drawn for each space, and all pixels that fell inside the defined boundary were colored red. Test results of the proposed 2-D illuminant invariant feature space and the other color spaces for scenes illuminated with indoor illuminants are shown in Fig. 10. The results show the detection of people from different ethnic backgrounds, including Asian, Caucasian, African, and Chinese. Comparing these results, the CbCr space gives the worst performance, while the rg space performs better than the HS space (see Table IX). However, the proposed 2-D space gives the best performance among all the spaces.

TABLE VIII: Test results (true positive and false positive) for skin detection under different outdoor illumination conditions. The proposed feature space is compared with the CbCr, rg, and HS spaces using the generic channel coefficients.

TABLE IX: Test results (true positive and false positive) for skin detection under different indoor illumination conditions. The proposed feature space is compared with the CbCr, rg, and HS spaces using the generic channel coefficients.

Based on the results presented in this section and the previous sections, the proposed algorithm gives better performance in illuminant invariant color-based object recognition and skin detection. This suggests that the proposed 2-D feature space can be used in machine vision applications, including color-based recognition of objects that do not have a rigid geometry.

VI. CONCLUSION

An algorithm has been proposed for extracting two illuminant invariant features from three sensor responses. This algorithm extracts the chromaticity features at pixel level.
An approach has been proposed to use the algorithm with cameras of unknown sensitivity functions. The algorithm was tested for separability of perceptually similar colors under the CIE standard illuminants and obtained a good performance. The algorithm was also tested for color-based object recognition and compared with existing algorithms. The test results showed that the proposed algorithm performs better than the well-known color constancy algorithms in recognizing objects based on illuminant invariant reflectance features. Finally, the proposed algorithm was tested for skin detection invariant to illuminant, ethnic background, and imaging device. In this experiment the scenes were illuminated by daylight under different weather conditions and by typical indoor lights. The results presented show that the proposed 2-D illuminant invariant chromaticity space can be used for machine vision applications, including illuminant invariant color-based object recognition and skin detection.

REFERENCES

[1] J. von Kries, "Beitrag zur Physiologie der Gesichtsempfindung," Arch. Anat. Physiol., vol. 2.
[2] M. Ebner, Colour Constancy (Imaging Science and Technology). New York: Wiley.
[3] J. A. Marchant and C. M. Onyango, "Shadow-invariant classification for scenes illuminated by daylight," J. Opt. Soc. Amer. A, vol. 17, no. 11.
[4] E. H. Land and J. J. McCann, "Lightness and retinex theory," J. Opt. Soc. Amer., vol. 61, no. 1, pp. 1-11.
[5] G. D. Finlayson and S. D. Hordley, "Colour constancy at a pixel," J. Opt. Soc. Amer. A, vol. 18, no. 2.
[6] S. Ratnasingam and S. Collins, "Study of the photodetector characteristics of a camera for colour constancy in natural scene," J. Opt. Soc. Amer. A, vol. 27, no. 2.
[7] G. D. Finlayson and M. S. Drew, "4-sensor camera calibration for image representation invariant to shading, shadows, lighting, and specularities," in Proc. Int. Conf. Comput. Vis., 2001.
[8] S. Ratnasingam, S. Collins, and J. Hernández-Andrés, "Optimum sensors for color constancy in scenes illuminated by daylight," J. Opt. Soc. Amer. A, vol. 27, no. 10.
[9] D. Berwick and S. W. Lee, "A chromaticity space for specularity, illumination color- and illumination pose-invariant 3-D object recognition," in Proc. Int. Conf. Comput. Vis., 1998.
[10] G. Buchsbaum, "A spatial processor model for object colour perception," J. Franklin Inst., vol. 310, no. 1.
[11] B. V. Funt, V. Cardei, and K. Barnard, "Learning color constancy," in Proc. IS&T/SID 4th Color Imag. Conf., Scottsdale, AZ, 1996.
[12] V. C. Cardei and B. V. Funt, "Committee-based color constancy," in Proc. IS&T/SID 7th Color Imag. Conf.: Color Sci., Syst. Appl., Scottsdale, AZ, 1999.
[13] G. D. Forsyth, "A novel algorithm for color constancy," Int. J. Comput. Vis., vol. 5, no. 1, pp. 5-36.
[14] D. Brainard and W. Freeman, "Bayesian color constancy," J. Opt. Soc. Amer. A, vol. 14, no. 7.
[15] H. Trussell and M. Vrhel, "Estimation of illumination for color correction," in Proc. Int. Conf. Acoust., Speech, Signal Process., 1991.
[16] S. Ratnasingam and S. Collins, "An algorithm to determine the chromaticity under non-uniform illuminant," in Proc. Int. Conf. Imag. Signal Process., Jul. 2008.
[17] G. D. Finlayson, S. D. Hordley, and P. M. Hubel, "Colour by correlation: A simple unifying theory of colour constancy," in Proc. IEEE Int. Conf. Comput. Vis., Sep. 1999.
[18] S. Hordley, "Scene illuminant estimation: Past, present, and future," Color Res. Appl., vol. 31, no. 4.
[19] B. V. Funt, K. Barnard, and L. Martin, "Is colour constancy good enough?" in Proc. 5th Eur. Conf. Comput. Vis., 1998.
[20] D. H. Foster, "Color constancy," Vis. Res., vol. 51.
[21] A. Abrardo, V. Cappellini, M. Cappellini, and A. Mecocci, "Art-works colour calibration using the VASARI scanner," in Proc. IS&T/SID 4th Color Imag. Conf.: Color Sci., Syst. Appl., Scottsdale, AZ, 1996.
[22] G. C. Holst and T. S. Lomheim, CMOS/CCD Sensors and Camera Systems. Bellingham, WA: SPIE.
[23] M. Swain and D. Ballard, "Colour indexing," Int. J. Comput. Vis., vol. 7, no. 1.
[24] B. V. Funt and G. D. Finlayson, "Color constant color indexing," IEEE Trans. Pattern Anal. Mach. Intell., vol. 17, no. 5.
[25] J. Yang, W. Lu, and A. Waibel, "Skin-color modeling and adaptation," in Proc. 3rd Asian Conf. Comput. Vis., 1998.
[26] S. E. J. Arnold, V. Savolainen, and L. Chittka, "FReD: The floral reflectance spectra database," Nature Preced., 2008.
[27] Vision Group of Essex University Face Database [Online]. Available:

Sivalogeswaran Ratnasingam (M'12) received the B.Sc. Eng. degree in electrical and electronic engineering from the University of Peradeniya, Peradeniya, Sri Lanka, in 2003, the M.Sc. degree (with distinction) in mobile and satellite communications from the University of Surrey, Guildford, U.K., in 2006, and the D.Phil. degree from the University of Oxford, Oxford, U.K.
He was with the Intelligent Systems Research Centre, University of Ulster, Londonderry, U.K., from 2010 to 2011, where he worked on topics including haptics, bio-inspired models, and robotics. Since 2011, he has been with National ICT Australia, Canberra, Australia, where he has been working on multispectral image processing, illuminant estimation, and scene understanding.
Dr. Ratnasingam was a recipient of the Nokia prize for best overall performance in the M.Sc.

T. Martin McGinnity (S'09-M'83) received the degree in physics (First Class Hons.) in 1975 and the Ph.D. degree from the University of Durham, Durham, U.K.
He is a Professor of intelligent systems engineering with the Faculty of Computing and Engineering, University of Ulster, Londonderry, U.K. He is currently the Director of the Intelligent Systems Research Centre, Londonderry, which encompasses the research activities of approximately 100 researchers. He was an Associate Dean of the Faculty and the Director of the university's technology transfer company, Innovation Ulster, Londonderry, and of a spin-off company, Flex Language Services. He is the author or co-author of more than 260 research papers. His current research interests include computational intelligence, in particular computational systems that explore and model biological signal processing, especially in relation to cognitive robotics and computational neuroscience.
Dr. McGinnity was a recipient of the Senior Distinguished Research Fellowship and the Distinguished Learning Support Fellowship in recognition of his contribution to teaching and research. He is a Fellow of the Institution of Engineering and Technology and a Chartered Engineer.


More information

Generalized Gamut Mapping using Image Derivative Structures for Color Constancy

Generalized Gamut Mapping using Image Derivative Structures for Color Constancy DOI 10.1007/s11263-008-0171-3 Generalized Gamut Mapping using Image Derivative Structures for Color Constancy Arjan Gijsenij Theo Gevers Joost van de Weijer Received: 11 February 2008 / Accepted: 6 August

More information

Shadow detection and removal from a single image

Shadow detection and removal from a single image Shadow detection and removal from a single image Corina BLAJOVICI, Babes-Bolyai University, Romania Peter Jozsef KISS, University of Pannonia, Hungary Zoltan BONUS, Obuda University, Hungary Laszlo VARGA,

More information

Color by Correlation: A Simple, Unifying Framework for Color Constancy

Color by Correlation: A Simple, Unifying Framework for Color Constancy IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 23, NO. 11, NOVEMBER 2001 1209 Color by Correlation: A Simple, Unifying Framework for Color Constancy Graham D. Finlayson, Steven D.

More information

Illuminant retrieval for fixed location cameras

Illuminant retrieval for fixed location cameras Illuminant retrieval for fixed location cameras Joanna Marguier and Sabine Süsstrunk School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland Abstract

More information

Chapter 5 Extraction of color and texture Comunicação Visual Interactiva. image labeled by cluster index

Chapter 5 Extraction of color and texture Comunicação Visual Interactiva. image labeled by cluster index Chapter 5 Extraction of color and texture Comunicação Visual Interactiva image labeled by cluster index Color images Many images obtained with CCD are in color. This issue raises the following issue ->

More information

Color constancy through inverse-intensity chromaticity space

Color constancy through inverse-intensity chromaticity space Tan et al. Vol. 21, No. 3/March 2004/J. Opt. Soc. Am. A 321 Color constancy through inverse-intensity chromaticity space Robby T. Tan Department of Computer Science, The University of Tokyo, 4-6-1 Komaba,

More information

Spectral Images and the Retinex Model

Spectral Images and the Retinex Model Spectral Images and the Retine Model Anahit Pogosova 1, Tuija Jetsu 1, Ville Heikkinen 2, Markku Hauta-Kasari 1, Timo Jääskeläinen 2 and Jussi Parkkinen 1 1 Department of Computer Science and Statistics,

More information

CHAPTER 3 FACE DETECTION AND PRE-PROCESSING

CHAPTER 3 FACE DETECTION AND PRE-PROCESSING 59 CHAPTER 3 FACE DETECTION AND PRE-PROCESSING 3.1 INTRODUCTION Detecting human faces automatically is becoming a very important task in many applications, such as security access control systems or contentbased

More information

Perceptually Uniform Color Spaces for Color Texture Analysis: An Empirical Evaluation

Perceptually Uniform Color Spaces for Color Texture Analysis: An Empirical Evaluation 932 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 10, NO. 6, JUNE 2001 Perceptually Uniform Color Spaces for Color Texture Analysis: An Empirical Evaluation George Paschos Abstract, a nonuniform color space,

More information

Minimizing Worst-case Errors for Illumination Estimation

Minimizing Worst-case Errors for Illumination Estimation Minimizing Worst-case Errors for Illumination Estimation by Milan Mosny M.Sc., Simon Fraser University, 1996 Ing., Slovak Technical University, 1996 Thesis Submitted In Partial Fulfillment of the Requirements

More information

CS770/870 Spring 2017 Color and Shading

CS770/870 Spring 2017 Color and Shading Preview CS770/870 Spring 2017 Color and Shading Related material Cunningham: Ch 5 Hill and Kelley: Ch. 8 Angel 5e: 6.1-6.8 Angel 6e: 5.1-5.5 Making the scene more realistic Color models representing the

More information

ams AG TAOS Inc. is now The technical content of this TAOS document is still valid. Contact information:

ams AG TAOS Inc. is now The technical content of this TAOS document is still valid. Contact information: TAOS Inc. is now ams AG The technical content of this TAOS document is still valid. Contact information: Headquarters: ams AG Tobelbader Strasse 30 8141 Premstaetten, Austria Tel: +43 (0) 3136 500 0 e-mail:

More information

Digital Image Processing

Digital Image Processing Digital Image Processing 7. Color Transforms 15110191 Keuyhong Cho Non-linear Color Space Reflect human eye s characters 1) Use uniform color space 2) Set distance of color space has same ratio difference

More information

SHADOW DETECTION USING TRICOLOR ATTENUATION MODEL ENHANCED WITH ADAPTIVE HISTOGRAM EQUALIZATION

SHADOW DETECTION USING TRICOLOR ATTENUATION MODEL ENHANCED WITH ADAPTIVE HISTOGRAM EQUALIZATION SHADOW DETECTION USING TRICOLOR ATTENUATION MODEL ENHANCED WITH ADAPTIVE HISTOGRAM EQUALIZATION Jyothisree V. and Smitha Dharan Department of Computer Engineering, College of Engineering Chengannur, Kerala,

More information

Color Appearance in Image Displays. O Canada!

Color Appearance in Image Displays. O Canada! Color Appearance in Image Displays Mark D. Fairchild RIT Munsell Color Science Laboratory ISCC/CIE Expert Symposium 75 Years of the CIE Standard Colorimetric Observer Ottawa 26 O Canada Image Colorimetry

More information

Characterizing and Controlling the. Spectral Output of an HDR Display

Characterizing and Controlling the. Spectral Output of an HDR Display Characterizing and Controlling the Spectral Output of an HDR Display Ana Radonjić, Christopher G. Broussard, and David H. Brainard Department of Psychology, University of Pennsylvania, Philadelphia, PA

More information

UvA-DARE (Digital Academic Repository) Edge-driven color constancy Gijsenij, A. Link to publication

UvA-DARE (Digital Academic Repository) Edge-driven color constancy Gijsenij, A. Link to publication UvA-DARE (Digital Academic Repository) Edge-driven color constancy Gijsenij, A. Link to publication Citation for published version (APA): Gijsenij, A. (2010). Edge-driven color constancy General rights

More information

Color Content Based Image Classification

Color Content Based Image Classification Color Content Based Image Classification Szabolcs Sergyán Budapest Tech sergyan.szabolcs@nik.bmf.hu Abstract: In content based image retrieval systems the most efficient and simple searches are the color

More information

Light, Color, and Surface Reflectance. Shida Beigpour

Light, Color, and Surface Reflectance. Shida Beigpour Light, Color, and Surface Reflectance Shida Beigpour Overview Introduction Multi-illuminant Intrinsic Image Estimation Multi-illuminant Scene Datasets Multi-illuminant Color Constancy Conclusions 2 Introduction

More information

Minimalist surface-colour matching

Minimalist surface-colour matching Perception, 2005, volume 34, pages 1007 ^ 1011 DOI:10.1068/p5185 Minimalist surface-colour matching Kinjiro Amano, David H Foster Computational Neuroscience Group, Faculty of Life Sciences, University

More information

Lecture 11. Color. UW CSE vision faculty

Lecture 11. Color. UW CSE vision faculty Lecture 11 Color UW CSE vision faculty Starting Point: What is light? Electromagnetic radiation (EMR) moving along rays in space R(λ) is EMR, measured in units of power (watts) λ is wavelength Perceiving

More information

Light source estimation using feature points from specular highlights and cast shadows

Light source estimation using feature points from specular highlights and cast shadows Vol. 11(13), pp. 168-177, 16 July, 2016 DOI: 10.5897/IJPS2015.4274 Article Number: F492B6D59616 ISSN 1992-1950 Copyright 2016 Author(s) retain the copyright of this article http://www.academicjournals.org/ijps

More information

Color and Color Constancy in a Translation Model for Object Recognition

Color and Color Constancy in a Translation Model for Object Recognition Color and Color Constancy in a Translation Model for Object Recognition Kobus Barnard 1 and Prasad Gabbur 2 1 Department of Computer Science, University of Arizona Email: kobus@cs.arizona.edu 2 Department

More information

Image Quality Assessment Techniques: An Overview

Image Quality Assessment Techniques: An Overview Image Quality Assessment Techniques: An Overview Shruti Sonawane A. M. Deshpande Department of E&TC Department of E&TC TSSM s BSCOER, Pune, TSSM s BSCOER, Pune, Pune University, Maharashtra, India Pune

More information

Lecture 12 Color model and color image processing

Lecture 12 Color model and color image processing Lecture 12 Color model and color image processing Color fundamentals Color models Pseudo color image Full color image processing Color fundamental The color that humans perceived in an object are determined

More information

A Novel Approach for Shadow Removal Based on Intensity Surface Approximation

A Novel Approach for Shadow Removal Based on Intensity Surface Approximation A Novel Approach for Shadow Removal Based on Intensity Surface Approximation Eli Arbel THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE MASTER DEGREE University of Haifa Faculty of Social

More information

An Approach for Reduction of Rain Streaks from a Single Image

An Approach for Reduction of Rain Streaks from a Single Image An Approach for Reduction of Rain Streaks from a Single Image Vijayakumar Majjagi 1, Netravati U M 2 1 4 th Semester, M. Tech, Digital Electronics, Department of Electronics and Communication G M Institute

More information

An LED based spectrophotometric instrument

An LED based spectrophotometric instrument An LED based spectrophotometric instrument Michael J. Vrhel Color Savvy Systems Limited, 35 South Main Street, Springboro, OH ABSTRACT The performance of an LED-based, dual-beam, spectrophotometer is discussed.

More information

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images 1 Introduction - Steve Chuang and Eric Shan - Determining object orientation in images is a well-established topic

More information

A Novel Algorithm for Color Image matching using Wavelet-SIFT

A Novel Algorithm for Color Image matching using Wavelet-SIFT International Journal of Scientific and Research Publications, Volume 5, Issue 1, January 2015 1 A Novel Algorithm for Color Image matching using Wavelet-SIFT Mupuri Prasanth Babu *, P. Ravi Shankar **

More information

Illuminant Estimation from Projections on the Planckian Locus

Illuminant Estimation from Projections on the Planckian Locus Illuminant Estimation from Projections on the Planckian Locus Baptiste Mazin, Julie Delon, and Yann Gousseau LTCI, Télécom-ParisTech, CNRS, 46 rue Barault, Paris 75013, France {baptiste.mazin,julie.delon,yann.gousseau}@telecom-paristech.fr

More information

An Introduction to Content Based Image Retrieval

An Introduction to Content Based Image Retrieval CHAPTER -1 An Introduction to Content Based Image Retrieval 1.1 Introduction With the advancement in internet and multimedia technologies, a huge amount of multimedia data in the form of audio, video and

More information

Shadow Removal from a Single Image

Shadow Removal from a Single Image Shadow Removal from a Single Image Li Xu Feihu Qi Renjie Jiang Department of Computer Science and Engineering, Shanghai JiaoTong University, P.R. China E-mail {nathan.xu, fhqi, blizard1982}@sjtu.edu.cn

More information

Color Constancy by Derivative-based Gamut Mapping

Color Constancy by Derivative-based Gamut Mapping Color Constancy by Derivative-based Gamut Mapping Arjan Gijsenij, Theo Gevers, Joost Van de Weijer To cite this version: Arjan Gijsenij, Theo Gevers, Joost Van de Weijer. Color Constancy by Derivative-based

More information

Color can be an important cue for computer vision or

Color can be an important cue for computer vision or IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 20, NO. 9, SEPTEMBER 2011 2475 Computational Color Constancy: Survey and Experiments Arjan Gijsenij, Member, IEEE, Theo Gevers, Member, IEEE, and Joost van de

More information

Physics-based Fast Single Image Fog Removal

Physics-based Fast Single Image Fog Removal Physics-based Fast Single Image Fog Removal Jing Yu 1, Chuangbai Xiao 2, Dapeng Li 2 1 Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China 2 College of Computer Science and

More information

Appearance-Based Place Recognition Using Whole-Image BRISK for Collaborative MultiRobot Localization

Appearance-Based Place Recognition Using Whole-Image BRISK for Collaborative MultiRobot Localization Appearance-Based Place Recognition Using Whole-Image BRISK for Collaborative MultiRobot Localization Jung H. Oh, Gyuho Eoh, and Beom H. Lee Electrical and Computer Engineering, Seoul National University,

More information

Analysis and extensions of the Frankle-McCann

Analysis and extensions of the Frankle-McCann Analysis and extensions of the Frankle-McCann Retinex algorithm Jounal of Electronic Image, vol.13(1), pp. 85-92, January. 2004 School of Electrical Engineering and Computer Science Kyungpook National

More information

Biometric Security System Using Palm print

Biometric Security System Using Palm print ISSN (Online) : 2319-8753 ISSN (Print) : 2347-6710 International Journal of Innovative Research in Science, Engineering and Technology Volume 3, Special Issue 3, March 2014 2014 International Conference

More information

A Silicon Graphics CRT monitor was characterized so that multispectral images could be

A Silicon Graphics CRT monitor was characterized so that multispectral images could be A Joint Research Program of The National Gallery of Art, Washington The Museum of Modern Art, New York Rochester Institute of Technology Technical Report April, 2002 Colorimetric Characterization of a

More information

MANY computer vision applications as well as image

MANY computer vision applications as well as image IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL., NO., MONTH 2013 1 Exemplar-Based Colour Constancy and Multiple Illumination Hamid Reza Vaezi Joze and Mark S. Drew Abstract Exemplar-based

More information

Metamer Constrained Colour Correction

Metamer Constrained Colour Correction In Proc. 7th Color Imaging Conference, Copyright IS & T 1999 1 Metamer Constrained Colour Correction G. D. Finlayson, P. M. Morovič School of Information Systems University of East Anglia, Norwich, UK

More information

Quantitative Analysis of Metamerism for. Multispectral Image Capture

Quantitative Analysis of Metamerism for. Multispectral Image Capture Quantitative Analysis of Metamerism for Multispectral Image Capture Peter Morovic 1,2 and Hideaki Haneishi 2 1 Hewlett Packard Espanola, Sant Cugat del Valles, Spain 2 Research Center for Frontier Medical

More information

Light source separation from image sequences of oscillating lights

Light source separation from image sequences of oscillating lights 2014 IEEE 28-th Convention of Electrical and Electronics Engineers in Israel Light source separation from image sequences of oscillating lights Amir Kolaman, Rami Hagege and Hugo Guterman Electrical and

More information

COLOR imaging and reproduction devices are extremely

COLOR imaging and reproduction devices are extremely IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 13, NO. 10, OCTOBER 2004 1319 Indexing of Multidimensional Lookup Tables in Embedded Systems Michael J. Vrhel, Senior Member, IEEE Abstract The proliferation

More information

Fundamentals of Digital Image Processing

Fundamentals of Digital Image Processing \L\.6 Gw.i Fundamentals of Digital Image Processing A Practical Approach with Examples in Matlab Chris Solomon School of Physical Sciences, University of Kent, Canterbury, UK Toby Breckon School of Engineering,

More information

CS635 Spring Department of Computer Science Purdue University

CS635 Spring Department of Computer Science Purdue University Color and Perception CS635 Spring 2010 Daniel G Aliaga Daniel G. Aliaga Department of Computer Science Purdue University Elements of Color Perception 2 Elements of Color Physics: Illumination Electromagnetic

More information

Color Correction between Gray World and White Patch

Color Correction between Gray World and White Patch Color Correction between Gray World and White Patch Alessandro Rizzi, Carlo Gatta, Daniele Marini a Dept. of Information Technology - University of Milano Via Bramante, 65-26013 Crema (CR) - Italy - E-mail:

More information

THE fast evolution of workstations and network technology

THE fast evolution of workstations and network technology IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 9, NO. 3, APRIL 1999 501 Extracting Color Features and Dynamic Matching for Image Data-Base Retrieval Soo-Chang Pei, Senior Member,

More information

Illuminant Estimation for Object Recognition

Illuminant Estimation for Object Recognition Illuminant Estimation for Object Recognition Graham D. Finlayson and Steven Hordley School of Information Systems University of East Anglia Norwich, NR4 7TJ UK Paul M. Hubel H-P Laboratories Hewlett-Packard

More information

(b) Side view (-u axis) of the CIELUV color space surrounded by the LUV cube. (a) Uniformly quantized RGB cube represented by lattice points.

(b) Side view (-u axis) of the CIELUV color space surrounded by the LUV cube. (a) Uniformly quantized RGB cube represented by lattice points. Appeared in FCV '99: 5th Korean-Japan Joint Workshop on Computer Vision, Jan. 22-23, 1999, Taegu, Korea1 Image Indexing using Color Histogram in the CIELUV Color Space Du-Sik Park yz, Jong-Seung Park y?,

More information

Content based Image Retrieval Using Multichannel Feature Extraction Techniques

Content based Image Retrieval Using Multichannel Feature Extraction Techniques ISSN 2395-1621 Content based Image Retrieval Using Multichannel Feature Extraction Techniques #1 Pooja P. Patil1, #2 Prof. B.H. Thombare 1 patilpoojapandit@gmail.com #1 M.E. Student, Computer Engineering

More information

CSE 167: Introduction to Computer Graphics Lecture #6: Colors. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2013

CSE 167: Introduction to Computer Graphics Lecture #6: Colors. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2013 CSE 167: Introduction to Computer Graphics Lecture #6: Colors Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2013 Announcements Homework project #3 due this Friday, October 18

More information

Color and Shading. Color. Shapiro and Stockman, Chapter 6. Color and Machine Vision. Color and Perception

Color and Shading. Color. Shapiro and Stockman, Chapter 6. Color and Machine Vision. Color and Perception Color and Shading Color Shapiro and Stockman, Chapter 6 Color is an important factor for for human perception for object and material identification, even time of day. Color perception depends upon both

More information

Inner Products and Orthogonality in Color Recording Filter Design

Inner Products and Orthogonality in Color Recording Filter Design 632 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 10, NO. 4, APRIL 2001 Inner Products and Orthogonality in Color Recording Filter Design Poorvi L. Vora Abstract We formalize ideas of orthogonality and inner

More information