Illuminant Estimation for Object Recognition


Graham D. Finlayson and Steven Hordley
School of Information Systems, University of East Anglia, Norwich, NR4 7TJ, UK

Paul M. Hubel
H-P Laboratories, Hewlett-Packard Company, Palo Alto, CA, USA

Abstract

Comparing colour histograms of images has been shown to be a powerful technique for discriminating between large sets of images. However, these histograms depend not only on the properties of the imaged objects but also on the illumination under which the objects are captured. If this illumination dependence is not accounted for prior to constructing the colour histograms, colour-based image indexing will fail when the illumination changes. This failure can be addressed by correcting the RGBs in an image to the corresponding RGBs representing the same scene under a standard, reference illuminant, prior to constructing the histograms. To perform this correction it is necessary to have a measurement, or more commonly an estimate, of the illumination in the original scene. Many authors have proposed illuminant estimation (or colour constancy) algorithms to obtain such an estimate. Unfortunately, the results of colour histogram matching experiments under varying illumination conditions have shown that existing estimation algorithms do not provide a sufficiently good estimate of the scene illuminant for this approach to work. In this paper we repeat those experiments, but this time using a new illuminant estimation algorithm, the so-called Color by Correlation approach, which has been shown to afford significantly better performance than previous algorithms. The results of this new experiment show that when this algorithm is used to preprocess images, a significant improvement in colour histogram matching performance is achieved. Indeed, performance is close to the theoretically optimal level, i.e. close to that which can be obtained using actual measurements of the scene illumination.

1 Introduction

One of the biggest challenges facing researchers in image retrieval and object recognition is to develop suitable features by which to describe objects and images. While much early work in this field concentrated on using the geometric properties of objects as recognition cues, more recent work [1, 2] has shown that colour is also a powerful cue. There are a number of advantages to be gained by using colour as an identifying feature of an object: it is largely independent of the scale and resolution at which the object is imaged, and, as Swain and Ballard have shown [1], colour-based recognition can be achieved with only very simple processing of the image data. This is not to say that colour alone suffices to recognise an object, but rather that it is an important cue which makes recognition easier. A simple example helps to illustrate this point. A spherical object may be identified as a ball on the basis of its geometric properties alone, but identifying the fact that it is a cricket ball as opposed to a tennis ball is helped by knowing that its colour is red (cricket balls are red!). Further evidence of the importance of colour in object identification is provided by the fundamental role it plays in establishing brand identity. Yet, despite the purported advantages of colour, prior to the work of Swain and Ballard its use as a cue for object recognition was limited. The authors themselves identified a possible reason for this: the fact that the colours in an image of an object depend not only on the surface reflectance properties of the object but also on the spectral content of the light irradiating it. If the effect of the scene illuminant colour is not accounted for, it is clear that colour will be an unstable descriptor of an object under varying illumination conditions. In this case, colour is of limited use as a recognition cue.
To overcome this problem Swain and Ballard suggested that images be corrected to account for the colour of the prevailing illumination prior to performing recognition. In practice this involves estimating the scene illuminant colour from the image data and then using this estimate to correct the image data to the corresponding data that would have been recorded under a reference light. Many algorithms for estimating the scene illuminant have been proposed in the literature [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], but it was only recently that Funt et al. [16] conducted experiments to determine whether or not existing algorithms were good enough to facilitate colour-based object recognition under varying illumination. Unfortunately, though all the algorithms they tested led to better recognition performance than could be achieved by not correcting for the scene illumination, all were found to perform significantly less well than the theoretical best case. That is, much better recognition could be achieved by using a measurement of the scene illuminant to correct images than could be obtained using any of the algorithms' estimates. Since a measurement of the illumination in a scene is rarely available, and since the results of this experiment show existing illuminant estimation algorithms to be incapable of providing a sufficiently accurate estimate, it would seem that, under varying illumination at least, colour-based recognition will not work. Recently, however, a new illuminant estimation algorithm has been developed [17] which gives significantly better estimation performance than all previous methods, and which thus raises the question of whether this improved performance translates into good enough colour-based recognition.
The answer to this question is provided in this paper, in which Funt et al.'s object recognition experiments are repeated, but this time using the new Color by Correlation algorithm [17] to estimate and correct for the scene illumination. The results show that the new algorithm does allow significantly improved recognition performance; indeed, results are close to those obtained by using a measurement of the scene illuminant. In the experiments reported here colour-based recognition is implemented following Swain and Ballard's original colour indexing paradigm, which is summarised in Section 2. The illuminant estimation problem and the estimation algorithms tested in this paper are described in Section 3. Section 4 details the colour indexing experiment itself and the results are reported in Section 5. The paper concludes with a summary in Section 6.

2 Colour Indexing

The colour indexing algorithm developed by Swain and Ballard performs object recognition by comparing the distribution of colours in an image of a test (or query) object with the distribution of colours in each of the images of a database of objects (referred to as model objects). The test object is identified as the object in the database whose colour distribution is most similar to that of the query. Imaging devices typically sample light from a point in a scene with three types of sensor, sensitive to light in the long (red), medium (green), and short (blue) wavelength regions of the visible spectrum. Thus the colour at a point in a scene is coded as a triplet of numbers called an RGB. In their original work Swain and Ballard represented the distribution of colours in an image by a histogram of the RGBs. In this paper we first transform the RGB triplet at each pixel into 2-dimensional, intensity-independent co-ordinates, and represent the colour distribution of an image by a histogram of these 2-dimensional co-ordinates instead. An intensity-independent space was chosen because many of the illuminant estimation algorithms tested (both in this work, and in Funt et al.'s original work [16]) do not recover the overall intensity of the scene illumination. This implies that when images are corrected for illumination, intensity information is effectively lost. The RGB response of a camera to a surface can be normalised in many ways to discard intensity information; here we follow the approach of Funt et al. and use the standard (r, g) space:

    r = R / (R + G + B),    g = G / (R + G + B)    (1)

This (r, g) co-ordinate space is referred to in this paper as a chromaticity space because (r, g) are formed from the RGB co-ordinates in a similar way to how (x, y) chromaticity co-ordinates are formed from CIE XYZ tristimulus values [18].
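The mapping of Equation 1, and the histogram intersection used below to compare colour distributions, can be sketched in a few lines (a minimal illustration with our own function names, not the authors' code):

```python
import numpy as np

def rg_chromaticity(rgb):
    """Map an (N, 3) array of RGB triplets to intensity-independent
    (r, g) co-ordinates: r = R/(R+G+B), g = G/(R+G+B) (Equation 1)."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0  # guard against division by zero for black pixels
    return rgb[..., :2] / s

def intersection(h1, h2):
    """Swain-Ballard histogram intersection of two normalised histograms;
    returns 1 when the histograms are identical and 0 when disjoint."""
    return np.minimum(h1, h2).sum() / h1.sum()
```

Note that scaling an RGB by a common factor leaves (r, g) unchanged, which is exactly the intensity independence the text describes.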
To represent the distribution of colours in an image we build a histogram of these 2-dimensional chromaticity co-ordinates. That is, we quantise the (r, g) space into a number of discrete cells. A histogram H1 is then defined for an image such that H1(i, j) contains the number of pixels in the image whose chromaticities fall in the (i, j)th cell of (r, g) space, normalised by the number of pixels in the image. This normalisation ensures that the histograms are independent of the size of the image. Histograms for a large number of model images are pre-computed and stored in a database. To identify an unknown object from its image, its corresponding 2-dimensional colour histogram is first computed. This histogram is then compared with each of the histograms in the database to derive a measure of the similarity of the database and test objects. The similarity of histograms is measured using the intersection method. Given two histograms H1 and H2, their intersection is defined as

    Σ_{i,j} min[H1(i, j), H2(i, j)] / Σ_{i,j} H1(i, j)    (2)

Equation 2 returns a value between zero and one; a value of one implies that the histograms intersect exactly, while a value of zero means the histograms don't intersect at all. The query object is thus identified as the model object whose histogram has the highest intersection value with that of the query. Swain and Ballard showed that by matching on this basis they were able to distinguish between a large number of objects in a database. Moreover, the method is quite insensitive to rotation and small changes in view, as well as to partial occlusion. The algorithm as described does, though, have some limitations. First, it assumes that the query images contain only a single object which can therefore easily be compared to the pre-computed database of objects. In reality it will often be necessary to recognise objects imaged against different backgrounds,

and surrounded by other objects, and so in this case we must address the problem of how to separate the object of interest from other objects in the imaged scene. In the context of this paper, however, we do not address this issue, but rather restrict attention to the ideal case in which images consist only of a single object. A second major limitation of the indexing approach as described is that the colours in an image are highly sensitive to the lighting under which the image is captured. For example, the colours in an image of an object taken under a tungsten light will be much redder than those in the corresponding image of the same object under daylight illumination. This suggests that simply comparing the colour histograms of two images is not a sufficiently reliable way to determine whether the images represent the same object. Indeed, under even quite small changes in illumination colour, indexing can fail completely [19]. In the next section we consider how we might deal with this problem.

3 Colour Indexing and Varying Illumination

The dependence of image colours on the prevailing scene illumination is made clear by examining the physics of image formation. Given a surface, described by its surface reflectance function S(λ), imaged under a single (spatially uniform) light with spectral power distribution E(λ), by a device with three spectral sensitivity functions R(λ), the RGB response is given by

    (R, G, B)^t = ∫_ω S(λ) E(λ) R(λ) dλ    (3)

where the integral is taken over ω, the range of the spectrum to which the sensors are sensitive. It is clear from Equation 3 that changing E(λ) will change the RGB responses to surfaces, and these different RGBs will result in a different colour histogram. We point out further that Equation 3 represents a simplified model of image formation; in reality the process can be much more complicated than this.
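As a concrete illustration, Equation 3 can be discretised by sampling the spectra at regular wavelength intervals (a sketch under our own naming; the sample spectra below are made up):

```python
import numpy as np

def render_rgb(S, E, R, dlam=10.0):
    """Discrete version of Equation 3: for each sensor k,
    rho_k = sum over sampled wavelengths of S(l) * E(l) * R_k(l) * dlam.
    S, E: reflectance and illuminant sampled at n wavelengths;
    R: (3, n) array of sensor sensitivities."""
    S, E, R = np.asarray(S, float), np.asarray(E, float), np.asarray(R, float)
    return (R * (S * E)).sum(axis=1) * dlam
```

Doubling E doubles the resulting RGB: intensity scales linearly, which is why the chromaticity spaces used in this paper discard it.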
For example, in Equation 3 we assume that a scene is lit by only a single illuminant, whereas in practice there may be many illuminants in the scene. In the context of this paper, however, we will assume only a single scene illuminant, since even in this case the problem of estimating the scene illuminant has not yet been satisfactorily solved. For colour-based recognition to be viable under varying illumination we must correct the image data such that the illumination dependence is removed or otherwise accounted for. One method of doing this is to estimate and correct for the colour of the prevailing illumination. This proceeds in two stages: first, the illumination under which an image is recorded is determined; second, the colour bias due to the illuminant is removed from the image. Removing colour bias amounts to mapping the colours in the original image to corresponding colours under known reference lighting conditions. Alternatively, as many previous authors [19, 20, 21, 22, 23] have suggested, matching of images could be performed on their colour content without explicitly recovering the scene illuminant. Such approaches depart from the original colour indexing paradigm since they compare images not on the image colours themselves, but rather on some function of these colours. That is, features invariant to illumination are derived from the colour distributions and image matching is performed on these. While such methods can give good results, they have two limitations. First, invariants are calculated with respect to a particular image region (they are not computed pixelwise) and so are sensitive to occlusion. Second, whereas invariants can easily be calculated after an image has been corrected for the colour of the scene illuminant, invariant calculation does not render the illuminant estimation problem easier to solve. Given perfect illuminant colour correction we can always calculate invariants, but the converse is not true.
Ideally, then, it is preferable to pre-process images so as to correct for the colour of the scene illuminant. In practice, however, this approach makes sense if and only if the illumination can be estimated sufficiently well. There exist in the literature many methods for estimating the colour of the scene illuminant from the RGB data

of an imaged scene. Recently Funt et al. [16] tested a number of algorithms to determine whether the illuminant estimate they provided was sufficiently good to enable accurate colour-based object recognition. Five different illuminant estimation algorithms were tested: max-RGB, grey-world, a 3-d [] and a 2-d [11] gamut mapping algorithm, and a neural network based approach [24]. Each of these algorithms can be considered to provide an estimate of the scene illuminant in terms of the response of the device to an achromatic patch, (R_e, G_e, B_e) (though it should be noted that not all algorithms recover the intensity of this response). If (R_c, G_c, B_c) is the actual response of the device to the same patch under a reference light, then the mapping

    D = diag(d1, d2, d3),    d1 = R_c / R_e,    d2 = G_c / G_e,    d3 = B_c / B_e    (4)

applied to each RGB in the original image produces an equivalent image under the reference illuminant. This re-rendered image forms the input to the colour indexing algorithm. The max-RGB algorithm estimates the illuminant colour as (R_max, G_max, B_max), the maximum values of R, G, and B in the image. If there is a surface in the scene which reflects all light incident upon it (for example a white patch) then this approach will give a good estimate of the illuminant. The grey-world approach is similar: it assumes that the average colour of surfaces in the scene is achromatic (reflecting light equally at all wavelengths) and that therefore (R_mean, G_mean, B_mean), the average of all colours in the image, is a good estimate of the illuminant. The neural network approach returns an estimate of the chromaticity of the scene illuminant based on the result of prior training. Finally, the gamut mapping methods directly recover estimates of the mapping D taking an image to its corresponding image under a reference light. Details of these algorithms can be found elsewhere [24, 11].
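A minimal sketch of the two baseline estimators and the diagonal correction of Equation 4 (our own illustrative code, not the implementation used in the experiments):

```python
import numpy as np

def estimate_illuminant(image, method="max-rgb"):
    """Estimate the illuminant RGB (R_e, G_e, B_e) from an (H, W, 3)
    linear-RGB image, using either max-RGB or grey-world."""
    pixels = image.reshape(-1, 3).astype(float)
    if method == "max-rgb":
        return pixels.max(axis=0)   # per-channel maximum response
    return pixels.mean(axis=0)      # grey-world: scene average

def correct_to_reference(image, estimate, reference):
    """Apply the diagonal map of Equation 4, d_k = reference_k / estimate_k,
    to every pixel, re-rendering the image under the reference light."""
    d = np.asarray(reference, float) / np.asarray(estimate, float)
    return image * d
```

Correcting an image with its own max-RGB estimate against a white reference maps the brightest channel responses to equal values, which is the sense in which max-RGB "discounts" the illuminant.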
To test the algorithms in the context of an object recognition experiment, Funt et al. [16] compiled a calibrated database of images of a number of different objects imaged under a variety of illuminants. For each image five different estimates of the scene illuminant were obtained, one from each of the algorithms. On the basis of the estimate provided by each algorithm the images were corrected to corresponding data under a reference light. This resulted in five different sets of image data (one set per illuminant estimation algorithm), and for each of these sets of images a colour indexing experiment was performed. The success of the indexing experiments depends on how well the scene illumination is accounted for, and so provides a test of the relative performance of the different illuminant estimation algorithms. Of the algorithms tested, the gamut mapping methods have been shown [11] to give the best illuminant estimation performance. When tested in the context of the colour indexing experiment, however, Funt et al. [16] found that no algorithm (gamut mapping included) is sufficiently accurate to deliver good enough recognition under varying illumination.

3.1 Color by Correlation

Since the work of Funt et al. [16] was published, a new illuminant estimation algorithm, called Color by Correlation, has been proposed, which gives significantly better illuminant estimation performance than all the estimation approaches described above. In light of this we decided to repeat the colour indexing experiments of Funt et al. [16], this time using the Color by Correlation algorithm to estimate the scene illuminant. In this way we can determine whether the performance of this new illuminant estimation algorithm is sufficient to facilitate colour-based object recognition. Next we present a summary of the Color by Correlation approach to illuminant estimation; for a more detailed description of the method we direct the reader to previous publications [25].
Like the other illuminant estimation algorithms tested by Funt et al., the Color by Correlation approach is founded on the assumption that an imaged scene consists of a number of objects illuminated by a single light. The illuminant estimation problem is then posed as the problem of determining which of a pre-determined set of

candidate illuminants is most consistent with a given set of image data. Moreover, we set out to recover only the chromaticity of the scene illuminant and not its brightness, so in what follows we represent image colours in a two-dimensional, intensity-independent chromaticity space. The first step in the Color by Correlation algorithm is to construct a correlation matrix memory. The process by which the matrix is constructed is illustrated in Figure 1. We begin with a set of possible illuminants, and for each of these lights we calculate the probability of observing each possible image colour. Image colours are represented in a two-dimensional chromaticity space which is divided into a number of discrete cells. The methods by which we calculate the probability of observing a given image colour (that is, a given cell of the chromaticity space) under each light are summarised below. This process is repeated for each of the possible lights, and in this way we obtain a probability distribution for each light. Each of these distributions is then represented as a vector; each element of this vector corresponds to one cell of the chromaticity space. These vectors form the columns of the correlation matrix, which we denote M. The ij-th entry of M contains a measure of the likelihood of observing the i-th chromaticity under illuminant j; in fact, it contains the log probability. Figure 2 shows how this correlation matrix is used to identify the scene illuminant for a given image. First an image vector v is calculated; this vector is similar to a column of M but contains 1s in the positions of chromaticities that occur in the image, and 0s everywhere else. This implies that an image colour is counted only once, regardless of how many times it occurs in the image. This ensures that the likelihood scores that we calculate next are not biased by the size of a surface in the image. This image vector is multiplied with the matrix M to give a new vector l^t = v^t M.
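This scoring step can be sketched in a few lines (an illustration with our own function names, not the authors' implementation):

```python
import numpy as np

def illuminant_logliks(image_cells, M):
    """Color by Correlation scoring sketch.
    image_cells: indices of the chromaticity cells occupied by the image
                 (each cell counted once, however many pixels fall in it).
    M: (n_cells, n_illuminants) matrix of log P(cell | illuminant).
    Returns l = v^T M, one log-likelihood per candidate illuminant."""
    v = np.zeros(M.shape[0])
    v[np.unique(image_cells)] = 1.0  # binary image vector
    return v @ M
```

Because the image vector is binary, a cell that contains a thousand pixels contributes exactly the same as a cell containing one, which is the surface-size independence described above.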
The entries of l contain a measure of the likelihood that each of the plausible illuminants was the scene illuminant. To see that this is the case, consider that if the ij-th entry of M contains log P(chromaticity i | illuminant j), the log probability of observing chromaticity i under illuminant j, then the j-th entry of l is

    Σ_{i ∈ I} log P(chromaticity i | illuminant j)

where I is the set of chromaticities present in the image. By Bayes' rule, and assuming independence of the chromaticities, this is proportional to the log-probability that the scene illuminant is illuminant j, given the set of image chromaticities. On the basis of these illuminant likelihoods, a single estimate of the scene illuminant is chosen. For example, we can choose the illuminant which has maximum likelihood; we discuss some alternative selection methods in Section 4.2. There are other choices that must be made when implementing the Color by Correlation algorithm. First we must decide which chromaticity representation we wish to use. The (r, g) chromaticity space given in Equation 1 is commonly used in computer vision applications. However, for our purposes we found it to be unsuitable. The problem with this space is that the mapping from RGB to (r, g) is non-linear [26], and so a simple uniform discretisation of the space is inappropriate. When we discretise the chromaticity space we would like the probability of a bin being occupied to be uniform throughout the space. In information-theoretic terms we can quantify the uniformity of a distribution by its entropy. If p_i, i = 1 … n, represents a probability distribution, then

    H = -Σ_i p_i log(p_i)

is defined as the entropy of that distribution. The greater the entropy, the more uniform the distribution. An entropy value of zero is obtained for a distribution such that p_1 = 1 and p_i = 0, i = 2 … n. In contrast, a uniform distribution, p_i = 1/n, i = 1 … n, has maximum entropy. The problem with the chromaticity space defined by Equation 1, then, is that it has low entropy.
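The entropy measure just defined, and an intensity-independent space of the cube-root kind discussed next (our reading of the construction; the function names are our own), can be sketched as:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H = -sum_i p_i log(p_i), with 0 log 0 taken as 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def cube_root_chromaticity(rgb):
    """Co-ordinates r' = (R/B)^(1/3), g' = (G/B)^(1/3); scaling R, G, B
    by a common factor leaves (r', g') unchanged, so intensity is discarded."""
    rgb = np.asarray(rgb, dtype=float)
    return (rgb[..., :2] / rgb[..., 2:3]) ** (1.0 / 3.0)
```

A uniform distribution over n bins gives H = log(n), the maximum; a distribution concentrated in one bin gives H = 0, matching the comparison made in the text.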
So, instead, we chose to represent image colours in a different 2-d chromaticity space, whose co-ordinates (r', g') are calculated from an RGB vector thus:

    r' = (R / B)^(1/3),    g' = (G / B)^(1/3)    (5)

Using a set of reference surface reflectances (in fact, a set of 1269 Munsell chips [27]) we can calculate the distribution of chromaticities in each space, and from this we can calculate the entropy of the two distributions. The entropy of the space defined by Equation 1 is 1.6, whereas that of the space defined by Equation 5 is much closer to the entropy of the ideal, uniform distribution. The second implementation choice that must be made concerns the particular entries we place in M, that is, how we determine the probability of observing a given image colour under each of the possible lights. Two approaches are considered here. In the first instance a matrix M_Real is computed from real image data. This is done by first uniformly quantising the (r', g') chromaticity space. Then a binary chromaticity histogram is computed for each image. The histograms for all images taken under a particular light are then summed, and the resulting histogram can thus be considered an approximation to P(chromaticity i | illuminant j). The log of this histogram is then taken and forms a column of the correlation matrix M_Real. Computing M_Real in this way should give an idea of the best-case performance for the method, since the statistics encoded in the matrix accurately reflect the statistics of the images tested. The second method of computing the correlation matrix is based on synthetic data. In this case matrix entries are calculated from a set of reference surface reflectance functions. For an illuminant j, the device RGB responses to these reference surfaces are computed (using Equation 3), and from these RGBs the corresponding chromaticities are calculated according to Equation 5. For each cell of the discrete chromaticity space, the number of reference surfaces falling in that cell is counted, again giving an approximation to P(chromaticity i | illuminant j). Once more, the logs of these values are calculated and coded in a correlation matrix denoted M_Synth. The success of this matrix is clearly dependent on the degree to which the statistics of the reference surfaces match the statistics of the images tested. Other implementation issues that must be addressed are how many candidate illuminants are chosen, and how finely the chromaticity space is quantised.
Both these issues are to a degree application-dependent and can only properly be addressed empirically. The next section details the choices we made in this paper and why we made them; however, a full investigation of these issues is not the intent of this paper, and so we do not expect that the choices we have made here are optimal.

4 The experiment

The experiment reported here follows that of Funt et al. and is summarised in Figure 3. First, chromaticity histograms for each of a number of objects are pre-computed based on images of the objects taken under a reference light. The database of images and their corresponding chromaticity histograms are shown at the bottom of Figure 3. Recognition of an object is based on an image of the object taken under an unknown illuminant. Recognition proceeds by first estimating the scene illuminant for the image using one of the tested algorithms (top of Figure 3). On the basis of this estimate the image colours are mapped to a reference light (the same light as for the images in the database), as described in Section 3. A chromaticity histogram is computed from this corrected image and compared with those in the pre-computed database. The closest histogram overall identifies the object in the image.

4.1 The image database

The images used by Funt et al. [16] in their original experiment are again used here. In total there are 110 images: 11 different objects, each imaged twice under five different lights. These images are split into two sets, a model set and a test set. The model set consists of eleven sets of five registered images, each object being imaged under each of the five lights with its position unchanged. In addition, all settings on the camera were unaltered when taking the images. The test set consists of images of the same objects taken under the same lights as the model set, but the position and orientation of the object in each image is random. Figure 4 shows the eleven objects in the test set.
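The matching step at the heart of this experiment can be sketched as follows (a toy illustration; here the database is simply a dict of pre-computed, normalised histograms, and the names are invented):

```python
import numpy as np

def recognise(query_hist, database):
    """Identify the query as the model object whose pre-computed
    normalised histogram has the highest intersection with the query's.
    database: dict mapping object name -> normalised histogram."""
    def score(name):
        return np.minimum(query_hist, database[name]).sum()
    return max(database, key=score)
```

In the full experiment the query histogram is computed from the illuminant-corrected image, so recognition accuracy directly reflects the quality of the illuminant estimate.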
(The authors are grateful to Dr. Brian Funt for making this data available.) All images were taken with a Sony DXC-930 three-CCD colour video camera with its

white balance set to 3200K and its gamma correction turned off. When taking the images the camera aperture was set such that no pixel values were clipped under any of the five lights. Images were adjusted according to a camera calibration model, resulting in a set of images whose RGBs are approximately linearly related to scene radiance. Images were taken in a light booth with its illumination set to one of five different lights: Sylvania Halogen, Sylvania Cool White Fluorescent, Philips Ultralume Fluorescent, Macbeth K Fluorescent, and Macbeth K Fluorescent with a Roscolux 322 full blue filter. The images enable a valid test both of illuminant estimation algorithms and of a colour-based approach to object recognition, though we should point out that the images represent a simplified problem in a number of senses. First, the images are scenes of single objects taken under an (approximately) spatially constant single illuminant. More generally, a scene might contain multiple illuminants varying in their spectral distribution. Furthermore, the images contain only a single object against a dark background, so that interreflection of light between objects is avoided. A complete solution to the illuminant estimation problem must be able to handle these more complex cases. However, research in the field is almost entirely restricted to the single-illuminant case, since solving even this simplified problem has proven to be non-trivial. The images also represent a special case for object recognition, since the test images contain a single object against a dark background, whereas more generally an object must be identified when it is surrounded by other objects or multi-coloured backgrounds. However, the images we have used are useful in assessing whether or not illuminant estimation algorithms are sufficiently good to enable colour-based recognition across varying illumination. Two colour-based object recognition experiments were performed.
In the first experiment a database of histograms was formed from the eleven objects in the model set imaged under one of the lights. Then each of the images in the model set was matched to the database in turn. It is possible that the choice of reference light used for the database images could affect recognition performance, so to investigate this we repeated the experiment five times, each time using a different reference illuminant. This revealed that performance was largely independent of which light was used as the reference, except in the case of the filtered Macbeth illuminant, which was found to give significantly worse matching performance. We are not entirely sure why this was the case, but it may be because image noise was a bigger factor for these images: this illuminant is significantly less bright than the others and the resulting images are on average much darker. This increased image noise may mean that the method we use to correct images to the reference light (Equation 4) is inaccurate in this case, resulting in poorer indexing performance. In the second experiment the database was again constructed from objects in the model set, but matching was performed on objects from the test set, thus allowing the effect of object pose on recognition performance to be evaluated. As in the first experiment, this experiment was repeated four times, each time using a database compiled from images under a different reference light.

4.2 Colour Constancy Algorithms

In total five algorithms were tested: max-RGB (the algorithm which Funt et al. [16] found to perform best), grey-world, and three versions of the correlation matrix algorithm described in this paper. The gamut mapping and neural network algorithms also tested by Funt et al. [16] are not tested again here. Of the three correlation matrix algorithms tested, one was based on real image data (M_Real), as described above, and two were based on synthetic data, M_Synth and M_Synth18.
These algorithms differ only in how many illuminants are coded in the matrix: M_Synth uses just the five lights under which images were recorded, and allows direct comparison with M_Real, which also contains five lights. Of course, discriminating between only five candidates, one of which is the correct answer, does make the illuminant estimation problem easier to solve (and is perhaps an easier problem than was addressed in Funt et al.'s study). Thus, we added another 13 typical lights (these included a range of incandescent lamps of varying colour temperatures, fluorescent illuminants, and daylight spectra with a range of colour temperatures) to our training set and calculated the second correlation matrix, M_Synth18. The chromaticities of the 18 lights used are shown

9 in Figure. Synthetically trained matrices were based on two sets of reference reflectance functions 17 object reflectances [28] and 219 natural reflectances [27]. These two sets were found to give the best results but results obtained by training on other reflectances (for example a set Munsell chips [29]) are similar. When building the correlation matrix we must also decide how finely to quantise the chromaticity space on which the matrix is based. Experimentally we have found that quite a coarse quantisation of the space gives better results quantisations of 8 8 or work well (we report results here for an 8 8 quantisation). In the 8 8 case we can distinguish only 64 colours and it might seem that this would not be a fine enough distinction. Indeed, it is possible that better performance might be achievable with a finer quantisation. We believe that this will be the case provided we can accurately calculate the probability of observing each image colour. However, the method we currently used based on either a small set of image data, or a small set of surface reflectance functions provides only a rough approximation to the true distribution of image colours. If we use a coarse quantisation we can capture the general shape of the distribution quite well, but as we increase the quantisation the accuracy is reduced. Further testing is required to verify that this intuition is correct, but for now we report only what level of quantisation worked well and point out that the results may not represent the best that are achievable with the method. In the first instance the correlation matrix algorithms return a likelihood measure for each of the possible scene illuminants. On the basis of these likelihoods a single estimate of the scene illuminant must be chosen. There are many ways this might be done for example the illuminant with maximum likelihood could be chosen, or we might choose the average of a small number of illuminants with high likelihood. 
A full investigation of selection methods is outside the scope of this paper, so here we just summarise the approach we took. For the correlation matrices containing five illuminants we used the maximum likelihood approach. However, if we wish to consider more illuminants then we need to allow for the possibility that the image data may only suffice to identify a set of possible illuminants. So, for M_Synth18 we allowed multiple illuminant estimates. That is, all lights with likelihood greater than a threshold percentage of the maximum likelihood are selected, and each of these possible illuminants is then used to correct the image. Matching is performed on each of the corrected images. For example, if two illuminants are equally likely, the image is corrected using both lights and both corrected images are matched to the database. Then, as before, the unknown object is identified as the database object with the highest match score. In our experiments we found that the illuminant likelihoods were fairly easily split into two classes: those which were highly probable and those which were quite unlikely. Applying a threshold of 96% of the maximum likelihood readily separated the two classes. Setting the threshold in this way typically resulted in between 2 and 3 possible illuminants per image.

Prior to estimating the scene illuminant with the correlation matrix method we applied some pre-processing to the images. First, we reduced the resolution of the images by a factor of 2 in each direction, so that the RGB of each pixel in the reduced image is an average of four pixels in the original image. This has the effect of reducing the level of noise in the images. In addition, since all the images consist of a single object against a nearly black background, we also attempted to remove this background prior to estimating the illuminant.
This we did by checking the RGB values at each pixel in the image and considering only those pixels whose values in each of the three channels were greater than some fixed threshold. Empirically we found that a suitably chosen fixed threshold removed most of the background pixels, and that this gave better illuminant estimation performance than could be obtained using the full image. A similar procedure was tried for the max-rgb and grey-world algorithms, but it was found that better performance was obtained using the full, unthresholded images.
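A minimal sketch of this pre-processing (2 x 2 block averaging to reduce noise, followed by masking of near-black background pixels) might look as follows. The threshold value here is illustrative only, since the count values used in the experiments are not given:

```python
import numpy as np

def preprocess(image, threshold=0.04):
    """Downsample by averaging each 2 x 2 block, then keep only pixels
    whose R, G and B values all exceed `threshold` (a hypothetical
    fraction of full scale), discarding the dark background."""
    h, w, _ = image.shape
    h, w = h - h % 2, w - w % 2  # crop to even dimensions
    small = image[:h, :w].reshape(h // 2, 2, w // 2, 2, 3).mean(axis=(1, 3))
    mask = (small > threshold).all(axis=-1)  # pixel kept only if all channels pass
    return small[mask]  # (n_pixels, 3) array of RGBs

rgbs = preprocess(np.random.rand(8, 8, 3))
```

The output is a flat list of foreground RGBs, which is all the illuminant estimation step needs.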

5 Experimental Results

We summarise the results of the two experiments using the match percentile (MP), which is calculated as

    MP = (N_models - rank) / (N_models - 1)    (6)

where N_models is the number of model images (the 11 images in the database) and rank is the position of the correct histogram in a sorted list of histogram intersection scores. This gives a value between zero and one: one implies that the image was correctly matched, and zero that the correct image was in last place. The average match percentile (AMP) is this value multiplied by 100 and averaged over all matched images. Table 1 summarises the results for the two experiments in terms of both AMP and the number of images which were ranked in worse than third place. These results are the combined results obtained using each of four lights as the reference light. Results for the filtered Macbeth light used as a database light are excluded, since they are significantly worse than those for the other lights and so suggest that this light is not suitable as a reference illuminant. However, query images taken under this illuminant are included in these results. Recognition performance is shown for all the illuminant estimation algorithms tested, as well as the performance when no illuminant correction was performed (denoted Nothing) and that when images were corrected using a measurement of the actual scene illuminant (Actual).

A number of conclusions can be drawn from these results. First, the results obtained without correcting for the colour of the scene illuminant clearly demonstrate that such correction is required. In this case the AMP is only 7%. In fact, these results are helped by the fact that in 44 out of a total of 220 matches in each experiment no colour correction was necessary, since in these cases the query images were taken under the same light as the database images.
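The match percentile and AMP defined in Equation 6 can be computed directly; a minimal sketch:

```python
def match_percentile(rank, n_models=11):
    """Equation 6: MP = (N_models - rank) / (N_models - 1), where rank is
    the position of the correct histogram in the sorted list of
    intersection scores (1 = first place)."""
    return (n_models - rank) / (n_models - 1)

def average_match_percentile(ranks, n_models=11):
    """AMP: the match percentile expressed as a percentage, averaged over
    all matched images."""
    return 100.0 * sum(match_percentile(r, n_models) for r in ranks) / len(ranks)
```

With 11 model images, a correct first-place match gives MP = 1 and a last-place match gives MP = 0.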
Recognition performance when these cases are excluded (that is, when the scene illumination doesn't match the illumination in the database images) is no better than random matching. The results obtained by correcting images using a measurement of the scene illuminant set an upper bound on the recognition performance that can be achieved using an illuminant estimation algorithm, and show that when images are properly corrected very good recognition performance can be achieved. These results also show, however, that the model of illumination change adopted (Equation 4) is imperfect: since the match images are registered with those in the database (and in a fifth of cases are identical), in the first experiment (column 2 of Table 1) perfect colour correction should result in a match rank no worse than joint first, so that AMP should be 100%. That the results obtained are less than this can only be due to inaccuracies in the correction procedure.

The results obtained by colour correcting the images on the basis of max-rgb or grey-world estimates of the scene illuminant confirm the findings of Funt et al [16]: neither of these algorithms is good enough to enable object recognition; their match percentile is too low and there are still too many failures. In the original experiment max-rgb was the best performing algorithm. In contrast, the new algorithms all perform significantly better. Indeed, the average performance over the two experiments shows that the matrix trained on real image data delivers 93.8% recognition performance, a substantial improvement over max-rgb and close to the best-case performance of 98%. While this performance is good it may not be too surprising, since the algorithm was trained on the data being tested. The results obtained using the matrices trained on synthetic data allay this concern, however. Both synthetically trained matrices (M_Synth and M_Synth18) perform almost as well, giving AMPs of 93.7% and 91.7% respectively.
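The model of illumination change referred to here as Equation 4 is a diagonal, per-channel scaling; a minimal sketch, in which the illuminant RGBs are hypothetical white-point values rather than measurements from the paper:

```python
import numpy as np

def correct_to_reference(rgbs, scene_illum, ref_illum):
    """Diagonal correction: scale each channel by the ratio of the
    reference illuminant's RGB to the (estimated or measured) scene
    illuminant's RGB."""
    scale = np.asarray(ref_illum, float) / np.asarray(scene_illum, float)
    return np.asarray(rgbs, float) * scale  # broadcasts over pixels

# a pixel imaged under a reddish light, corrected to a neutral reference
pixel = correct_to_reference([[0.6, 0.4, 0.2]], [1.2, 1.0, 0.6], [1.0, 1.0, 1.0])
```

Because the diagonal model is only an approximation to real illumination change, even correction with the measured illuminant leaves some residual error, consistent with the less-than-perfect "Actual" results above.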
The number of failures for these two algorithms is also very low: on average only 4 images are matched in worse than third place. The results of the two experiments (matching model images to model images, and model images to test images, respectively) follow broadly the same trends. Recognition performance is not affected to any significant degree by the fact that the database images do not correspond exactly to the test images. This might be expected in a database of only 11 objects, so the sensitivity of colour indexing to object pose cannot be conclusively determined from these results. In any event, in their original work Swain and Ballard proposed that more than just a single view of an object should be stored in the database to make the method more robust to object pose.

In summary, the results presented here show that the new Color by Correlation approach affords significantly better object recognition under varying illumination than all previously tested algorithms. Furthermore, the results show that the performance of this new algorithm is close to the theoretical best-case performance.

6 Conclusions

The results of this experiment, then, are encouraging: they suggest that Swain and Ballard's original colour indexing paradigm, supplemented with an illuminant colour correction pre-processing step, is a powerful technique for object recognition, even under varying illumination. The experiments do, though, raise a number of important questions. Firstly, the results show that even using a measurement of the scene illuminant results in less than perfect colour indexing performance; whether this is true only for the images in the tested database or true in general needs to be further investigated. Furthermore, the results are based on a database which is significantly smaller than any database which would be met in practice, and so cannot be considered conclusive. That said, the images were taken under a wide range of different illuminants, and in these terms at least the experiment represents a reasonable test of the method. It should be noted further that the test images pose quite a difficult test for illuminant estimation algorithms: all images contain only a single object, and in a number of cases the range of colours in the images is limited. Clearly the issues raised can only be properly resolved by further tests of the method on different (and bigger) image databases.
But on the basis of the results presented here it might reasonably be concluded that the new correlation matrix algorithm suffices to support colour-based object recognition.

Acknowledgements

This work was supported by the EPSRC (GR/L682) and by Hewlett-Packard Incorporated.

References

[1] Michael J. Swain and Dana H. Ballard. Color indexing. International Journal of Computer Vision, 7(1):11-32, 1991.

[2] W. Niblack and R. Barber. The QBIC project: Querying images by content using color, texture and shape. In Storage and Retrieval for Image and Video Databases I, volume 198 of SPIE Proceedings Series.

[3] G. Buchsbaum. A spatial processor model for object colour perception. Journal of the Franklin Institute, 310:1-26, 1980.

[4] Laurence T. Maloney and Brian A. Wandell. Color constancy: a method for recovering surface spectral reflectance. Journal of the Optical Society of America A, 3(1):29-33, 1986.

[5] Michael D'Zmura. Color constancy: surface color from changing illumination. Journal of the Optical Society of America A, 9(3), 1992.

[6] L. T. Maloney. Computational approaches to colour constancy. PhD thesis, Stanford University, Stanford, CA, 1985.

[7] S. A. Shafer. Using color to separate reflection components. Color Research and Application, 10(4):210-218, 1985.

[8] Shoji Tominaga. Surface reflectance estimation by the dichromatic model. Color Research and Application, 21(2):104-114, 1996.

[9] Shoji Tominaga and Brian A. Wandell. Standard surface-reflectance model and illuminant estimation. Journal of the Optical Society of America A, 6(4):576-584, 1989.

[10] D. A. Forsyth. A novel algorithm for colour constancy. International Journal of Computer Vision, 5(1):5-36, 1990.

[11] G. D. Finlayson. Color in perspective. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(10):1034-1038, 1996.

[12] Edwin H. Land. The retinex theory of color constancy. Scientific American, pages 108-128, 1977.

[13] Edwin H. Land. Recent advances in retinex theory and some implications for cortical computations. Proc. Natl. Acad. Sci. USA, 80:5163-5169, 1983.

[14] A. Blake. On lightness computation in the Mondrian world. In Proceedings of the Wenner-Gren Conference on Central & Peripheral Mechanisms in Colour Vision. MacMillan, New York, 1985.

[15] Gavin Brelstaff and Andrew Blake. Computing lightness. Pattern Recognition Letters, 1987.

[16] Brian Funt, Kobus Barnard, and Lindsay Martin. Is machine colour constancy good enough? In 5th European Conference on Computer Vision, pages 445-459. Springer, June 1998.

[17] Graham D. Finlayson, Paul M. Hubel, and Steven Hordley. Color by Correlation. In Fifth Color Imaging Conference, pages 6-11. IS&T/SID, November 1997.

[18] R.W.G. Hunt. The Reproduction of Colour. Fountain Press, 5th edition, 1995.

[19] Brian V. Funt and Graham D. Finlayson. Color constant color indexing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(5):522-529, 1995.

[20] Glenn Healey and David Slater. Global color constancy: recognition of objects by use of illumination-invariant properties of color distributions. Journal of the Optical Society of America A, 11(11):3003-3010, 1994.

[21] G.D. Finlayson, S.S. Chatterjee, and B.V. Funt. Color angular indexing. In The Fourth European Conference on Computer Vision (Vol. II), pages 16-27. European Vision Society, 1996.

[22] Daniel Berwick and Sang Wook Lee. A chromaticity space for specularity-, illumination color- and illumination pose-invariant 3-D object recognition. In Sixth International Conference on Computer Vision. Narosa Publishing House, 1998.

[23] M.S. Drew, Jie Wei, and Ze-Nian Li. Illumination-invariant color object recognition via compressed chromaticity histograms of normalized images. In Sixth International Conference on Computer Vision. Narosa Publishing House, 1998.

[24] Brian V. Funt, Vlad Cardei, and Kobus Barnard. Learning color constancy. In Proceedings of the Fourth Color Imaging Conference, pages 58-60, November 1996.

[25] G.D. Finlayson, S.D. Hordley, and P.M. Hubel. Colour by correlation: A simple unifying framework for colour constancy. ICCV'99, submitted.

[26] G.D. Finlayson, J. Berens, and G. Qiu. Image indexing using compressed colour histograms. In The Challenge of Image Retrieval.

[27] J. Parkkinen, T. Jaaskelainen, and M. Kuittinen. Spectral representation of color images. In IEEE 9th International Conference on Pattern Recognition, volume 2, November 1988.

[28] M.J. Vrhel, R. Gershon, and L.S. Iwan. Measurement and analysis of object reflectance spectra. Color Research and Application, 19(1):4-9, 1994.

[29] Newhall, Nickerson, and Judd. Final report of the OSA subcommittee on the spacing of the Munsell colours. Journal of the Optical Society of America, 33:385-418, 1943.

Table 1: Average Match Percentile (Avg Mch %) and number of images ranked worse than third place (rank > 3rd) in the two experiments, reported for Model Images, Test Images, and their Average, for each method: Nothing, Grey-world, Max-RGB, M_Real, M_Synth, M_Synth18, and Actual.

Figure 1: An illustration of how the correlation matrix memory is constructed. (a) Characterise image chromaticities under reference illuminant 1. (b) Determine relative likelihoods for reference illuminant 1. (c) Put these probability distributions into a correlation matrix M.

Figure 2: Using the correlation matrix to determine the scene illumination. (a) Record the chromaticities present in an image in a vector v. (b) Multiply the correlation matrix by this vector to determine the likelihoods for each illuminant. (c) Use these likelihoods to choose an estimate of the scene illumination.
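The construction and use of the correlation matrix illustrated in Figures 1 and 2 can be sketched as follows. The 8 x 8 chromaticity quantisation matches the quantisation reported in the text; the training data, the probability floor for unseen bins, and the use of log-probabilities are illustrative assumptions:

```python
import numpy as np

N_BINS = 8  # coarse 8 x 8 chromaticity quantisation, as in the text

def chroma_bin(rgb):
    """Map an RGB to an index in the quantised (r, g) chromaticity space."""
    r, g, b = rgb
    s = r + g + b
    i = min(int(N_BINS * r / s), N_BINS - 1)
    j = min(int(N_BINS * g / s), N_BINS - 1)
    return i * N_BINS + j

def build_matrix(training_rgbs_per_illum):
    """Figure 1: one column of log-probabilities per candidate illuminant,
    estimated from the chromaticities observed under that illuminant."""
    n_ill = len(training_rgbs_per_illum)
    M = np.full((N_BINS * N_BINS, n_ill), np.log(1e-6))  # floor for unseen bins
    for k, rgbs in enumerate(training_rgbs_per_illum):
        counts = np.zeros(N_BINS * N_BINS)
        for rgb in rgbs:
            counts[chroma_bin(rgb)] += 1
        seen = counts > 0
        M[seen, k] = np.log(counts[seen] / counts.sum())
    return M

def estimate(M, image_rgbs, keep=0.96):
    """Figure 2: correlate a binary image-chromaticity vector v with M,
    then keep every illuminant whose likelihood is at least `keep` times
    the maximum, so several candidates may be returned."""
    v = np.zeros(M.shape[0])
    for rgb in image_rgbs:
        v[chroma_bin(rgb)] = 1.0  # chromaticity present or not
    loglik = v @ M  # one score per candidate illuminant
    return np.flatnonzero(loglik >= loglik.max() + np.log(keep))
```

On a toy problem with one reddish and one greenish training illuminant, an image containing only the reddish chromaticities is assigned to the first illuminant.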

Figure 3: A summary of the experimental procedure.

Figure 4: The eleven objects used in the experiment imaged under the Philips-Ultralume light.

Figure 5: The chromaticities of the 18 lights used in the experiments (database lights and other lights, plotted in r/(r+g+b), g/(r+g+b) chromaticity space).


More information

Simultaneous surface texture classification and illumination tilt angle prediction

Simultaneous surface texture classification and illumination tilt angle prediction Simultaneous surface texture classification and illumination tilt angle prediction X. Lladó, A. Oliver, M. Petrou, J. Freixenet, and J. Martí Computer Vision and Robotics Group - IIiA. University of Girona

More information

Removing Shadows From Images using Retinex

Removing Shadows From Images using Retinex Removing Shadows From Images using Retinex G. D. Finlayson, S. D. Hordley M.S. Drew School of Information Systems School of Computing Science University of East Anglia Simon Fraser University Norwich NR4

More information

Sensor Sharpening for Computational Color Constancy

Sensor Sharpening for Computational Color Constancy 1 Manuscript accepted for publication in the Journal of the Optical Society of America A. Optical Society of America Sensor Sharpening for Computational Color Constancy Kobus Barnard*, Florian Ciurea,

More information

CS6670: Computer Vision

CS6670: Computer Vision CS6670: Computer Vision Noah Snavely Lecture 20: Light, reflectance and photometric stereo Light by Ted Adelson Readings Szeliski, 2.2, 2.3.2 Light by Ted Adelson Readings Szeliski, 2.2, 2.3.2 Properties

More information

ELEC Dr Reji Mathew Electrical Engineering UNSW

ELEC Dr Reji Mathew Electrical Engineering UNSW ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion

More information

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22)

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22) Digital Image Processing Prof. P. K. Biswas Department of Electronics and Electrical Communications Engineering Indian Institute of Technology, Kharagpur Module Number 01 Lecture Number 02 Application

More information

Investigation of Directional Filter on Kube-Pentland s 3D Surface Reflectance Model using Photometric Stereo

Investigation of Directional Filter on Kube-Pentland s 3D Surface Reflectance Model using Photometric Stereo Investigation of Directional Filter on Kube-Pentland s 3D Surface Reflectance Model using Photometric Stereo Jiahua Wu Silsoe Research Institute Wrest Park, Silsoe Beds, MK45 4HS United Kingdom jerry.wu@bbsrc.ac.uk

More information

Light. Properties of light. What is light? Today What is light? How do we measure it? How does light propagate? How does light interact with matter?

Light. Properties of light. What is light? Today What is light? How do we measure it? How does light propagate? How does light interact with matter? Light Properties of light Today What is light? How do we measure it? How does light propagate? How does light interact with matter? by Ted Adelson Readings Andrew Glassner, Principles of Digital Image

More information

An ICA based Approach for Complex Color Scene Text Binarization

An ICA based Approach for Complex Color Scene Text Binarization An ICA based Approach for Complex Color Scene Text Binarization Siddharth Kherada IIIT-Hyderabad, India siddharth.kherada@research.iiit.ac.in Anoop M. Namboodiri IIIT-Hyderabad, India anoop@iiit.ac.in

More information

Gray-World assumption on perceptual color spaces. Universidad de Guanajuato División de Ingenierías Campus Irapuato-Salamanca

Gray-World assumption on perceptual color spaces. Universidad de Guanajuato División de Ingenierías Campus Irapuato-Salamanca Gray-World assumption on perceptual color spaces Jonathan Cepeda-Negrete jonathancn@laviria.org Raul E. Sanchez-Yanez sanchezy@ugto.mx Universidad de Guanajuato División de Ingenierías Campus Irapuato-Salamanca

More information

Estimation of Reflection Properties of Silk Textile with Multi-band Camera

Estimation of Reflection Properties of Silk Textile with Multi-band Camera Estimation of Reflection Properties of Silk Textile with Multi-band Camera Kosuke MOCHIZUKI*, Norihiro TANAKA**, Hideaki MORIKAWA* *Graduate School of Shinshu University, 12st116a@shinshu-u.ac.jp ** Faculty

More information

Equation to LaTeX. Abhinav Rastogi, Sevy Harris. I. Introduction. Segmentation.

Equation to LaTeX. Abhinav Rastogi, Sevy Harris. I. Introduction. Segmentation. Equation to LaTeX Abhinav Rastogi, Sevy Harris {arastogi,sharris5}@stanford.edu I. Introduction Copying equations from a pdf file to a LaTeX document can be time consuming because there is no easy way

More information

Abstract. 1. Introduction

Abstract. 1. Introduction Recovery of Chromaticity Image Free from Shadows via Illumination Invariance Mark S. Drew School of Computing Science Simon Fraser University Vancouver, British Columbia Canada V5A 1S6 mark@cs.sfu.ca Graham

More information

Visible and Long-Wave Infrared Image Fusion Schemes for Situational. Awareness

Visible and Long-Wave Infrared Image Fusion Schemes for Situational. Awareness Visible and Long-Wave Infrared Image Fusion Schemes for Situational Awareness Multi-Dimensional Digital Signal Processing Literature Survey Nathaniel Walker The University of Texas at Austin nathaniel.walker@baesystems.com

More information

Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems

Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Abstract In this paper we present a method for mirror shape recovery and partial calibration for non-central catadioptric

More information

HISTOGRAMS OF ORIENTATIO N GRADIENTS

HISTOGRAMS OF ORIENTATIO N GRADIENTS HISTOGRAMS OF ORIENTATIO N GRADIENTS Histograms of Orientation Gradients Objective: object recognition Basic idea Local shape information often well described by the distribution of intensity gradients

More information

Physics-based Vision: an Introduction

Physics-based Vision: an Introduction Physics-based Vision: an Introduction Robby Tan ANU/NICTA (Vision Science, Technology and Applications) PhD from The University of Tokyo, 2004 1 What is Physics-based? An approach that is principally concerned

More information

then assume that we are given the image of one of these textures captured by a camera at a different (longer) distance and with unknown direction of i

then assume that we are given the image of one of these textures captured by a camera at a different (longer) distance and with unknown direction of i Image Texture Prediction using Colour Photometric Stereo Xavier Lladó 1, Joan Mart 1, and Maria Petrou 2 1 Institute of Informatics and Applications, University of Girona, 1771, Girona, Spain fllado,joanmg@eia.udg.es

More information

Minimalist surface-colour matching

Minimalist surface-colour matching Perception, 2005, volume 34, pages 1007 ^ 1011 DOI:10.1068/p5185 Minimalist surface-colour matching Kinjiro Amano, David H Foster Computational Neuroscience Group, Faculty of Life Sciences, University

More information

Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model

Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model TAE IN SEOL*, SUN-TAE CHUNG*, SUNHO KI**, SEONGWON CHO**, YUN-KWANG HONG*** *School of Electronic Engineering

More information

Fast Spectral Reflectance Recovery using DLP Projector

Fast Spectral Reflectance Recovery using DLP Projector Fast Spectral Reflectance Recovery using DLP Projector Shuai Han 1, Imari Sato 2, Takahiro Okabe 1, and Yoichi Sato 1 1 Institute of Industrial Science, The University of Tokyo, Japan 2 National Institute

More information

Mixture Models and EM

Mixture Models and EM Mixture Models and EM Goal: Introduction to probabilistic mixture models and the expectationmaximization (EM) algorithm. Motivation: simultaneous fitting of multiple model instances unsupervised clustering

More information

Specular Reflection Separation using Dark Channel Prior

Specular Reflection Separation using Dark Channel Prior 2013 IEEE Conference on Computer Vision and Pattern Recognition Specular Reflection Separation using Dark Channel Prior Hyeongwoo Kim KAIST hyeongwoo.kim@kaist.ac.kr Hailin Jin Adobe Research hljin@adobe.com

More information

Color Vision. Spectral Distributions Various Light Sources

Color Vision. Spectral Distributions Various Light Sources Color Vision Light enters the eye Absorbed by cones Transmitted to brain Interpreted to perceive color Foundations of Vision Brian Wandell Spectral Distributions Various Light Sources Cones and Rods Cones:

More information

THE human visual system perceives the color of an

THE human visual system perceives the color of an 3612 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 21, NO. 8, AUGUST 2012 Chromaticity Space for Illuminant Invariant Recognition Sivalogeswaran Ratnasingam, Member, IEEE, and T. Martin McGinnity, Member,

More information

A Curious Problem with Using the Colour Checker Dataset for Illuminant Estimation

A Curious Problem with Using the Colour Checker Dataset for Illuminant Estimation A Curious Problem with Using the Colour Checker Dataset for Illuminant Estimation Graham D. Finlayson 1, Ghalia Hemrit 1, Arjan Gijsenij 2, Peter Gehler 3 1 School of Computing Sciences, University of

More information

Schedule for Rest of Semester

Schedule for Rest of Semester Schedule for Rest of Semester Date Lecture Topic 11/20 24 Texture 11/27 25 Review of Statistics & Linear Algebra, Eigenvectors 11/29 26 Eigenvector expansions, Pattern Recognition 12/4 27 Cameras & calibration

More information

A Bi-Illuminant Dichromatic Reflection Model for Understanding Images

A Bi-Illuminant Dichromatic Reflection Model for Understanding Images A Bi-Illuminant Dichromatic Reflection Model for Understanding Images Bruce A. Maxwell Tandent Vision Science, Inc. brucemaxwell@tandent.com Richard M. Friedhoff Tandent Vision Science, Inc. richardfriedhoff@tandent.com

More information

Sequential Maximum Entropy Coding as Efficient Indexing for Rapid Navigation through Large Image Repositories

Sequential Maximum Entropy Coding as Efficient Indexing for Rapid Navigation through Large Image Repositories Sequential Maximum Entropy Coding as Efficient Indexing for Rapid Navigation through Large Image Repositories Guoping Qiu, Jeremy Morris and Xunli Fan School of Computer Science, The University of Nottingham

More information

Towards Autonomous Vision Self-Calibration for Soccer Robots

Towards Autonomous Vision Self-Calibration for Soccer Robots Towards Autonomous Vision Self-Calibration for Soccer Robots Gerd Mayer Hans Utz Gerhard Kraetzschmar University of Ulm, James-Franck-Ring, 89069 Ulm, Germany Abstract The ability to autonomously adapt

More information

A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction

A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction Jaemin Lee and Ergun Akleman Visualization Sciences Program Texas A&M University Abstract In this paper we present a practical

More information

Abstract. 1 Introduction. Information and Communication Engineering July 2nd Fast Spectral Reflectance Recovery using DLP Projector

Abstract. 1 Introduction. Information and Communication Engineering July 2nd Fast Spectral Reflectance Recovery using DLP Projector Information and Communication Engineering July 2nd 2010 Fast Spectral Reflectance Recovery using DLP Projector Sato Laboratory D2, 48-097403, Shuai Han Abstract We present a practical approach to fast

More information

Linear Approximation of Sensitivity Curve Calibration

Linear Approximation of Sensitivity Curve Calibration Linear Approximation of Sensitivity Curve Calibration Dietrich Paulus 1 Joachim Hornegger 2 László Csink 3 Universität Koblenz-Landau Computational Visualistics Universitätsstr. 1 567 Koblenz paulus@uni-koblenz.de

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Part 9: Representation and Description AASS Learning Systems Lab, Dep. Teknik Room T1209 (Fr, 11-12 o'clock) achim.lilienthal@oru.se Course Book Chapter 11 2011-05-17 Contents

More information

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier Computer Vision 2 SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung Computer Vision 2 Dr. Benjamin Guthier 3. HIGH DYNAMIC RANGE Computer Vision 2 Dr. Benjamin Guthier Pixel Value Content of this

More information

Color and Color Constancy in a Translation Model for Object Recognition

Color and Color Constancy in a Translation Model for Object Recognition Color and Color Constancy in a Translation Model for Object Recognition Kobus Barnard 1 and Prasad Gabbur 2 1 Department of Computer Science, University of Arizona Email: kobus@cs.arizona.edu 2 Department

More information

Color Content Based Image Classification

Color Content Based Image Classification Color Content Based Image Classification Szabolcs Sergyán Budapest Tech sergyan.szabolcs@nik.bmf.hu Abstract: In content based image retrieval systems the most efficient and simple searches are the color

More information

CS231A Section 6: Problem Set 3

CS231A Section 6: Problem Set 3 CS231A Section 6: Problem Set 3 Kevin Wong Review 6 -! 1 11/09/2012 Announcements PS3 Due 2:15pm Tuesday, Nov 13 Extra Office Hours: Friday 6 8pm Huang Common Area, Basement Level. Review 6 -! 2 Topics

More information

Exploiting Spatial and Spectral Image Regularities for Color Constancy

Exploiting Spatial and Spectral Image Regularities for Color Constancy Exploiting Spatial and Spectral Image Regularities for Color Constancy Barun Singh and William T. Freeman MIT Computer Science and Artificial Intelligence Laboratory David H. Brainard University of Pennsylvania

More information

2006 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media,

2006 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, 6 IEEE Personal use of this material is permitted Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising

More information

Data Term. Michael Bleyer LVA Stereo Vision

Data Term. Michael Bleyer LVA Stereo Vision Data Term Michael Bleyer LVA Stereo Vision What happened last time? We have looked at our energy function: E ( D) = m( p, dp) + p I < p, q > N s( p, q) We have learned about an optimization algorithm that

More information

Contrast adjustment via Bayesian sequential partitioning

Contrast adjustment via Bayesian sequential partitioning Contrast adjustment via Bayesian sequential partitioning Zhiyu Wang, Shuo Xie, Bai Jiang Abstract Photographs taken in dim light have low color contrast. However, traditional methods for adjusting contrast

More information

Characterizing and Controlling the. Spectral Output of an HDR Display

Characterizing and Controlling the. Spectral Output of an HDR Display Characterizing and Controlling the Spectral Output of an HDR Display Ana Radonjić, Christopher G. Broussard, and David H. Brainard Department of Psychology, University of Pennsylvania, Philadelphia, PA

More information

Reproduction Angular Error: An Improved Performance Metric for Illuminant Estimation

Reproduction Angular Error: An Improved Performance Metric for Illuminant Estimation FINLAYSON, ZAKIZADEH: REPRODUCTION ANGULAR ERROR 1 Reproduction Angular Error: An Improved Performance Metric for Illuminant Estimation Graham D. Finlayson g.finlayson@uea.ac.uk Roshanak Zakizadeh r.zakizadeh@uea.ac.uk

More information

Evaluating Colour-Based Object Recognition Algorithms Using the SOIL-47 Database

Evaluating Colour-Based Object Recognition Algorithms Using the SOIL-47 Database ACCV2002: The 5th Asian Conference on Computer Vision, 23 25 January 2002,Melbourne, Australia Evaluating Colour-Based Object Recognition Algorithms Using the SOIL-47 Database D. Koubaroulis J. Matas J.

More information

Level lines based disocclusion

Level lines based disocclusion Level lines based disocclusion Simon Masnou Jean-Michel Morel CEREMADE CMLA Université Paris-IX Dauphine Ecole Normale Supérieure de Cachan 75775 Paris Cedex 16, France 94235 Cachan Cedex, France Abstract

More information

Hybrid Textons: Modeling Surfaces with Reflectance and Geometry

Hybrid Textons: Modeling Surfaces with Reflectance and Geometry Hybrid Textons: Modeling Surfaces with Reflectance and Geometry Jing Wang and Kristin J. Dana Electrical and Computer Engineering Department Rutgers University Piscataway, NJ, USA {jingwang,kdana}@caip.rutgers.edu

More information

Forensic Image Recognition using a Novel Image Fingerprinting and Hashing Technique

Forensic Image Recognition using a Novel Image Fingerprinting and Hashing Technique Forensic Image Recognition using a Novel Image Fingerprinting and Hashing Technique R D Neal, R J Shaw and A S Atkins Faculty of Computing, Engineering and Technology, Staffordshire University, Stafford

More information

Classifying Images with Visual/Textual Cues. By Steven Kappes and Yan Cao

Classifying Images with Visual/Textual Cues. By Steven Kappes and Yan Cao Classifying Images with Visual/Textual Cues By Steven Kappes and Yan Cao Motivation Image search Building large sets of classified images Robotics Background Object recognition is unsolved Deformable shaped

More information

Hand-Eye Calibration from Image Derivatives

Hand-Eye Calibration from Image Derivatives Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed

More information

Locating 1-D Bar Codes in DCT-Domain

Locating 1-D Bar Codes in DCT-Domain Edith Cowan University Research Online ECU Publications Pre. 2011 2006 Locating 1-D Bar Codes in DCT-Domain Alexander Tropf Edith Cowan University Douglas Chai Edith Cowan University 10.1109/ICASSP.2006.1660449

More information

Low Cost Motion Capture

Low Cost Motion Capture Low Cost Motion Capture R. Budiman M. Bennamoun D.Q. Huynh School of Computer Science and Software Engineering The University of Western Australia Crawley WA 6009 AUSTRALIA Email: budimr01@tartarus.uwa.edu.au,

More information