Illumination-independent descriptors using color moment invariants

Bing Li, De Xu, and Songhe Feng
Beijing Jiaotong University, Institute of Computer Science & Engineering, Beijing, China

Weihua Xiong
OmniVision Technologies, 1341 Orleans Drive, Sunnyvale, California

Abstract. Color-based object indexing and matching is attractive because color is an efficient visual cue for characterizing an object. However, different light sources lead to color deformations of objects, which inevitably degrade recognition performance. To circumvent this confounding influence, we present three effective illumination-independent descriptors that are substantially insensitive to variations in lighting conditions, object geometric transformation, and image blur level. To this end, we define two new two-dimensional color coordinate systems, the central color coordinate system and the edge-based color coordinate system, based on the diagonal-offset reflectance model. By introducing normalized moment invariants, the paper then provides a color-constant descriptor in each system. Each descriptor is a feature vector consisting of several normalized moment invariants, of different orders, of the color pixel distribution. Furthermore, the combination of the two descriptors characterizes not only the color content but also the color edges of an image, so it serves as a third descriptor. Experiments on real image databases with object recognition and image retrieval show that our descriptors are robust and perform very well under various circumstances. © 2009 Society of Photo-Optical Instrumentation Engineers.

Subject terms: illumination-independent descriptor; color invariants; color constancy; moment invariant; blur-robust.

Paper received Jul. 26, 2008; revised manuscript received Jan. 5, 2009; accepted for publication Jan. 7, 2009; published online Feb. 19, 2009.

1 Introduction

Color has proven to be a simple, straightforward tool for object matching, but different light sources can give the same object different colors. Consequently, to index objects successfully, the effect of illumination color variation should be canceled out. There are two major approaches. The first estimates the illumination characteristics and directly maps the image to its appearance under a canonical illumination. Although a variety of illumination estimation methods have been proposed over the past decades, [1,2] their performance and generality are not sufficient for systematic application to recognition among a large number of objects. [3] The second approach represents images by features that are independent of the light source, so that recognition does not depend on the performance of a color constancy algorithm. A number of techniques in this category have been reported in the literature. Swain and Ballard [15] developed an indexing scheme that recognizes objects by color histogram intersection. Although this method is insensitive to geometric transformation, its performance inevitably degrades when the lighting conditions change. To address this problem, Funt and Finlayson [4] derived a set of color-constant derivatives based on a physical reflection model. They presented an algorithm, named color-constant color indexing (CCCI), that matches histograms of color ratios between neighboring pixels.
Gevers and Smeulders [5] extended the CCCI technique to account for the effects of both illumination color and shading. Adjeroh and Lee [6] proposed another color-ratio-based feature, calculated by integrating the variation between each pixel and its neighbors. van de Weijer and Schmid [8] introduced the ratio of image derivatives into edge-based color-constant descriptors. Although these methods have been shown to be superior to Swain and Ballard's in the presence of illumination change, they work along image edges while ignoring the color content of the images themselves. When an image is blurred and edge sharpness is degraded, the derivatives are easily affected by noise in darker regions and are close to zero in blurred or uniform regions. Consequently, these methods are too sensitive to image quality, especially for the blurred images frequently produced by poor focus, relative motion between camera and object, and other causes. Another category of illumination-independent descriptors is based on image color information. Healey and Slater [7] gave an object recognition algorithm using high-order color distribution information. Finlayson et al. [9] sorted the pixels in increasing order of their RGB responses and associated them with a rank measure; they proved that such rank measures are invariant to illumination change and can be used as image descriptors for object matching. Muselet et al. [10] found that the intersection of ranks estimated this way does not provide satisfactory results; they introduced the concept of fuzzy spatial rank and used the histogram of such ranks to characterize the image for object recognition. Based on a blackbody model,

Finlayson et al. [11] proposed another descriptor, valid under outdoor illumination, for shadow removal. But this method requires strict assumptions about the camera and light sources, and it requires calibrating the camera's parameters, which is inconvenient for practical applications. Moment invariants have also been used to obtain color invariants by Mindru et al. [12] However, they pay attention mainly to image color content, ignoring the contour and edge information in the image.

In this paper, we propose two new two-dimensional color coordinate systems derived from the more general diagonal-offset model. One is based on image color content; the other on image edge information. In these two systems, two illumination-independent descriptors, represented by feature vectors, are obtained after introducing normalized moment invariants. Each descriptor includes several normalized moment invariants of different orders, each summarizing the shape of the color distribution of the image. Furthermore, the combination of the two descriptors characterizes not only color content but also color edges. Experiments using several real image databases show that our scheme is robust to illumination change, affine transformation, and image blur and works very well in various situations.

The remainder of this paper is organized as follows. In Sec. 2, the diagonal-offset model is described. In Sec. 3, we explain the details of the illumination-independent descriptors using color moment invariants. Section 4 discusses the robustness of the proposed descriptors. The experimental results are presented in Sec. 5, and Sec. 6 concludes the paper.

2 Diagonal-Offset Model

According to the Lambertian reflectance model, the image f = (R, G, B)^T can be computed as

f(X) = \sigma \int_{\omega} e(\lambda) S(X, \lambda) \rho(\lambda)\, d\lambda,    (1)

where X is the spatial coordinate, \lambda is the wavelength, \omega is the visible spectrum, e(\lambda) is the spectral power distribution of the light source, S(X, \lambda) is the surface reflectance, \rho(\lambda) = (\rho_R, \rho_G, \rho_B) is the camera sensitivity function of the three responses, and \sigma is a shading factor depending on the angle between the normal to a surface patch and the illumination direction.

Because the pure Lambertian model is unrealistic, Shafer [13] proposed adding a diffuse light term to this model; the diffuse light has a lower intensity and comes equally from all directions: [13]

f(X) = \sigma \int_{\omega} e(\lambda) S(X, \lambda) \rho(\lambda)\, d\lambda + \int_{\omega} a(\lambda) \rho(\lambda)\, d\lambda,    (2)

where a(\lambda) is the function that models the diffuse light. This equation can model objects under daylight well, since daylight consists of both a point source (the sun) and diffuse light coming from the sky. So this model is much better for natural images and much more robust than Eq. (1).

The aim of many color constancy applications is to transform all colors of an input image f^1, taken under a light source e^1, to the colors of an image f^2 as they would appear under a reference light e^2. This transformation can be modeled by a diagonal model called the von Kries model: [1]

f^1 = D^{1,2} f^2,    (3)

where D^{1,2} is a diagonal matrix. In the (R, G, B)^T color space, the transformation can be written as

\begin{pmatrix} R_1 \\ G_1 \\ B_1 \end{pmatrix} = \begin{pmatrix} \alpha & 0 & 0 \\ 0 & \beta & 0 \\ 0 & 0 & \gamma \end{pmatrix} \begin{pmatrix} R_2 \\ G_2 \\ B_2 \end{pmatrix}, \qquad \alpha = e_R^1 / e_R^2, \quad \beta = e_G^1 / e_G^2, \quad \gamma = e_B^1 / e_B^2,    (4)

where (e_R^1, e_G^1, e_B^1) and (e_R^2, e_G^2, e_B^2) are the light colors of illuminants e^1 and e^2, respectively. However, this diagonal model is too strict: it cannot describe some conditions, for example saturated colors. To overcome these problems, Finlayson et al. proposed a more robust diagonal-offset model by adding an offset term to the diagonal model: [14]

\begin{pmatrix} R_1 \\ G_1 \\ B_1 \end{pmatrix} = \begin{pmatrix} \alpha & 0 & 0 \\ 0 & \beta & 0 \\ 0 & 0 & \gamma \end{pmatrix} \begin{pmatrix} R_2 \\ G_2 \\ B_2 \end{pmatrix} + \begin{pmatrix} o_1 \\ o_2 \\ o_3 \end{pmatrix}.    (5)

Interestingly, the diagonal-offset model also takes diffuse lighting into account, consistently with Eq. (2). Thus an illumination color change can be regarded as a per-channel scaling combined with a per-channel offset.
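As a minimal Python sketch (not part of the original paper), Eq. (5) can be used to synthesize an illumination change; the scaling and offset values below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def diagonal_offset(image, scale, offset):
    """Eq. (5): per-channel scaling (alpha, beta, gamma) plus a
    per-channel offset (o1, o2, o3)."""
    return image.astype(float) * np.asarray(scale, float) + np.asarray(offset, float)

# Illustrative values: a reddish illuminant shift with a small diffuse offset.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 255.0, size=(4, 4, 3))      # toy RGB image
shifted = diagonal_offset(img, (1.2, 1.0, 0.8), (5.0, 5.0, 5.0))
```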
3 Illumination-Independent Descriptors Using Moment Invariants

3.1 Two New Color Coordinate Systems (Central and Edge-Based)

Here we introduce two novel two-dimensional color coordinate systems, each removing the additional offset term in Eq. (5) by a different method. The first, called the central color coordinate system or chc, is defined by subtracting from each pixel the channel average of the image:

\mathrm{chc}_R = R - \bar{R}, \quad \mathrm{chc}_G = G - \bar{G}, \quad \mathrm{chc}_B = B - \bar{B},    (6)

where \bar{R} (\bar{G}, \bar{B}) is the average pixel value of the whole image in the R (G, B) channel. In order to factor out the color intensity information, as in Ref. 16, we express it in two dimensions:

\mathrm{chc} = (\mathrm{chc}_r, \mathrm{chc}_g) = \left( \frac{\mathrm{chc}_R}{\mathrm{chc}_B}, \frac{\mathrm{chc}_G}{\mathrm{chc}_B} \right).    (7)

Here chc_r and chc_g are the two color components of the central color coordinate system chc.
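A sketch of the central color coordinates of Eqs. (6)-(7) in Python (our own illustration; the zero-division guard is an assumption the paper does not specify):

```python
import numpy as np

def chc_coordinates(image):
    """Eqs. (6)-(7): subtract the per-channel image mean, then take
    chc_r = chc_R / chc_B and chc_g = chc_G / chc_B."""
    centered = image.astype(float) - image.reshape(-1, 3).mean(axis=0)
    r, g, b = centered[..., 0], centered[..., 1], centered[..., 2]
    b_safe = np.where(np.abs(b) < 1e-8, 1e-8, b)   # guard: not specified in the paper
    return r / b_safe, g / b_safe
```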

The second new color coordinate system is obtained by using color derivatives to remove the offset term. We name it the edge-based color coordinate system, or che:

\mathrm{che}_R = \frac{dR}{dx}, \quad \mathrm{che}_G = \frac{dG}{dx}, \quad \mathrm{che}_B = \frac{dB}{dx},    (8)

where d/dx denotes first-order spatial differentiation over pixel neighborhoods. The two-dimensional che is then defined as

\mathrm{che} = (\mathrm{che}_r, \mathrm{che}_g) = \left( \frac{\mathrm{che}_R}{\mathrm{che}_B}, \frac{\mathrm{che}_G}{\mathrm{che}_B} \right).    (9)

It is easy to verify that both new two-dimensional color coordinate systems still conform to a diagonal transformation model:

\begin{pmatrix} \mathrm{chc}_r^1 \\ \mathrm{chc}_g^1 \end{pmatrix} = \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} \begin{pmatrix} \mathrm{chc}_r^2 \\ \mathrm{chc}_g^2 \end{pmatrix},    (10)

\begin{pmatrix} \mathrm{che}_r^1 \\ \mathrm{che}_g^1 \end{pmatrix} = \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} \begin{pmatrix} \mathrm{che}_r^2 \\ \mathrm{che}_g^2 \end{pmatrix},    (11)

where a = \alpha/\gamma and b = \beta/\gamma. Here (chc_r^1, chc_g^1) and (chc_r^2, chc_g^2) represent the colors in the chc coordinate system under illuminants e^1 and e^2, respectively, and (che_r^1, che_g^1) and (che_r^2, che_g^2) the corresponding colors in the che coordinate system.

3.2 Normalized Moment Invariants

Having defined the two new color coordinate systems from the diagonal-offset model, we introduce a normalized color moment invariant, which is the basis of the illumination-independent descriptors. The (u+v)-order color moment M_{uv}^C of the image f is defined as

M_{uv}^C = \iint r^u g^v\, p(r, g)\, dr\, dg,    (12)

where C \in \{\mathrm{chc}, \mathrm{che}\} is the selected color space; (r, g) = (chc_r, chc_g) when C = chc, and (r, g) = (che_r, che_g) when C = che. The density function p(r, g) is defined as the fraction of each color value in f:

p(r, g) = \frac{\mathrm{Num}(r, g)}{\mathrm{pixnum}},    (13)

where pixnum is the number of pixels of f and Num(r, g) is the total number of pixels with value (r, g)^T in the image.

According to Eqs. (10)-(13), the relationship between M_{uv}^{C,1}, the color moment of (r_1, g_1) under illuminant e^1, and M_{uv}^{C,2}, the color moment of (r_2, g_2) under illuminant e^2, can be obtained as

M_{uv}^{C,1} = \iint r_1^u g_1^v\, p(r_1, g_1)\, dr_1\, dg_1 = a^{u+1} b^{v+1} M_{uv}^{C,2},    (14)

where J = \det \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} = ab is the Jacobian determinant of the coordinate change. The scale factors a, b describe the change of lighting color, so they must be removed from Eq. (14) to obtain moment invariants. Consequently, the normalized moment invariants are defined as

\eta_{uv}^C = \frac{(M_{00}^C)^{(u+v+2)/2}}{(M_{20}^C)^{(u+1)/2} (M_{02}^C)^{(v+1)/2}}\, M_{uv}^C.    (15)

In Eq. (15), all scale factors for each color band cancel, so \eta_{uv}^C stays constant across a change of illumination.

3.3 Implementation of Moment Invariants in Discrete Space

The derivation in the previous subsection is based on continuous space. In this section, we discuss how to implement the moment invariants in a discrete color space: the integral is replaced with a sum and dr dg is discarded. Consequently, the moment M_{uv}^C of Eq. (12) is implemented as DM_{uv}^C in discrete color space:

DM_{uv}^C = \sum_{r=0}^{r_{\max}} \sum_{g=0}^{g_{\max}} r^u g^v\, p(r, g),    (16)

where r_max and g_max represent the maxima of r and g in the specific image. The relationship of Eq. (14) between the moments under the two illuminants now becomes

DM_{uv}^{C,1} = \sum_{r_2} \sum_{g_2} (a r_2)^u (b g_2)^v\, p(r_2, g_2) = a^u b^v\, DM_{uv}^{C,2}.    (17)

So in discrete color space DM_{00}^C = 1, and the normalized moment invariants are computed as

\eta_{uv}^C = \frac{DM_{uv}^C}{(DM_{20}^C)^{u/2} (DM_{02}^C)^{v/2}}.    (18)
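The edge-based coordinates and the discrete invariants of Eqs. (16) and (18) can be sketched the same way (again our own illustration; np.gradient stands in for the unspecified derivative filter):

```python
import numpy as np

def che_coordinates(image):
    """Eqs. (8)-(9): first-order spatial derivatives per channel,
    then the ratios che_R / che_B and che_G / che_B."""
    deriv = np.gradient(image.astype(float), axis=1)   # d/dx along rows
    r, g, b = deriv[..., 0], deriv[..., 1], deriv[..., 2]
    b_safe = np.where(np.abs(b) < 1e-8, 1e-8, b)       # zero-guard, as above
    return r / b_safe, g / b_safe

def normalized_moment(r, g, u, v):
    """Eq. (18): eta_uv = DM_uv / (DM_20**(u/2) * DM_02**(v/2)), where
    DM_uv (Eq. (16)) is the p(r,g)-weighted sum of r**u * g**v, i.e.
    simply the mean of r**u * g**v over all pixels."""
    r, g = np.ravel(r), np.ravel(g)
    dm = lambda a, b: np.mean(r**a * g**b)
    return dm(u, v) / (dm(2, 0)**(u / 2) * dm(0, 2)**(v / 2))
```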

3.4 Illumination-Independent Descriptors

Having obtained the normalized moment invariants, we now discuss how to use them to construct an illumination-independent descriptor of an image. To ameliorate the effect of noise, the descriptor should include as many low-order moment invariants as possible, keeping the moment order u+v low. Following these two principles, ten candidate moment invariants are used to compose the image description vector J^C in this paper. Because \eta_{02}^C = \eta_{20}^C = 1 according to Eq. (18), these two moment invariants are not used in J^C:

J^C = (\eta_{01}^C, \eta_{10}^C, \eta_{11}^C, \eta_{12}^C, \eta_{21}^C, \eta_{22}^C, \eta_{03}^C, \eta_{30}^C, \eta_{13}^C, \eta_{31}^C).    (19)

Depending on the selection of C, this color-constant descriptor framework J^C can be decomposed as follows:

Image color-constant descriptor OJ = J^chc. In this case, the central color coordinate system chc is used to compute the moment invariants; this descriptor describes the color features of the image.

Edge image color-constant descriptor EJ = J^che. This descriptor is defined on the edge-based color coordinate system, i.e., C = che, so J^che is an edge-derived descriptor.

Combined color-constant descriptor CJ = (OJ, EJ). It is composed of the image color-constant descriptor OJ and the edge image descriptor EJ, and can thus describe not only the color content but also the color edges of the image.
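Assembling Eq. (19) from the helpers sketched above (our own illustration; the ordering follows Eq. (19)):

```python
import numpy as np

# The ten (u, v) orders of Eq. (19); eta_20 and eta_02 are identically 1
# and are therefore excluded.
ORDERS = [(0, 1), (1, 0), (1, 1), (1, 2), (2, 1),
          (2, 2), (0, 3), (3, 0), (1, 3), (3, 1)]

def descriptor(r, g):
    """Ten-dimensional invariant vector J^C for one coordinate system."""
    return np.array([normalized_moment(r, g, u, v) for u, v in ORDERS])

def oj_ej_cj(image):
    """OJ = J^chc, EJ = J^che, CJ = their concatenation (Sec. 3.4)."""
    oj = descriptor(*chc_coordinates(image))
    ej = descriptor(*che_coordinates(image))
    return oj, ej, np.concatenate([oj, ej])
```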
4 Robustness Analysis

Various illumination-independent descriptors have been proposed. Evaluating and analyzing them should adhere to the following criteria:

1. Photometric robustness: the descriptor should be independent of the spectral power distribution (SPD) of the illumination. The normalized moments defined in our color spaces guarantee that the proposed descriptors meet this requirement.

2. Geometric robustness: the descriptor should be invariant with respect to geometric changes, such as zoom, rotation, and the affine transformations caused by a change of viewpoint. Our descriptors are global descriptors, computed from the color distribution of the whole image, so they are not affected by geometric changes.

3. Photometric stability: the descriptor should adequately handle instabilities in the imaging process, of which blurred images at different levels are the most frequently encountered. In this paper, we always assume that image blur is caused by the point-spread function (PSF) of the imaging system and ignore any blur due to motion.

The central color coordinate system is defined on the color content of the image, so the descriptor using this coordinate system generally has good robustness. The edge-based color coordinate system, however, is based on image derivatives, which may be affected by the image blur level. Is it still robust? The following analysis yields a positive answer.

To simulate the PSF of a blurred image, a Gaussian filter G_\sigma with standard deviation \sigma is applied; the image f(x) becomes a blurred one on convolution with the Gaussian filter, f(x) \otimes G_\sigma(x). We assume that an edge can be modeled by a step edge [8,17], f(x) = A^I u(x) + b, where A^I indicates the amplitude of the step edge for channel I and u(x) is the unit step. Taking the R and B channels for instance, the value of che_r in the edge-based color coordinate system after blurring is

\mathrm{che}_r = \frac{\mathrm{che}_R}{\mathrm{che}_B} = \frac{R_x}{B_x} = \frac{\partial_x [(A^R u(x) + b) \otimes G_\sigma(x)]}{\partial_x [(A^B u(x) + b) \otimes G_\sigma(x)]} = \frac{A^R\, \delta(x) \otimes G_\sigma(x)}{A^B\, \delta(x) \otimes G_\sigma(x)},    (20)

where the derivative of a step edge is equal to the delta function, \partial u(x)/\partial x = \delta(x). Let us consider the ratio response exactly at the edge [17], x = 0. The preceding equation is then equivalent to

\mathrm{che}_r = \frac{A^R G_\sigma(0)}{A^B G_\sigma(0)} = \frac{A^R}{A^B}.    (21)

This result clearly shows that che_r, and likewise che_g, is independent of \sigma and therefore robust to Gaussian smoothing. Consequently, the edge-based color coordinate system is robust to changes of blur.
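A numeric check of Eqs. (20)-(21) (our own illustration, with assumed amplitudes A_R = 3 and A_B = 2): the derivative ratio at a blurred step edge does not depend on the blur level:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.zeros(201)
x[100:] = 1.0                        # unit step edge at the center
R, B = 3.0 * x + 0.5, 2.0 * x + 0.5  # A_R = 3, A_B = 2, offset b = 0.5

for sigma in (1.0, 3.0, 6.0):
    dR = np.gradient(gaussian_filter1d(R, sigma))
    dB = np.gradient(gaussian_filter1d(B, sigma))
    print(sigma, dR[100] / dB[100])  # prints 1.5 = A_R / A_B for every sigma
```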

5 Experiments

To evaluate the performance of the proposed descriptors, object recognition and image retrieval experiments based on two image databases are conducted with respect to robustness to three conditions: light source change, geometric affine transformation, and image instability. Group A consists of 172 images of 17 scenes under 11 different light sources, picked out from the 321 images of 30 scenes [2] after removing some darker images. Group B consists of 220 images of 20 scenes under 11 light sources [18], with rotation and viewpoint changes. Both can be downloaded from Refs. 19 and 20. Figure 1 shows some example images from the two groups. The images in both groups will also be synthetically transformed to simulate affine transformation and different blur levels.

Fig. 1 Examples from the image data sets. First row: examples from group A. Second row: examples from group B.

5.1 Experiments on Object Recognition

In this subsection, the experiment on the proposed illumination-independent descriptors considers the performance under both synthetic transformations and real changes in illumination. Recognition of a pattern is performed by means of a k-nearest-neighbor (KNN) classification scheme based on the feature vectors of moment invariants, and performance is assessed by the recognition ratio (RR). Here k means we use the first k matches in the ascending sort of matching distances: if k = 1, the first-ranked image is selected as the matching object; otherwise, the most numerously matched object is selected. In this paper we set k to 1, 3, 5, and 7, and also compute the average RR over these four values of k.

5.1.1 Robustness to illumination color

Here we test the image descriptors with respect to robustness to illumination color variations. Since the images in the two groups are taken under 11 light sources, the colors of the same object differ widely, as shown in Fig. 1: images of the same white paper under three light sources appear white, blue, and red, respectively. The performance of the three proposed descriptors is compared with four other illumination-invariant descriptors [4,8,15]:

P = (p_1, p_2, p_3) = \left( \frac{R_x}{R}, \frac{G_x}{G}, \frac{B_x}{B} \right),    (22)

m = (m_1, m_2) = \left( \frac{R_x G - G_x R}{RG}, \frac{G_x B - B_x G}{GB} \right),    (23)

\hat{p} = (\hat{p}_1, \hat{p}_2) = \left( \arctan \frac{p_1}{p_2}, \arctan \frac{p_2}{p_3} \right),    (24)

\hat{m} = \arctan \frac{m_1}{m_2}.    (25)

Histograms of these four descriptors are constructed to represent an image: three dimensions for P, two for m and p̂, and one for m̂. Each dimension of each descriptor is divided into 16 equal bins, and uniform quantization is adopted to construct the descriptor vector. The recognition performance is estimated by means of a leave-one-out procedure. [12] The RRs of the proposed descriptors for the different values of k are shown in Table 1.

Table 1 Performance comparison of the proposed descriptors (OJ, EJ, CJ) with four existing descriptors (P, p̂, m, m̂) in terms of robustness to illumination color: RR (%) at k = 1, 3, 5, 7, the number of feature dimensions, and the mean RR (%), for image groups A and B.
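A sketch of the leave-one-out KNN evaluation described above (illustrative; ties among the k neighbors are broken arbitrarily, which the paper does not specify):

```python
import numpy as np

def recognition_ratio(features, labels, k):
    """Leave-one-out k-nearest-neighbor recognition ratio (RR):
    match each image against all others by Euclidean distance
    and predict the majority label among the k nearest."""
    feats, labels = np.asarray(features, float), np.asarray(labels)
    correct = 0
    for i in range(len(feats)):
        dist = np.linalg.norm(feats - feats[i], axis=1)
        dist[i] = np.inf                       # leave the query out
        nearest = labels[np.argsort(dist)[:k]]
        values, counts = np.unique(nearest, return_counts=True)
        correct += values[np.argmax(counts)] == labels[i]
    return correct / len(feats)
```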

5.1.2 Robustness to affine transformation

In this experiment, we test robustness to geometric affine transformations, which are usually encountered in real-world images. To simulate a geometric affine transformation, each image pixel location (x, y) is changed into (x, y'), where y' = s·x + y. That is, the new vertical coordinate is acquired by a shift of s·x pixels along the vertical direction while the horizontal coordinate is kept untouched; see the code sketch after this subsection. The scale factor s is chosen from five different values: 0.2, 0.4, 0.6, 0.8, 1. Some transformed example images are shown in Fig. 2.

Fig. 2 First row: affine transformation examples with different values of s. Second row: examples blurred with Gaussian filters of different \sigma.

For each scale factor s, the transformed images compose the training set, while the original images are used as test ones. That is, we use the transformed images to recognize the original ones based on the OJ, EJ, and CJ descriptors, respectively. The RR as a function of s for different values of k is shown in Figs. 3 and 4. From these two graphs, we can conclude that the RR of each descriptor shows nearly no loss as s increases. The numerical details of the affine transformation experiment are given in Table 2. The maximum change ranges of OJ, EJ, and CJ are 0%, 5.1%, and 1.2%, respectively, for image group A, and 2.3%, 2.8%, and 3.2%, respectively, for image group B. So OJ, EJ, and CJ are all robust to affine transformation.

Fig. 3 Object recognition ratio for image group A as a function of the affine transformation.

Fig. 4 Object recognition ratio for image group B as a function of the affine transformation.

Table 2 Object recognition ratio change range under affine transformation for each descriptor at k = 1, 3, 5, 7; MAX is the maximal change among the defined ranges.
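The vertical shear y' = s·x + y can be simulated directly (a sketch; pixels shifted outside the frame are simply dropped):

```python
import numpy as np

def vertical_shear(image, s):
    """Map each pixel (x, y) to (x, s*x + y): shift column x down by
    round(s*x) pixels; the horizontal coordinate is unchanged."""
    h = image.shape[0]
    out = np.zeros_like(image)
    for x in range(image.shape[1]):
        shift = int(round(s * x))
        src = image[:max(h - shift, 0), x]     # rows that stay inside the frame
        out[shift:shift + len(src), x] = src
    return out
```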

5.1.3 Robustness to Gaussian blur

Next we evaluate the performance of the proposed descriptors with respect to changing blur level. To simulate the blur change, Gaussian smoothing with standard deviation \sigma is applied as the PSF of the imaging device before processing the images in groups A and B, with \sigma selected from {2, 4, 6, 8, 10}. Some blurred example images are shown in Fig. 2. As in the previous experiment, the original images serve as the test set while the blurred images are used as the training set. The performance variations in terms of RR for k = 1, 3, 5, 7 are shown in Figs. 5 and 6 for groups A and B, respectively. They show that the performance of OJ, EJ, and CJ stays stable as \sigma increases. The numerical results are given in Table 3. The maximal reductions in performance are only 2.3% for image group A and 5.9% for image group B. Thus, the descriptors proposed here are very robust to blur. In Sec. 4, the robustness to blur of descriptor EJ was proved theoretically; the experimental results here confirm this conclusion.

Fig. 5 Object recognition ratio for image group A as a function of image blur level.

Fig. 6 Object recognition ratio for image group B as a function of image blur level.

Table 3 Object recognition ratio change range under Gaussian blur for each descriptor at k = 1, 3, 5, 7; MAX is the maximal change among the defined ranges.
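Simulating the blur sweep (a sketch using SciPy's Gaussian filter; channels are blurred independently, matching the per-channel PSF assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur(image, sigma):
    """Gaussian PSF blur: one sigma for both spatial axes, none
    across the color axis."""
    return gaussian_filter(image.astype(float), sigma=(sigma, sigma, 0))

img = np.random.default_rng(1).uniform(0, 255, (64, 64, 3))
blurred = {s: blur(img, s) for s in (2, 4, 6, 8, 10)}
```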

5.2 Experiments on Image Retrieval

In this subsection, we evaluate the performance of the three proposed descriptors in an image retrieval task. We use the images from group B and follow the same experimental procedure as in Ref. 8. The performance is assessed and compared by reviewing the rank results of the correct matches, which indicate the number of correctly retrieved images. The normalized average rank [8] for a single query is defined as

\mathrm{NAR} = \frac{1}{N N_R} \left( \sum_{i=1}^{N_R} R_i - \frac{N_R (N_R + 1)}{2} \right),    (26)

where N is the total number of images in the database, N_R is the number of images relevant to the query, and R_i is the rank at which the i'th relevant image is retrieved. The smaller the NAR, the better the retrieval result: perfect retrieval occurs when NAR = 0, and random retrieval results in NAR = 0.5. The average NAR over all queries, ANAR, is also given. We also compare the performance of our proposed descriptors with that of the four descriptors of Sec. 5.1.1, in two respects: robustness to illumination color change and robustness to natural blurring effects.
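Eq. (26) in code (a sketch; ranks are the 1-based positions of the relevant images in the retrieval ordering):

```python
def nar(ranks, n_total):
    """Normalized average rank, Eq. (26)."""
    n_rel = len(ranks)
    return (sum(ranks) - n_rel * (n_rel + 1) / 2) / (n_rel * n_total)

# Perfect retrieval: relevant images at ranks 1..N_R gives NAR = 0.
assert nar([1, 2, 3], 100) == 0.0
```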

5.2.1 Comparison of robustness to illumination change

We first compare the descriptors with respect to robustness to illumination color variations. For each of the 20 scenes, the first image is picked out as the query image and the remaining ones are used as the image database. Therefore, for each query image, there are 10 relevant images of the same object, under different light sources and orientations. The results are summarized in Table 4. The descriptor P performs best in terms of ANAR, and the performance of our descriptors is comparable to that of the others. In particular, the descriptor CJ has ANAR = 0.011, which outperforms p̂, m, and m̂. Although CJ's ANAR is larger than P's, CJ uses far fewer dimensions than P, so it is much easier to implement.

Table 4 Ranks and ANAR for robustness to illumination color: for each descriptor, the number of retrieved images falling in the rank ranges 1-10, 11-20, and above 20, the ANAR, the feature dimensions, and whether the descriptor is robust to illumination change. The results for the descriptors marked with an asterisk are from Ref. 8.

5.2.2 Comparison of robustness to real blurring effects

In the experiment in Sec. 5.1, we synthetically simulated natural image blurring effects. In this subsection, a natural image data set with real blurring effects is used. [8,21] This data set includes 20 pairs of images; each pair consists of two images of the same scene, one clear and the other blurred, the blur being caused by changing the acquisition parameters, such as the shutter time and aperture. Figure 7 gives some example images from this data set. As in Ref. 8, the clear (nonblurred) image of each scene is selected as the query image, and the others compose the image database for retrieval; consequently, for each query there is only one matching image in the database. Table 5 provides the comparison results. All of our descriptors work well at various blur levels, while the descriptors P and m are not robust, as has been reported in Ref. 8. In particular, the combined descriptor CJ outperforms all the other descriptors: its ANAR is 44.4% smaller than that of the second-best descriptor, p̂. These experiments show that, although real-world blurring effects are often not well modeled by a predefined Gaussian smoothing, the proposed blur-robust descriptor CJ still obtains good results.

Fig. 7 Examples from the natural-blur image data set: (a), (b) out-of-focus blur; (c) change of focus from foreground to background.

Table 5 Ranks and ANAR for robustness to natural blurring effects: for each descriptor, the rank of the correct match, the ANAR, the feature dimensions, and whether the descriptor is robust to blurring. The results for the descriptors marked with an asterisk are from Ref. 8.

6 Conclusion

Object recognition is a fundamental task in computer vision, and color can provide valuable clues for it. However, the color of an object varies with the illumination incident on it. To address this problem, three illumination-independent descriptors have been presented in this paper. First, two new two-dimensional color coordinate systems, the central color coordinate system and the edge-based color coordinate system, were defined on the basis of the diagonal-offset reflectance model. After introducing normalized moment invariants, we proposed a color-constant descriptor in each of the new systems; both descriptors are robust to illumination change, affine transformation, and image blur. In addition, the combination of the two descriptors can characterize not only the color content but also the color edges of an image. Our experiments show that the proposed descriptors provide good performance under changes of illumination, object geometry, and image blur level. Furthermore, the proposed descriptors can be easily applied to other computer vision tasks.

Acknowledgments

We would like to thank the Computational Vision Laboratory of Simon Fraser University for providing the image data sets. We also thank Dr. Joost van de Weijer of the LEAR team, INRIA, for providing the natural-blur data set. This work is supported by the National Natural Science Foundation of China, the National High Technology Research and Development Program of China (2007AA01Z168), and the Science Foundation of Beijing Jiaotong University (2007XM008).

References

1. K. Barnard, V. Cardei, and B. V. Funt, "A comparison of computational color constancy algorithms. Part 1: Methodology and experiments with synthesized data," IEEE Trans. Image Process. 11(9) (2002).
2. K. Barnard, L. Martin, A. Coath, and B. V. Funt, "A comparison of computational color constancy algorithms. Part 2: Experiments with image data," IEEE Trans. Image Process. 11(9) (2002).
3. B. V. Funt, K. Barnard, and L. Martin, "Is machine colour constancy good enough?" in 5th Eur. Conf. on Computer Vision (ECCV) (1998).
4. B. V. Funt and G. D. Finlayson, "Color constant color indexing," IEEE Trans. Pattern Anal. Mach. Intell. 17(5) (1995).
5. T. Gevers and A. Smeulders, "Color based object recognition," Pattern Recogn. 32 (1999).
6. D. A. Adjeroh and M. C. Lee, "On ratio-based color indexing," IEEE Trans. Image Process. 10(10) (2001).
7. G. Healey and D. Slater, "Global color constancy: recognition of objects by use of illumination-invariant properties of color distributions," J. Opt. Soc. Am. A 11(11) (1994).
8. J. van de Weijer and C. Schmid, "Blur robust and color constant image description," in IEEE Int. Conf. on Image Processing (ICIP) (2006).
9. G. D. Finlayson, S. Hordley, G. Schaefer, and G. Y. Tian, "Illuminant and device invariant colour using histogram equalization," Pattern Recogn. 38 (2005).
10. D. Muselet, B. Funt, and L. Macaire, "Object recognition and pose estimation across illumination change," in 2nd Int. Conf. on Computer Vision Theory and Applications (VISAPP) (2007).
11. G. D. Finlayson, S. D. Hordley, C. Lu, and M. S. Drew, "On the removal of shadows from images," IEEE Trans. Pattern Anal. Mach. Intell. 28(1) (2006).
12. F. Mindru, T. Tuytelaars, L. Van Gool, and T. Moons, "Moment invariants for recognition under changing viewpoint and illumination," Comput. Vis. Image Underst. 94(1) (2004).
13. S. Shafer, "Using color to separate reflection components," Color Res. Appl. 10(4) (1985).
14. G. Finlayson, S. Hordley, and R. Xu, "Convex programming colour constancy with a diagonal-offset model," in IEEE Int. Conf. on Image Processing (ICIP) (2005).
15. M. J. Swain and D. H. Ballard, "Color indexing," Int. J. Comput. Vis. 7(1) (1991).
16. G. Finlayson, S. Hordley, and P. Hubel, "Color by correlation: a simple, unifying framework for color constancy," IEEE Trans. Pattern Anal. Mach. Intell. 23(11) (2001).
17. J. van de Weijer and C. Schmid, "Coloring local feature extraction," in Proc. Eur. Conf. on Computer Vision (ECCV), Part II (2006).
18. K. Barnard, L. Martin, B. V. Funt, and A. Coath, "A data set for colour research," Color Res. Appl. 27(3) (2002).
19. colour_constancy_synthetic_test_data, Simon Fraser University Computational Vision Laboratory (online data set).
20. Image data sets, Simon Fraser University Computational Vision Laboratory (online).

Bing Li received his BE in computer science from Beijing Jiaotong University. He is pursuing the PhD degree at the Institute of Computer Science and Engineering, Beijing Jiaotong University, Beijing, China. His current research interests include color constancy, visual perception, and computer vision.
De Xu received his ME in computer science from Beijing Jiaotong University. He is now a professor at the Institute of Computer Science and Engineering, Beijing Jiaotong University, Beijing, China. He has published more than 100 papers in international conferences and journals. His research interests include database systems, computer vision, and multimedia processing.

Weihua Xiong is an imaging scientist at OmniVision Technologies, USA. He received his PhD from Simon Fraser University, his master's degree from Beijing University (1996), and his bachelor's degree from Beijing Information University. His primary research interests are color science, automatic white balancing, stereo vision, and image processing. He has applied for five U.S. patents and has published eleven papers in international conferences, seven papers in journals, and four Chinese books. One of his papers, "Stereo Retinex," was named Best Vision Paper at the 3rd Canadian Conference on Computer and Robot Vision. He is a reviewer for international journals, and he is a member of the Digital BioColor Society and the Society for Imaging Science and Technology.

Songhe Feng received his BE in computer science from Beijing Jiaotong University. He is pursuing the PhD degree at the Institute of Computer Science and Engineering, Beijing Jiaotong University, Beijing, China. His current research interests include image processing, image annotation, and computer vision.


More information

An Efficient Underwater Image Enhancement Using Color Constancy Deskewing Algorithm

An Efficient Underwater Image Enhancement Using Color Constancy Deskewing Algorithm An Efficient Underwater Image Enhancement Using Color Constancy Deskewing Algorithm R.SwarnaLakshmi 1, B. Loganathan 2 M.Phil Research Scholar, Govt Arts Science College, Coimbatore, India 1 Associate

More information

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22)

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22) Digital Image Processing Prof. P. K. Biswas Department of Electronics and Electrical Communications Engineering Indian Institute of Technology, Kharagpur Module Number 01 Lecture Number 02 Application

More information

Part-Based Skew Estimation for Mathematical Expressions

Part-Based Skew Estimation for Mathematical Expressions Soma Shiraishi, Yaokai Feng, and Seiichi Uchida shiraishi@human.ait.kyushu-u.ac.jp {fengyk,uchida}@ait.kyushu-u.ac.jp Abstract We propose a novel method for the skew estimation on text images containing

More information

Implementation and Comparison of Feature Detection Methods in Image Mosaicing

Implementation and Comparison of Feature Detection Methods in Image Mosaicing IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) e-issn: 2278-2834,p-ISSN: 2278-8735 PP 07-11 www.iosrjournals.org Implementation and Comparison of Feature Detection Methods in Image

More information

Filtering Images. Contents

Filtering Images. Contents Image Processing and Data Visualization with MATLAB Filtering Images Hansrudi Noser June 8-9, 010 UZH, Multimedia and Robotics Summer School Noise Smoothing Filters Sigmoid Filters Gradient Filters Contents

More information

Evaluating Colour-Based Object Recognition Algorithms Using the SOIL-47 Database

Evaluating Colour-Based Object Recognition Algorithms Using the SOIL-47 Database ACCV2002: The 5th Asian Conference on Computer Vision, 23 25 January 2002,Melbourne, Australia Evaluating Colour-Based Object Recognition Algorithms Using the SOIL-47 Database D. Koubaroulis J. Matas J.

More information

Stereo Vision. MAN-522 Computer Vision

Stereo Vision. MAN-522 Computer Vision Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in

More information

Chapter 7. Conclusions and Future Work

Chapter 7. Conclusions and Future Work Chapter 7 Conclusions and Future Work In this dissertation, we have presented a new way of analyzing a basic building block in computer graphics rendering algorithms the computational interaction between

More information

Dynamic Obstacle Detection Based on Background Compensation in Robot s Movement Space

Dynamic Obstacle Detection Based on Background Compensation in Robot s Movement Space MATEC Web of Conferences 95 83 (7) DOI:.5/ matecconf/79583 ICMME 6 Dynamic Obstacle Detection Based on Background Compensation in Robot s Movement Space Tao Ni Qidong Li Le Sun and Lingtao Huang School

More information

Robust Shape Retrieval Using Maximum Likelihood Theory

Robust Shape Retrieval Using Maximum Likelihood Theory Robust Shape Retrieval Using Maximum Likelihood Theory Naif Alajlan 1, Paul Fieguth 2, and Mohamed Kamel 1 1 PAMI Lab, E & CE Dept., UW, Waterloo, ON, N2L 3G1, Canada. naif, mkamel@pami.uwaterloo.ca 2

More information

Color Segmentation Based Depth Adjustment for 3D Model Reconstruction from a Single Input Image

Color Segmentation Based Depth Adjustment for 3D Model Reconstruction from a Single Input Image Color Segmentation Based Depth Adjustment for 3D Model Reconstruction from a Single Input Image Vicky Sintunata and Terumasa Aoki Abstract In order to create a good 3D model reconstruction from an image,

More information

Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model

Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model TAE IN SEOL*, SUN-TAE CHUNG*, SUNHO KI**, SEONGWON CHO**, YUN-KWANG HONG*** *School of Electronic Engineering

More information

Face recognition using SURF features

Face recognition using SURF features ace recognition using SUR features Geng Du*, ei Su, Anni Cai Multimedia Communication and Pattern Recognition Labs, School of Information and Telecommunication Engineering, Beijing University of Posts

More information

Motion. 1 Introduction. 2 Optical Flow. Sohaib A Khan. 2.1 Brightness Constancy Equation

Motion. 1 Introduction. 2 Optical Flow. Sohaib A Khan. 2.1 Brightness Constancy Equation Motion Sohaib A Khan 1 Introduction So far, we have dealing with single images of a static scene taken by a fixed camera. Here we will deal with sequence of images taken at different time intervals. Motion

More information

Expanding gait identification methods from straight to curved trajectories

Expanding gait identification methods from straight to curved trajectories Expanding gait identification methods from straight to curved trajectories Yumi Iwashita, Ryo Kurazume Kyushu University 744 Motooka Nishi-ku Fukuoka, Japan yumi@ieee.org Abstract Conventional methods

More information

Coarse-to-fine image registration

Coarse-to-fine image registration Today we will look at a few important topics in scale space in computer vision, in particular, coarseto-fine approaches, and the SIFT feature descriptor. I will present only the main ideas here to give

More information

Content Based Image Retrieval Using Combined Color & Texture Features

Content Based Image Retrieval Using Combined Color & Texture Features IOSR Journal of Electrical and Electronics Engineering (IOSR-JEEE) e-issn: 2278-1676,p-ISSN: 2320-3331, Volume 11, Issue 6 Ver. III (Nov. Dec. 2016), PP 01-05 www.iosrjournals.org Content Based Image Retrieval

More information

Texture. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors

Texture. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors Texture The most fundamental question is: How can we measure texture, i.e., how can we quantitatively distinguish between different textures? Of course it is not enough to look at the intensity of individual

More information

CPSC 425: Computer Vision

CPSC 425: Computer Vision CPSC 425: Computer Vision Image Credit: https://docs.adaptive-vision.com/4.7/studio/machine_vision_guide/templatematching.html Lecture 9: Template Matching (cont.) and Scaled Representations ( unless otherwise

More information

Robust Wide Baseline Point Matching Based on Scale Invariant Feature Descriptor

Robust Wide Baseline Point Matching Based on Scale Invariant Feature Descriptor Chinese Journal of Aeronautics 22(2009) 70-74 Chinese Journal of Aeronautics www.elsevier.com/locate/cja Robust Wide Baseline Point Matching Based on Scale Invariant Feature Descriptor Yue Sicong*, Wang

More information

Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot

Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot Yoichi Nakaguro Sirindhorn International Institute of Technology, Thammasat University P.O. Box 22, Thammasat-Rangsit Post Office,

More information

Application of Geometry Rectification to Deformed Characters Recognition Liqun Wang1, a * and Honghui Fan2

Application of Geometry Rectification to Deformed Characters Recognition Liqun Wang1, a * and Honghui Fan2 6th International Conference on Electronic, Mechanical, Information and Management (EMIM 2016) Application of Geometry Rectification to Deformed Characters Liqun Wang1, a * and Honghui Fan2 1 School of

More information

Invariant Features from Interest Point Groups

Invariant Features from Interest Point Groups Invariant Features from Interest Point Groups Matthew Brown and David Lowe {mbrown lowe}@cs.ubc.ca Department of Computer Science, University of British Columbia, Vancouver, Canada. Abstract This paper

More information

Computer Vision for HCI. Topics of This Lecture

Computer Vision for HCI. Topics of This Lecture Computer Vision for HCI Interest Points Topics of This Lecture Local Invariant Features Motivation Requirements, Invariances Keypoint Localization Features from Accelerated Segment Test (FAST) Harris Shi-Tomasi

More information

A Background Modeling Approach Based on Visual Background Extractor Taotao Liu1, a, Lin Qi2, b and Guichi Liu2, c

A Background Modeling Approach Based on Visual Background Extractor Taotao Liu1, a, Lin Qi2, b and Guichi Liu2, c 4th International Conference on Mechatronics, Materials, Chemistry and Computer Engineering (ICMMCCE 2015) A Background Modeling Approach Based on Visual Background Extractor Taotao Liu1, a, Lin Qi2, b

More information