P-SURF: A Robust Local Image Descriptor *

JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 27, 2-25 (2)

P-SURF: A Robust Local Image Descriptor *

CONGXIN LIU, JIE YANG AND HAI HUANG+
Institute of Image Processing and Pattern Recognition
Shanghai Jiao Tong University
Shanghai, 224 P.R. China
+Department of Computer Science and Engineering
Zhejiang Sci-Tech University
Hangzhou, 3 P.R. China

SIFT-like representations are considered the most resistant to common image deformations, although their computational burden is heavy for low-computation applications such as mobile image retrieval. H. Bay et al. proposed an efficient SIFT-inspired descriptor called SURF. Although this descriptor is able to represent the nature of some underlying image patterns, it is not distinctive enough for more complicated ones. Also, the proposed high-dimensional alternative to SURF indeed improves the distinctiveness of the descriptor, but it appears to be less robust. In this paper, an enhanced version of SURF, P-SURF, is proposed. Specifically, it consists of two components: the feature representation for independent intensity changes and the coupling description for these intensity changes. To this end, phase space is introduced to model the relationships between the intensity changes, and several statistic metrics quantizing these relationships are proposed to meet practical demands. The feature matching experiments demonstrate that our method achieves a performance close to that of SIFT with a faster construction speed. We also present results showing that the use of the enhanced SURF representation in a mobile image retrieval application yields a performance comparable to SIFT.

Keywords: SURF, SIFT, local image descriptor, image retrieval, image matching

1. INTRODUCTION

Extracting and matching distinctive local image features between images is a fundamental problem in many applications. Many approaches have been presented in the literature to describe local image patterns.
Popular descriptors include differential invariants [1], steerable filters [2], complex filters [3], moment invariants [4], spin images [5], SIFT (Scale-Invariant Feature Transform) [6], and Shape Context [7]. Detailed performance evaluations of these descriptors were presented in [8, 9], where it was shown that high-dimensional representations based on histograms of localized gradient orientations, such as SIFT, outperform the other descriptors by a certain margin in matching images of both planar surfaces and 3D objects. Various refinements have been proposed in the literature to improve gradient-orientation-based descriptors. For example, Ke and Sukthankar developed PCA-SIFT [10], which represents the surface of an image patch by the principal components of the normalized gradient patch. The computational burden of PCA-SIFT is comparable to SIFT, since forming the normalized gradient patch involves many time-consuming interpolation operations. In addition,

Received January 2, 2; revised August 7, 2 & January, 2; accepted January 3, 2. Communicated by Tong-Yee Lee. * This work was supported by the National Nature Science Foundation of China (No. 6752) and Projects of International Cooperation between Ministry of Science and Technology (No. 29DFA287).

applying PCA also slows down the feature computation. GLOH (Gradient Location-Orientation Histogram) [8] modified the SIFT representation by using an alternative spatial sampling strategy and PCA for dimensionality reduction. The 128-dimensional GLOH was shown to be more distinctive than SIFT, but it requires more computation. Several low-cost descriptors have also been reported in the literature [11-13]. H. Bay et al. proposed a descriptor called SURF, an efficient SIFT-inspired implementation that applies the integral image for faster computation [11]. It covers both feature detection and feature description. As can be observed from [11], SURF is able to represent the nature of some underlying image patterns, although it cannot distinguish the two image patterns shown in Fig. 3 (a). This can be attributed to the fact that SURF describes the intensity changes of local image patterns in two orthogonal orientations independently, which makes it impossible to obtain enough structural information about the patterns. Moreover, to increase the distinctiveness of SURF, [11] also proposed a high-dimensional alternative to it. This extension is indeed more distinctive than SURF, but it appears to be less robust: since the extension is only a finer subdivision of the low-dimensional scheme, it is more sensitive to small errors in feature localization and to small variations in the shapes of features. Furthermore, the finer subdivision encodes only a little more structural information of local image patterns than the low-dimensional one; therefore, the proposed alternative can be further improved in discriminative power. Another two low-cost descriptors, namely CS-LBP (simplified center-symmetric local binary patterns) [12] and the Contrast Context Histogram [13], both exploit the contrast between pixel values to reduce computational cost.
The former needs a contrast threshold for flat regions, which makes the descriptor sensitive to that parameter. The latter lacks a description of the correlation between adjacent pixels in its feature representation. Motivated by these findings, we focus on this kind of algorithm and explore an improved version of SURF. In this paper, phase space is introduced to improve the performance of SURF. Phase space, in which all possible states of a system are represented, is a concept borrowed from physics. In a phase space, every degree of freedom or parameter of the system is represented as an axis of a multidimensional space. If an image surface is regarded as a system and its pixel location (x, y) is considered a vector variable, then the triple (pixel intensity, horizontal gradient, vertical gradient) can represent the state of the image at each sample position. Because of brightness changes, the intensity value cannot serve as a state component in the context of image matching. Therefore we construct a reduced phase space using the tuple (horizontal gradient, vertical gradient) to describe the state of each pixel. Essentially, each region in phase space corresponds to a kind of relationship between the gradients. Besides, densely sampled gradients are used to represent the intensity changes at each sample location instead of the sparsely sampled Haar wavelet responses. This choice has three motivations. First, like Haar wavelet responses, gradients can reflect intensity changes. Second, the use of gradients simplifies the calculation relative to Haar wavelet responses and facilitates the implementation of the descriptor in practice. Third, the computational complexity of the two approaches is comparable on the normalized patches [14].
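As a concrete illustration of the reduced phase space, the sketch below (not the authors' code; a minimal reconstruction using numpy central differences) computes the dense horizontal and vertical gradient maps that stand in for SURF's sparse Haar wavelet responses. Each pixel's state is then the tuple (dx, dy).

```python
# Illustrative sketch: dense gradient maps as the per-pixel phase-space state.
import numpy as np

def gradient_maps(patch: np.ndarray):
    """Central-difference gradients of a 2-D image patch.
    Returns (dx, dy): horizontal and vertical gradient maps."""
    patch = patch.astype(np.float64)
    dy, dx = np.gradient(patch)   # np.gradient returns (d/drow, d/dcol)
    return dx, dy

# A vertical intensity ramp: dx is ~0 everywhere, dy is ~1 everywhere.
patch = np.outer(np.arange(8), np.ones(8))
dx, dy = gradient_maps(patch)
```

For this ramp every sample sits at the same point (0, 1) of the reduced phase space, i.e. all pixels share one gradient relationship.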
Compared to SIFT [6], our method (P-SURF: phase-space based SURF) is more efficient, since it neither computes gradient orientations nor applies any interpolation to the feature representation, while still preserving considerable distinctiveness.

The remainder of this paper is organized as follows: in section 2, the details of the proposed local image descriptor are presented. In section 3, we provide detailed experimental results on feature matching and in the context of a mobile image retrieval application. Section 4 concludes the paper.

2. THE PHASE-SPACE BASED SURF DESCRIPTOR

2.1 Primary Representation

P-SURF can be summarized in the following steps:

(1) Given an image patch, compute both the horizontal and vertical gradient maps. The horizontal gradient, vertical gradient, and corresponding gradient magnitude are first computed at each sample point over the image patch [14]. To guarantee invariance to orientation change, the coordinates of the descriptor are rotated relative to the assigned dominant orientation. Both the new image coordinates and the gradient maps at each sample point can be obtained from the old ones by a linear transformation. The new gradient maps are illustrated with small arrows at each sample location in the middle-right of Fig. 1.

Fig. 1. The difference between P-SURF and SURF: the SURF-64 and SURF-128 Haar responses (left), the 4 x 4 sub-regions with the overlaid Gaussian kernel (middle), and the independent and coupling components of P-SURF over a phase-space partition scheme (right).

(2) Build the descriptor based on a concrete phase-space partition scheme. In the new coordinate system, the input image patch is first divided into 4 x 4 sub-regions, as shown in the middle of Fig. 1. Such sub-regions, inspired by [6], capture important spatial information.

(a) The representation for the independent intensity changes (similar to SURF).
In each sub-region, the horizontal gradients, denoted dx, and the vertical gradients, denoted dy, are summed up to form the first two entries of the description vector. To also capture the polarity of the intensity changes, the absolute values of the gradients are accumulated as well. Thus, a 4-dimensional vector v1 = (Σdx, Σdy, Σ|dx|, Σ|dy|) is obtained, which serves as the first set of entries of the proposed descriptor for each sub-region.
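The four sums above can be sketched in a few lines (an illustrative reconstruction, not the authors' code; `v1_entries` is a name chosen here):

```python
# Sketch of the first (independent) P-SURF component for one sub-region:
# v1 = (sum dx, sum dy, sum |dx|, sum |dy|).
import numpy as np

def v1_entries(dx: np.ndarray, dy: np.ndarray) -> np.ndarray:
    """4-dim sub-region vector from the sub-region's gradient maps."""
    return np.array([dx.sum(), dy.sum(), np.abs(dx).sum(), np.abs(dy).sum()])

dx = np.array([[1.0, -1.0], [2.0, -2.0]])
dy = np.array([[0.5, 0.5], [0.5, 0.5]])
v1 = v1_entries(dx, dy)   # -> [0.0, 2.0, 6.0, 2.0]
```

Note how the signed sums cancel for oscillating gradients while the absolute sums do not; that is exactly the polarity information the last two entries preserve.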

Furthermore, to reduce the impact of misregistration errors, dx and dy are weighted by a Gaussian function with standard deviation equal to half of the width of the image patch, shown by the overlaid red circle in the middle of Fig. 1.

(b) The coupling description for the intensity changes.

Next, we use phase space to model the relationship between dx and dy. Specifically, the intensity space over each sub-region is first transformed into phase space, and then a specific subdivision scheme is applied to the space. A naive scheme, illustrated in Fig. 1, evenly divides the whole phase space into eight regions. Each region of the phase space represents a kind of relationship between dx and dy. In order to describe these regions, three statistic metrics are introduced: the gradient norm, sqrt((dx)^2 + (dy)^2); the count norm, which contributes 1 per sample; and the fluctuation norm, |dx * dy| / (|dx| + |dy|). Each phase-space region is quantized by the sum of the chosen metric over the gradient samples falling into it. These sums are illustrated by the green line segments on the lower-right of Fig. 1. In practice, which measure is adopted depends on the actual needs. Generally, a good performance can be obtained with the gradient norm. When quick computation matters more, the count norm may be a good choice. Under rotation and scale change, the fluctuation norm performs well. In this way, another 8-dimensional vector v2 is obtained. What v2 tries to reveal is the underlying structural information of local image patterns; therefore, v1 and v2 are mutually complementary feature representations. This is different from the proposed high-dimensional extension to SURF [11], where the extension is achieved by splitting up the old entries based on the signs of haar_x and haar_y, as shown on the left side of Fig. 1. Thus, our improved descriptor is clearly more distinctive than that extension.
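The 8-region coupling component with the gradient-norm metric can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the bin labeling chosen here (from the two gradient signs plus a |dy| > |dx| comparison) is one convenient way to realize the eight 45-degree regions without any inverse-tangent computation, in the spirit of section 2.3.

```python
# Sketch of the coupling component v2: an 8-bin histogram over the naive
# even partition of phase space, accumulating the gradient norm per bin.
import numpy as np

def phase_space_hist(dx, dy):
    """8-bin phase-space histogram for one sub-region.
    The octant of each sample (dx, dy) is found from the signs and a
    |dy| vs |dx| comparison, so no arctangent is needed."""
    dx = np.asarray(dx, dtype=float).ravel()
    dy = np.asarray(dy, dtype=float).ravel()
    idx = 4 * (dx < 0) + 2 * (dy < 0) + (np.abs(dy) > np.abs(dx))
    hist = np.zeros(8)
    np.add.at(hist, idx, np.hypot(dx, dy))   # gradient-norm metric
    return hist

h = phase_space_hist([3.0, -3.0], [4.0, 4.0])
# the two samples fall in different octants, each contributing norm 5
```

Swapping `np.hypot(dx, dy)` for `1.0` gives the count norm, and for `np.abs(dx * dy) / (np.abs(dx) + np.abs(dy))` the fluctuation norm.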
Likewise, in order to lessen the impact of feature localization errors and shape errors of features, the gradient norm (or count norm, or fluctuation norm) at each sample point is weighted by a Gaussian kernel, shown by the overlaid red circle in the middle of Fig. 1. The farther a sample point is from the center of the patch, the smaller the weight it receives. Combining v1 and v2, we obtain a combined vector v = {v1, v2} for each sub-region. The descriptor for the image patch is obtained by concatenating these vectors over the 4 x 4 sub-regions in the given order, as shown in the middle of Fig. 1. The dimension of the descriptor depends on the specific phase-space partition.

P-SURF is similar to SIFT in that both are histograms (or combined histograms) of spatially localized gradients; thereby both are robust to significant shifts in gradient positions. As shown in Fig. 1, a gradient sample shifting randomly over a 5 x 5 set of sample locations makes almost the same contribution to the histogram of the 4th sub-region. Furthermore, P-SURF is less sensitive to noise than SIFT, as shown in the example of Fig. 2. This is attributed to the fact that P-SURF integrates the gradient samples within the sub-regions and the phase-space regions, while SIFT relies only on the integration of the gradient samples within adjacent orientations. In terms of discriminative power, P-SURF performs well, as shown in Fig. 3. One can imagine that the combination of the two intensity patterns and the other ones [11] will result in a distinctive descriptor. Therefore, our good performance in non-geometrical scenes in the following experiments is not surprising.
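Putting the pieces together, the whole construction can be sketched end to end. This is a hedged reconstruction under stated assumptions, not the authors' implementation: it uses the naive 8-region partition with the gradient norm (so 16 sub-regions x (4 + 8) = 192 dimensions, whereas the paper's preferred 4-region scheme yields 128), applies one Gaussian weight to the gradient maps, and L2-normalizes the result; `psurf_descriptor` is a name chosen here.

```python
# End-to-end sketch: Gaussian-weighted gradients, 4x4 sub-regions,
# per sub-region v1 (4 dims) + v2 (8 phase-space bins), concatenated.
import numpy as np

def psurf_descriptor(patch, n_sub=4):
    patch = np.asarray(patch, dtype=float)
    dy, dx = np.gradient(patch)
    h, w = patch.shape
    # Gaussian weight centred on the patch, sigma = half the patch width
    yy, xx = np.mgrid[0:h, 0:w]
    sigma = w / 2.0
    g = np.exp(-((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / (2 * sigma ** 2))
    dx, dy = dx * g, dy * g
    desc = []
    sh, sw = h // n_sub, w // n_sub
    for i in range(n_sub):
        for j in range(n_sub):
            sdx = dx[i * sh:(i + 1) * sh, j * sw:(j + 1) * sw].ravel()
            sdy = dy[i * sh:(i + 1) * sh, j * sw:(j + 1) * sw].ravel()
            v1 = [sdx.sum(), sdy.sum(), np.abs(sdx).sum(), np.abs(sdy).sum()]
            v2 = np.zeros(8)
            idx = 4 * (sdx < 0) + 2 * (sdy < 0) + (np.abs(sdy) > np.abs(sdx))
            np.add.at(v2, idx, np.hypot(sdx, sdy))   # gradient-norm metric
            desc.extend(v1)
            desc.extend(v2)
    desc = np.asarray(desc)
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc

d = psurf_descriptor(np.outer(np.arange(32, dtype=float), np.ones(32)))
# d has 16 * 12 = 192 entries and unit L2 norm
```

Replacing the 8-bin `v2` with a 4-bin variant reproduces the 128-dimensional configuration used in the experiments.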

Fig. 2. The robustness comparison between SIFT, SURF, and P-SURF. Due to projective distortions or noise, a gradient sample v is transformed into v'. If v' does not move beyond the range of its region, it makes the same contribution to P-SURF as v. Therefore P-SURF is more robust to variations of gradient directions than the locally operating SIFT, but less robust than SURF64. Note that the image patches are from [11].

Fig. 3. The distinctiveness comparison between SIFT, SURF, and P-SURF on underlying intensity patterns with significant fluctuations: (a) intensity pattern with fluctuations in two orientations (4 pixels); (b) intensity pattern with frequencies in one orientation (8 pixels).

2.2 Refined Representations

Some important issues about P-SURF in practical applications should be considered further. First, how can the boundary effect be lessened? (1) Dominant orientation alignment. The region layout scheme illustrated in Fig. 1 usually causes a significant boundary effect when the horizontal direction has been aligned with the dominant orientation. This is mainly because the gradients close to the horizontal direction often commute between region 1 and region 8 due to initial misregistration errors. To address the problem, a refined region partition scheme is proposed in Fig. 4 (c), where the central axis of region 1 is aligned with the horizontal direction and the dominant orientation is assumed to be in accordance with it. As a result, the gradients around the dominant orientation almost all fall into an identical region (region 1), which tends to lessen the boundary effect and increase the robustness of the descriptor. (2) Overlapping boundaries of phase-space regions. This measure enables adjacent regions to share the gradients near their boundaries, which can further lessen the boundary effect.
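The effect of dominant orientation alignment can be demonstrated with a small sketch (an illustrative reconstruction, not the authors' code): rotating the gradient samples by half a region width before binning centers region 1 on the horizontal direction, so two gradients straddling the dominant orientation land in the same region instead of commuting between region 1 and region 8.

```python
# Sketch of dominant-orientation-aligned octant binning (cf. Fig. 4 (c)).
import numpy as np

def aligned_octant(dx, dy, offset=np.pi / 8):
    """Rotate gradient samples by half a 45-degree region before binning,
    so the central axis of region 1 coincides with the dominant
    (horizontal) direction. offset=0 reproduces the naive layout."""
    dx = np.asarray(dx, dtype=float)
    dy = np.asarray(dy, dtype=float)
    c, s = np.cos(offset), np.sin(offset)
    rx = c * dx - s * dy
    ry = s * dx + c * dy
    # octant via signs and a |ry| vs |rx| comparison (no arctangent)
    return 4 * (rx < 0) + 2 * (ry < 0) + (np.abs(ry) > np.abs(rx))

# Two near-horizontal gradients, just above and just below the axis:
a = aligned_octant(np.array([1.0]), np.array([0.05]))    # same bin
b = aligned_octant(np.array([1.0]), np.array([-0.05]))   # same bin
```

With `offset=0.0` the same two samples fall into different bins, which is exactly the boundary effect the refined partition removes.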
The proposed overlapping schemes are illustrated in Figs. 4 (a)-(c). Second, how can memory consumption be reduced and matching speed further increased? For embedded devices, lower memory usage and higher computational efficiency are critical to practical applications; it is therefore also necessary to find low-dimensional description schemes. Figs. 4 (a) and (b) show two alternative low-dimensional counterparts to the 8-region scheme. Third, some tricks may be used for the convenience of computation.

Fig. 4. Refined representations: (a) 4-region partition; (b) 6-region partition; (c) 8-region partition (region boundaries along lines such as dy = ±dx and dy = ±2.4dx).

For the 4-region partition scheme, an offset angle of π/4 radians is first subtracted from the dominant orientation, and then the descriptor coordinates are rotated to align with this new dominant orientation. Next, the histogram entry of each gradient sample can be determined simply by judging the configuration of the signs of the gradient maps at each sample location.

2.3 Computational Complexity

In this section, we discuss the computational burden of SIFT and P-SURF. SIFT has to compute the gradient (magnitude and orientation) at each sample location. This process involves time-consuming inverse tangent computation. P-SURF circumvents and simplifies the problem by comparing the sizes of the horizontal and vertical gradients at each sample position, or by judging the configurations of their signs, to obtain the corresponding bin in the histogram. Moreover, to avoid boundary effects, SIFT carries out trilinear interpolation to smooth the histogram, while the P-SURF representation does not apply any interpolation scheme. For these reasons, P-SURF is computationally faster than SIFT.

3. EXPERIMENTAL EVALUATION

3.1 Experimental Setup

In this section, P-SURF is evaluated according to the evaluation metrics in [8]. The standard dataset, which comes from [8, 14], contains eight image sequences. Various deformations have been applied to these images: viewpoint change, scale and rotation change, illumination change, blur, and JPEG compression. The descriptors in [8] were built on normalized regions [14] transformed from affine-invariant regions. These regions are extracted by the Harris-Affine and Hessian-Affine feature detectors [15]. The dominant orientations of these image patches are defined as the directions of the smoothed gradients over the patches.
A Gaussian weighting function with standard deviation equal to half of the width of the image patches is used to weight the (dx, dy) at each sample point. Additionally, to evaluate the performance of P-SURF on patches invariant to scale change, the DoG detector [6] is used for the Recall versus IT experiments and the mobile image retrieval experiment. Recall versus IT (Increasing Transformations) is defined as follows: recall is the ratio of the number of correct matches found to the total number of matches found by the descriptor. The curves are obtained by matching increasingly transformed test images against the first image in each sequence. Since DoG is used as the detector, the metric ||x_a - H x_b||_2 < t [15] is applied to determine the number of correct matches, for convenience of calculation. Here H is the homography between the image pairs, x_a and x_b are a tentative corresponding point pair, and t is a distance threshold. SIFT-like approaches obtained the best performance in [8] and are still identified as being most resistant to common image deformations today [12, 13, 16]. Hence, we compare P-SURF to SIFT and SURF in this paper. Since the concept of scale is not distinct over the normalized patches, SURF was simplified by using its densely sampled alternative instead; the simplified SURF is called A-SURF64 in the following experiments. We believe a similar relative performance would re-occur with the original implementation [11]. A-SURF128 was obtained by following the approach proposed in [11]. The distance threshold t is set to 3 pixels; overlapping regions are within the range of . All the test programs were run on an AMD 1.9 GHz laptop with 2 GB of memory.

3.2 The Experimental Results of Feature Matching

3.2.1 Scheme selection

The scheme for our experiments is selected from those proposed in Fig. 4 by comparing their performance on Hessian-Affine regions. The results are shown in Fig. 5, where the 4-region scheme performs relatively better than the other two on the image pairs. Similar results are obtained in the other scenes. Moreover, P-SURF128 outperforms P-SURF64 (the latter is the second part of P-SURF128) in all cases. Due to space limits, the other experimental results are not shown here. In the following experiments, the 4-region scheme is used for comparison with SIFT, due to its relatively leading performance and more reasonable dimensionality.

Fig. 5.
Performance comparison of P-SURF under different parameter configurations on Hessian-Affine regions: (a) scale + rotation (Bark); (b) illumination change (Leuven); recall versus 1-precision.

3.2.2 Comparative experiments on boundary effect

In this section, a series of comparative experiments was conducted to verify the effectiveness of the measures proposed in section 2.2 for avoiding the boundary effect.

Fig. 6. The results of reducing the boundary effect, comparing PSURF (N), PSURF (D), and PSURF (DO) on HarAff and HesAff regions: (a) blur (Bike, 1st-4th); (b) scale + rotation (Bark, 1st-4th); (c) illumination change (Leuven, 1st-4th); (d) JPEG compression (Ubc, 1st-4th).

The experimental results are shown in Fig. 6, where P-SURF (N) denotes P-SURF without dominant orientation alignment, P-SURF (D) with dominant orientation alignment, and P-SURF (DO) with dominant orientation alignment and boundary overlapping. From the plots in Fig. 6, it can be observed that P-SURF with dominant orientation alignment significantly outperforms P-SURF (N) in robustness and distinctiveness. In the Recall versus IT experiment, the number of correctly matched point pairs increases by around 5% when dominant orientation alignment is used. Therefore, dominant orientation alignment is an effective measure for P-SURF to reduce the boundary effect. Moreover, boundary overlapping also improves P-SURF's performance in most cases, but its gain is smaller than that from dominant orientation alignment. In addition, a radially-weighted scheme was also tried to lessen the boundary effect; the experiments showed that it improves the robustness of P-SURF but reduces the discriminative capability of the descriptor, so, considering overall performance, we abandoned this measure.

3.2.3 Feature matching experiment

Figs.
7 (a)-(f) show the experimental results of P-SURF on the different image pairs from the standard dataset.

Fig. 7. Recall versus 1-precision curves comparing SIFT, PSURF128, A-SURF128, and A-SURF64 on HarAff and HesAff regions: (a) viewpoint change (Wall, 1st-5th); (b) scale + rotation (Bark, 1st-4th); (c, d) blur (Bike and Tree, 1st-4th); (e) illumination change (Leuven, 1st-4th); (f) JPEG compression (Ubc, 1st-4th).

In Fig. 7 (a), we show the comparative result of the four methods, namely SIFT, P-SURF, A-SURF64, and A-SURF128, under affine transformation, while Fig. 7 (b) shows the result under scale change and rotation. Moreover, Figs. 7 (c) and (d) show the performance comparison of the methods under a significant amount of image blur in a structured scene and a textured scene, respectively. Finally, the comparative results under illumination change and JPEG compression are shown in Figs. 7 (e) and (f). In all of these plots, Harris-Affine and Hessian-Affine were used as detectors.

As can be seen from these plots, P-SURF obtains comparable or better results than SIFT in the non-geometric transformation scenes and lower scores in the geometric transformation scenes. This is because SIFT's smoothed gradient orientation histogram was carefully designed to handle the misregistration errors caused by viewpoint and other changes. However, both the local out-performance and the minor performance difference confirm the robustness of our method under such geometric transformations. Moreover, we can also observe that P-SURF is superior to A-SURF64 and A-SURF128 on all the image pairs, which demonstrates that our method indeed yields a significant improvement in performance. In addition, as expected, A-SURF128 performs better than A-SURF64 in most cases, but it is closely followed by the latter in many cases; in the cases of Leuven and Ubc, it is even outperformed by the latter. Thus A-SURF128, which was proposed as the extension of SURF, is more sensitive to noise than the other three algorithms in this experiment. It is also observed that the performance gap between P-SURF and SIFT is larger for Harris-Affine regions than for Hessian-Affine regions. This could be attributed to the higher localization accuracy of the Hessian-Affine detector favoring our method more than SIFT, since the latter can handle more localization errors.

Figs. 8 (a)-(h) show that the overall quality of correspondences found by P-SURF and A-SURF128 is close to that of SIFT, while A-SURF64 obtains a slightly lower score in this experiment. In some cases, P-SURF even surpasses SIFT in ratio, which is

Fig. 8. Recall versus IT curves comparing SIFT, P-SURF, A-SURF128, and A-SURF64: (a, b) viewpoint change (Graffiti, structured scene; Wall, textured scene); (c, d) scale + rotation (Boat, structured scene; Bark, textured scene).

Fig. 8. (Cont'd) (e, f) blur (Bike, structured scene; Tree, textured scene); (g) illumination change (Leuven); (h) JPEG compression (Ubc).

shown in plots (d), (e), (g), and (h). These experimental results are in accordance with those shown in Fig. 7. Compared to the Recall versus 1-precision curve, the Recall versus IT curve more directly reflects the performance of the descriptors in practical applications such as image retrieval and 3D reconstruction. To sum up, the above experiments demonstrate that P-SURF performs similarly to SIFT. Compared to SURF, P-SURF obtains a significant improvement in performance in almost all cases.

3.3 Computational Cost Comparison

Table 1 compares P-SURF with SIFT and SURF in terms of running time. From Table 1, it can be observed that P-SURF128 is more than three times faster than SIFT in descriptor construction, while slightly slower than A-SURF64 and A-SURF128. The time values are averages over 5 test runs, calculated from 758 local image regions extracted by the Harris-Affine detector in the first image of the Graffiti set.

Table 1. Comparison of the average time consumption.

             A-SURF64   A-SURF128   P-SURF128   SIFT
Descriptor   .28 s      .28 s       .27 s       .698 s

3.4 The Experimental Results for Mobile Image Retrieval

In this experiment, 5,000 images of different scenes were first scanned from the magazine Business Week to create a reference image database for retrieval. Then, 2,000 test images were captured from the same scenes by different people using different mobile phones; some of them are shown in Fig. 9. As can be seen, significant image degradations occur in these test images, such as non-linear illumination change, blurring, and noise contamination; viewpoint change and scale change further increase the degradations.

Fig. 9. Example images for the mobile image retrieval: reference image database (from a scanner) and test image database (from mobile phones).

Table 2. The practical results of image retrieval.

                    A-SURF64   A-SURF128   P-SURF128   SIFT
Cluster algorithm   AKM [18]   AKM         AKM         AKM
Cluster centers     5,000      5,000       5,000       5,000
Accuracy rate       94.5%      95.9%       96.6%       96.2%

Image retrieval was performed based on the bag-of-keypoints model [17]. The final retrieval results are given in Table 2, which shows that P-SURF128 offers a performance comparable to SIFT. The retrieval results can be explained as follows. First, the main reason is that the descriptors created by P-SURF and SIFT have similar discriminative capability; in the non-geometrical transformation scenes, P-SURF even obtains higher-quality matching point pairs than SIFT. Second, because the test images contain much noise, robust descriptors usually perform better in these cases; P-SURF is more robust than SIFT and therefore obtains a better score in this experiment. Third, the approximate computations in clustering and encoding significantly reduce the discriminative requirements on the descriptors; as a result, A-SURF64 and A-SURF128 also obtain good performance in this experiment.
The distinctiveness of a descriptor, which is crucial for finding exact correspondences between images, is less important in image retrieval, whose purpose is to find the most similar image as a whole.

Finally, the saliency detection algorithm [19] contributes much to the high retrieval rates shown in Table 2. This algorithm helps to obtain highly informative regions while filtering out much background noise. As a result, 85% of the feature points are removed and a 2%-3% performance increase is obtained.

4. CONCLUSION

In this paper, we improve the SURF representation by introducing phase space to capture more structural information of local image patterns. In phase space, each region represents a kind of relationship between intensity changes, and by building histograms over such regions these relationships can be quantized. Three schemes to partition phase space are proposed, and two corresponding measures, namely dominant orientation alignment and overlapping boundaries, are introduced to lessen the boundary effect. Moreover, for more convenient computation, densely sampled gradients are used to represent intensity changes instead of the sparsely sampled Haar wavelet responses used by SURF. Experimental results demonstrate a significant improvement in performance compared to SURF and its high-dimensional extension. Compared to the state-of-the-art SIFT, P-SURF shows a favorable performance with a significant reduction in computational demands. Also, in the mobile image retrieval experiment, our method yields an accuracy rate close to that of SIFT. Thus, P-SURF is an appropriate trade-off between performance and computational burden. We believe that it holds great promise for many applications in the computer vision community, especially where low computational requirements are necessary. In the future, we will apply P-SURF to large-scale mobile image retrieval.

REFERENCES

1. J. Koenderink and A. J. van Doorn, "Representation of local geometry in the visual system," Biological Cybernetics, Vol. 55, 1987.
2. W. Freeman and E. Adelson, "The design and use of steerable filters," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 13, 1991.
3. F. Schaffalitzky and A. Zisserman, "Multi-view matching for unordered image sets," in Proceedings of European Conference on Computer Vision, 2002.
4. L. J. V. Gool, T. Moons, and D. Ungureanu, "Affine/photometric invariants for planar intensity patterns," in Proceedings of European Conference on Computer Vision, 1996.
5. S. Lazebnik, C. Schmid, and J. Ponce, "Sparse texture representation using affine-invariant neighborhoods," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2003.
6. D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, Vol. 60, 2004.
7. S. Belongie, J. Malik, and J. Puzicha, "Shape matching and object recognition using shape contexts," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, 2002.
8. K. Mikolajczyk and C. Schmid, "A performance evaluation of local descriptors," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, 2005.
9. P. Moreels and P. Perona, "Evaluation of features detectors and descriptors based on 3D objects," in Proceedings of IEEE International Conference on Computer Vision, 2005.
10. Y. Ke and R. Sukthankar, "PCA-SIFT: A more distinctive representation for local image descriptors," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2004.
11. H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features," in Proceedings of European Conference on Computer Vision, 2006.
12. M. Heikkilä, M. Pietikäinen, and C. Schmid, "Description of interest regions with local binary patterns," Pattern Recognition, Vol. 42, 2009.
13. C. R. Huang, C. S. Chen, and P. C. Chung, "Contrast context histogram - an efficient discriminating local descriptor for object recognition and image matching," Pattern Recognition, Vol. 42, 2008.
14. K. Mikolajczyk, T. Tuytelaars, C. Schmid, et al., "A comparison of affine region detectors," International Journal of Computer Vision, Vol. 65, 2005.
15. K. Mikolajczyk and C. Schmid, "Scale and affine invariant interest point detectors," International Journal of Computer Vision, Vol. 60, 2004.
16. Z. Chen and S. K. Sun, "A Zernike moment phase-based descriptor for local image representation and matching," IEEE Transactions on Image Processing, Vol. 19, 2010.
17. J. Sivic and A. Zisserman, "Video Google: A text retrieval approach to object matching in videos," in Proceedings of IEEE International Conference on Computer Vision, 2003.
18. J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman, "Object retrieval with large vocabularies and fast spatial matching," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2007.
19. X. Hou and L. Zhang, "Saliency detection: A spectral residual approach," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2007.

Congxin Liu (刘) received the B.S. degree from Wuhan University of Hydraulic and Electrical Engineering (Yichang), China, in 1997, and the M.S. degree from Three Gorges University, China, in 2004. He is currently a Ph.D. student at Shanghai Jiao Tong University. His research interests include local invariant features and image matching.

Jie Yang received a Ph.D. degree in Computer Science from the University of Hamburg, Germany, in 1994. Dr. Yang is now a Professor at the Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University. He has taken charge of many research projects and published numerous journal papers. His major research interests are image retrieval, object detection and recognition, data mining, and medical image processing.

Hai Huang received his Ph.D. degree from the Department of Computer Science and Engineering, Shanghai Jiao Tong University, China, in 2006. He is currently an Assistant Professor at Zhejiang Sci-Tech University, Hangzhou, China. His research interests include cryptography, information security, and digital watermarking.
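The conclusion's central idea - partition the phase space of intensity changes into regions and quantize those relationships by building histograms over the regions from densely sampled gradients - can be illustrated with a minimal sketch. This is not the authors' P-SURF implementation: the polar partition, the 2x2 spatial grid, and the 8 phase-space regions are illustrative assumptions chosen for brevity.

```python
import numpy as np

def phase_space_descriptor(patch, grid=2, bins=8):
    """Illustrative sketch: each pixel's gradient pair (dx, dy) is a point
    in phase space; a polar partition assigns it to one of `bins` regions,
    and a magnitude-weighted histogram is built per spatial subregion."""
    dy, dx = np.gradient(patch.astype(float))      # densely sampled gradients
    angle = np.arctan2(dy, dx) % (2 * np.pi)       # position in phase space
    mag = np.hypot(dx, dy)                         # strength of the intensity change
    region = np.floor(angle / (2 * np.pi / bins)).astype(int)
    region = np.clip(region, 0, bins - 1)          # guard against angle == 2*pi
    h, w = patch.shape
    desc = []
    for i in range(grid):                          # histogram per spatial subregion
        for j in range(grid):
            sub = (slice(i * h // grid, (i + 1) * h // grid),
                   slice(j * w // grid, (j + 1) * w // grid))
            hist = np.bincount(region[sub].ravel(),
                               weights=mag[sub].ravel(), minlength=bins)
            desc.append(hist)
    v = np.concatenate(desc)                       # length: grid * grid * bins
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

# A vertical intensity ramp: every gradient falls into the same phase-space region.
patch = np.outer(np.arange(16), np.ones(16))
d = phase_space_descriptor(patch)
```

On the ramp patch, all gradient energy concentrates in one region of every subregion's histogram, showing how the partition quantizes a single dominant relationship between the intensity changes.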


More information

Face Recognition using SURF Features and SVM Classifier

Face Recognition using SURF Features and SVM Classifier International Journal of Electronics Engineering Research. ISSN 0975-6450 Volume 8, Number 1 (016) pp. 1-8 Research India Publications http://www.ripublication.com Face Recognition using SURF Features

More information

Object Recognition with Invariant Features

Object Recognition with Invariant Features Object Recognition with Invariant Features Definition: Identify objects or scenes and determine their pose and model parameters Applications Industrial automation and inspection Mobile robots, toys, user

More information

Evaluation of the Influence of Feature Detectors and Photometric Descriptors in Object Recognition

Evaluation of the Influence of Feature Detectors and Photometric Descriptors in Object Recognition Department of Numerical Analysis and Computer Science Evaluation of the Influence of Feature Detectors and Photometric Descriptors in Object Recognition Fredrik Furesjö and Henrik I. Christensen TRITA-NA-P0406

More information

A Keypoint Descriptor Inspired by Retinal Computation

A Keypoint Descriptor Inspired by Retinal Computation A Keypoint Descriptor Inspired by Retinal Computation Bongsoo Suh, Sungjoon Choi, Han Lee Stanford University {bssuh,sungjoonchoi,hanlee}@stanford.edu Abstract. The main goal of our project is to implement

More information

More effective image matching with Scale Invariant Feature Transform

More effective image matching with Scale Invariant Feature Transform More effective image matching with Scale Invariant Feature Transform Cosmin Ancuti *, Philippe Bekaert * Hasselt University Expertise Centre for Digital Media Transnationale Universiteit Limburg- School

More information

Local Features Tutorial: Nov. 8, 04

Local Features Tutorial: Nov. 8, 04 Local Features Tutorial: Nov. 8, 04 Local Features Tutorial References: Matlab SIFT tutorial (from course webpage) Lowe, David G. Distinctive Image Features from Scale Invariant Features, International

More information

Local features and image matching. Prof. Xin Yang HUST

Local features and image matching. Prof. Xin Yang HUST Local features and image matching Prof. Xin Yang HUST Last time RANSAC for robust geometric transformation estimation Translation, Affine, Homography Image warping Given a 2D transformation T and a source

More information

Key properties of local features

Key properties of local features Key properties of local features Locality, robust against occlusions Must be highly distinctive, a good feature should allow for correct object identification with low probability of mismatch Easy to etract

More information

A System of Image Matching and 3D Reconstruction

A System of Image Matching and 3D Reconstruction A System of Image Matching and 3D Reconstruction CS231A Project Report 1. Introduction Xianfeng Rui Given thousands of unordered images of photos with a variety of scenes in your gallery, you will find

More information

SIFT: Scale Invariant Feature Transform

SIFT: Scale Invariant Feature Transform 1 / 25 SIFT: Scale Invariant Feature Transform Ahmed Othman Systems Design Department University of Waterloo, Canada October, 23, 2012 2 / 25 1 SIFT Introduction Scale-space extrema detection Keypoint

More information