Image Feature Evaluation for Contents-based Image Retrieval


Adam Kuffner and Antonio Robles-Kelly

Department of Theoretical Physics, Australian National University, Canberra, Australia
Vision Science, Technology and Applications (VISTA), NICTA, Canberra, Australia
Department of Information Engineering, Australian National University, Canberra, Australia

(National ICT Australia is funded by the Australian Government's Backing Australia's Ability initiative, in part through the Australian Research Council. Copyright © 2006, Australian Computer Society, Inc. This paper appeared at the HCSNet Workshop on the Use of Vision in HCI (VisHCI 2006), Canberra, Australia. Conferences in Research and Practice in Information Technology (CRPIT), Vol. 56. R. Goecke, A. Robles-Kelly & T. Caelli, Eds. Reproduction for academic, not-for-profit purposes permitted provided this text is included.)

[Figure 1: Diagram of our voting score recovery scheme.]

Abstract

This paper is concerned with feature evaluation for content-based image retrieval. Here we concentrate our attention on the evaluation of image features amongst three alternatives, namely the Harris corner detector, the maximally stable extremal regions and the scale invariant feature transform. To evaluate these image features in a content-based image retrieval setting, we have used the KD-tree algorithm. We use the KD-tree algorithm to match the features corresponding to the query image with those recovered from the images in the data set under study. With the matches at hand, we use a nearest-neighbour approach to threshold the Euclidean distances between pairs of corresponding features. In this way, the retrieval is such that those features whose pairwise distances are small vote for a retrieval candidate in the data set. This voting scheme allows us to arrange the images in the data set in order of relevance and permits the recovery of measures of performance for each of the three alternatives. In our experiments, we focus on the evaluation of the effects of scaling and rotation on the retrieval performance.

Introduction

Contents-based retrieval in image databases is a daunting and often costly task. As in a traditional database, the image must be described in order to be incorporated into the database and form part of the index. The engine of the database uses the index to quick-search the most probable candidates and then, making use of a similarity measure, retrieves the candidate images in order of relevance. Image retrieval has been a long-standing problem in computer vision and pattern recognition, and early surveys can be traced back to the mid-eighties (Tamura & Yokoya 1984). Nonetheless, one of the first attempts to cast the problem of retrieving images from a database as a task based upon content was that introduced in (Sclaroff & Pentland 1993). Here, Sclaroff and Pentland presented a method in which the user is allowed to provide a search model, such as a sketch or example image, so as to perform a query whose output is a set of thumbnails ordered by relevance. In their model, the concept of relevance implies similarity, which is modeled as a continuous value between zero and unity. This measure is often modeled as a metric based on image features such as colours, corners, edges, etc. Following this trend, automatic image database systems use elementary features in the image to index and retrieve the candidates from the database. For instance, QBIC (Niblack et al. 1993) allows images to be retrieved using shape, colour and texture.
FourEyes (Picard 1995) employs a high-level image feature processing scheme to modify the structure of the database and the retrieval parameters. Photobook (Pentland, Picard & Sclaroff 1994) is a collection of tools to search and organise image datasets. SQUID (Shape Queries Using Image Databases) (Farzin & Kittler 1996) uses a scale-space representation of shape so as to accomplish queries based upon contour similarity. The main argument levelled against these systems concerns their lack of robustness to rotation and occlusion. Also, they often require a human expert to determine the parameters of the search criteria. As a result, retrieval methods vary greatly from one application to another, and currently available image database systems make use of a hash or index to retrieve the images. Furthermore, the results of the query operation and the performance of retrieval applications rely heavily on an appropriate selection of the image features used to characterise the images under study. As a result, there has recently been an increasing interest in the evaluation of image features and descriptors computed from interest points on the image. For instance, Carneiro and Jepson (Carneiro & Jepson 2002) have used Receiver Operating Characteristics (ROC) to compare test query descriptors against a library of reference descriptors computed from a separate dataset. Mikolajczyk and Schmid (Mikolajczyk & Schmid 2005) have evaluated a number of local descriptors in the context of matching and recognition. Mikolajczyk et al. (Mikolajczyk, Tuytelaars, Schmid, Zisserman, Matas, Schaffalitzky, Kadir & Gool n.d.) have taken this analysis further and evaluated local descriptors subject to affine transformations. In a series of related developments, Randen and Husoy (Randen & Husoy 1999) and Varma and Zisserman (Varma & Zisserman 2003) have compared different local image descriptors, also known as filters, for texture classification.

[Figure 2: Sample views for the 20 objects in the Columbia University COIL-20 database.]

Image Feature Evaluation and Contents-based Image Retrieval

In contrast with other studies elsewhere in the literature, here we compare the performance of local descriptors for purposes of contents-based image retrieval. Rather than evaluating the capabilities of the image features to describe the scene subject to affine transformations, we focus on the adequacy of the local descriptors for contents-based image retrieval and compare them using the same evaluation criteria and experimental vehicle. To this end, we have selected three alternatives which have previously shown good performance in such a context and evaluate the retrieval rate under viewpoint rotation and image scaling. As mentioned earlier, in this paper we aim to evaluate the adequacy for content-based image retrieval of three feature recovery methods. These are the Harris corner detector (Harris & Stephens 1988), the Maximally Stable Extremal Regions (MSER) (Matas, Chum, Martin & Pajdla 2002) and the Scale Invariant Feature Transform (SIFT) (Lowe 2004). Thus, we have divided the section into two parts. The first of these concerns an overview of the local image descriptors used as alternatives for the recognition process. The second of these introduces the retrieval scheme used for purposes of the evaluation presented in this paper.

Retrieval Process

Having provided an overview of the image features to be evaluated, we now present the image retrieval scheme used throughout the paper for the purposes of evaluating the suitability of the local descriptors above for content-based image retrieval. The diagrammatic representation of our retrieval scheme is shown in Figure 1. Our method recovers, at input, the features for both the images in the database and the query image. With the image features at hand, we recover the correspondences between features in the query and each of the data images making use of the KD-tree algorithm (Bentley 1975). These correspondences are an equivalence relation which we use to recover a score that depicts the similarity between the query and the data images. This score is based upon the Euclidean distances between pairs of corresponding features.

Being more formal, consider the query image I_Q and the data image I_D whose respective feature sets are Ω_Q = {ω_Q(1), ω_Q(2), ..., ω_Q(|Ω_Q|)} and Ω_D = {ω_D(1), ω_D(2), ..., ω_D(|Ω_D|)}, where ω_Q(i) and ω_D(i) are the i-th feature vectors for the model and the data images. If there is a match between the feature vectors ω_Q(i) and ω_D(j), their squared Euclidean distance r(ω_Q(i), ω_D(j)) can be used to recover the score

    β = |Γ| / max{|Ω_Q|, |Ω_D|}    (1)

where Γ is the set of feature vectors recovered from the data image I_D whose pairwise distances with respect to their matching query-image features are below a given threshold ε, i.e. ω_D(j) ∈ Γ if r(ω_Q(i), ω_D(j)) ≤ ε. As a result, if β = 1, the feature vectors recovered from the query image are all sufficiently close to the features in the data image. Furthermore, the number of features for the query and data images must then be equal, i.e. |Ω_Q| = |Ω_D|. On the other hand, if β tends to zero, then the number of feature vectors in the query image which are far apart from those features in the data image to which they have been matched is large.
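As a concrete illustration of this score, the minimal sketch below computes β for a single query/data image pair, assuming each image is summarised by a NumPy array of stacked feature vectors and using SciPy's cKDTree for the nearest-neighbour matching step; the function name, the array layout and the handling of duplicate matches are our own choices rather than the authors'.

```python
# Minimal sketch of the voting score in Equation (1), assuming each image is
# summarised by an (n, d) array of feature vectors.  Names are illustrative.
import numpy as np
from scipy.spatial import cKDTree


def voting_score(query_feats, data_feats, eps):
    """beta = |Gamma| / max(|Omega_Q|, |Omega_D|), with eps a threshold on
    the squared Euclidean distance between matched feature vectors."""
    if len(query_feats) == 0 or len(data_feats) == 0:
        return 0.0
    # KD-tree over the data-image features; each query feature is matched to
    # its nearest neighbour in the data image (the correspondence step).
    tree = cKDTree(data_feats)
    dist, idx = tree.query(query_feats, k=1)
    # Gamma: the set of data-image features whose squared distance to their
    # matching query feature falls below the threshold eps.
    gamma = np.unique(idx[dist ** 2 <= eps]).size
    return gamma / max(len(query_feats), len(data_feats))
```

Ranking the images in the data set by decreasing β then gives the order of relevance used throughout the experiments.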
β is therefore a normalised voting score which can be viewed as a similarity measure between the query and the data image. Furthermore, β captures the similarity between images based upon their features. Thus, the retrieval is based upon the image contents.

To construct the feature vectors ω_Q(i) and ω_D(i) we have considered the nature of each of the three alternatives evaluated here. By construction, the feature vectors are a set of parameters that describe the feature under study. For instance, recall that the Harris corner detector finds specific corners on objects and features within a grey-scale image. It does this by taking an image and convolving it with a Sobel gradient filter to produce gradient maps, which are then used to compute the locally averaged moment matrix. It then combines the eigenvalues of the moment matrix to compute a corner strength, whose maximum values indicate the corner positions. Thus, in our experiments, our feature vector corresponds to the x and y coordinates on the image plane for the detected corners. In the case of the MSER, the algorithm operates on a grey-scale image by finding the regions that are maximally stable with respect to changes in pixel intensities. With the MSERs at hand, we fit an ellipse to each of the recovered regions making use of the algorithm of Fitzgibbon, Pilu and Fisher (Fitzgibbon, Pilu & Fisher 1999), which fits ellipses to the recovered regions so as to minimise the sum of squared algebraic distances. Thus, for the MSER, our feature vector is a five-dimensional one comprised of the centroid coordinates, orientation and major and minor axis lengths of the ellipse fitted to each of the maximally stable regions. In contrast with the Harris corners and the MSERs, where the feature vector is based upon the geometric interpretation of the output of the feature recovery method, in the case of the SIFT we make use of the 128-element descriptor yielded by the method of Lowe (Lowe 2004). The SIFT recovers and characterises points invariant to scaling making use of a four-stage cascading filter approach which commences with a scale-space extrema detection. This first step consists of a difference-of-Gaussians, which is used to identify potential interest points. Keypoint localisation is then used to eliminate previously calculated keypoints that either have low contrast or are poorly localised along an edge. After recovering keypoint orientations, local gradient data is used to construct a descriptor for each of the recovered keypoints.
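To make the three constructions concrete, the sketch below shows one way they could be reproduced with off-the-shelf OpenCV detectors. The paper does not state which implementations were used, so this is only an approximation under stated assumptions: cv2.fitEllipse stands in for the direct least-squares fit of Fitzgibbon et al., and all function names and parameter settings are illustrative.

```python
# Hedged sketch of the three feature-vector constructions described above,
# built on OpenCV as a stand-in for the original (unspecified) implementations.
import cv2
import numpy as np


def harris_features(gray):
    # Feature vector per corner: its (x, y) coordinates on the image plane.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=500, qualityLevel=0.01,
                                      minDistance=5, useHarrisDetector=True,
                                      k=0.04)
    return corners.reshape(-1, 2) if corners is not None else np.empty((0, 2))


def mser_features(gray):
    # Five-dimensional vector per region: ellipse centroid, orientation and
    # axis lengths.  cv2.fitEllipse approximates the Fitzgibbon-style fit.
    regions, _ = cv2.MSER_create().detectRegions(gray)
    feats = []
    for pts in regions:
        if len(pts) < 5:                      # fitEllipse needs >= 5 points
            continue
        (cx, cy), (d1, d2), angle = cv2.fitEllipse(pts)
        feats.append([cx, cy, angle, max(d1, d2), min(d1, d2)])
    return np.array(feats) if feats else np.empty((0, 5))


def sift_features(gray):
    # 128-element SIFT descriptor per keypoint (requires OpenCV >= 4.4).
    _, descriptors = cv2.SIFT_create().detectAndCompute(gray, None)
    return descriptors if descriptors is not None else np.empty((0, 128))
```

Note that the three alternatives yield feature vectors of different dimensionality, so the voting score of Equation (1) is computed separately for each of them.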

Results

As mentioned earlier, our aim here is the evaluation of the three feature descriptors above for purposes of contents-based image retrieval. Thus, in this section, we assess the quality of the retrieval results using, as an experimental vehicle, the Columbia University COIL-20 database and an in-house acquired database of urban scenes. To evaluate the effect of scaling on the performance of the image retrieval operation, we have performed experiments with four image scales ϕ, which correspond to 25%, 50%, 75% and 100% of the image size, i.e. ϕ = {0.25, 0.5, 0.75, 1}. For each of the views, our feature set is comprised of the feature vectors recovered by each of the three alternatives under study.

[Figure 3: Retrieval results for the three alternatives of image feature vectors and four different values of the scaling factor ϕ.]

Columbia University COIL-20 Database

The COIL-20 database contains 72 views for each of 20 objects, acquired by rotating the object under study about the vertical axis. The scaling of the views and this rotation account for the affine transformations mentioned earlier. In Figure 2, we show sample views for each of the objects in the database. For our feature-based image retrieval experiments, we have removed 10 out of the 72 views for each object, i.e. one in every seven views. These views are our query images. The remaining views in the database constitute our data set, i.e. 62 views per object. For each of our query views, we retrieve the four images from the data set which amount to the highest values of the voting score β. Ideally, this scheme should select the four data views indexed immediately before and after the query view. In other words, the correct retrieval results for the query view indexed i are those views indexed i-2, i-1, i+1 and i+2. This scheme allows us to use the number of correctly recovered views as a measure of the accuracy of the matching algorithm and, hence, lends itself naturally to the performance assessment task in hand. In Figure 3, we show, for each scale and image feature alternative, i.e. the Harris corner, MSER and SIFT descriptors, the mean retrieval rate as a function of object index. We also indicate, using error bars, the standard error for the ten query images per object. In the plots, we have used the indexing provided in Figure 2.
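The per-query accuracy measure described above can be sketched as follows; the helper assumes the voting_score() function from the earlier sketch and a hypothetical data-set layout of (view index, feature array) pairs, and illustrates the scoring rule rather than the authors' code.

```python
# Illustrative sketch of the per-query retrieval accuracy: rank the data views
# by beta, keep the top four and count how many lie within two view indices of
# the query.  Layout and names are hypothetical, not taken from the paper.
def retrieval_rate(query_feats, query_index, data_set, eps, top_k=4):
    """data_set is a list of (view_index, feature_array) pairs."""
    ranked = sorted(data_set,
                    key=lambda item: voting_score(query_feats, item[1], eps),
                    reverse=True)
    retrieved = [view_index for view_index, _ in ranked[:top_k]]
    correct = {query_index - 2, query_index - 1,
               query_index + 1, query_index + 2}
    return sum(view_index in correct for view_index in retrieved) / top_k
```

Averaging this quantity over the ten query views of each object yields the mean retrieval rate reported in the plots, with the standard error computed over the same ten queries.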

Urban-scene Database

Having presented evaluation results on the COIL-20 database, we now turn our attention to a more challenging setting. This is provided by a database of urban scenes. The database contains 120 views for 10 scenes, acquired by rotating the camera about its vertical axis from 0° to 66° in steps of 6°, i.e. 12 views per scene. This viewpoint rotation, in conjunction with the scaling operations on the imagery, accounts for the affine transformations of the scene under study. In Figure 4, we show sample views for each of the scenes in the database.

[Figure 4: Sample views for the 10 scenes in our database.]

[Figure 5: Retrieval results for the three alternatives of image feature vectors and four different values of the scaling factor ϕ.]

For our feature-based image retrieval experiments, we have followed an approach akin to that employed on the COIL-20 database. At this point, it is worth noting that, since our images are true-colour ones, we have converted them into grey-scale. After performing this conversion as a preprocessing step, we have removed 4 out of the 12 views for each scene. This amounts to one in every three views. We use the excised views as query images, whereas the remaining 80 images in the database constitute our data set.

As done previously, we retrieve the four images from the data set which amount to the highest values of the voting score β for each of the query views. Again, the correct retrieval results for the query images are those immediately before and after the view of reference. This scheme, being consistent with the one used to assess the performance of the image retrieval results on the COIL-20 database, not only permits the direct association of the number of correctly recovered views with the accuracy measures computed from our experiments, but also allows a direct comparison between the datasets used in both parts of our quantitative study. In Figure 5, we repeat the sequence in Figure 3 for our urban-scene database. The plots show the mean and standard deviation for the retrieval rates as a function of object index and image scale ϕ. In the plots, we have used the indexing provided in Figure 4.

Discussion

From the plots, we can conclude that the best performance, in terms of mean retrieval rate, is given by the Harris corner detector. This is regardless of the scaling factor ϕ. Despite providing a margin of improvement in terms of performance with respect to the alternatives, the standard error for the Harris corner detector is the largest, as is the variation in mean retrieval rate between scales. It is also worth noting that the SIFT is scale invariant, as claimed in (Lowe 2004). Thus, for the SIFT, the retrieval rate variation between scales is small, although the retrieval rate itself is the lowest of the three alternatives. This may be due to the fact that the SIFT can be affected by viewpoint rotations. Finally, the MSERs show a sensitivity to scale and rotation which delivers a mean retrieval rate and standard error that are half-way between those of the Harris corner detector and the SIFT. This is in accordance with the stability assumption on the regions recovered by the algorithm in (Matas et al. 2002).

Conclusions

In this paper, we have presented an evaluation of three local image descriptors for purposes of contents-based image retrieval. In our experiments, we have accounted for the effects of rotation and scaling transformations on the retrieval rate and the standard error. Although the evaluation presented here is not exhaustive, our experimental setting is quite general and can be easily extended to other descriptors elsewhere in the literature. Furthermore, the KD-tree algorithm used here may be substituted with other relational matching algorithms for purposes of further comparison and evaluation.

References

Bentley, J. (1975), Multidimensional binary search trees used for associative searching, Communications of the ACM 18(9).

Carneiro, G. & Jepson, A. (2002), Phase-based local features, in European Conference on Computer Vision, Vol. I.

Farzin, S. A. M. & Kittler, J. (1996), Robust and efficient shape indexing through curvature scale space, in Proceedings of the 7th British Machine Vision Conference.

Fitzgibbon, A., Pilu, M. & Fisher, R. B. (1999), Direct least square fitting of ellipses, IEEE Trans. on Pattern Analysis and Machine Intelligence 21(5), 476-480.

Harris, C. J. & Stephens, M. (1988), A combined corner and edge detector, in Proc. 4th Alvey Vision Conference, pp. 147-151.

Lowe, D. (2004), Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision 60(2), 91-110.

Matas, J., Chum, O., Martin, U. & Pajdla, T. (2002), Robust wide baseline stereo from maximally stable extremal regions, in Proceedings of the British Machine Vision Conference, pp. 384-393.

Mikolajczyk, K. & Schmid, C. (2005), A performance evaluation of local descriptors, IEEE Trans. on Pattern Analysis and Machine Intelligence 27(10), 1615-1630.

Mikolajczyk, K., Tuytelaars, T., Schmid, C., Zisserman, A., Matas, J., Schaffalitzky, F., Kadir, T. & Gool, L. V. (n.d.), A comparison of affine region detectors. Submitted to the International Journal of Computer Vision.

Niblack, W. et al. (1993), The QBIC project: Querying images by content using color, texture and shape, in Proc. SPIE Conference on Storage and Retrieval for Image and Video Databases, Vol. 1908.

Pentland, A. P., Picard, R. W. & Sclaroff, S. (1994), Photobook: tools for content-based manipulation of image databases, in Storage and Retrieval for Image and Video Databases, Vol. II.

Picard, R. W. (1995), Light-years from Lena: Video and image libraries of the future, in International Conference on Image Processing.

Randen, T. & Husoy, J. H. (1999), Filtering for texture classification: A comparative study, IEEE Trans. on Pattern Analysis and Machine Intelligence 21(4), 291-310.

Sclaroff, S. & Pentland, A. (1993), A modal framework for correspondence and description, in Proceedings of the International Conference on Computer Vision, pp. 308-313.

Tamura, H. & Yokoya, N. (1984), Image database systems: a survey, Pattern Recognition 17(1), 29-43.

Varma, M. & Zisserman, A. (2003), Texture classification: Are filter banks necessary?, in Conference on Computer Vision and Pattern Recognition, Vol. II.
