EE368 Project Report: CD Cover Recognition Using a Modified SIFT Algorithm


Group 1: Mina A. Makar
Stanford University

Abstract

In this report, we investigate the application of the Scale-Invariant Feature Transform (SIFT) to the problem of CD cover recognition. The algorithm uses a modified SIFT approach to match keypoints between the query image taken by a camera phone and the original database of CD covers. The extracted features are highly distinctive, as they are shift, scale and rotation invariant. They are also partially invariant to illumination and affine transformations. All these properties make them very suitable for the problem at hand. Experimental results show the efficient performance of the developed algorithm in terms of recognizing all 99 images provided in the training set.

I. Introduction

As the number of mobile phones with built-in cameras increases, applications based on Mobile Augmented Reality become more and more attractive. The problem described in this report is CD cover recognition from a picture taken by a camera phone. This idea can be used in marketing, where interaction between the user and the CD producer becomes possible: the producer recognizes an image sent from the user's mobile phone and sends back advertising material. In this report, we apply a slightly modified version of the Scale-Invariant Feature Transform (SIFT) to solve the problem.

In Section II, we discuss initial experiments that suggested using local robust features to solve the problem. Then, we present the whole algorithm developed to suit the given image database and training set images. Section III describes the SIFT algorithm in more detail and explains the modifications made to increase simplicity. In Section IV, the procedure of matching an image to the database is discussed. Section V presents the results of applying our algorithm to the training set, consisting of 99 images where 90 images have matching CDs in the database and 9 images have no matches. We conclude in Section VI by reviewing the benefits of the whole technique.

II. Algorithm Selection & Overview

A. Comparing Two Solutions

A survey was made of possible techniques that can be used to solve the CD cover recognition problem. We found [1] two solutions that appeared very suitable for it. The first is to use eigenimages in order to project the images onto a lower-dimensional space and then identify new images by their distances, in this space, to the images in the database. The second approach is to use local robust features as representatives of each image and do the matching based on these features.

We studied the images in the training set. We found that the main deformations present are:
1. Rotation that may reach up to 40º, and affine or projective transformation.
2. Noise, motion blur, defocus and spurious glare/reflection from the CD cover.
3. Sometimes the CD is occluded at the edge by the hand holding it.

There are also some common good characteristics that apply to all the images:
1. The CD is always in the middle of the image.
2. The images are high resolution (1280 x 960) compared to typical images taken by cell phones.
3. There is no big variation in scale between the areas containing CD covers across the images.

Using eigenimages is much simpler and faster, but we would have to warp the image, undoing any affine or projective transformation, before projecting it onto the lower-dimensional space. So, the first step would be to accurately segment the CD cover from the whole image.
Some experiments were made using different edge detection algorithms [2,3] followed by line detection with the Hough transform [1,3]. The best results worked well for most CD covers, but there were always some CD covers that failed the segmentation process. This happened when the background clutter was high. Since we can have any background in our problem, the segmentation approach was rejected because it may become unreliable in some cases. Fig. 1 shows two images of the same CD from the training set, where the segmentation algorithm we developed gives very good results for the first and unacceptable results for the second.

Fig. 1. Results of segmentation algorithm: (a) good performance, (b) bad performance
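As an illustration of the kind of edge-plus-Hough experiment described above, the following MATLAB sketch uses standard Image Processing Toolbox calls. The input file name, the choice of the Canny detector and the line-detection parameters are assumptions for illustration; the report does not state which detectors or settings were actually tried.

```matlab
% Hypothetical sketch of the rejected segmentation experiment:
% edge detection followed by Hough line detection.
I = rgb2gray(imread('query.jpg'));        % assumed input file name
E = edge(I, 'canny');                     % edge map of the scene
[H, theta, rho] = hough(E);               % Hough transform of the edge map
P = houghpeaks(H, 8);                     % 8 strongest line candidates (arbitrary choice)
lines = houghlines(E, theta, rho, P, ...
                   'FillGap', 20, 'MinLength', 100);
% Ideally the four dominant lines bound the CD cover, but with heavy
% background clutter spurious lines appear and the bounding quadrilateral
% cannot be recovered reliably, which is why this approach was dropped.
```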

This suggested turning to the other solution of using local robust features. Of course, correct segmentation would make the task of the feature detector much easier, because it would not detect features from the background. However, segmentation here is an optional step: we can still identify the CD cover even when there are some wrong features from the background.

To reduce the number of features from the background, we crop the image, since the CD cover is always found in the center of the image. Too small a cropping area is a problem for large CDs, because many of the features near the edges are deleted. Too large a cropping area is a problem for small CDs, because many wrong features from the background remain. By experiment, we settled on cropping the center part of the query image to a size of 750 x 750 pixels. Also, we resize the image (whether a database image or the cropped query image) to 400 x 400 pixels, which is the size for which we designed the feature detection algorithm; more detail about this choice is given in Section V. We also convert the image to grayscale, because we do not make use of color information. After that, we apply the SIFT algorithm [4,5]: we detect the robust keypoints, generate a feature descriptor for each keypoint, and compare the feature descriptors of the query image to those of the database images to find the closest match. Fig. 2 is a block diagram showing an overview of the whole algorithm.

Fig. 2. Algorithm block diagram

III. Modified SIFT

In this section, we describe the SIFT algorithm [4,5] in more detail. We also state the modifications that were made to increase simplicity. Since we have a small database of only 30 CDs, these simplifications were quite reasonable and did not affect the accuracy of the algorithm.

A. Detection of the Keypoints

The first step in the SIFT algorithm is the detection of keypoints, which should be well localized and robust to image deformations. To do so, a Gaussian image pyramid L(x, y, σ) is generated by successively filtering the image I(x, y) with a Gaussian filter G(x, y, σ) according to equation (1). At every octave, which corresponds to a factor of 2 change in σ, the image is downsampled by 2. Adjacent Gaussian images are subtracted to produce the difference-of-Gaussian (DoG) images (equation (2)), which approximate Laplacian-of-Gaussian filtering. This process is shown in Fig. 3.

G(x, y, σ) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²))
L(x, y, σ) = G(x, y, σ) * I(x, y)                    (1)
D(x, y, σ) = L(x, y, kσ) − L(x, y, σ)                (2)

Fig. 3. Gaussian & DoG pyramids (Source: Reference 5)

We have a small image database, so we do not need a large number of keypoints for each image. Also, the difference in scale between large and small CDs is not very big. So, we decided to use only 2 octaves and to detect keypoints from only one interval per octave. This requires generating 4 images per octave in the Gaussian pyramid, so that the DoG pyramid has 3 images per octave and local maxima and minima can be detected. Fig. 4 shows the generated pyramids for the database image 'Sheryl_Crow_Light_Eyes'.

Fig. 4. (a) Gaussian pyramid and (b) DoG pyramid for CD number 3
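The preprocessing of Section II and the 2-octave pyramid construction of equations (1) and (2) can be sketched in MATLAB as follows. The input file name, the base scale σ0 and the factor k between adjacent images are illustrative assumptions; the report does not specify these values.

```matlab
% Sketch of preprocessing plus the 2-octave Gaussian / DoG pyramids.
% Assumes the input image is larger than 750x750 (e.g. 1280x960).
I = im2double(rgb2gray(imread('query.jpg')));      % assumed input file
[h, w] = size(I);
r0 = floor((h - 750)/2) + 1;  c0 = floor((w - 750)/2) + 1;
I = imresize(I(r0:r0+749, c0:c0+749), [400 400]);  % center crop, then resize

sigma0 = 1.6;          % base scale (assumed; not stated in the report)
k = sqrt(2);           % scale factor between adjacent images (assumed)
nOctaves = 2;  nPerOctave = 4;

G = cell(nOctaves, nPerOctave);     % Gaussian pyramid, eq. (1)
D = cell(nOctaves, nPerOctave-1);   % DoG pyramid, eq. (2)
img = I;
for o = 1:nOctaves
    for i = 1:nPerOctave
        sigma = sigma0 * k^(i-1);
        hsize = 2*ceil(3*sigma) + 1;                     % support ~ 3 sigma
        g = fspecial('gaussian', hsize, sigma);
        G{o,i} = imfilter(img, g, 'replicate', 'same');  % L = G * I
        if i > 1
            D{o,i-1} = G{o,i} - G{o,i-1};                % difference of Gaussians
        end
    end
    % Downsample by 2 for the next octave; which Gaussian image is carried
    % over is an implementation choice not specified in the report.
    img = imresize(G{o,nPerOctave}, 0.5);
end
```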

To be accepted, a keypoint has to pass three tests. The main test is that it has to be a local maximum or a local minimum with respect to its 26 neighbors in the 3x3 regions at the current and adjacent scales in the DoG pyramid. For local extrema detection, we followed the idea described in [6]: we build dilated and eroded versions of the DoG pyramid using a 3x3 flat structuring element. A pixel is then a local maximum if its value is equal to the pixel at the same location in the dilated DoG pyramid and greater than the values immediately above and below it in that pyramid. Local minima are detected the same way from the eroded DoG pyramid.

Instead of performing the scale-space interpolation described in [5] for more accurate localization of the keypoint and for removing keypoints with low contrast, we simply discard keypoints for which the absolute value of the DoG pyramid, at the interval where they are detected, is smaller than a threshold. We follow the same threshold used in [5], which is 0.03.

The final test makes sure that the keypoint does not lie on a strong edge. For this, we use discrete differences between neighboring pixels around the keypoint to calculate the Hessian matrix (equation (3.a)). We then discard keypoints that do not satisfy the condition in equation (3.b). If the ratio in (3.b) is small, the eigenvalues of the Hessian matrix are close to each other, which means that the keypoint lies on a corner and not on an edge. We used the value r = 10 suggested in the SIFT paper [5]. The keypoints detected on the previous CD image are shown in Fig. 5.

H = [ D_xx  D_xy ;  D_xy  D_yy ]                                  (3.a)

Tr(H) = D_xx + D_yy,   Det(H) = D_xx D_yy − (D_xy)²,
Tr(H)² / Det(H) < (r + 1)² / r                                    (3.b)

Fig. 5. Detected keypoints for CD number 3

B. Orientation Assignment

In order for the feature descriptors to be rotation invariant, an orientation is assigned to each keypoint and all subsequent operations are done relative to that orientation. This allows for matching even if the query image is rotated by any angle. To simplify the algorithm, we first tried to skip this step and assume no orientation for all keypoints. When we tested this, it gave wrong results for nearly all the images in which the CD cover is rotated by an angle of 15º to 20º or more. We realized that this step cannot be eliminated.

The scale of the keypoint is used to select the Gaussian-smoothed image, L, with the closest scale. Then, the gradient magnitude and orientation are calculated using equation (4). An orientation histogram with 36 bins is formed from the gradient orientations of sample points within a region around the keypoint. Each sample added to the histogram is weighted by its gradient magnitude and by a Gaussian-weighted circular window. The SIFT paper then suggests locating the highest peak in the histogram and any other local peak that is within 80 percent of the highest peak. In order to decrease the number of keypoints without affecting the accuracy much, we assign only one orientation to each keypoint, corresponding to the peak of the histogram. A parabola is fit to the 3 histogram values closest to the peak to interpolate the peak position for better accuracy.

m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )
θ(x, y) = tan⁻¹( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )        (4)
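A minimal MATLAB sketch of this single-orientation assignment is given below, using the finite differences of equation (4). The square sampling window of radius 8 and the Gaussian weighting width of 4 pixels are assumptions made for illustration; the report does not give these values.

```matlab
% Single-orientation assignment for a keypoint at (r, c) in the Gaussian
% image L closest to the keypoint scale: 36-bin magnitude- and
% Gaussian-weighted histogram, single peak, parabolic interpolation.
function ori = assign_orientation(L, r, c)
    radius = 8;  sigma_w = 4;                  % assumed window parameters
    hist36 = zeros(1, 36);
    for dy = -radius:radius
        for dx = -radius:radius
            y = r + dy;  x = c + dx;
            if y < 2 || x < 2 || y > size(L,1)-1 || x > size(L,2)-1
                continue;                      % skip samples outside the image
            end
            gx = L(y, x+1) - L(y, x-1);        % finite differences, eq. (4)
            gy = L(y+1, x) - L(y-1, x);
            m  = sqrt(gx^2 + gy^2);
            th = atan2(gy, gx);                % orientation in (-pi, pi]
            w  = exp(-(dx^2 + dy^2) / (2*sigma_w^2));
            b  = mod(floor((th + pi) / (2*pi) * 36), 36) + 1;   % 36 bins
            hist36(b) = hist36(b) + w * m;
        end
    end
    [~, p] = max(hist36);                      % keep only the single highest peak
    lft = hist36(mod(p-2, 36) + 1);            % left neighbor (circular)
    rgt = hist36(mod(p,   36) + 1);            % right neighbor (circular)
    den = lft - 2*hist36(p) + rgt;             % parabolic fit to 3 values
    if den ~= 0
        offset = 0.5 * (lft - rgt) / den;
    else
        offset = 0;
    end
    ori = ((p - 0.5) + offset) * (2*pi/36) - pi;   % interpolated peak, radians
end
```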

C. The Feature Descriptor

First, the image gradient magnitudes and orientations are calculated around the keypoint, using the scale of the keypoint to select the level of Gaussian blur for the image. The coordinates of the descriptor and the gradient orientations are rotated relative to the keypoint orientation. Note that after the grid around the keypoint is rotated, we need to interpolate the Gaussian-blurred image around the keypoint at non-integer pixel positions. We found that 2D interpolation in MATLAB takes much time, so for simplicity we always approximate the rotated grid coordinates to integer pixel values. By experiment, we found that this greatly increases the speed while having only a minor effect on the accuracy of the whole algorithm.

The gradient magnitudes are weighted by a Gaussian weighting function with σ equal to one half of the descriptor window width, to give less credit to gradients far from the center of the descriptor. These weighted magnitudes are then accumulated into orientation histograms, each summarizing the content of a 4x4 subregion. Fig. 6 illustrates the whole operation. Trilinear interpolation is used to distribute the value of each gradient sample into adjacent bins. The descriptor is formed from a vector containing the values of all the orientation histogram entries. The algorithm uses a 4x4 array of histograms with 8 orientation bins each, resulting in a feature vector of 128 elements. The feature vector is then normalized to unit length to reduce the effect of illumination change. The values in the unit-length vector are thresholded at 0.2 and the vector is renormalized to unit length; this takes care of nonlinear illumination changes.

Fig. 6. 2x2 descriptor array computed from 8x8 samples (Source: Reference 5)

D. Simplifications to SIFT Algorithm

In summary, the main modifications made to the original SIFT algorithm are:
1. We do not perform the scale-space interpolation that is done for more accurate localization of the keypoints.
2. We assign only one orientation to each keypoint, corresponding to the peak of the histogram. We do not search for other local peaks within 80 percent of the highest peak.
3. After we rotate the grid around the keypoint by the orientation angle, we do not do 2D interpolation; we approximate the grid to integer pixel values.
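The descriptor accumulation and normalization of Section III.C can be sketched as follows. The inputs are assumed to be 16x16 arrays of gradient magnitudes and orientations already sampled on the rotated grid; for brevity the sketch assigns each sample to its nearest bin, whereas the report distributes samples with trilinear interpolation.

```matlab
% Simplified sketch of the 4x4x8 descriptor: Gaussian weighting,
% histogram accumulation, normalization, 0.2 clamp, renormalization.
function d = sift_descriptor(mag, ang)
    % Gaussian weighting with sigma = half the 16-sample window width.
    [X, Y] = meshgrid(-7.5:1:7.5, -7.5:1:7.5);
    w = exp(-(X.^2 + Y.^2) / (2 * 8^2));
    mag = mag .* w;

    d = zeros(4, 4, 8);                       % 4x4 subregions, 8 orientation bins
    for r = 1:16
        for c = 1:16
            sr = ceil(r/4);  sc = ceil(c/4);  % which 4x4 subregion
            b  = mod(floor((ang(r,c) + pi) / (2*pi) * 8), 8) + 1;
            d(sr, sc, b) = d(sr, sc, b) + mag(r, c);
        end
    end

    d = d(:) / norm(d(:));                    % unit-length 128-vector
    d = min(d, 0.2);                          % clamp large values at 0.2
    d = d / norm(d);                          % renormalize to unit length
end
```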
IV. Image Matching

For image matching, we saved the feature vectors of the original CD covers. When a query image is presented to the algorithm, the preprocessing steps discussed in Section II are performed first. Then, we use our modified SIFT algorithm to calculate the feature vectors of the query image. For each feature vector of the query image, the minimum Euclidean distance to all the feature vectors in the database is found, and the CD owning that closest feature vector is given a vote to be the right CD. After going over all the feature vectors of the query image and giving votes to CDs in the database, we observed that the right CD is always the one with the highest number of votes.

The problem now is how to identify a 'No Match'. We saw that the 'No Match' query images are in many cases confused with the CDs that have a large number of feature vectors in the database. We therefore decided to compare the highest vote (corresponding to the right CD) with the second highest vote (corresponding to the most conflicting CD). If the difference between them is larger than a threshold, then there is a match, and it corresponds to the highest vote. If the difference is smaller than the threshold, we declare a 'No Match'. For CDs with a large number of feature vectors we use a larger threshold. The rule that we followed is:
1. Detect the highest and the second highest votes.
2. If the difference between them is greater than THRESHOLD, the output is the CD with the highest vote.
3. If not, the output is 'No Match'.
4. THRESHOLD equals 30 for CD numbers 2, 19 and 4, 15 for CD numbers 8 and 14, and 7 for all other CDs.

The values of THRESHOLD were chosen by experiment on training set images both with and without matches. We also tried to perform the ratio test described in [5], but for some images the number of votes that passed the ratio test was too small to be above the threshold, so we did not use the ratio test for image matching.
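The voting procedure and the threshold rule above can be sketched in MATLAB as shown below. The data layout (one descriptor matrix per database CD) and the function name are assumptions made for illustration.

```matlab
% Sketch of voting-based matching with the 'No Match' rule of Section IV.
% dbDesc is a cell array with one 128xN_i descriptor matrix per database
% CD; qDesc is the 128xM descriptor matrix of the query image.
% Returns the matched CD index, or 0 for 'No Match'.
function cd = match_query(qDesc, dbDesc)
    nCD = numel(dbDesc);
    votes = zeros(1, nCD);
    for m = 1:size(qDesc, 2)
        best = inf;  bestCD = 0;
        for c = 1:nCD
            % Euclidean distances from this query descriptor to all
            % descriptors of CD c; keep the overall nearest neighbor.
            d = sqrt(sum(bsxfun(@minus, dbDesc{c}, qDesc(:, m)).^2, 1));
            if min(d) < best
                best = min(d);  bestCD = c;
            end
        end
        votes(bestCD) = votes(bestCD) + 1;     % one vote per query descriptor
    end
    [sorted, order] = sort(votes, 'descend');
    thr = 7 * ones(1, nCD);                    % default threshold
    thr([2 19 4]) = 30;  thr([8 14]) = 15;     % per-CD thresholds from the rule above
    if sorted(1) - sorted(2) > thr(order(1))
        cd = order(1);                         % confident match: highest-voted CD
    else
        cd = 0;                                % 'No Match'
    end
end
```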

V. Results

First, we applied the modified SIFT algorithm to the dataset of original CD covers and saved the feature descriptors in a database for the purpose of image matching. CD 10 has the smallest number of keypoints (9), and CD 2 has the largest number of keypoints (363). The average number of keypoints per CD is 189, which was reasonable for our matching rule. Fig. 7 shows the number of keypoints for every CD in the original CD images dataset. We have to mention here that the experiments and the matching rule were also designed on images with resolution 300 x 300; the highest vote also corresponded to the right CD, with fewer keypoints and much less processing time. The problem was that the margin between the highest and second highest votes was small, so we could not design an efficient rule to detect a 'No Match' condition.

Fig. 7. Number of keypoints for every CD in the original dataset

We applied the image matching procedure described in the previous section to the 99 images in the training set, where 90 have matches in the database (3 for every CD) and 9 have no matches. Fig. 8 shows the difference between the highest vote, which corresponds to the right CD, and the second highest vote. The values in the figure are averages over the 3 images of each CD. The efficiency of the developed algorithm can be seen in the big difference between the highest and second highest votes, which made setting a threshold for 'No Match' easier.

Fig. 8. Average number of right and conflicting votes per CD in the training set

Fig. 9 shows the results of the algorithm, in terms of the highest and second highest votes, when applied to the 'No Match' CDs; the CD number corresponding to the highest vote is shown on the figure. For CDs with no matches, the difference between the highest and second highest votes is much smaller than the design threshold.

Fig. 9. Number of highest and second highest votes for CDs with no matches (the CD number receiving the highest vote is indicated)

We can see from Figures 8 and 9 that our matching rule works very well on the training set images and that all 99 images can be identified correctly using this rule. As a final check on our matching rule and its behavior on non-matches, the algorithm was tested on 10 new CD cover images not present in the training set. The algorithm worked well and always declared a 'No Match'. An example of the 10 new no-match CD covers is shown in Fig. 10.

Fig. 10. Example of a no-match CD cover not in the training set

VI. Conclusion

An algorithm was developed to identify a CD cover image taken by a camera phone. It is based mainly on using SIFT features to match the image to the original CD database. Some modifications were made to increase the simplicity of the SIFT algorithm, and a rule for image matching was designed. Applying the algorithm to the training set, we found that it was always able (100% accuracy) to identify the right CD or to declare a 'No Match' when there is no matching CD. The algorithm was highly robust to scale differences, rotation by any angle, and other artifacts such as noise, motion blur, defocus and reflection from the CD cover. With more tuning of the matching rule and of other parameters of the algorithm, it is likely that we could work on lower-resolution images such as 300 x 300 pixels, or perhaps even smaller, which would make the algorithm much faster and simpler.

References

[1] B. Girod, Lecture Notes for EE 368: Digital Image Processing, Spring 2008.
[2] R.C. Gonzalez and R.E. Woods, Digital Image Processing, Prentice Hall, 2008.
[3] R.C. Gonzalez, R.E. Woods and S.L. Eddins, Digital Image Processing Using MATLAB, Prentice Hall, 2004.
[4] D. Lowe, "Object Recognition from Local Scale-Invariant Features," Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2, pp. 1150-1157, 1999.
[5] D. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[6] G. Hoffmann, P. Kimball and S. Russell, "Identification of Paintings in Camera-Phone Images," EE368 Project Report, Spring 2007.
