Scale Invariant Segment Detection and Tracking


Amaury Nègre, James L. Crowley, and Christian Laugier
INRIA, Grenoble, France
firstname.lastname@inrialpes.fr

Abstract. This paper presents a new feature detector that correctly fits anisotropic, elongated shapes. The detector, which combines ridge and line detection, proves very robust to affine transformations and in particular to changes of scale. A tracking process that exploits the same properties as the segment detector has been studied with the aim of estimating the time-to-contact of detected obstacles in an urban environment. Experimental results show that the method performs well even in complex scenes.

1 Introduction

Visual feature detection and tracking is at the root of many visual robotics tasks such as localization and mapping [2,4] and object recognition [9]. Local interest point detection for image matching began with the Moravec corner detector [11]. This detector was improved by Harris and Stephens [6], who made it more repeatable under image transformations. Nevertheless, scale invariance was not achieved, since detection was performed at a single fixed scale. The scale-space representation of images was studied in [7] and used in [1] to detect peaks and ridges in scale space. Lindeberg [8] also worked on automatic selection of the best scale for feature detection. Lowe [9] proposed an object recognition method based on local 3D extrema in a difference-of-Gaussian pyramid. In [10], another scale invariant interest point detector was developed by combining the Harris and Laplacian detectors.

Two problems remain with these interest point detection algorithms: they are not adapted to anisotropic structures, and they are not centered on the "physical" object they represent. For an elongated object such as a pedestrian or a pole, one would rather obtain a feature centered in the middle of the object, together with parameters describing the object's shape. In this article, we propose an algorithm based on ridge lines [5] to detect scale invariant segments that correctly fit elongated shapes. We then show how to track such segments with a particle filter in order to estimate an object's motion in the 3D scale space, and how to use this tracking to measure the time-to-contact. Next, we experimentally evaluate the quality of our algorithm by detecting and tracking objects in a very simple case and comparing against ground truth, and finally we apply the algorithm to a complex urban environment with a moving-camera video sequence.

2 Scale invariant segment detector

2.1 Algorithm description

Extremal points in a Laplacian scale space have long been known to provide scale invariant feature points. However, if we consider only this criterion, two problems must be dealt with:

1. The Laplacian exhibits local extrema at the center of objects but also on edges, where the notion of characteristic scale is meaningless; such responses must therefore be eliminated.
2. In the case of elongated objects such as poles or pedestrians, detected points can slide anywhere along the object; only the two extremities give stable points, whereas the center of the object is often preferable.

To solve these problems, our method detects segments instead of 2D points in a Laplacian scale space (approximated by a Laplacian pyramid):

1. For each pixel of the Laplacian pyramid, we first remove edge responses by eliminating pixels where the ratio between the Laplacian and gradient values is greater than a threshold.
2. For each pixel, we compute the best segment direction, which corresponds to the ridge direction. As shown in [5], this direction is normal to the principal curvature direction, given by the eigenvector associated with the largest eigenvalue of the Hessian matrix

   H = \begin{pmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} \\ \frac{\partial^2 f}{\partial x \partial y} & \frac{\partial^2 f}{\partial y^2} \end{pmatrix}

   where \frac{\partial^2 f}{\partial x^2}, \frac{\partial^2 f}{\partial x \partial y}, \frac{\partial^2 f}{\partial y^2} are the second derivatives of the image.
3. For each pixel, we search for the length l that maximizes the following score function:

   S(X, l, \vec{u}) = \sum_{k=0}^{l} \left( \frac{|L(X + k\,\vec{u}_X)| + |L(X - k\,\vec{u}_X)|}{2} - \frac{|L(X + k\,\vec{u}_X) - L(X - k\,\vec{u}_X)|}{2} - L_{min} \right)    (1)

   where L(X) is the Laplacian value at pixel X and \vec{u} is the previously computed direction. This score is high when many pixels along the line have a high Laplacian (the term |L(X + k\,\vec{u}_X)| + |L(X - k\,\vec{u}_X)|), and is maximal at the center of the object (the term |L(X + k\,\vec{u}_X) - L(X - k\,\vec{u}_X)| increases near the border). The term L_{min} makes the score decrease when k exceeds the object's characteristic length. Note that the cost of evaluating the score function is only proportional to the maximal searched length (a code sketch of this search is given after this list).
4. We then search for local maxima of this score in the pyramid to obtain the best segments. When the feature is not a perfect line, the local direction of the principal curvature is not strictly aligned with the feature; the score can then be optimized by scanning a few neighboring directions.
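To make steps 2 and 3 concrete, the following Python sketch computes the ridge direction from the Hessian at a pixel and evaluates the score of Equation (1) for increasing lengths on one level of the Laplacian pyramid. The function names, border handling and incremental search are our own illustrative choices (not the paper's GPU implementation), and the score follows our reconstruction of Equation (1).

```python
import numpy as np

def ridge_direction(hessian):
    """Ridge direction at a pixel: normal to the principal curvature
    direction, i.e. orthogonal to the eigenvector of the 2x2 Hessian
    associated with the largest-magnitude eigenvalue."""
    eigvals, eigvecs = np.linalg.eigh(hessian)        # symmetric 2x2 matrix
    principal = np.argmax(np.abs(eigvals))            # principal curvature index
    return eigvecs[:, 1 - principal]                  # the other (orthogonal) eigenvector

def segment_score(laplacian, x, u, l_max, l_min):
    """Evaluate the score S(X, l, u) of Eq. (1) incrementally for
    l = 0..l_max and return the best (score, length).
    `laplacian` is one level of the Laplacian pyramid, `x` the candidate
    centre (row, col), `u` a unit ridge direction, `l_min` the penalty term."""
    h, w = laplacian.shape
    best_score, best_len, score = -np.inf, 0, 0.0
    for k in range(l_max + 1):
        p = np.round(x + k * u).astype(int)
        m = np.round(x - k * u).astype(int)
        if not (0 <= p[0] < h and 0 <= p[1] < w and 0 <= m[0] < h and 0 <= m[1] < w):
            break                                      # segment left the image
        lp, lm = laplacian[p[0], p[1]], laplacian[m[0], m[1]]
        # (|lp|+|lm|)/2 - |lp-lm|/2 equals min(|lp|,|lm|) when both have the
        # same sign: the score grows while both sides stay on the object and
        # drops (via l_min) once k exceeds the characteristic length.
        score += 0.5 * (abs(lp) + abs(lm)) - 0.5 * abs(lp - lm) - l_min
        if score > best_score:
            best_score, best_len = score, k
    return best_score, best_len
```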

A typical example of segment detection in an urban environment is shown in Figure 1. The detected segments are represented by red ellipses: the main axis of each ellipse represents the segment's half-edges, and the width of the ellipse represents the object's scale. The interesting point is that all detected segments are centered on the object they represent, and the detected scales fit the objects' sizes very well.

Fig. 1. Scale invariant segment detector in an urban environment; the detected segments are represented by red ellipses.

2.2 Performance evaluation

To study the stability of segment detection under various transformations, we used the repeatability criterion described in [13]. It consists in detecting all the segments in an original image, applying a known image transformation, and counting the number of new segments that can be associated with the first set. The association between segments considers the center position and the scale. To take into account the fact that we work with a scale-space pyramid, the maximal association distance is proportional to the scale. We compared the segment repeatability with a Harris detector [6] computed at different scales and with the SIFT detector [9]. The results are shown in Figure 2: the segment detector is not as robust to rotation as we expected, but this low performance actually comes from the localization error, which can be large along the ridge direction. As expected, a change of scale does not affect the detector's performance.
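A minimal sketch of this repeatability measurement, assuming each detected segment is summarised by a (center, scale) pair; the warp interface and the tolerance values are illustrative, not those used for the experiments:

```python
import numpy as np

def repeatability(segments_ref, segments_warped, warp_point,
                  scale_factor=1.0, dist_tol=0.5, scale_tol=1.5):
    """Fraction of reference segments found again after a known image
    transformation.  `segments_ref` / `segments_warped` are lists of
    (center, scale) pairs and `warp_point` maps a reference-image point
    into the transformed image.  Two segments are associated when their
    centres are closer than dist_tol times the (larger) scale -- so the
    distance threshold grows with scale, as in a scale-space pyramid --
    and their scales agree up to a factor scale_tol."""
    matched = 0
    for center, scale in segments_ref:
        c = np.asarray(warp_point(np.asarray(center, dtype=float)), dtype=float)
        s = scale * scale_factor                      # expected scale after the transform
        for center_w, scale_w in segments_warped:
            close = np.linalg.norm(c - np.asarray(center_w, dtype=float)) < dist_tol * max(s, scale_w)
            similar = 1.0 / scale_tol < scale_w / s < scale_tol
            if close and similar:
                matched += 1
                break
    return matched / max(len(segments_ref), 1)
```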

Fig. 2. Repeatability of the Harris, SIFT and ridge-segment detectors with respect to rotation and scale change: (a) original test image and detected segments; (b) repeatability versus rotation angle (degrees); (c) repeatability versus scale factor. The segment detector obtains a moderate score under rotation but gives results similar to the SIFT detector under change of scale.

3 Segment tracking and estimation of time-to-contact

3.1 Introduction of the time-to-contact

The time-to-contact τ (also called time-to-collision or time-to-crash) is a crucial piece of information for obstacle avoidance in a dynamic environment.

This measure represents the temporal distance between two actors. It has been shown in [12] that the time-to-contact between the camera and a visible obstacle can be computed in the image space by measuring the variation of the intrinsic scale: if s is the intrinsic scale of the obstacle in the image, then τ can be approximated by

   \tau = \frac{s}{\frac{\partial s}{\partial t}}    (2)

The difficult point with this method is to identify and measure the change of scale of an obstacle in the video sequence. This can be achieved by detecting the obstacle's ridge segments, as explained in the previous section, and tracking these segments in the 3D scale space in order to evaluate the object's motion and change of scale.
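As a worked illustration (the function name and the numbers are ours, not from the paper), Equation (2) can be evaluated with a simple finite difference on two consecutive scale estimates produced by the tracker of the next section:

```python
def time_to_contact(scale_prev, scale_curr, dt):
    """Approximate tau = s / (ds/dt) with a finite difference between two
    consecutive intrinsic-scale estimates separated by dt seconds.
    A positive, decreasing tau means the obstacle is approaching;
    a negative tau means it is getting away."""
    ds_dt = (scale_curr - scale_prev) / dt
    if abs(ds_dt) < 1e-9:            # scale not changing: contact at infinity
        return float('inf')
    return scale_curr / ds_dt

# Example: a segment growing from 20 px to 22 px in 0.1 s gives
# ds/dt = 20 px/s and tau = 22 / 20 = 1.1 s.
```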

3.2 Segment tracking

To perform the segment tracking, we propose to use the score function of Equation (1) as the measurement function of the tracking process. Since this function is non-Gaussian and non-linear, we chose a particle filtering method [3]. A particle filter uses a set of samples (particles) to represent the probability distribution resulting from a Bayesian filtering process. Formally, if X_t denotes the state of the target at time t and Z_t the observation of the target at time t, the goal of the particle filter is to estimate P(X_t | Z_{t_1} ... Z_{t_n}). To this end, the filter relies on two models: (1) an observation model P(Z_t | X_t) that predicts a target observation, and (2) a displacement model P(X_{t_k} | X_{t_{k-1}}) that predicts the motion of the particles.

The target tracking is implemented as follows. We use a set of particles to characterize the target's position and speed in the image, coded by three 3-dimensional vectors:

- c : the segment's center position in the 3D scale space;
- v : the segment's center speed;
- r : the 3D vector between the center and an extremity (called the "half-edge" in the following);

giving the state vector

   X = \begin{pmatrix} c \\ v \\ r \end{pmatrix}

When a target is identified in an image for the first time, all particles are initialized around the detected position with zero speed. Next, each camera image Z_t is used as an observation to update the particle filter. The observation model is based on the score function of Equation (1):

   P(Z_t \mid X_t) \propto S\!\left(c_t,\ \|r_t\|,\ \frac{r_t}{\|r_t\|}\right)

Between two images, we consider that the target's center undergoes a Gaussian acceleration a and that the half-edge is affected by a Gaussian noise n. Each particle is then updated with the following model:

   X_{t+\Delta t} = \begin{pmatrix} c_{t+\Delta t} \\ v_{t+\Delta t} \\ r_{t+\Delta t} \end{pmatrix} = \begin{pmatrix} c_t + (v_t + a\,\Delta t)\,\Delta t \\ v_t + a\,\Delta t \\ r_t + n\,\Delta t \end{pmatrix}

The output of the particle filter is a probabilistic estimate of the target state. In practice, to obtain a single target state, we compute the average pose of all particles weighted by their probabilities. Note that tracking ends automatically either when the target leaves the image frame or when the observation probability drops too low. Since we track many segments at the same time, we also add a mechanism to fuse different segments that converge to the same target. The global scheme of the detection and tracking of scale invariant segments is shown in Figure 3.

Fig. 3. Global scheme of scale invariant segment detection and tracking.
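To make the update cycle concrete, here is a minimal Python sketch of such a tracker. It assumes a function score_fn(pyramid, centre, direction, length) that returns the value of Equation (1) for the particle's own length (at detection time the length is searched, here it is carried by the particle); the class layout, noise levels and multinomial resampling are illustrative choices, not the exact implementation.

```python
import numpy as np

class SegmentParticleFilter:
    """SIR particle filter over the 9-D state (c, v, r): centre in the 3-D
    scale space (x, y, scale), its velocity, and the half-edge vector."""

    def __init__(self, c0, r0, n_particles=200, sigma_a=2.0, sigma_n=0.5):
        self.n = n_particles
        self.sigma_a, self.sigma_n = sigma_a, sigma_n          # assumed noise levels
        self.c = np.asarray(c0, float) + np.random.randn(self.n, 3)        # centres
        self.v = np.zeros((self.n, 3))                                     # zero initial speed
        self.r = np.asarray(r0, float) + 0.5 * np.random.randn(self.n, 3)  # half-edges
        self.w = np.full(self.n, 1.0 / self.n)                             # weights

    def predict(self, dt):
        """Displacement model: Gaussian acceleration on the centre,
        Gaussian noise on the half-edge (Section 3.2)."""
        a = self.sigma_a * np.random.randn(self.n, 3)
        self.v += a * dt
        self.c += self.v * dt
        self.r += self.sigma_n * np.random.randn(self.n, 3) * dt

    def update(self, pyramid, score_fn):
        """Observation model: weight each particle by the segment score
        S(c, |r|, r/|r|) evaluated in the new image, then resample."""
        for i in range(self.n):
            length = np.linalg.norm(self.r[i])
            u = self.r[i] / max(length, 1e-6)
            self.w[i] = max(score_fn(pyramid, self.c[i], u, length), 1e-12)  # keep weights positive
        self.w /= self.w.sum()
        idx = np.random.choice(self.n, self.n, p=self.w)      # multinomial resampling
        self.c, self.v, self.r = self.c[idx], self.v[idx], self.r[idx]
        self.w[:] = 1.0 / self.n

    def estimate(self):
        """Single output state: probability-weighted average of the particles."""
        return self.w @ self.c, self.w @ self.v, self.w @ self.r
```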

4 Experimental results

4.1 Simple tracking

To demonstrate the ability of the tracker, we consider a simple video sequence showing a black rectangle printed on a white sheet. The tracking result is shown in Figure 4(a). The estimated segment (red ellipse) is centered on the black rectangle; the estimated length does not fit the rectangle's length very well, but the rectangle's scale (height) is precisely approximated. To evaluate this precision, the height of the rectangle was measured by hand in each image and used to estimate the actual time-to-contact. The results are plotted in Figure 4(b)(c). As expected, the two curves match very well.

Fig. 4. Tracking of a simple black rectangle on a white sheet: (a) image sequence with the set of particles (green lines) and the estimated state (red ellipse); (b) estimated scale of the tracked segment (segment size in pixels) over time, compared with the real size measured by hand in each image; (c) estimated time-to-contact over time, which fits the ground-truth TTC very well.

4.2 Tracking in urban context

To demonstrate the efficiency of the scale invariant segment detector and tracker, we tested our algorithm on real-world sequences. To obtain real-time performance, we implemented the algorithm on a Graphics Processing Unit (GPU), which made it possible to detect and track 64 segments simultaneously in 640x480 images at 17 frames per second.

Some results are shown in Figure 5. Tracked segments are represented by red ellipses, and each segment's trajectory in the image is drawn as a black curve. In this sequence, the segment tracker locks well onto road lines, car elements, trees and poles. For a more detailed analysis, we plotted the evolution of the scale of the pole located in the center-right part of the image (Figure 6(a)). As the camera moves forward and backward, the object's size increases and decreases, which is confirmed by the tracker. The estimated time-to-contact (Figure 6(b)) behaves as expected: it starts positive and decreases (which means the obstacle is approaching), then it becomes negative, which means the obstacle is getting away.

5 Conclusion

In this paper, we developed a new feature detector that can be used for many visual tasks. Based on a ridge and line detector, this detector is scale invariant and well adapted to elongated objects with little texture. A tracking algorithm based on a particle filter has been developed for such segment features, using the same ridge and linearity criteria as the segment detector for the filter's importance function. This tracker has proved effective on complex urban video sequences and has been used for time-to-contact estimation based on the change of scale. One particularity of the detector is that it is localized at the center of the corresponding object; this property is interesting because it can improve object localization and data association, since an appearance descriptor computed there would describe the whole object and not only a part of it (such as an object's edge or corner). Such a descriptor is necessary for long-term tracking and for the SLAM problem; this will be addressed in future research.

Fig. 5. Segment detection (right) and tracking (left) in an urban context. Red ellipses with blue axes represent the tracked segments; black curves represent the segments' trajectories in the image.

Fig. 6. (a) Evolution of the scale (estimated size in pixels) of the tracked segments localized on the white pole (center right of the images) over time; as the car moves forward and backward, the scale increases and decreases. (b) Estimated time-to-contact of the white pole over time; it is well approximated: at the beginning it is positive and decreasing (which means the obstacle is approaching), then the TTC becomes negative (which means the obstacle is getting away).

References

1. J. L. Crowley and A. C. Parker. A representation for shape based on peaks and ridges in the difference of low-pass transform. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(2), 1984.
2. A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse. MonoSLAM: Real-time single camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6), 2007.
3. A. Doucet, N. de Freitas, and N. Gordon, editors. Sequential Monte Carlo Methods in Practice. Springer, 2001.
4. E. Eade and T. Drummond. Scalable monocular SLAM. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 1, June 2006.
5. T. T. H. Tran and A. Lux. Extraction de caractéristiques locales : crêtes et pics (Extraction of local features: ridges and peaks). In RIVF.
6. C. Harris and M. Stephens. A combined corner and edge detector. In Alvey Vision Conference, Manchester, 1988.
7. J. J. Koenderink. The structure of images. Biological Cybernetics, 50, 1984.
8. T. Lindeberg. Detecting salient blob-like image structures and their scales with a scale-space primal sketch: a method for focus-of-attention. International Journal of Computer Vision, 11, 1993.
9. D. G. Lowe. Object recognition from local scale-invariant features. In International Conference on Computer Vision, 1999.
10. K. Mikolajczyk and C. Schmid. Indexing based on scale invariant interest points. In Proceedings of the International Conference on Computer Vision, 2001.
11. H. Moravec. Rover visual avoidance. In International Joint Conference on Artificial Intelligence, Vancouver, Canada, 1981.
12. A. Nègre, C. Braillon, J. L. Crowley, and C. Laugier. Real-time time-to-collision from variation of intrinsic scale. In Proceedings of the International Symposium on Experimental Robotics, Rio de Janeiro, Brazil, 2006.
13. C. Schmid, R. Mohr, and C. Bauckhage. Comparing and evaluating interest points. In International Conference on Computer Vision, 1998.
