Corner Detection. Harvey Rhody Chester F. Carlson Center for Imaging Science Rochester Institute of Technology


1 Corner Detection Harvey Rhody, Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, April 11, 2006. Abstract: Corners and edges are two of the most important geometrical features in image processing. This talk discusses corner detection with SUSAN and compares the results with the gradient-based Harris and KLT approaches.

2 Corners and Edges Edges are locations where the brightness f(x, y) varies rapidly in a certain direction. Corners are locations where f(x, y) varies rapidly in two (nearly orthogonal) directions. [Figure: examples of a corner and an edge.]

3 Change Measures The variation in brightness can be characterized by directional derivatives:

f_x = ∂f/∂x ≈ f * M_x,   f_y = ∂f/∂y ≈ f * M_y

where M_x and M_y are directional masks, for example the Prewitt masks

M_x = [ -1 0 1 ; -1 0 1 ; -1 0 1 ],   M_y = M_x^T.

We want to use the directional derivatives to find edges and corners and to determine their orientation. Because derivatives are involved, the measures are vulnerable to noise.
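As a minimal NumPy sketch of the idea above (the Prewitt masks and the helper `apply_mask` are illustrative choices, not prescribed by the slides), a vertical step edge excites f_x while f_y stays zero:

```python
import numpy as np

# Prewitt masks -- one common choice of directional masks M_x, M_y
MX = np.array([[-1.0, 0.0, 1.0],
               [-1.0, 0.0, 1.0],
               [-1.0, 0.0, 1.0]]) / 3.0
MY = MX.T

def apply_mask(f, m):
    """'Valid' correlation of image f with a 3x3 mask m."""
    H, W = f.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(f[i:i + 3, j:j + 3] * m)
    return out

# A vertical step edge: f_x responds strongly, f_y is identically zero.
img = np.zeros((5, 8))
img[:, 4:] = 1.0
fx = apply_mask(img, MX)
fy = apply_mask(img, MY)
```

Because every row of the step image is identical, the row-differencing mask M_y sums to zero over any window, which is exactly the behavior a directional derivative should have.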

4 Change Measures To reduce noise effects it is customary to smooth the directional derivatives. Let w(x, y) be a smoothing filter mask and form smoothed products of derivatives, denoted here with angle brackets:

⟨f_x²⟩ = w * (f_x²),   ⟨f_y²⟩ = w * (f_y²),   ⟨f_x f_y⟩ = w * (f_x f_y)

Define the matrix

C = [ ⟨f_x²⟩  ⟨f_x f_y⟩ ; ⟨f_x f_y⟩  ⟨f_y²⟩ ]

The matrix C is real and symmetric, so its eigenvalues are real. Moreover, C is a nonnegatively weighted sum of the rank-one outer products [f_x, f_y]^T [f_x, f_y], so it is positive semidefinite and its eigenvalues are nonnegative.
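A small sketch of building C per pixel, using a box mean as a stand-in for the smoothing mask w(x, y) (the function names and the 3x3 box filter are illustrative assumptions):

```python
import numpy as np

def structure_tensor(fx, fy, w=3):
    """Smoothed products of derivatives forming the 2x2 matrix C at
    every pixel; w is the side of a simple box-averaging window."""
    def smooth(a):
        out = np.zeros_like(a)
        H, W = a.shape
        r = w // 2
        for i in range(H):
            for j in range(W):
                out[i, j] = a[max(0, i - r):i + r + 1,
                              max(0, j - r):j + r + 1].mean()
        return out
    fx2 = smooth(fx * fx)   # <f_x^2>
    fy2 = smooth(fy * fy)   # <f_y^2>
    fxy = smooth(fx * fy)   # <f_x f_y>
    return fx2, fy2, fxy    # C = [[fx2, fxy], [fxy, fy2]] per pixel

rng = np.random.default_rng(0)
fx = rng.standard_normal((6, 6))
fy = rng.standard_normal((6, 6))
fx2, fy2, fxy = structure_tensor(fx, fy)
C = np.array([[fx2[3, 3], fxy[3, 3]],
              [fxy[3, 3], fy2[3, 3]]])
```

Because the box weights are nonnegative, every such C is positive semidefinite, which can be confirmed numerically on the sample pixel above.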

5 Eigenvalues and Eigenvectors The eigenvalues and eigenvectors of a symmetric matrix

M = [ a b ; b c ]

are

λ_1 = (a + c)/2 + (1/2)√((a − c)² + 4b²)
λ_2 = (a + c)/2 − (1/2)√((a − c)² + 4b²)
v_1 = [ (a − c)/2 + (1/2)√((a − c)² + 4b²),  b ]^T
v_2 = [ (a − c)/2 − (1/2)√((a − c)² + 4b²),  b ]^T

The values and directions can be checked in limiting cases such as a ≫ c or a ≪ c. In both cases λ_1 ≥ λ_2, but the dominant eigenvector changes to correspond to the greatest gradient.

6 Corner Detection [Figures: original image; KLT corner detection result; contour plot of λ_1 with the λ_2 extremes marked in red; surface plot of λ_2.]

7 Edge Classification - Monochrome Images We can classify the region near a particular pixel (x, y) as: Nearly uniform if λ_1 and λ_2 are both small. A step edge if λ_1 is large and λ_2 is small; the eigenvector v_1 is orthogonal to the edge. A corner if λ_1 and λ_2 are both large; the eigenvectors v_1 and v_2 are orthogonal to the edges that meet at the corner. Some observations: Since λ_1 ≥ λ_2 ≥ 0, the tests above may be simplified. The eigenvalues correspond to edge magnitudes and the eigenvectors to edge directions.
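The classification rule can be written as a few lines of Python (the single threshold `t` and the function name are hypothetical simplifications; since λ_1 ≥ λ_2, testing λ_2 alone suffices once a region is non-uniform):

```python
def classify(l1, l2, t):
    """Classify a region from the eigenvalues of C, with l1 >= l2 >= 0
    and a single illustrative smallness threshold t."""
    if l1 < t:
        return "uniform"   # both eigenvalues small
    if l2 < t:
        return "edge"      # one large, one small
    return "corner"        # both large
```

For example, (λ_1, λ_2) = (5, 0.01) reads as an edge while (5, 3) reads as a corner.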

8 Harris Corner Detector The Harris algorithm computes a parameter, based on the eigenvalues, that captures cornerness. However, it does not require that the eigenvalues be explicitly calculated. We can make use of the following results from matrix algebra: det C = λ_1 λ_2 and trace C = λ_1 + λ_2. Define the Harris detection parameter as

H = λ_1 λ_2 − α(λ_1 + λ_2)² = det C − α(trace C)²

Introduce λ_1 = λ and λ_2 = κλ, 0 ≤ κ ≤ 1. Then

H = λ²(κ − α(1 + κ)²)

9 Harris Corner Detector (cont) The parameter α controls the sensitivity of the detector. To have H > 0 requires

α < κ / (1 + κ)²

so α plays the role of a sensitivity parameter. Larger α gives smaller H, a less sensitive detector, and fewer corners detected. Smaller α gives larger H, a more sensitive detector, and more corners detected. For a given α, a corner is detected if H exceeds a threshold H_t. Usually H_t is a fixed value near zero and α is the variable parameter:

H = det C − α(trace C)² > H_t
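The response is one arithmetic expression over the smoothed products; a minimal sketch (α = 0.04 is a common choice in the literature, not taken from the slides):

```python
def harris_response(fx2, fy2, fxy, alpha=0.04):
    """H = det C - alpha * (trace C)^2 from the smoothed derivative
    products <f_x^2>, <f_y^2>, <f_x f_y> at one pixel (or per-pixel
    arrays, since the arithmetic is elementwise)."""
    det = fx2 * fy2 - fxy ** 2      # lambda_1 * lambda_2
    tr = fx2 + fy2                  # lambda_1 + lambda_2
    return det - alpha * tr ** 2

# Two eigenvalues of 1 (a corner-like C) give a positive response;
# a pure edge (lambda_2 = 0) gives a small negative one.
h_corner = harris_response(1.0, 1.0, 0.0)
h_edge = harris_response(1.0, 0.0, 0.0)
```

Note that H is negative on strong edges, which is why the threshold H_t is usually placed near zero.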

10 Kanade-Lucas-Tomasi (KLT) Corner Detector Also based on the matrix C, but uses explicit calculation of the eigenvalue λ_2. The detector has two parameters: a threshold λ_t on λ_2 and the edge dimension D of a square window of size D × D. KLT Corner Detector Algorithm: 1. Compute C at each point (x, y) of the image. 2. For each image point p = (x, y): (a) Find the smallest value of λ_2 in the D-neighborhood of p. (b) If that λ_2 > λ_t, put p into a list L. 3. Sort L in decreasing order of λ_2. 4. Scan the sorted list from top to bottom. For each current point p, delete all lower points in the list that lie in the D-neighborhood of p.
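The four steps can be sketched directly, assuming `lam2` is a precomputed per-pixel map of λ_2 (the function name and test data are illustrative):

```python
import numpy as np

def klt_select(lam2, lam_t, D):
    """KLT feature selection on a map lam2 of smaller eigenvalues."""
    H, W = lam2.shape
    L = []
    for i in range(H):
        for j in range(W):
            # step 2(a): smallest lambda_2 in the D-neighborhood of p
            nb = lam2[max(0, i - D):i + D + 1, max(0, j - D):j + D + 1]
            if nb.min() > lam_t:            # step 2(b)
                L.append((lam2[i, j], i, j))
    L.sort(reverse=True)                     # step 3: decreasing lambda_2
    kept = []                                # step 4: suppress neighbors
    for v, i, j in L:
        if all(abs(i - ki) > D or abs(j - kj) > D for _, ki, kj in kept):
            kept.append((v, i, j))
    return [(i, j) for _, i, j in kept]

lam2 = np.full((10, 10), 2.0)
lam2[2, 2] = 5.0
lam2[8, 8] = 4.0
features = klt_select(lam2, lam_t=1.0, D=2)
```

The output keeps the strongest responses first and guarantees that no two kept points share a D-neighborhood.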

11 KLT Algorithm (cont) The KLT algorithm produces a list of feature points such that: For each point, λ_2 > λ_t. The D-neighborhoods of the points do not overlap. Parameter Selection: The threshold λ_t may be estimated from a histogram of λ_2; try to set λ_t at a valley of the histogram. The window size D is usually between 2 and 10; select it by trial and error. For large D the detected corners may move away from their actual positions, and some closely neighboring corners may be lost.

12 Comparison of Harris and KLT KLT and Harris are both based on the eigenvalues of C. KLT tends to be used in America and Harris in Europe. KLT is used for motion tracking in the KLT Tracker algorithm. Harris provides good repeatability under rotation and changing illumination, and is often used in image matching and image database retrieval. Both may detect interest points other than corners.

13 SUSAN Feature Detector SUSAN is derived from the phrase Smallest Univalue Segment Assimilating Nucleus. Algorithm: For each pixel, place a mask centered at the pixel, find the pixels within the mask that match the brightness of the center pixel (the nucleus), and compare the area of the USAN to a threshold.

14 SUSAN Mask [Figure 3: The Smith USAN (SUSAN) corner finding mask - the approximately circular mask and its nucleus.] Smith argues that the USAN corresponding to a corner (case (a) in Figure 3) has an area less than half the total mask area, and that a local minimum in USAN area locates the point of the corner. The typical SUSAN mask employs an approximately circular region of 37 pixels. In practice, the circular mask is approximated using a 5 x 5 pixel square with 3 pixels added on to the center of each edge (Figure 3). The intensity of the nucleus is then compared with the intensity of every other pixel within the mask using a comparison function.

15 SUSAN Principle The concept of each image point having associated with it a local area of similar brightness is the basis of the SUSAN principle. The local area, or USAN, contains much information about the structure of the image; it is effectively region finding on a small scale. From the size, centroid and second moments of the USAN, two-dimensional features and edges can be detected. This approach to feature detection has many differences from the gradient methods, the most obvious being that no image derivatives are used and that no noise reduction is needed. SUSAN: Smallest Univalue Segment Assimilating Nucleus (Stephen M. Smith).

16 USAN Information The area of an USAN conveys the most important information about the structure of the image in the region around any point in question. The USAN area is maximum when the nucleus lies in a flat region of the image surface, falls to half of this maximum very near a straight edge, and falls even further when the nucleus is inside a corner. This property of the USAN's area is used as the main determinant of the presence of edges and two-dimensional features.

17 USAN Information (cont) Surface plot of the USAN as the mask is moved over the test area. Note that the scale is inverted. Flat areas have the largest USAN, edges have an intermediate USAN, and corners have a small USAN.

18 USAN Information (cont) USAN for a small part of a noisy image. The smoothing effect of the mask suppresses the noise, which is on the order of 15 counts out of 256 levels. The edge and corner features are evident as the smallest values on the inverted scale.

19 SUSAN Algorithm The mask is placed over each pixel and the following calculation is made: 1. Let r_0 be the current pixel location and r any other location within the mask. Calculate

C(r, r_0) = exp( −[ (I(r) − I(r_0)) / t ]⁶ )

where I is the image gray level and t is a brightness threshold. 2. Calculate the USAN sum for pixel r_0:

n(r_0) = Σ_r C(r, r_0)

3. Compare n(r_0) to a geometric threshold g. For corner detection it is safe to set g = n_max / 2.
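A minimal sketch of the response, using a 3x3 mask to keep the code short (the real detector uses the ~37-pixel circular mask described earlier; the function name is illustrative):

```python
import numpy as np

def susan_response(img, t=20.0):
    """USAN sum n(r0) at each interior pixel, over a 3x3 mask."""
    H, W = img.shape
    n = np.zeros((H, W))
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2].astype(float)
            c = np.exp(-(((patch - img[i, j]) / t) ** 6))
            n[i, j] = c.sum() - 1.0   # exclude the nucleus itself
    return n                          # corners: n < g, e.g. g = n.max()/2

flat = susan_response(np.full((5, 5), 100.0))       # uniform region
quad = np.zeros((7, 7))
quad[:3, :3] = 100.0                                # a bright corner
corner = susan_response(quad)
```

On the flat image every interior pixel assimilates the whole mask (n = 8 for the 3x3 sketch), while the nucleus sitting at the corner of the bright quadrant assimilates well under half of it, matching the g = n_max/2 rule.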

20 Corner Detection with SUSAN

21 Edge Detection with SUSAN

22 Corner Detection with SUSAN House Test Image

23 Corner Detection with SUSAN Points found by SUSAN corner detection with t = 20.

24 Corner Detection with SUSAN Points found by SUSAN corner detection with t = 80.

25 Corner Detection with SUSAN Original Test Image

26 Corner Detection with SUSAN Points found by SUSAN corner detection (t = 20)

27 Comparison of Corner Detector Performance In the paper Assessing the Performance of Corner Detectors for Point Feature Tracking Applications the authors considered four detectors: 1. The Kitchen-Rosenfeld detector (historically one of the first to use a cornerness measure). 2. The Harris corner detector. 3. The Kanade-Lucas-Tomasi corner detector. 4. The SUSAN corner detector. The Kitchen-Rosenfeld method uses first- and second-order derivatives, the Harris and KLT methods use only first-order derivatives, and SUSAN uses a geometric criterion, with no derivatives, in calculating the cornerness value.

28 Kitchen-Rosenfeld Corner Detector The Kitchen-Rosenfeld algorithm is one of the earliest corner detectors reported in the literature, hence it has been used as a benchmark. The algorithm calculates the cornerness value C as the product of the local gradient magnitude and the rate of change of gradient direction:

C = ( f_xx f_y² + f_yy f_x² − 2 f_xy f_x f_y ) / ( f_x² + f_y² )

where f(x, y) is the image brightness.
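The measure is a single expression in the first and second derivatives; a sketch assuming those derivative values are already available (the `eps` guard against flat regions is an implementation assumption, not part of the slides):

```python
def kitchen_rosenfeld(fx, fy, fxx, fyy, fxy, eps=1e-12):
    """C = (fxx*fy^2 + fyy*fx^2 - 2*fxy*fx*fy) / (fx^2 + fy^2);
    eps avoids division by zero where the gradient vanishes."""
    num = fxx * fy ** 2 + fyy * fx ** 2 - 2.0 * fxy * fx * fy
    den = fx ** 2 + fy ** 2
    return num / (den + eps)

# Gradient along x only, with curvature across it: C = fyy here.
c = kitchen_rosenfeld(fx=1.0, fy=0.0, fxx=0.0, fyy=2.0, fxy=0.0)
```

When the gradient is purely along x, the formula reduces to f_yy, the second derivative perpendicular to the gradient, which is exactly the "rate of change of gradient direction" the text describes.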

29 Harris Corner Detector The Harris cornerness measure used in the test was

C = ( ⟨f_x²⟩⟨f_y²⟩ − ⟨f_x f_y⟩² ) / ( ⟨f_x²⟩ + ⟨f_y²⟩ ) = det C / trace C

(angle brackets denoting the smoothed derivative products). This is closely related to the Harris cornerness measure H discussed earlier and is often referred to as the Harris detector. It is actually a simplification of a form due to Noble [4]:

C = det C / ( trace C + ɛ )

where ɛ is a small positive value.
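Noble's form is a one-liner over the same smoothed products used for the Harris response (function name illustrative):

```python
def noble_measure(fx2, fy2, fxy, eps=1e-12):
    """Noble's variant det C / (trace C + eps), which avoids the
    alpha sensitivity parameter of the Harris form."""
    det = fx2 * fy2 - fxy ** 2
    tr = fx2 + fy2
    return det / (tr + eps)

# Equal eigenvalues of 2 give det/trace = 4/4 = 1.
m = noble_measure(2.0, 2.0, 0.0)
```

Unlike H = det C − α(trace C)², this ratio needs no tuning of α, at the cost of the small regularizer ɛ.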

30 Performance Requirements Good temporal stability - corners should appear in every frame of a sequence (from the time they are first detected), and should not flicker (turn on and off) between frames. Accurate localization - the calculated image-plane position of a corner, given by the detector, should be as close to the actual position of the corner as possible. Robust with respect to noise. Computationally efficient.

31 Test Setup The test employed sequences of 30 images with no motion, so that the appearance and disappearance of corners in each frame is purely due to image-plane noise and illumination conditions. The internal parameters of each detector were adjusted to give its best performance. Four image sequences were used: 1. an indoor image sequence of a toy dog with only artificial light interference; 2. an outdoor sequence of a building illuminated with only natural lighting; 3. an indoor lab sequence with plenty of identifiable corners, using direct light sources; 4. a computer image sequence with many light reflections and curved objects.

32 Tests 1. Corner stability result: the number of stable corners identified throughout the sequence. 2. First frame corner matches result: the number of corners found in the initial frame that appeared in the subsequent frames. 3. Corner displacement result: the displacement of a corner in the n-th frame from its position in the initial frame (assuming that the corner considered appears in the n-th frame). The above measures were collected for each detector without added noise. In a second experiment the performance was evaluated in the presence of additive Gaussian noise of various levels.

33 Measures of Matching Gradient Vector Matcher (GVM): To compare two corners for a possible match, form vectors v and w at each corner from the quantities [f, f_x, f_y]^T. The GVM measure is the scalar

m(v, w) = (v · w) / ( |v| |w| )

Product Moment Coefficient Matcher (PMCM): Let t_i and p_i be intensity values in two images located at the corners to be tested for a match. The PMCM value is

c = Σ_{i=1}^{n} (t_i − t̄)(p_i − p̄) / √( Σ_{i=1}^{n} (t_i − t̄)² · Σ_{i=1}^{n} (p_i − p̄)² )

Both c and m are insensitive to lighting changes and describe the structure of the regions around potential corners.
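Both measures are short NumPy expressions; a sketch with illustrative function names, where the affine-lighting invariance of the PMCM can be checked directly:

```python
import numpy as np

def gvm(v, w):
    """Gradient Vector Matcher: cosine of the angle between the
    [f, f_x, f_y] vectors at two candidate corners."""
    return float(np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w)))

def pmcm(t, p):
    """Product Moment Coefficient Matcher: normalized correlation
    of the intensity patches t and p around the two corners."""
    t = np.asarray(t, float).ravel()
    p = np.asarray(p, float).ravel()
    t = t - t.mean()
    p = p - p.mean()
    return float(np.sum(t * p) / np.sqrt(np.sum(t * t) * np.sum(p * p)))

v = np.array([1.0, 2.0, 3.0])
patch = np.array([1.0, 2.0, 3.0, 4.0])
m_same = gvm(v, v)                 # identical gradient vectors
c_lit = pmcm(patch, 2 * patch + 5) # same patch under a lighting change
```

A patch compared with a gain-and-offset copy of itself still scores c = 1, which is the lighting insensitivity the text claims.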

34 Dog Sequence Results [Figure 5: The best 100 corners extracted from the indoor static dog sequence. (a) KLT corner detector. (b) Harris corner detector. (c) Kitchen-Rosenfeld corner detector. (d) SUSAN corner detector.]

35 Dog Sequence Results (no noise) [Figure 6: Corner detector performance test for the static dog sequence (the best 100 corners as seen by each detector are extracted from each frame). (a) Percentage stable corners (GVM, threshold 0.009). (b) Corner displacement (GVM). (c) Number of first-frame matches (GVM). (d) Percentage stable corners (PMCM, threshold 0.7). (e) Corner displacement (PMCM). (f) Number of first-frame matches (PMCM).]

36 Dog Sequence Results (with noise) [Figure 7: Performance of the corner detectors when applied to the static dog sequence at varied noise levels (noise variance ranging from 0 to 25). (a) Percentage stable corners (GVM, threshold 0.004). (b) Corner displacement (GVM). (c) Number of first-frame matches (GVM). (d) Percentage stable corners (PMCM, threshold 0.8). (e) Corner displacement (PMCM). (f) Number of first-frame matches (PMCM).]

37 Building Sequence Results [Figure 8: The best 150 corners extracted from the outdoor static building sequence. (a) KLT corner detector. (b) Harris corner detector. (c) Kitchen-Rosenfeld corner detector. (d) SUSAN corner detector.]

38 Building Sequence Results (no noise) [Figure 9: Performance test for the static building sequence (the best 150 corners as seen by each detector are extracted from each frame). (a) Percentage stable corners (GVM, threshold 0.009). (b) Corner displacement (GVM). (c) Number of first-frame matches (GVM). (d) Percentage stable corners (PMCM, threshold 0.7). (e) Corner displacement (PMCM). (f) Number of first-frame matches (PMCM).]

39 Building Sequence Results (with noise) [Figure 10: Performance for the static building sequence at varied noise levels (noise variance ranging from 0 to 25). (a) Percentage stable corners (GVM, threshold 0.004). (b) Corner displacement (GVM). (c) Number of first-frame matches (GVM). (d) Percentage stable corners (PMCM, threshold 0.8). (e) Corner displacement (PMCM). (f) Number of first-frame matches (PMCM).]

40 Lab Sequence Results [Figure 11: The best 200 corners extracted from the static lab sequence. (a) KLT corner detector. (b) Harris corner detector. (c) Kitchen-Rosenfeld corner detector. (d) SUSAN corner detector.]

41 Lab Sequence Results (no noise) [Figure 12: Corner detector performance test for the static lab sequence (the best 200 corners as seen by each detector are extracted from each frame). (a) Percentage stable corners (GVM, threshold 0.004). (b) Corner displacement (GVM). (c) Number of first-frame matches (GVM). (d) Percentage stable corners (PMCM, threshold 0.8). (e) Corner displacement (PMCM). (f) Number of first-frame matches (PMCM).]

42 Computer Sequence Results [Figure 13: The best 250 corners extracted from the static computer sequence. (a) KLT corner detector. (b) Harris corner detector. (c) Kitchen-Rosenfeld corner detector. (d) SUSAN corner detector.]

43 Computer Sequence Results (no noise) [Figure 14: Corner detector performance test for the static computer sequence (the best 250 corners as seen by each detector are extracted from each frame). (a) Percentage stable corners (GVM, threshold 0.004). (b) Corner displacement (GVM). (c) Number of first-frame matches (GVM). (d) Percentage stable corners (PMCM, threshold 0.8). (e) Corner displacement (PMCM). (f) Number of first-frame matches (PMCM).]

44 References
1. L. Martinez-Fonte, S. Gautama and W. Philips, "An Empirical Study of Corner Detection to Extract Buildings from VHR Satellite Images," IEEE ProRisc, Veldhoven, The Netherlands, November 2004, Proceedings of ProRisc 2004.
2. P. Tissainayagam and D. Suter, "Assessing the performance of corner detectors for point feature tracking applications," Image and Vision Computing, 22(8), August.
3. L. Kitchen and A. Rosenfeld, "Gray level corner detection," Pattern Recognition Letters, 1982.
4. A. Noble, "Finding Corners," Image and Vision Computing Journal, 6(2), 1988.


AN EFFICIENT BINARY CORNER DETECTOR. P. Saeedi, P. Lawrence and D. Lowe AN EFFICIENT BINARY CORNER DETECTOR P. Saeedi, P. Lawrence and D. Lowe Department of Electrical and Computer Engineering, Department of Computer Science University of British Columbia Vancouver, BC, V6T

More information

Local features: detection and description. Local invariant features

Local features: detection and description. Local invariant features Local features: detection and description Local invariant features Detection of interest points Harris corner detection Scale invariant blob detection: LoG Description of local patches SIFT : Histograms

More information

Lecture Image Enhancement and Spatial Filtering

Lecture Image Enhancement and Spatial Filtering Lecture Image Enhancement and Spatial Filtering Harvey Rhody Chester F. Carlson Center for Imaging Science Rochester Institute of Technology rhody@cis.rit.edu September 29, 2005 Abstract Applications of

More information

CS 4495 Computer Vision A. Bobick. CS 4495 Computer Vision. Features 2 SIFT descriptor. Aaron Bobick School of Interactive Computing

CS 4495 Computer Vision A. Bobick. CS 4495 Computer Vision. Features 2 SIFT descriptor. Aaron Bobick School of Interactive Computing CS 4495 Computer Vision Features 2 SIFT descriptor Aaron Bobick School of Interactive Computing Administrivia PS 3: Out due Oct 6 th. Features recap: Goal is to find corresponding locations in two images.

More information

Image processing and features

Image processing and features Image processing and features Gabriele Bleser gabriele.bleser@dfki.de Thanks to Harald Wuest, Folker Wientapper and Marc Pollefeys Introduction Previous lectures: geometry Pose estimation Epipolar geometry

More information

The SIFT (Scale Invariant Feature

The SIFT (Scale Invariant Feature The SIFT (Scale Invariant Feature Transform) Detector and Descriptor developed by David Lowe University of British Columbia Initial paper ICCV 1999 Newer journal paper IJCV 2004 Review: Matt Brown s Canonical

More information

Morphological Corner Detection

Morphological Corner Detection Morphological Corner Detection Robert Laganière School of Information Technology and Engineering University of Ottawa Ottawa, Ont. CANADA K1N 6N5 Abstract This paper presents a new operator for corner

More information

Feature Based Registration - Image Alignment

Feature Based Registration - Image Alignment Feature Based Registration - Image Alignment Image Registration Image registration is the process of estimating an optimal transformation between two or more images. Many slides from Alexei Efros http://graphics.cs.cmu.edu/courses/15-463/2007_fall/463.html

More information

Local invariant features

Local invariant features Local invariant features Tuesday, Oct 28 Kristen Grauman UT-Austin Today Some more Pset 2 results Pset 2 returned, pick up solutions Pset 3 is posted, due 11/11 Local invariant features Detection of interest

More information

Visual motion. Many slides adapted from S. Seitz, R. Szeliski, M. Pollefeys

Visual motion. Many slides adapted from S. Seitz, R. Szeliski, M. Pollefeys Visual motion Man slides adapted from S. Seitz, R. Szeliski, M. Pollefes Motion and perceptual organization Sometimes, motion is the onl cue Motion and perceptual organization Sometimes, motion is the

More information

Features. Places where intensities vary is some prescribed way in a small neighborhood How to quantify this variability

Features. Places where intensities vary is some prescribed way in a small neighborhood How to quantify this variability Feature Detection Features Places where intensities vary is some prescribed way in a small neighborhood How to quantify this variability Derivatives direcitonal derivatives, magnitudes Scale and smoothing

More information

Edge Detection. Ziv Yaniv School of Engineering and Computer Science The Hebrew University, Jerusalem, Israel.

Edge Detection. Ziv Yaniv School of Engineering and Computer Science The Hebrew University, Jerusalem, Israel. Edge Detection Ziv Yaniv School of Engineering and Computer Science The Hebrew University, Jerusalem, Israel. This lecture summary deals with the low level image processing task of edge detection. Edges

More information

Large-Scale 3D Point Cloud Processing Tutorial 2013

Large-Scale 3D Point Cloud Processing Tutorial 2013 Large-Scale 3D Point Cloud Processing Tutorial 2013 Features The image depicts how our robot Irma3D sees itself in a mirror. The laser looking into itself creates distortions as well as changes in Prof.

More information

Autonomous Navigation for Flying Robots

Autonomous Navigation for Flying Robots Computer Vision Group Prof. Daniel Cremers Autonomous Navigation for Flying Robots Lecture 7.1: 2D Motion Estimation in Images Jürgen Sturm Technische Universität München 3D to 2D Perspective Projections

More information

EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT. Recap and Outline

EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT. Recap and Outline EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT Oct. 15, 2013 Prof. Ronald Fearing Electrical Engineering and Computer Sciences University of California, Berkeley (slides courtesy of Prof. John Wawrzynek)

More information

Motion and Optical Flow. Slides from Ce Liu, Steve Seitz, Larry Zitnick, Ali Farhadi

Motion and Optical Flow. Slides from Ce Liu, Steve Seitz, Larry Zitnick, Ali Farhadi Motion and Optical Flow Slides from Ce Liu, Steve Seitz, Larry Zitnick, Ali Farhadi We live in a moving world Perceiving, understanding and predicting motion is an important part of our daily lives Motion

More information

Anno accademico 2006/2007. Davide Migliore

Anno accademico 2006/2007. Davide Migliore Robotica Anno accademico 6/7 Davide Migliore migliore@elet.polimi.it Today What is a feature? Some useful information The world of features: Detectors Edges detection Corners/Points detection Descriptors?!?!?

More information

Image matching. Announcements. Harder case. Even harder case. Project 1 Out today Help session at the end of class. by Diva Sian.

Image matching. Announcements. Harder case. Even harder case. Project 1 Out today Help session at the end of class. by Diva Sian. Announcements Project 1 Out today Help session at the end of class Image matching by Diva Sian by swashford Harder case Even harder case How the Afghan Girl was Identified by Her Iris Patterns Read the

More information

Harder case. Image matching. Even harder case. Harder still? by Diva Sian. by swashford

Harder case. Image matching. Even harder case. Harder still? by Diva Sian. by swashford Image matching Harder case by Diva Sian by Diva Sian by scgbt by swashford Even harder case Harder still? How the Afghan Girl was Identified by Her Iris Patterns Read the story NASA Mars Rover images Answer

More information

Chapter 9 Object Tracking an Overview

Chapter 9 Object Tracking an Overview Chapter 9 Object Tracking an Overview The output of the background subtraction algorithm, described in the previous chapter, is a classification (segmentation) of pixels into foreground pixels (those belonging

More information

Computer Vision I. Announcement. Corners. Edges. Numerical Derivatives f(x) Edge and Corner Detection. CSE252A Lecture 11

Computer Vision I. Announcement. Corners. Edges. Numerical Derivatives f(x) Edge and Corner Detection. CSE252A Lecture 11 Announcement Edge and Corner Detection Slides are posted HW due Friday CSE5A Lecture 11 Edges Corners Edge is Where Change Occurs: 1-D Change is measured by derivative in 1D Numerical Derivatives f(x)

More information

Local Image preprocessing (cont d)

Local Image preprocessing (cont d) Local Image preprocessing (cont d) 1 Outline - Edge detectors - Corner detectors - Reading: textbook 5.3.1-5.3.5 and 5.3.10 2 What are edges? Edges correspond to relevant features in the image. An edge

More information

Lecture 7: Most Common Edge Detectors

Lecture 7: Most Common Edge Detectors #1 Lecture 7: Most Common Edge Detectors Saad Bedros sbedros@umn.edu Edge Detection Goal: Identify sudden changes (discontinuities) in an image Intuitively, most semantic and shape information from the

More information

Computer Vision I. Announcements. Fourier Tansform. Efficient Implementation. Edge and Corner Detection. CSE252A Lecture 13.

Computer Vision I. Announcements. Fourier Tansform. Efficient Implementation. Edge and Corner Detection. CSE252A Lecture 13. Announcements Edge and Corner Detection HW3 assigned CSE252A Lecture 13 Efficient Implementation Both, the Box filter and the Gaussian filter are separable: First convolve each row of input image I with

More information

Capturing, Modeling, Rendering 3D Structures

Capturing, Modeling, Rendering 3D Structures Computer Vision Approach Capturing, Modeling, Rendering 3D Structures Calculate pixel correspondences and extract geometry Not robust Difficult to acquire illumination effects, e.g. specular highlights

More information

Automatic Image Alignment (feature-based)

Automatic Image Alignment (feature-based) Automatic Image Alignment (feature-based) Mike Nese with a lot of slides stolen from Steve Seitz and Rick Szeliski 15-463: Computational Photography Alexei Efros, CMU, Fall 2006 Today s lecture Feature

More information

Segmentation and Grouping

Segmentation and Grouping Segmentation and Grouping How and what do we see? Fundamental Problems ' Focus of attention, or grouping ' What subsets of pixels do we consider as possible objects? ' All connected subsets? ' Representation

More information

Ulrik Söderström 16 Feb Image Processing. Segmentation

Ulrik Söderström 16 Feb Image Processing. Segmentation Ulrik Söderström ulrik.soderstrom@tfe.umu.se 16 Feb 2011 Image Processing Segmentation What is Image Segmentation? To be able to extract information from an image it is common to subdivide it into background

More information

Peripheral drift illusion

Peripheral drift illusion Peripheral drift illusion Does it work on other animals? Computer Vision Motion and Optical Flow Many slides adapted from J. Hays, S. Seitz, R. Szeliski, M. Pollefeys, K. Grauman and others Video A video

More information

Local Image Features

Local Image Features Local Image Features Computer Vision CS 143, Brown Read Szeliski 4.1 James Hays Acknowledgment: Many slides from Derek Hoiem and Grauman&Leibe 2008 AAAI Tutorial This section: correspondence and alignment

More information

Computer Vision I - Filtering and Feature detection

Computer Vision I - Filtering and Feature detection Computer Vision I - Filtering and Feature detection Carsten Rother 30/10/2015 Computer Vision I: Basics of Image Processing Roadmap: Basics of Digital Image Processing Computer Vision I: Basics of Image

More information

Image Features. Work on project 1. All is Vanity, by C. Allan Gilbert,

Image Features. Work on project 1. All is Vanity, by C. Allan Gilbert, Image Features Work on project 1 All is Vanity, by C. Allan Gilbert, 1873-1929 Feature extrac*on: Corners and blobs c Mo*va*on: Automa*c panoramas Credit: Ma9 Brown Why extract features? Mo*va*on: panorama

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 09 130219 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Feature Descriptors Feature Matching Feature

More information

School of Computing University of Utah

School of Computing University of Utah School of Computing University of Utah Presentation Outline 1 2 3 4 Main paper to be discussed David G. Lowe, Distinctive Image Features from Scale-Invariant Keypoints, IJCV, 2004. How to find useful keypoints?

More information

Model-based segmentation and recognition from range data

Model-based segmentation and recognition from range data Model-based segmentation and recognition from range data Jan Boehm Institute for Photogrammetry Universität Stuttgart Germany Keywords: range image, segmentation, object recognition, CAD ABSTRACT This

More information

Digital Image Processing Chapter 11: Image Description and Representation

Digital Image Processing Chapter 11: Image Description and Representation Digital Image Processing Chapter 11: Image Description and Representation Image Representation and Description? Objective: To represent and describe information embedded in an image in other forms that

More information

IMAGE PROCESSING >FILTERS AND EDGE DETECTION FOR COLOR IMAGES UTRECHT UNIVERSITY RONALD POPPE

IMAGE PROCESSING >FILTERS AND EDGE DETECTION FOR COLOR IMAGES UTRECHT UNIVERSITY RONALD POPPE IMAGE PROCESSING >FILTERS AND EDGE DETECTION FOR COLOR IMAGES UTRECHT UNIVERSITY RONALD POPPE OUTLINE Filters for color images Edge detection for color images Canny edge detection FILTERS FOR COLOR IMAGES

More information

Outline 7/2/201011/6/

Outline 7/2/201011/6/ Outline Pattern recognition in computer vision Background on the development of SIFT SIFT algorithm and some of its variations Computational considerations (SURF) Potential improvement Summary 01 2 Pattern

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

Image features. Image Features

Image features. Image Features Image features Image features, such as edges and interest points, provide rich information on the image content. They correspond to local regions in the image and are fundamental in many applications in

More information

A NEW FEATURE BASED IMAGE REGISTRATION ALGORITHM INTRODUCTION

A NEW FEATURE BASED IMAGE REGISTRATION ALGORITHM INTRODUCTION A NEW FEATURE BASED IMAGE REGISTRATION ALGORITHM Karthik Krish Stuart Heinrich Wesley E. Snyder Halil Cakir Siamak Khorram North Carolina State University Raleigh, 27695 kkrish@ncsu.edu sbheinri@ncsu.edu

More information

Kanade Lucas Tomasi Tracking (KLT tracker)

Kanade Lucas Tomasi Tracking (KLT tracker) Kanade Lucas Tomasi Tracking (KLT tracker) Tomáš Svoboda, svoboda@cmp.felk.cvut.cz Czech Technical University in Prague, Center for Machine Perception http://cmp.felk.cvut.cz Last update: November 26,

More information

Tracking Computer Vision Spring 2018, Lecture 24

Tracking Computer Vision Spring 2018, Lecture 24 Tracking http://www.cs.cmu.edu/~16385/ 16-385 Computer Vision Spring 2018, Lecture 24 Course announcements Homework 6 has been posted and is due on April 20 th. - Any questions about the homework? - How

More information

Displacement estimation

Displacement estimation Displacement estimation Displacement estimation by block matching" l Search strategies" l Subpixel estimation" Gradient-based displacement estimation ( optical flow )" l Lukas-Kanade" l Multi-scale coarse-to-fine"

More information

Video Processing for Judicial Applications

Video Processing for Judicial Applications Video Processing for Judicial Applications Konstantinos Avgerinakis, Alexia Briassouli, Ioannis Kompatsiaris Informatics and Telematics Institute, Centre for Research and Technology, Hellas Thessaloniki,

More information

Lecture 16: Computer Vision

Lecture 16: Computer Vision CS442/542b: Artificial ntelligence Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field Methods

More information

Lecture 16: Computer Vision

Lecture 16: Computer Vision CS4442/9542b: Artificial Intelligence II Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field

More information

Harder case. Image matching. Even harder case. Harder still? by Diva Sian. by swashford

Harder case. Image matching. Even harder case. Harder still? by Diva Sian. by swashford Image matching Harder case by Diva Sian by Diva Sian by scgbt by swashford Even harder case Harder still? How the Afghan Girl was Identified by Her Iris Patterns Read the story NASA Mars Rover images Answer

More information

CS334: Digital Imaging and Multimedia Edges and Contours. Ahmed Elgammal Dept. of Computer Science Rutgers University

CS334: Digital Imaging and Multimedia Edges and Contours. Ahmed Elgammal Dept. of Computer Science Rutgers University CS334: Digital Imaging and Multimedia Edges and Contours Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines What makes an edge? Gradient-based edge detection Edge Operators From Edges

More information

CS-465 Computer Vision

CS-465 Computer Vision CS-465 Computer Vision Nazar Khan PUCIT 9. Optic Flow Optic Flow Nazar Khan Computer Vision 2 / 25 Optic Flow Nazar Khan Computer Vision 3 / 25 Optic Flow Where does pixel (x, y) in frame z move to in

More information

Scale Invariant Feature Transform

Scale Invariant Feature Transform Scale Invariant Feature Transform Why do we care about matching features? Camera calibration Stereo Tracking/SFM Image moiaicing Object/activity Recognition Objection representation and recognition Image

More information

Binary Image Processing. Introduction to Computer Vision CSE 152 Lecture 5

Binary Image Processing. Introduction to Computer Vision CSE 152 Lecture 5 Binary Image Processing CSE 152 Lecture 5 Announcements Homework 2 is due Apr 25, 11:59 PM Reading: Szeliski, Chapter 3 Image processing, Section 3.3 More neighborhood operators Binary System Summary 1.

More information

CS 565 Computer Vision. Nazar Khan PUCIT Lectures 15 and 16: Optic Flow

CS 565 Computer Vision. Nazar Khan PUCIT Lectures 15 and 16: Optic Flow CS 565 Computer Vision Nazar Khan PUCIT Lectures 15 and 16: Optic Flow Introduction Basic Problem given: image sequence f(x, y, z), where (x, y) specifies the location and z denotes time wanted: displacement

More information

Problems with template matching

Problems with template matching Problems with template matching The template represents the object as we expect to find it in the image The object can indeed be scaled or rotated This technique requires a separate template for each scale

More information

Local features: detection and description May 12 th, 2015

Local features: detection and description May 12 th, 2015 Local features: detection and description May 12 th, 2015 Yong Jae Lee UC Davis Announcements PS1 grades up on SmartSite PS1 stats: Mean: 83.26 Standard Dev: 28.51 PS2 deadline extended to Saturday, 11:59

More information

CAP 5415 Computer Vision Fall 2012

CAP 5415 Computer Vision Fall 2012 CAP 5415 Computer Vision Fall 01 Dr. Mubarak Shah Univ. of Central Florida Office 47-F HEC Lecture-5 SIFT: David Lowe, UBC SIFT - Key Point Extraction Stands for scale invariant feature transform Patented

More information

Coarse-to-fine image registration

Coarse-to-fine image registration Today we will look at a few important topics in scale space in computer vision, in particular, coarseto-fine approaches, and the SIFT feature descriptor. I will present only the main ideas here to give

More information

Digital Image Processing (CS/ECE 545) Lecture 5: Edge Detection (Part 2) & Corner Detection

Digital Image Processing (CS/ECE 545) Lecture 5: Edge Detection (Part 2) & Corner Detection Digital Image Processing (CS/ECE 545) Lecture 5: Edge Detection (Part 2) & Corner Detection Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) Recall: Edge Detection Image processing

More information

Texture. Texture is a description of the spatial arrangement of color or intensities in an image or a selected region of an image.

Texture. Texture is a description of the spatial arrangement of color or intensities in an image or a selected region of an image. Texture Texture is a description of the spatial arrangement of color or intensities in an image or a selected region of an image. Structural approach: a set of texels in some regular or repeated pattern

More information

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant

More information

Filtering Images. Contents

Filtering Images. Contents Image Processing and Data Visualization with MATLAB Filtering Images Hansrudi Noser June 8-9, 010 UZH, Multimedia and Robotics Summer School Noise Smoothing Filters Sigmoid Filters Gradient Filters Contents

More information

Patch-based Object Recognition. Basic Idea

Patch-based Object Recognition. Basic Idea Patch-based Object Recognition 1! Basic Idea Determine interest points in image Determine local image properties around interest points Use local image properties for object classification Example: Interest

More information

(10) Image Segmentation

(10) Image Segmentation (0) Image Segmentation - Image analysis Low-level image processing: inputs and outputs are all images Mid-/High-level image processing: inputs are images; outputs are information or attributes of the images

More information

CS664 Lecture #18: Motion

CS664 Lecture #18: Motion CS664 Lecture #18: Motion Announcements Most paper choices were fine Please be sure to email me for approval, if you haven t already This is intended to help you, especially with the final project Use

More information