Towards the completion of assignment 1


Towards the completion of assignment 1
What to do for calibration
What to do for point matching
What to do for tracking
What to do for the GUI

COMPSCI 773 Feature Point Detection
Why study feature point detection? It automates the tracking of points of interest in image sequences. In stereo vision, feature point detection is followed by feature point matching; it is also used for stereo image rectification. FPD is the first step towards correspondence estimation in connection with the fundamental matrix theory.
We will see several potential techniques: the Harris (Plessey) detector, the KLT detector, and the SUSAN detector.

What is a corner? It is the point at which the direction of the boundary of an object changes abruptly, or equivalently the intersection point between two or more edge segments. In both images, the object is a continuous image area with constant (or nearly constant) brightness or colour. The corner lies at the intersection point between two (left image) or more (right image) edge segments. The left image shows a typical shape of the brightness function in the neighborhood of a corner; a more complicated brightness function is depicted in the right image.

What is a corner? The two figures show an artificial and a real image, respectively, with the corners indicated in red.

Feature Point detection
A corner detector should satisfy the following criteria:
All (or most) true feature points should be detected.
No false feature points should be detected.
Feature points should be well localized.
The detector should be robust with respect to noise.
The detector should be efficient.
There are two families of corner detectors:
1. Algorithms that work directly with the brightness values of the image (without segmenting it in advance). These are usually based on the study of derivatives (orientation, magnitude) of the greylevel or colour image.
2. Algorithms that extract object boundaries first and analyse their shape afterwards. Boundaries are typically assumed to be extracted by edge detectors, and the analysis is usually based on the curvature of the boundaries.
Group 2 tends to offer less reliable solutions (edge detectors do not extract boundaries well enough) and slower ones.

The Harris detector www.ee.surrey.ac.uk/research/vssp/ravl/share/doc/ravl/auto/basic/class/ravlimagen.cornerdetectorharrisc.html
In 1987, Harris proposed a new type of detector: the Plessey algorithm [1]. The computations are based entirely on first-order differential quantities. The detector computes the image derivatives using n x n first-difference approximations together with a Gaussian smoothing kernel G(x, y) of a given standard deviation. The following 2x2 symmetric matrix is considered at each image point (pixel):
M = [ g_x^2     g_x g_y ]
    [ g_x g_y   g_y^2   ]
By evaluating the eigenvalues of the matrix M, we can classify the image structure at each point:
1. If both eigenvalues are small, the intensity of the windowed image region is approximately constant (homogeneous region).
2. If one eigenvalue is high and the other is low, the point lies on an edge.
3. If both eigenvalues are sufficiently large, the point is declared to be a corner.
In the Plessey implementation, the cornerness is computed as the ratio
R_P = Trace(M) / Det(M)
Thus, a point is marked as a corner if the value of R_P is less than the threshold and is a local minimum.
Improvement: use R = Det(M) - k Trace(M)^2 with k = 0.04 instead. Corners are then local maxima of this cornerness function.
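
The improved cornerness function above can be sketched densely over an image with NumPy/SciPy. This is an illustrative sketch, not the Plessey implementation: the function name, the Sobel derivative approximation, and the default sigma are assumptions of mine; only the formula R = Det(M) - k Trace(M)^2 with k = 0.04 comes from the slide.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(image, k=0.04, sigma=1.0):
    """Cornerness R = Det(M) - k*Trace(M)^2 at every pixel.

    M is the 2x2 matrix of smoothed products of first derivatives;
    sigma is the standard deviation of the Gaussian kernel G(x, y).
    """
    img = image.astype(float)
    gx = sobel(img, axis=1)  # derivative approximation along x
    gy = sobel(img, axis=0)  # derivative approximation along y
    # Entries of M, smoothed by the Gaussian kernel
    a = gaussian_filter(gx * gx, sigma)
    b = gaussian_filter(gx * gy, sigma)
    c = gaussian_filter(gy * gy, sigma)
    det = a * c - b * b
    trace = a + c
    return det - k * trace ** 2
```

Corners would then be taken as local maxima of the returned map above a threshold, as the slide states.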

The KLT detector
In [3], Shi and Tomasi proposed the Kanade-Lucas-Tomasi (KLT) tracker, an algorithm that selects features which are optimal by construction. The basic principle of KLT is that a good feature is one that can be tracked well, so tracking should not be separated from feature extraction (we will see the KLT tracker later on). A good feature is a textured patch with high intensity variation in both the x and y directions, such as a corner. Denote the intensity function by g(x, y) and consider the local intensity variation matrix:
Z = [ g_x^2     g_x g_y ]
    [ g_x g_y   g_y^2   ]
A patch defined by (say) a 25x25 window is accepted as a candidate feature if, at the center of the window, both eigenvalues of Z, α and β, exceed a predefined threshold t:
min(α, β) > t
A separate parameter sets the minimum distance between (the centers of) the features. The (maximum) number of features to be tracked, N, is specified by the user. In the first frame of a sequence, the candidate features are ranked according to their strength, defined by min(α, β). Then the N strongest features are selected if available. (If not, all candidates are used.)
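
A minimal sketch of this selection rule, assuming NumPy/SciPy. The function name, window size, and spacing defaults are illustrative, not from the slide; the substance is the ranking by min(α, β) and the minimum-distance constraint described above.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def klt_features(image, window=7, max_features=50, min_distance=10, threshold=0.0):
    """Select feature points ranked by the smaller eigenvalue of Z over a window."""
    img = image.astype(float)
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    # Window averages (proportional to window sums) of the entries of Z
    a = uniform_filter(gx * gx, window)
    b = uniform_filter(gx * gy, window)
    c = uniform_filter(gy * gy, window)
    # Smaller eigenvalue of the 2x2 symmetric matrix [[a, b], [b, c]]
    lam_min = 0.5 * ((a + c) - np.sqrt((a - c) ** 2 + 4.0 * b ** 2))
    # Rank candidates by strength min(alpha, beta), strongest first
    order = np.argsort(-lam_min, axis=None)
    ys, xs = np.unravel_index(order, lam_min.shape)
    chosen = []
    for y, x in zip(ys, xs):
        if lam_min[y, x] <= threshold or len(chosen) == max_features:
            break
        # Enforce the minimum distance between feature centers
        if all((y - cy) ** 2 + (x - cx) ** 2 >= min_distance ** 2
               for cy, cx in chosen):
            chosen.append((int(y), int(x)))
    return chosen
```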

The Susan detector
The SUSAN detector, published by Smith and Brady in 1997 [6], differs from the others in that it does not use any derivatives (no curvature or edge information is required). The acronym SUSAN stands for Smallest Univalue Segment Assimilating Nucleus.
1. The detector successively processes the points of the input image.
2. The point being examined is called the nucleus.
3. The decision whether or not a point (nucleus) is a corner is based on examining a circular neighborhood centered around the nucleus.
4. The points of the neighborhood whose brightness is approximately the same as that of the nucleus form an area referred to as the USAN, standing for Univalue Segment Assimilating Nucleus. In Figure 2, each mask from Figure 1 is depicted with its USAN shown in grey. The shape of the USAN conveys important information about the structure of the image in the region around the nucleus. To analyse the shape of the USAN, its area and centroid are computed.
5. Compare the brightness difference between the nucleus and its neighbors (pixels within the same circular mask). To do so, for each pixel of the mask, evaluate:
c(r, r0) = 1 if |I(r) - I(r0)| <= t, and 0 otherwise
where t is a brightness difference threshold; r0 denotes the nucleus and r a point in its neighborhood; c stands for the output of the comparison; and I(x) is the brightness of any pixel x.

The Susan detector
6. The above comparison is done for each pixel r within the mask, and the USAN's area n (the count of pixels belonging to the USAN) is defined by:
n(r0) = Σ_r c(r, r0)
7. n is compared with a geometric threshold g, which is set to half of the mask area (the total number of pixels in the mask). For detecting corners, the following response function is introduced:
R(r0) = g - n(r0) if n(r0) < g, and 0 otherwise
8. Corners are detected at the points where the function R(r0) has its local maxima. This is a clear formulation of the SUSAN principle: the smallest USAN areas give the greatest values of R(r0), which are crucial for the detection.
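
Steps 5-8 can be sketched as a straightforward (unoptimised) Python loop, assuming NumPy. The function name, mask radius, and brightness threshold default are illustrative choices of mine; the logic is the c(r, r0) comparison, the area n(r0), and the response R(r0) = g - n(r0) defined above.

```python
import numpy as np

def susan_response(image, radius=3, t=27):
    """SUSAN corner response R(r0) = g - n(r0), where n is the USAN area."""
    img = image.astype(float)
    h, w = img.shape
    # Offsets inside the circular mask, excluding the nucleus itself
    offs = [(dy, dx)
            for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)
            if 0 < dy * dy + dx * dx <= radius * radius]
    g = len(offs) / 2.0  # geometric threshold: half of the mask area
    R = np.zeros((h, w))
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            # n(r0): count of mask pixels within brightness t of the nucleus
            n = sum(1 for dy, dx in offs
                    if abs(img[y + dy, x + dx] - img[y, x]) <= t)
            R[y, x] = g - n if n < g else 0.0
    return R
```

Corners would then be the local maxima of the returned map, per step 8.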

The Susan detector (figures) Figure 1: Five circular masks at different places on a simple image. Figure 2: Five circular masks with similarity coloring; USANs are shown as the grey parts of the masks.

The Susan detector
Unfortunately, there are special cases in which the process described above fails. In the left figure below, the USAN is not continuous but separated into two small regions. The nucleus is clearly not a corner, even though the response function has a local maximum there. In another special case (right figure below), the nucleus lies in a long thin area, so the USAN is also very small. The value of R is therefore high, which contradicts the fact that the point in question is not a corner.

The Susan detector (improvement)
To overcome these problems, the authors propose to apply two rules:
1. Find the centre of gravity of the USAN and its distance from the nucleus. The point cannot become a corner if this distance is small.
2. Continuity of the USAN is required: all pixels within the mask that lie on the straight line pointing outwards from the nucleus in the direction of the centre of gravity must be part of the USAN for a corner to be detected.
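
Rule 1 can be sketched as a small helper, assuming NumPy. The function name, the flag-list representation of the USAN, and the distance threshold default are hypothetical; the slide only states that a small centroid-to-nucleus distance disqualifies the point.

```python
import numpy as np

def usan_centroid_check(mask_offsets, usan_flags, min_dist=0.5):
    """Rule 1: accept the nucleus only if the USAN centroid is far enough away.

    mask_offsets: (dy, dx) offsets of the circular mask relative to the nucleus.
    usan_flags:   one boolean per offset, True if that pixel belongs to the USAN.
    """
    pts = np.array([off for off, inside in zip(mask_offsets, usan_flags)
                    if inside], dtype=float)
    if len(pts) == 0:
        return False
    centroid = pts.mean(axis=0)  # centre of gravity, relative to nucleus (0, 0)
    return bool(np.hypot(*centroid) >= min_dist)
```

For a genuine corner the USAN fills only one quadrant of the mask, so its centroid is pulled away from the nucleus; for thin-line false positives the USAN straddles the nucleus and the centroid stays near (0, 0).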

Results (captions of the six result images):
Harris, t = 0.5; KLT, n = 25, 7x7 window; Susan, t = 15
Harris, t = 0.01; KLT, n = 75, 7x7 window; Susan, t = 35

References
[1] C.G. Harris, Determination of Ego-motion from Matched Points, 3rd Alvey Vision Conference, pages 189-192, 1987.
[2] C. Harris and M. Stephens, A Combined Corner and Edge Detector, Plessey Research Roke Manor, UK, 1988.
[3] J. Shi and C. Tomasi, Good Features to Track, IEEE Conference on Computer Vision and Pattern Recognition, pages 593-600, 1994.
[4] B.D. Lucas and T. Kanade, An Iterative Image Registration Technique with an Application to Stereo Vision, International Joint Conference on Artificial Intelligence, pages 674-679, 1981.
[5] S. Birchfield, KLT: An Implementation of the Kanade-Lucas-Tomasi Feature Tracker, http://www.ces.clemson.edu/~stb/klt/.
[6] S.M. Smith and J.M. Brady, SUSAN - A New Approach to Low Level Image Processing, International Journal of Computer Vision, 23(1): 44-78, May 1997.
[7] F. Mokhtarian and R. Suomela, Robust Image Corner Detection through Curvature Scale Space, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, 1998.
[8] G. Scott and H. Longuet-Higgins, An Algorithm for Associating the Features of Two Patterns, Proc. Royal Society London, B244, pages 21-26, 1991.
[9] E. Sojka, Reconstructing Three-dimensional Objects from the Images Produced by a Moving Camera, 21st International Workshop Advanced Simulation of Systems (ASIS), Krnov, Czech Republic, pages 65-72, 1999.
[10] S. Pollard, J. Mayhew and J. Frisby, PMF: A Stereo Correspondence Algorithm Using a Disparity Gradient Limit, Perception, 14, 1985.
[11] S. Ullman, The Interpretation of Visual Motion, MIT Press, Cambridge, MA, 1979.
[12] M. Pilu, Uncalibrated Stereo Correspondence by Singular Value Decomposition, HP Laboratories Bristol, HPL-97-96, 1997.