OBJECT TRACKING AND RECOGNITION BY EDGE MOTOCOMPENSATION *


L. CAPODIFERRO, M. GRILLI, F. IACOLUCCI
Fondazione Ugo Bordoni, Via B. Castiglione 59, 00142 Rome, Italy
E-mail: licia@fub.it

A. LAURENTI, G. JACOVITTI
INFOCOM Dpt., University of Rome "La Sapienza", Via Eudossiana 18, 00184 Rome, Italy
E-mail: gjacov@infocom.ing.uniroma1.it

In the present contribution we describe an integrated technique of contour tracking based on motion compensation of salient points localized on edges, followed by a procedure of contour updating. Salient points are searched for by examining the edge orientation as a function of the curvilinear abscissa, and bifurcations are detected by means of colour variations.

1. Introduction

One of the fundamental issues in content analysis of video sequences is moving object detection and tracking. Object tracking is useful for many different tasks, including surveillance, vehicle velocity estimation, biometrics, scene indexing and retrieval, etc. Conventional techniques for object tracking and recognition are mainly based on inter-frame differences, motion field analysis, template matching, etc. A distinct class of methods is based on tracking specific features, such as salient points and contours. Salient point based object tracking is essentially aimed at improving robustness against disturbing factors such as illumination changes, scale and rotation, shape modification, out-of-plane rotation, etc. In fact, joint tracking of multiple isolated points makes it possible to counteract the effects of geometrical changes, provided that such points are well recognized and localized (see for instance [5]). Contour tracking provides additional possibilities, such as dynamic segmentation and shape evolution detection and classification.

In the present contribution we describe an integrated technique of contour tracking based on motion compensation of salient points localized on edges, followed by a procedure of contour updating. The procedure constitutes an upgrade of the methods explored in [2] and [3], and also takes into account the results of [1] and [4].

* This material is based upon work supported by the IST programme of the EU in the project IST-2000-32795 SCHEMA.

A problem associated with the contour tracking technique is that of properly separating single edge patterns pertaining to different objects. This is a central requirement for higher-level processing, where structural content analysis is performed. In essence, such a segmentation of edges requires the detection of the bifurcations (T patterns) occurring on contours when different objects overlap. Solutions to this problem based on differential operators are usually prone to false or missed detections. For this reason, we follow here a more robust approach based on the comparison of colours on either side of the edge trajectories.

Another problem consists of selecting salient points that can be unambiguously moto-compensated. Unlike the method followed in [3], where circular harmonic operators were employed, here the evolution of the curvature along the contour trajectories is measured. Salient points are defined as isolated maxima of the curvature diagram as a function of the curvilinear abscissa measured on each edge. Such a set of salient points allows for contour moto-compensation using a technique similar to the watershed approach described in [4]. In essence, contours are updated frame by frame by comparing the edges passing through the moto-compensated points and using proper relaxation criteria. Some tricks for handling events such as edge occlusion and discovery, fusion and division are also employed.

2. Salient point detection method

The content analysis of the image starts with the extraction of edge contours. This is done by applying a smoothed complex gradient operator constituted by a zero-radial-order, first-angular-order Gauss-Laguerre Circular Harmonic Function (GL-CHF), whose expression in polar coordinates for radial order k = 0 and angular order n is:

g_s^{(n,0)}(r, \vartheta) = \frac{(-1)^n}{2\pi s^2} \left(\frac{r}{s}\right)^{n} e^{-\frac{r^2}{2s^2}} \, e^{jn\vartheta}    (1)

where s is a scaling factor. These operators, which are obtained by complex differentiation of the Gaussian kernel, are polar separable and give complex outputs tuned to specific n-fold symmetric patterns. In particular, first-angular-order operators are tuned to edges, whose strength is measured by the magnitude and whose orientation angle is measured by the phase. We apply this operator to each colour layer and, in order to capture every gradient maximum irrespective of its colour, a unique multispectral complex gradient image is formed in a non-linear way by selecting, at each point of the image support, the complex edge value having the largest magnitude. Complex Multispectral Edges (CME) are then extracted by selecting, in the multispectral complex gradient image, the largest-magnitude points lying on the trajectories of the Laplacian zero-crossings.
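Since the first-angular-order GL-CHF acts as a smoothed complex gradient (magnitude giving edge strength, phase giving orientation), the per-channel filtering and the non-linear multispectral combination described above can be sketched with standard tools. The Python sketch below is a rough illustration, not the authors' implementation: it stands in for the operator of Eq. (1) with ordinary Gaussian derivative filters, approximates the Laplacian zero-crossing trajectories with a thin band around zero of a Gaussian Laplacian, and the function names, the scale s and the quantile thresholds are all illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def complex_gradient(channel, s=2.0):
    """Smoothed complex gradient of one colour channel.

    Stands in for the first-angular-order operator of Eq. (1) using
    Gaussian derivative filters: real part = d/dx, imaginary part = d/dy.
    Magnitude ~ edge strength, phase ~ edge orientation.
    """
    gx = ndimage.gaussian_filter(channel, sigma=s, order=(0, 1))  # derivative along x (columns)
    gy = ndimage.gaussian_filter(channel, sigma=s, order=(1, 0))  # derivative along y (rows)
    return gx + 1j * gy

def multispectral_complex_gradient(rgb, s=2.0):
    """Non-linear multispectral combination: at each pixel keep the
    complex edge value of largest magnitude among the R, G, B responses."""
    responses = np.stack(
        [complex_gradient(rgb[..., c].astype(float), s) for c in range(3)], axis=-1)
    winner = np.argmax(np.abs(responses), axis=-1)
    return np.take_along_axis(responses, winner[..., None], axis=-1)[..., 0]

def cme_map(rgb, s=2.0, strength_quantile=0.90):
    """Rough Complex Multispectral Edge (CME) map: strong-magnitude points
    lying in a thin band around the zero-crossings of a Gaussian Laplacian."""
    grad = multispectral_complex_gradient(rgb, s)
    lum = rgb.astype(float).mean(axis=-1)
    lap = ndimage.gaussian_laplace(lum, sigma=s)
    near_zero = np.abs(lap) < np.quantile(np.abs(lap), 0.05)
    strong = np.abs(grad) > np.quantile(np.abs(grad), strength_quantile)
    return grad, near_zero & strong
```

The complex gradient image returned by this sketch can then be sampled along each extracted contour, which is the form used by the terminal-point and corner detectors of the next steps.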

The second step of our procedure is aimed at separating contours pertaining to different objects. As a preliminary step, we select possible terminal salient points by looking for local maxima of the Euclidean distance between the magnitudes of the first-angular-order GL-CHF components calculated at two consecutive points of a generic edge. Thus, denoting by W_i^R, W_i^G, W_i^B the complex edge values for the three colour channels R, G and B along each contour at the i-th position, we calculate the maxima of the following quantity:

DE_W = \sqrt{ \left( |W_i^R| - |W_{i-1}^R| \right)^2 + \left( |W_i^G| - |W_{i-1}^G| \right)^2 + \left( |W_i^B| - |W_{i-1}^B| \right)^2 }    (2)

This detector alone is not sufficiently robust for our purpose. Therefore, in order to confirm the presence of a bifurcation, we pass to a second decision stage, where we verify the occurrence of local maxima of the Euclidean distance between consecutive colour values, say R_i, G_i and B_i, on both sides of the CME, taken in the original image:

DE_{sx,dx} = \sqrt{ \left( R_i - R_{i-1} \right)^2 + \left( G_i - G_{i-1} \right)^2 + \left( B_i - B_{i-1} \right)^2 }    (3)

If a local maximum occurs on at least one side of a contour, a break point is marked and labelled as a terminal point. In order to limit the number of salient points detected on each contour, an adaptive threshold is applied. An example of the result of this CME-based contour segmentation technique is shown in Fig. 1.

Figure 1. Example of terminal salient point detection.
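The two-stage test of Eqs. (2) and (3) operates on samples ordered along each contour, so it can be prototyped directly on 1-D arrays. In the rough sketch below (not the authors' code), the contour is assumed to be available as per-channel complex edge values for Eq. (2) and as colour samples collected on the two sides of the CME for Eq. (3); the adaptive threshold is replaced, purely for illustration, by the mean plus one standard deviation of each distance profile.

```python
import numpy as np
from scipy.signal import find_peaks

def de_w(w_r, w_g, w_b):
    """Eq. (2): Euclidean distance between the GL-CHF magnitudes at
    consecutive contour points (1-D arrays of complex edge values)."""
    d = lambda w: np.diff(np.abs(w))
    return np.sqrt(d(w_r) ** 2 + d(w_g) ** 2 + d(w_b) ** 2)

def de_colour(rgb_side):
    """Eq. (3): Euclidean distance between consecutive colour samples
    taken on one side of the CME; rgb_side has shape (N, 3)."""
    diff = np.diff(rgb_side.astype(float), axis=0)
    return np.sqrt((diff ** 2).sum(axis=1))

def terminal_points(w_r, w_g, w_b, rgb_left, rgb_right):
    """Two-stage test: candidate maxima of DE_W are kept only if a colour
    maximum occurs on at least one side of the contour."""
    profile_w = de_w(w_r, w_g, w_b)
    thr = profile_w.mean() + profile_w.std()          # illustrative adaptive threshold
    candidates, _ = find_peaks(profile_w, height=thr)

    confirmed = []
    for side in (de_colour(rgb_left), de_colour(rgb_right)):
        thr_c = side.mean() + side.std()
        peaks, _ = find_peaks(side, height=thr_c)
        confirmed.append(set(peaks))

    # indices refer to the (i-1) -> i transitions along the contour
    return [i for i in candidates if i in confirmed[0] or i in confirmed[1]]
```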

The subsequent step consists of locating, along each contour, the salient points defined as the ones having the largest curvature. Instead of using conventional geometric techniques based on the search for vertices of the edge trajectories, we extract local orientations by means of the phase of the CME. Note that the CME is based on smoothed operators, whose smoothing effect is easily regulated through the scale parameter s. In particular, we detect salient points as those corresponding to abrupt and strong phase changes. An example is shown in Fig. 2, where the phase diagram calculated along the contour just above the head clearly indicates the occurrence of a vertex as a salient point. An example of the result of this CME-based salient point extraction is shown in Fig. 3.

Figure 2. Diagram of the phase values (in radians) versus point number along a generic contour in the presence of a corner point.
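Because the CME phase tracks the local edge orientation, a corner appears as an abrupt jump of the unwrapped phase along the contour, as in Fig. 2. A minimal sketch of this detection follows, assuming the contour is given as an ordered 1-D array of complex CME samples; the jump threshold is an illustrative value, not one taken from the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def corner_salient_points(cme_along_contour, min_jump_rad=0.8):
    """Detect corner salient points as abrupt, strong changes of the CME
    phase (i.e. of the local edge orientation) along a contour.

    cme_along_contour : 1-D complex array of CME values sampled in order
                        of curvilinear abscissa.
    min_jump_rad      : minimum per-sample orientation change (radians)
                        accepted as a corner; illustrative value.
    """
    phase = np.unwrap(np.angle(cme_along_contour))   # continuous orientation profile
    turn = np.abs(np.diff(phase))                    # orientation change per step
    peaks, _ = find_peaks(turn, height=min_jump_rad)
    return peaks, phase

# Usage: the returned peak indices mark the contour positions where a
# vertex (corner) occurs, as in the phase diagram of Fig. 2.
```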

Figure 3. Example of corner salient point detection.

3. Object tracking

In essence, the present approach is based on the detection of contours in the single frames of a sequence and on the search for correspondences between frames. To do that, we adopt the strategy of linking contours by means of selected points whose correspondence can be reliably established; salient points possess such a property. The salient points of a generic contour in the i-th frame are therefore motion-compensated in the (i+1)-th frame. To improve the robustness of moto-compensation and to reject false edge correspondences, a capture mask is applied to the edge on frame i and compared to the estimated edge on frame i+1. The capture mask is defined around the contour, and it is motion compensated according to the salient point motion (Fig. 4). Then, if the contour passing through the motion-compensated points in the (i+1)-th frame is contained in the motion-compensated capture mask, it is accepted as the motion-compensated contour; if it is not entirely contained in the capture mask, it is rejected. This allows moderate contour modification frame by frame along the sequence.

The colour-based segmentation procedure described above makes it possible to relate contours to objects and therefore to perform object tracking. In Fig. 5, the tracking of an object in a sequence using the whole edge moto-compensation procedure is shown. This sequence presents a quite critical situation, since the considered object (the ball) does not present prominent salient points, and bifurcations occur due to the presence of background contours. Nevertheless, long-term tracking is achieved.
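The capture-mask test of Fig. 4 can be sketched as follows: a binary band is grown around the contour of frame i, translated according to the motion of the matched salient points, and the contour retraced in frame i+1 is accepted only if it falls entirely inside the translated band. The band half-width and the use of a single average translation are simplifying assumptions made here for illustration and are not specified in the paper.

```python
import numpy as np
from scipy import ndimage

def capture_mask(contour_ij, shape, half_width=5):
    """Binary band of the given half-width (pixels) around a contour,
    given as an (N, 2) integer array of (row, col) coordinates."""
    mask = np.zeros(shape, dtype=bool)
    mask[contour_ij[:, 0], contour_ij[:, 1]] = True
    return ndimage.binary_dilation(mask, iterations=half_width)

def accept_retraced_contour(contour_i, contour_next, salient_i, salient_next, shape):
    """Accept the contour retraced in frame i+1 only if it lies entirely
    inside the motion-compensated capture mask built on frame i."""
    # average displacement of the matched salient points (pure translation here)
    motion = np.round((salient_next - salient_i).mean(axis=0)).astype(int)
    mask = capture_mask(contour_i, shape)
    shifted = ndimage.shift(mask.astype(np.uint8), shift=motion, order=0, cval=0).astype(bool)
    contained = shifted[contour_next[:, 0], contour_next[:, 1]]
    return bool(contained.all())
```

A contour rejected by this test is simply not linked to the object in frame i+1, which is what allows moderate shape changes while discarding false edge correspondences.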

4. Conclusion

Analysis of the contour curvature not only makes it possible to locate salient points useful for object tracking, but also provides a simple means for object classification. In particular, the persistence of a specific structure through a sequence of frames can be assessed simply by verifying the shape of its curvature plot. Applications of object tracking and recognition, as well as of dynamic event detection, based on the presented technique are currently under study.

Figure 4. The capture mask: a mask is generated around the contour on frame i, projected onto frame i+1 according to the salient point motion, and compared with the contour retraced on frame i+1.

Figure 5. Result of tracking an object (the ball) in the children sequence.

References

1. F. Mokhtarian and R. Suomela, "Robust Image Corner Detection through Curvature Scale Space", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 12, pp. 1376-1381, 1998.
2. G. Andreani, L. Capodiferro, S. Puledda, G. Jacovitti, "Decomposition of still and video images with edge segments", in Wavelet Applications in Signal and Image Processing VII, Proc. SPIE vol. 3813, pp. 957-965, July 1999.
3. L. Capodiferro, F. Davoli, G. Jacovitti, "String based analysis of image content", Content Based Multimedia Indexing Workshop (CBMI 2001), Brescia, Italy, 19-21 September 2001.
4. H.T. Nguyen, M. Worring, "Multifeature object tracking using a model-free approach", Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, Hilton Head, USA, 2000.
5. D.G. Lowe, "Object Recognition from Local Scale-Invariant Features", Proc. of the International Conference on Computer Vision, Corfu, September 1999.