Pixel-Pair Features Selection for Vehicle Tracking

2013 Second IAPR Asian Conference on Pattern Recognition

Zhibin Zhang, Xuezhen Li, Takio Kurita
Graduate School of Engineering, Hiroshima University
Higashihiroshima, Hiroshima 739-8521, JAPAN
Email: {m110601, m130646, tkurita}@hiroshima-u.ac.jp

Shinya Tanaka
Nissan Motor Company Ltd.
1, Morinosatoaoyama, Atsugi-shi, Kanagawa 243-0123, JAPAN
Email: shinya tanaka@mail.nissan.co.jp

Abstract

This paper proposes a novel tracking algorithm to cope with the appearance variations of vehicles in natural environments. The algorithm utilizes discriminative features, named pixel-pair features, to estimate the similarity between the template image and candidate matching images. Pixel-pair features have been shown to be robust to illumination changes and partial occlusion of the tracked object. This paper improves the original feature selection algorithm to increase the tracking performance under other appearance changes, such as shape deformation, drifting, and view angle change. The new feature selection algorithm incrementally selects discriminative pixel-pair features whose matching error between the target and the background is lower than a given threshold. A roulette selection method based on edge values is also utilized to increase the probability of selecting more informative feature points. The selected features are therefore considered robust to shape deformation and view angle changes. Compared with the original feature selection algorithm, our algorithm shows excellent robustness on a variety of videos that include illumination changes, shape deformation, drifting, and partial occlusion.

I. INTRODUCTION

Recently, visual tracking of objects has become a very active topic in computer vision, because it plays a pivotal role in various applications such as video surveillance, autonomous driving, intelligent transport systems, human-computer interaction, and robotics. Efficient and robust visual tracking of objects in unconstrained environments is therefore very important for these applications.

Visual tracking can be considered the task of estimating, over time, the position of objects of interest in videos or image sequences [1]. The main challenge in designing robust visual tracking methods is to handle the appearance variations of objects that can occur in natural environments, such as pose changes, illumination changes, shape deformation, view angle changes, and partial occlusion of objects. A variety of visual tracking algorithms have been presented to address these difficulties.

Many recent tracking algorithms employ classification techniques, such as boosting and support vector machines. Avidan [2] redefined the tracking problem as a binary classification problem: the algorithm discriminates the tracked object from the background. In this method, features are extracted from both the object and the background, and a classifier is trained on the extracted features; the trained classifier is then used to discriminate between the object and the background. For instance, Grabner et al. [3]-[5] trained an AdaBoost classifier to separate an image patch with the object in the correct position from image patches with the object in incorrect positions, so that the position of the object could be estimated with higher precision. In addition, the feature weights are updated online to achieve stable and robust visual tracking.
Because a large number of computations is necessary to train and update classifiers, the method proposed by Collins et al. [6] replaces the training of classifiers with the selection of discriminative features. Nishida et al. [7], [8] proposed a tracking algorithm in which pixel-pair features are used to discriminate between an image patch with the object in the correct position and image patches with the object in an incorrect position. A pixel-pair feature is determined by the relative difference of the intensities of two pixels [9]-[11] and can be considered robust to illumination changes. By adopting pixel-pair features in our tracking algorithm, the tracking is expected to become robust to illumination changes of the tracked vehicle. The discriminative pixel-pairs between the target and the background are selected and used in the tracking; relatively good pixel-pairs are selected from a pool of randomly generated pixel-pairs.

In the proposed tracking algorithm, we also adopt pixel-pair features and develop a new pixel-pair feature selection algorithm that makes the selected features more robust than those of the original selection algorithm [7], [8]. The following three components are introduced into the feature selection of the proposed tracking algorithm. With these improvements, the robustness of our algorithm to other appearance changes of the vehicle during tracking, such as shape deformation, drifting, and view angle changes, can be improved.

1) Threshold: A threshold is introduced to restrain the matching error of the selected pixel-pair features. Only features with high discriminative power are used in the tracking, and the less discriminative features are discarded.

2) Edge values: The pixels of each pixel-pair are selected by roulette selection based on edge values. This increases the probability of selecting informative feature points (salient features such as corner or edge pixels) in the pixel-pairs, and improves the robustness of the tracking to appearance changes of the target.

[Fig. 1. Tracking procedure.]
[Fig. 2. Pixel-Pair Feature.]

3) Incremental selection: To select only discriminative pixel-pair features and discard the less discriminative ones, we adopt incremental feature selection instead of the batch selection used in Nishida's algorithm [7], [8]. Pixel-pairs with better discriminative power are incrementally selected one by one until the necessary number of pixel-pairs is obtained.

The rest of the paper is organized as follows. In the next section, we introduce the discriminative pixel-pair feature and Nishida's original discriminative pixel-pair feature selection algorithm. In section III, we give the details of our proposed pixel-pair feature selection algorithm. In section IV, we demonstrate our algorithm on challenging videos captured from a vehicle-mounted camera and show the results of the experiments. The last section concludes the paper.

II. TRACKING BASED ON PIXEL-PAIR FEATURES

A. Tracking procedure

The tracking problem is defined as a binary classification task: detecting the image patch with the target in the correct position in a new video frame. The tracking procedure is depicted in figure 1. Since we are interested in tracking a specified target, we first suppose that the target has already been detected in the t-th video frame, so the position and scale of the tracked vehicle are known. Next, candidate image patches of different sizes in a surrounding search region of the (t+1)-th frame are matched using the pixel-pair features selected on the template. Finally, the best-match region is taken as the position of the target in frame t+1.

B. Pixel-Pair Feature

In this paper, we adopt pixel-pair features [7], [8] as the primitive features for evaluating the similarity between the template image and the candidate matching images, since they are robust to illumination changes. The pixel-pair feature was proposed as an extension of the statistical reach feature (SRF) [9]. In the pixel-pair features used in the proposed tracking, the constraint on the distance between the pixels of a pair is relaxed, and the pixel-pairs are selected depending on the classification accuracy on the training samples. The definition of the pixel-pair feature and of the similarity index c(I, J) between a positive image I and a negative image J is as follows (see figure 2).

Assume the input images are of size W × H. The grid Γ is defined as the set of pixel coordinates of the images I and J, namely

\[ \Gamma := \{(i, j) \mid i = 1, \dots, W,\; j = 1, \dots, H\}. \tag{1} \]

An image of size W × H is considered as an intensity function defined on Γ. For an arbitrary pair (p, q) of grid points in Γ, the value ppf(p → q; T_p) is defined as

\[ \mathrm{ppf}(p \to q;\, T_p) := \begin{cases} 1 & I(p) - I(q) \ge T_p \\ -1 & I(p) - I(q) \le -T_p \\ \phi & \text{otherwise,} \end{cases} \tag{2} \]

where T_p (> 0) is a threshold on the difference of intensities. The pair (p, q) of grid points is called a pixel-pair feature when ppf(p → q; T_p) ≠ φ. In the following, we write ppf(p → q) instead of ppf(p → q; T_p) unless confusion arises. We suppose the number of required pixel-pair features is N.
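As a concrete illustration, here is a minimal Python sketch of Eq. (2), assuming grayscale patches stored as 2-D NumPy arrays indexed by (row, col); the function and variable names are ours, not from the paper:

```python
import numpy as np

def ppf(I, p, q, T_p=30):
    """Pixel-pair feature ppf(p -> q; T_p) of Eq. (2) on template patch I.

    Returns +1 if I(p) - I(q) >= T_p, -1 if I(p) - I(q) <= -T_p,
    and None (standing in for the paper's phi) otherwise.
    """
    d = int(I[p]) - int(I[q])  # cast avoids uint8 wrap-around
    if d >= T_p:
        return 1
    if d <= -T_p:
        return -1
    return None  # (p, q) does not qualify as a pixel-pair feature
```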
By choosing a set of pairs (p, q) with a selection policy s, a pixel-pair feature set RP_s is defined as

\[ RP_s(\mathbf{p}, \mathbf{q}, I, T_p, N) := \{(p, q) \mid \mathrm{ppf}(p \to q) \neq \phi\}, \tag{3} \]

where (p, q) ∈ Γ × Γ, \(\mathbf{p} = \{p_1, \dots, p_N\}\), and \(\mathbf{q} = \{q_1, \dots, q_N\}\). The incremental sign b(p → q) of a negative image patch J, used to calculate the similarity between image patches I and J, is defined as

\[ b(p \to q) := \begin{cases} 1 & J(p) \ge J(q) \\ -1 & \text{otherwise.} \end{cases} \tag{4} \]

For an arbitrary pixel-pair (p, q) ∈ RP_s, the single-pair similarity r(p, q, J) between image patches I and J indicates whether the pair takes the same feature value in both patches:

\[ r(p, q, J) = \mathbf{1}\{\mathrm{ppf}(p \to q) = b(p \to q)\}. \tag{5} \]

The similarity index c_s(I, J, RP_s) between image patches I and J, measured using the selected pixel-pair feature set RP_s, is defined as

\[ c_s(I, J, RP_s) = \frac{1}{|RP_s|} \sum_{(p,q) \in RP_s} r(p, q, J), \tag{6} \]

where |RP_s| denotes the number of elements of the pixel-pair feature set RP_s.
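The sign test of Eq. (4) and the similarity index of Eqs. (5)-(6) translate directly into code. A hedged sketch, reusing `ppf` above and representing the selected set RP_s as a list of ((p, q), value) tuples of our own devising:

```python
def b_sign(J, p, q):
    """Incremental sign b(p -> q) of Eq. (4) on a candidate patch J."""
    return 1 if int(J[p]) >= int(J[q]) else -1

def similarity(rp_s, J):
    """Similarity index c_s(I, J, RP_s) of Eq. (6): the fraction of
    selected pixel-pairs whose sign in J agrees with their feature
    value in the template I.

    rp_s: list of ((p, q), ppf_value) tuples for the selected set RP_s.
    """
    agree = sum(1 for (p, q), v in rp_s if b_sign(J, p, q) == v)  # Eq. (5)
    return agree / len(rp_s)
```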

[Fig. 3. Training samples.]

C. Preparation of training samples

Training sample extraction and discriminative pixel-pair feature selection are carried out on the first video frame. We assume the initial position and scale of the target vehicle are given by a vehicle detector. As described in figure 3, first, the image patch with the target in the correct position is taken from this frame as the positive image patch I, and negative image patches J_1, J_2, ..., J_F are obtained by taking regions of the same size as the positive image patch from its surrounding background. Second, these image patches are used as the training samples to design the binary classifier for tracking. Finally, the discriminative pixel-pair features for distinguishing the positive image patch I from the negative image patches J_i are extracted. The selected pixel-pair features can then be used to search for the best-match image region in the subsequent video frames.

D. Nishida's feature selection algorithm

In Nishida's original feature selection algorithm, more pixel-pair features than needed are first generated randomly using the pixel gray values of the template image. The top-ranked pixel-pairs are then selected and used for tracking. To evaluate the goodness of the pixel-pairs, their matching errors are calculated and the pairs are ordered in a list sorted from low to high matching error. Finally, the required number of pixel-pair features is selected from the beginning of the sorted list.

III. PROPOSED FEATURE SELECTION ALGORITHM

In Nishida's feature selection method, the top-ranked pixel-pair features are selected by batch processing. If there are not enough discriminative pixel-pairs among the randomly generated candidates, non-discriminative pixel-pairs may be included in the selected pixel-pair features, and such features can disturb stable tracking. To ensure the discriminative power of each pixel-pair, the goodness of each pixel-pair should be evaluated, and only the pixel-pairs with discriminative power should be used.

For an arbitrary pixel-pair (p, q) ∈ RP_s, we redefine the single-pair similarity r(p, q, J) as the single-pair matching similarity m(p, q, I, J) between the positive image patch I and a negative image patch J:

\[ m(p, q, I, J) := \begin{cases} 1 & \mathrm{ppf}(p \to q) = b(p \to q) \\ 0 & \mathrm{ppf}(p \to q) \neq b(p \to q). \end{cases} \tag{7} \]

The matching error of a single pair (p, q), which evaluates the discriminability of this pair between the positive image patch I and all the negative image patches {J_1, J_2, ..., J_F}, is then defined as

\[ E\big((p, q) \in RP_s,\, I,\, \{J_i\}_{1}^{F}\big) = \sum_{i=1}^{F} m(p, q, I, J_i). \tag{8} \]

We use the matching error of a pixel-pair feature to represent its discriminative power. For an arbitrary pixel-pair (p, q) ∈ RP_s, the matching error equals the number of negative image patches that are successfully matched with the positive image patch by this pixel-pair feature; therefore, the pixel-pair feature with the lowest matching error has the highest discriminative power. Apart from this, we already know that pixel-pair features are robust to illumination changes.
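A sketch of the matching error of Eq. (8), together with Nishida-style batch selection as described in section II-D; `candidates` is assumed to be a list of randomly generated ((p, q), ppf_value) tuples, and `b_sign` is reused from the earlier sketch:

```python
def matching_error(pair, v, negatives):
    """Matching error E of Eq. (8): the number of negative patches J_i
    whose incremental sign agrees with the pair's feature value in the
    template; lower means more discriminative."""
    p, q = pair
    return sum(1 for J in negatives if b_sign(J, p, q) == v)

def batch_select(candidates, negatives, N=200):
    """Nishida-style batch selection: rank candidate pairs by matching
    error and keep the N best, however good or bad they are."""
    ranked = sorted(candidates,
                    key=lambda c: matching_error(c[0], c[1], negatives))
    return ranked[:N]
```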
In order to cope with other appearance variations of the target vehicle that may occur during tracking, such as shape deformation, drifting, and view angle changes, we introduce three additional components into our proposed feature selection algorithm.

A. Threshold

In Nishida's original feature selection algorithm, a set of pixel-pair features is selected randomly. Since the effect of each selected pixel-pair feature on the tracking results is not checked, some of these features may be generated from the surrounding background and some may have very low discriminative power. When the matching error of a pixel-pair is lower than half the number of negative image patches, the pair can correctly discriminate the object from the background. To obtain good pixel-pair features that benefit the tracking results, a threshold is employed to restrain the matching errors of the generated pixel-pairs: a pixel-pair feature is selected only when its matching error is lower than this threshold.

Figure 4 shows the matching errors of randomly selected pixel-pair features, sorted in increasing order. If we set the threshold to half the number of negative image patches, we see that many of the randomly selected pixel-pair features have low discriminative power and that the number of good features is small. The left part of the sorted pixel-pairs should therefore be kept as good features, and the right part should be discarded.

[Fig. 4. Matching errors of randomly selected pixel-pairs.]
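A hypothetical inspection snippet mirroring the curve in figure 4, reusing `matching_error` from the sketch above; `candidates` and `negatives` are assumed to hold the random pairs and the negative patches:

```python
# Sort the matching errors of randomly generated pairs in increasing
# order, as in Fig. 4, and count how many pass the F/2 criterion.
errors = sorted(matching_error(pair, v, negatives) for pair, v in candidates)
good = [e for e in errors if e < len(negatives) / 2]  # usable pairs
print(f"{len(good)} of {len(errors)} candidate pairs are discriminative")
```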

B. Edge values

Pixels such as corners are expected to be robust to shape deformation, drifting, and view angle changes. To increase the probability of selecting such informative pixels in the pixel-pairs, roulette selection of pixels based on their edge values is introduced in the proposed feature selection algorithm. The roulette selection selects pixels non-uniformly: the selection probability of a pixel is based on its computed edge value, so the roulette selection prioritizes pixels with larger edge values. Since the edge values of points on the edges and corners of the object are larger than those of points in other parts of the object, pixels located on the corners or edges of the tracking target are selected more often.

[Fig. 5. Pixel-pair generation by the proposed roulette selection method over edge values.]

The proposed roulette selection method, depicted in figure 5, is summarized as follows. As an initialization step, the edge values of all pixels of the positive image patch I are computed and stored in a list, in which each element holds the edge value of one pixel. To select a pixel, a random number is generated, and the pixel whose edge-value interval in the list contains the generated random number is chosen. This process is repeated to select the other pixel. The pixel-pair is thus formed by these two pixels, and the feature value of the pixel-pair can then be computed.

C. Incremental selection

To ensure high discriminative power of the selected pixel-pair features, we use an incremental feature selection method instead of the batch selection method used in Nishida's original feature selection algorithm. The matching error of each pixel-pair generated by the roulette selection over edge values is checked, and only a pixel-pair whose matching error is less than the specified threshold is selected and used for the tracking; a pixel-pair whose matching error is greater than the threshold is discarded. By this incremental selection, we can expect all the selected pixel-pair features to have high discriminative power, while the probability of selecting informative pixels such as corners of the tracking target is increased.

The implementation of the proposed pixel-pair feature selection algorithm is as follows (see the sketch below). First, we generate a pixel-pair feature using the roulette selection over the edge values of the positive image patch. Then we calculate the matching error of this pixel-pair and compare it with the given threshold. If the matching error is smaller than the threshold, we keep this pixel-pair feature for tracking; otherwise, we discard it. We repeat these operations, incrementally selecting pixel-pair features that satisfy the above condition, until the necessary number of pixel-pairs is obtained.
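Sections III-B and III-C can be read as one loop. The following is a minimal sketch, not the authors' code, assuming `edge` is a non-negative edge-magnitude image of the positive patch I (e.g., a Sobel magnitude) and reusing `ppf` and `matching_error` from the earlier sketches:

```python
def roulette_pick(cdf, W, rng):
    """Pick one pixel with probability proportional to its edge value,
    using the cumulative sum of the flattened edge image (cf. Fig. 5)."""
    r = rng.uniform(0.0, cdf[-1])
    return divmod(int(np.searchsorted(cdf, r)), W)  # (row, col)

def incremental_select(I, edge, negatives, N=200, T_p=30, E_T=45, seed=0):
    """Proposed selection: generate pairs by roulette selection over edge
    values and keep a pair only if its matching error is below E_T, until
    N pairs have been collected (a real implementation would also cap the
    number of trials)."""
    rng = np.random.default_rng(seed)
    cdf = np.cumsum(edge.ravel().astype(float))
    W = edge.shape[1]
    selected = []
    while len(selected) < N:
        p = roulette_pick(cdf, W, rng)
        q = roulette_pick(cdf, W, rng)
        v = ppf(I, p, q, T_p)
        if v is None:
            continue  # (p, q) is not a valid pixel-pair feature (Eq. 2)
        if matching_error((p, q), v, negatives) < E_T:
            selected.append(((p, q), v))  # keep only discriminative pairs
    return selected
```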
IV. EXPERIMENTS

In this section, the proposed feature selection algorithm is demonstrated on challenging videos that include illumination change, shape deformation, drifting, and partial occlusion. The test videos were all captured from a vehicle-mounted camera, with a total length of 64 minutes. To demonstrate the robustness of the proposed feature selection algorithm, we compared it with Nishida's original feature selection algorithm.

For a fair comparison, the parameters of the proposed algorithm are chosen to be the same as in Nishida's original algorithm: the number of negative image patches F is set to 120 (the negative image patches are extracted from the search region within ±5 pixels of the positive image patch region in the horizontal and vertical directions); the number of pixel-pair features N is set to 200; the threshold of the intensity difference T_p is set to 30 (range: 0-255); and the threshold of the matching error E_T is set to 45 (range: 0-120).

[Fig. 6. Tracking result: illumination change.]

Figure 6 shows the result in the case of illumination change (the scale of the object also changes). The yellow rectangle indicates the tracked vehicle position. Both tracking algorithms successfully tracked the target; since both adopt pixel-pair features, their results are almost the same. These results show that the pixel-pair feature is robust to illumination changes.
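As a usage illustration only, the earlier sketches can be tied together with the settings above; this assumes candidate patches have already been extracted from the search region and resampled to the template's W × H grid (a step the paper does not detail):

```python
# Hypothetical end-to-end use of the earlier sketches with the paper's
# settings (F = 120, N = 200, T_p = 30, E_T = 45).
def track_step(rp_s, candidates):
    """Return the index and patch of the candidate with the highest
    similarity index c_s; the best match gives the target position."""
    return max(enumerate(candidates), key=lambda ic: similarity(rp_s, ic[1]))

# rp_s = incremental_select(I, edge, negatives, N=200, T_p=30, E_T=45)
# idx, best_patch = track_step(rp_s, candidate_patches)
```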

[Fig. 7. Tracking result 1: shape deformation and drifting.]
[Fig. 8. Tracking result 2: shape deformation and drifting.]

Figures 7 and 8 show the results in cases of shape deformation caused by drifting in different scenes. Both objects were successfully tracked by the proposed tracking algorithm, while Nishida's original tracking algorithm failed to track the targets continuously.

[Fig. 9. Tracking result: partial occlusion.]

Figure 9 shows the result for a partially occluded vehicle. Because the pixel-pair feature is also robust to partial occlusion, both tracking algorithms show strong robustness in this case.

The test videos include 83 video sequences with 62 different target vehicles. The target vehicles were successfully tracked in 71 of the test video sequences, even though we used only one template image in each experiment. Of the 12 failures, 4 sequences failed in complex environments, and the remaining 8 failed because the scale of the target was much smaller than the scale of the template. These failures could be reduced by incrementally updating the template image.

In our proposed feature selection algorithm, the set of pixel-pair features is selected from training samples only in the first frame, and the matching in every frame is the same as in Nishida's algorithm. Therefore, the computational complexity of our proposed feature selection algorithm can be considered equal to that of Nishida's original algorithm.

V. CONCLUSION

In this paper, we proposed a novel discriminative feature selection algorithm for vehicle tracking. In the proposed algorithm, pixel-pair features, which are considered robust to illumination changes, are used to discriminate the image patch with the object in the correct position from image patches with the object in incorrect positions. We utilize roulette selection of pixels based on edge values, and the matching error of each generated pixel-pair is kept under a specified threshold. By incrementally selecting such pixel-pairs, we improve the robustness of the pixel-pair features to appearance variations of the vehicle. The proposed algorithm showed significant performance, with high robustness to appearance changes; especially for shape deformation caused by drifting and for view angle changes of the vehicle, our algorithm is more robust than Nishida's original feature selection algorithm.

REFERENCES

[1] E. Maggio and A. Cavallaro, Video Tracking: Theory and Practice. John Wiley & Sons, 2011.
[2] S. Avidan, "Ensemble Tracking," IEEE PAMI, Vol. 29, No. 2, pp. 261-271, 2007.
[3] H. Grabner, M. Grabner, and H. Bischof, "Real-Time Tracking via On-line Boosting," in Proc. BMVC 2006, pp. 47-56, 2006.
[4] H. Grabner and H. Bischof, "On-line Boosting and Vision," in Proc. CVPR 2006, pp. 260-267, 2006.
[5] H. Grabner, C. Leistner, and H. Bischof, "Semi-Supervised On-Line Boosting for Robust Tracking," in Proc. ECCV 2008, pp. 234-247, 2008.
[6] R. T. Collins, Y. Liu, and M. Leordeanu, "Online Selection of Discriminative Tracking Features," IEEE PAMI, Vol. 27, No. 10, pp. 1631-1643, 2005.
[7] K. Nishida, T. Kurita, and M. Higashikubo, "Online Selection of Discriminative Pixel-Pair Feature for Tracking," in Proc. SPPRA 2010, 2010.
[8] K. Nishida, T. Kurita, Y. Ogiuchi, and M. Higashikubo, "Visual Tracking Algorithm Using Pixel-Pair Feature," in Proc. ICPR 2010, pp. 1808-1811, 2010.
[9] R. Ozaki, Y. Satoh, K. Iwata, and K. Sakane, "Statistical Reach Feature Method and Its Application to Template Matching," in Proc. MVA 2009, pp. 174-177, 2009.
[10] S. Kaneko, I. Murase, and S. Igarashi, "Robust image registration by increment sign correlation," Pattern Recognition, Vol. 35, No. 10, pp. 2223-2234, 2002.
[11] M. Özuysal, P. Fua, and V. Lepetit, "Fast Keypoint Recognition in Ten Lines of Code," in Proc. CVPR 2007, pp. 1-8, 2007.