
HOG-based Pedestrian Detector Training

evs embedded Vision Systems Srl, c/o Computer Science Park, Strada Le Grazie 15, Verona, Italy
http://www.embeddedvisionsystems.it

Abstract

This paper describes how to train a pedestrian detection classifier based on HOG and how to use the training results to configure the logipdet IP core. logipdet is a HOG/SVM-based pedestrian detector optimized for Xilinx® devices. The core is developed for real-time vision-based embedded applications such as automotive driving assistance. The algorithm follows a discriminative approach combining a HOG descriptor and an SVM classifier. logipdet is a configurable detector suitable for different classes of objects; pedestrians are just the most popular example. The training process presented in this paper is valid in general, for different objects, and not only for training the logipdet IP core. An example of C++ source code for training the pedestrian detector is also provided.

The IP core is sourced from evs embedded Vision Systems Srl and is part of the logicbricks™ IP core library provided by Xylon doo. It can be evaluated on the logiadak Automotive Driving Assistance Kit, a Xilinx Zynq-7000 All Programmable SoC based development platform for advanced vision-based automotive Driver Assistance (DA) applications. For more information about logipdet and logiadak, visit http://www.logicbricks.com/.

Keywords: logipdet, logiadak, Pedestrian Detection, HOG, Histogram of Oriented Gradients, SVM, Support Vector Machines, Machine Learning, Driving Assistance, FPGA IP core, Zynq-7000 All Programmable SoC

1. Elements of a HOG pedestrian detector

The approach to pedestrian detection we consider in this paper has three main components.

A feature descriptor. We describe an image region with a high-dimensional descriptor: in this case, the histogram of oriented gradients (HOG) feature [1]. The feature is based on evaluating normalized local histograms of image gradient orientations on a dense grid (see Figure 1). The image window is divided into small spatial regions (cells), and a local 1D histogram of gradient orientations over the pixels of each cell is computed. Grouping neighboring cells into larger spatial regions, called blocks, and performing local contrast normalization on each block provides a degree of invariance to illumination and shadowing. The concatenation of all the block-normalized histograms forms the descriptor.

A learning method. We learn to classify an image region (described by its HOG features) as pedestrian or non-pedestrian. For this we use support vector machines (SVMs) and a large training dataset of image regions containing pedestrians (positive examples) or not containing pedestrians (negative examples).

A sliding window detector. With the classifier we can tell whether an image region looks like a pedestrian. The final step is to run this classifier as a sliding window detector over an input image in order to detect all instances of pedestrians in that image. The process is repeated on successively scaled copies of the image so that objects can be detected at various sizes and ranges. Usually, non-maximum suppression is applied to the output to remove multiple detections of the same object.

2. Pedestrian dataset

The first step is creating the dataset, which consists of a set of positive and negative example images.

Figure 1: HOG descriptor
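As a concrete illustration of the cell-histogram step described in Section 1, the following sketch computes an unsigned 9-bin orientation histogram for a single 8x8 cell. It is a simplified, self-contained example (central-difference gradients, no bin interpolation, no block normalization), not the logipdet or OpenCV implementation:

```cpp
#include <cmath>
#include <vector>

// Simplified sketch: accumulate an unsigned-orientation histogram (9 bins
// over 0..180 degrees) for one 8x8 cell of a grayscale image, using central
// differences for the gradient. Real HOG adds bilinear bin interpolation
// and block-level contrast normalization, omitted here for brevity.
std::vector<float> cellHistogram(const std::vector<std::vector<float>>& img,
                                 int x0, int y0, int cellSize = 8,
                                 int nBins = 9) {
    std::vector<float> hist(nBins, 0.0f);
    for (int y = y0; y < y0 + cellSize; ++y) {
        for (int x = x0; x < x0 + cellSize; ++x) {
            float gx = img[y][x + 1] - img[y][x - 1];  // horizontal gradient
            float gy = img[y + 1][x] - img[y - 1][x];  // vertical gradient
            float mag = std::sqrt(gx * gx + gy * gy);
            // Unsigned orientation mapped into [0, 180) degrees
            float ang = std::atan2(gy, gx) * 180.0f / 3.14159265f;
            if (ang < 0.0f) ang += 180.0f;
            if (ang >= 180.0f) ang -= 180.0f;
            int bin = static_cast<int>(ang / (180.0f / nBins)) % nBins;
            hist[bin] += mag;  // vote weighted by gradient magnitude
        }
    }
    return hist;
}
```

On an image whose brightness increases linearly from left to right, every pixel votes into the 0-degree bin, as expected for a purely horizontal gradient.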

2.1. Positive images

Positive images are those containing pedestrians. For successful training, the positives must represent the full complexity of the object. In this case, we need images of pedestrians in different contexts and illumination conditions, with different appearances, poses and clothes. Partial occlusions may also be included. In addition, adding left-right reflections of the selected images is an easy way to enrich the dataset.

On the other hand, there is no need to complicate training unnecessarily. Since independence from scale is achieved by scaling the input image, it is best if pedestrians inside positive images appear at the same size and centered.

You may consider adding images from different points of view, such as top and side views. However, if the appearance of the object varies too much, the final classifier may behave poorly. In such a case, consider training a separate classifier for each point of view. For example, cameras mounted on top of trucks have a different perspective of pedestrians than cameras mounted on passenger cars, so a special training for truck-mounted cameras may be in order.

A very good example of a pedestrian dataset is the INRIA person dataset, which can be downloaded at http://pascal.inrialpes.fr/data/human/. It comprises 1239 images of pedestrians, together with their left-right reflections (2478 images in total), cropped from a varied set of photos. The people are usually standing, but appear in any orientation and against a wide variety of backgrounds, including crowds. Some positive images from the dataset are shown in Figure 2. These images have a resolution of 64x128 pixels and include about 16 pixels of margin around the person on all four sides. As explained by the authors of the HOG paper [1], this border provides a significant amount of context that helps detection.
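The left-right reflection trick mentioned above can be sketched as follows. This is a minimal, self-contained example; in practice one would use an image library (e.g. OpenCV's cv::flip with flipCode 1) rather than raw pixel vectors:

```cpp
#include <utility>
#include <vector>

// Sketch of the left-right reflection augmentation from Section 2.1:
// mirror each row of the image so that every positive sample yields a
// second, flipped training sample.
using Image = std::vector<std::vector<unsigned char>>;

Image flipHorizontal(const Image& src) {
    Image dst = src;
    for (auto& row : dst) {
        std::size_t w = row.size();
        for (std::size_t x = 0; x < w / 2; ++x)
            std::swap(row[x], row[w - 1 - x]);  // mirror around the center
    }
    return dst;
}
```

Flipping is an involution (applying it twice restores the original), so each positive sample contributes exactly one extra, distinct training example.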

Figure 2: Some positive images from the INRIA person dataset

2.2. Negative images

Negative images are those not containing pedestrians. In principle, they should contain anything but pedestrians, since the classifier must be able to distinguish pedestrians from any other object. Of course, this set of images is potentially infinite. In practice, we can restrict it to contexts where pedestrians are likely to appear, considering the final application. If pedestrian detection is applied in the automotive domain, for example, images taken from road scenarios are good candidates.

The set of negatives in the INRIA person dataset is composed of 1218 photos of different scenes, urban and not, and of generic objects (wheels, flowers, etc.; see Figure 3). A set of negatives like this is essential to generate a pedestrian detector that works in any context. To improve performance, it is often necessary to augment the negatives of the INRIA person dataset with images from the final application.

Figure 3: Some negative images from the INRIA person dataset

3. Pedestrian detection training process

The training process involves a series of steps.

First, a preliminary classifier is estimated using all positive images and a fixed set of patches sampled randomly from the person-free training photos in the dataset. The HOG descriptors of the samples are projected as points in a high-dimensional space, and a support vector machine then constructs a hyperplane in this space that separates the positive and negative samples. A good separation is achieved by the hyperplane that has the largest distance from the nearest training data point of each class, the so-called functional margin (see Figure 4).

Figure 4: Example of a linear support vector machine. White dots represent positive samples and black dots negative samples. The dashed line indicates the maximized margin.

Second, with this preliminary classifier, the complete set of negatives in the training dataset is exhaustively searched for false positives (called hard examples). The classifier is then re-trained using this augmented set (initial samples + hard examples) to produce a more accurate detector. This operation of selecting hard examples and retraining is called bootstrapping. According to [2], fewer than two bootstrap rounds lead to performance that depends heavily on the initial training set; at least two rounds are necessary to reach the full performance of the standard HOG + linear SVM combination.
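The hard-example selection at the core of each bootstrap round can be sketched as follows. The SVM training step itself is abstracted away; Model, score and findHardExamples are illustrative names for this sketch, not the actual trainhog API:

```cpp
#include <cstddef>
#include <vector>

// Sketch of one bootstrap round from Section 3: evaluate the current
// classifier on every negative sample; any negative scored above zero is a
// false positive (a "hard example") and is queued for the re-training set.
struct Model {
    std::vector<double> w;  // hyperplane coefficients
    double b = 0.0;         // bias term
};

// Decision score of the linear SVM: w . x + b
double score(const Model& m, const std::vector<double>& x) {
    double s = m.b;
    for (std::size_t i = 0; i < m.w.size(); ++i) s += m.w[i] * x[i];
    return s;
}

// Return the indices of the negatives the classifier gets wrong.
std::vector<std::size_t> findHardExamples(
        const Model& m, const std::vector<std::vector<double>>& negatives) {
    std::vector<std::size_t> hard;
    for (std::size_t i = 0; i < negatives.size(); ++i)
        if (score(m, negatives[i]) > 0.0) hard.push_back(i);
    return hard;
}
```

Appending the returned samples to the negative set and retraining completes one bootstrap round; Section 4 discusses how many rounds are worthwhile.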

4. Pedestrian detection training code

To help train logipdet and perform the processes described above, we make available a software bundle downloadable at [enter website]. The code is released as a Qt project. For installation instructions, see the README file included in the bundle. The bundle contains:

1. A revised version of the HOG feature extractor function, released as a DLL library, that includes all the approximations used in the logipdet core. This is necessary to be consistent with the FPGA implementation for both training and testing.
2. A Qt project including the source code for SVM training and testing (the trainhog utility).
3. Example batch files (running under the MS Windows OS) showing how to parametrize the training tool.

The project is developed using version 6.1 of SVM light (http://pascal.inrialpes.fr/soft/olt). It is a variant of the original SVM light by Thorsten Joachims: it adds the capability of handling binary files instead of text files, with a consequent relevant speed-up. Please note that, to operate correctly, the bundled code requires the following libraries installed on your PC:

OpenCV 2.4.10 for the HOG descriptor

Qt library version greater than 4

OpenCV can be downloaded at http://opencv.org/downloads.html, while the Qt library can be downloaded at https://www.qt.io/download/.

Compiling the project generates the executable program trainHog.exe, which provides a command line user interface. The utility can be used for training, testing, and measuring the performance of the trained classifier. It can be configured with several parameters that control the training process. The most important ones are reported below:

Number of positive examples. It is always best to use all available positives.

Number of initial negative examples. This is the number of negatives used for the initial classifier. Usually we take five times the number of positives.

Number of hard negatives. This is the maximum number of misclassified samples to extract from the negative set. These samples are inserted into the dataset for re-training. You will notice that, as the bootstrap rounds progress, it takes much longer to find the required number of hard examples. If, after two bootstraps, a relevant number of false positives is still extracted from each image, it means that the training is not working.

Number of bootstrap runs. At least two bootstrap rounds are necessary to get a good classifier [2]. Moreover, in our experience more than three bootstrap rounds usually do not improve performance, so we recommend setting this parameter to three runs.

Regularization parameter (C). A soft-margin SVM is used: we allow the classifier to make a few mistakes (some samples - outliers or noisy examples - end up inside or on the wrong side of the margin). This regularization parameter tells the SVM optimization how much you want to avoid misclassifying each training example. For large values of C (> 1), the optimization will pick a smaller-margin hyperplane if that hyperplane gets all the training points classified correctly. Conversely, a very small value of C (< 0.001) will cause the optimizer to look for a larger-margin separating hyperplane, even though that hyperplane misclassifies more points. Since the dataset is usually very big, the default value we recommend is 0.01.

Cost factor. Describes how much training errors on positive examples outweigh errors on negative examples. This parameter is used to compensate for the fact that the number of negatives is much bigger than the number

of positives. A value of 1 means that positives and negatives have the same weight; a value of m means that each positive weighs m times more than a negative. Usually it is best to raise this value as the number of negatives increases.

Figure 5: Different hyperplanes are picked for the same training dataset depending on the C value

The result of the training is a text file containing all the coefficients (C0, ..., Cn) of the hyperplane, plus the bias term. The detection window size of logipdet is 128x64 pixels, corresponding to 15x7 blocks, so n = 15x7x32 = 3360.

For logiadak (ver. 2.0 or 3.0) users it is quite easy to configure the logipdet IP core with their own trained classifier: just rename the generated text file to SVM.txt and copy it into the /logipdet folder of the SD card. When the board is switched on, the pedestrian detection demo application will automatically initialize the logipdet core with the user's classifier loaded from the file.

To help the user parameterize trainhog, we provide a set of example batch files for training and testing the classifier:

train.bat Example of how to use the trainhog utility in training mode.

testdisplay.bat Example of how to use the trainhog utility in test mode. The output is the set of input images with the result of detection in overlay.

testprecisionrecall.bat Example of how to use the trainhog utility to compute the precision and recall metrics (described later).

5. Pedestrian detector performance estimation

Evaluating the performance of a detector requires ground truth data, i.e. a set of images or videos labeled with pedestrian information. The evaluation works in terms of precision and recall, which are explained in the following paragraph.

For classification tasks, the terms true positives (tp), true negatives (tn), false positives (fp), and false negatives (fn) compare the results of the classifier under test with the ground truth. The terms positive and negative refer to the classifier's prediction (sometimes known as the expectation), and the terms true and false refer to whether that prediction corresponds to the ground truth (sometimes known as the observation). Precision and recall are then defined as:

Precision = tp / (tp + fp)
Recall = tp / (tp + fn)

In simple terms, high recall means that the classifier returned most of the relevant results, while high precision means that the classifier returned substantially more relevant results than irrelevant ones. The best classifier is the one that maximizes both recall and precision.

6. Pedestrian detector: how to improve performance

In this paper, we provided example code for training a pedestrian detector based on HOG features and a linear SVM. The performance of the resulting classifier may not fit the final application. Is it possible to improve it? The answer is yes. Here are the items you may want to try:

Add positive and negative examples from the scenario of your application.

Test the classifier on the target scenario, select the positive and negative examples that the classifier is not able to classify correctly (hard examples), and add these examples for re-training.

Modify the parameters of the training process.

Use all the constraints you may have on the scene or on the setup. For example, if the camera is mounted in a fixed position on a vehicle, you can add regions of interest, i.e. test the classifier only in the regions of the image where pedestrians are likely to be present. In fact, due to perspective, pedestrians far away from the camera can only appear in a restricted area.

After some trials, you may find that the classifier no longer improves, or is actually getting worse. This probably means that you have reached the maximum performance of the classifier.

References

[1] N. Dalal, B. Triggs, Histograms of oriented gradients for human detection, in: CVPR, 2005, pp. 886-893.

[2] S. Walk, N. Majer, K. Schindler, B. Schiele, New features and insights for pedestrian detection, in: CVPR, IEEE, 2010, pp. 1030-1037.