Tightly-Coupled LIDAR and Computer Vision Integration for Vehicle Detection


Lili Huang, Student Member, IEEE, and Matthew Barth, Senior Member, IEEE

Abstract: In many driver assistance systems and autonomous driving applications, both LIDAR and computer vision (CV) sensors are used to detect vehicles. LIDAR provides excellent range information to different objects, but it is difficult to recognize these objects as vehicles from range information alone. Computer vision imagery, on the other hand, allows for better recognition but does not provide high-resolution range information. In this paper, a tightly-coupled LIDAR/CV integrated system is proposed for vehicle detection. The sensing system is mounted on the front of a test vehicle, facing forward. The LIDAR sensor estimates possible vehicle positions, and this information is transformed into image coordinates. Regions of Interest (ROIs) in the imagery are defined from the LIDAR object hypotheses, and an Adaboost object classifier is used to detect the vehicle within each ROI. A classifier error-correction approach then chooses an optimal position for the detected vehicle. Finally, the vehicle's position and dimensions are derived from both the LIDAR and the image data. The main contribution of this paper is that the complementary advantages of the two sensors are exploited: the LIDAR scanning data are applied for classifier correction, and the output of the Adaboost classifier is used to refine distance and dimension information. Experimental results illustrate that this LIDAR/CV system is reliable and can be used in applications such as traffic surveillance and roadway navigation tasks.

I. INTRODUCTION

In recent years, numerous research projects have been carried out on Driver Assist Systems (DAS). The purpose of a DAS is to observe the road environment and help drivers obtain traffic information with high accuracy and efficiency.
A DAS can also be extended to automated driving tasks. One of the key tasks of a DAS is detecting and tracking the cars surrounding the DAS-equipped vehicle. In terms of sensors, one of the more popular sensing techniques is computer vision. Current vision-based vehicle detection techniques use different features to segment Regions Of Interest (ROIs) and detect objects in those regions. A texture-distance-size combined detection technique is proposed in [3], which uses wavelet transformations to extract ROI features. Khammari et al. presented a vision-based vehicle detection system using gradient analysis and Adaboost classification [4]. Vehicle profile symmetry and the shadow underneath a vehicle are used to detect forward vehicles in [5, 6].

This work is supported by the University of California Transportation Center (UCTC) Doctoral Dissertation Fellowship. Lili Huang and Matthew Barth are with the Department of Electrical Engineering, and also with the Center for Environmental Research and Technology, University of California Riverside, Riverside, CA 92521, U.S.A. lhuang@ee.ucr.edu; barth@ee.ucr.edu.

Fig. 1. The mobile sensing platform. This probe vehicle carries one camera, as well as two ALASCA XT LIDAR sensors made by IBEO.

Most of these systems, relying solely on computer vision techniques, suffer from light intensity variations and limited fields of view. Further, it is also difficult to extract accurate range information, which is often an important parameter for DAS and automated driving tasks [1]. LIDAR range sensing is another widely used technique for DAS and automated driving. It provides high ranging accuracy and wide fields of view with low data-processing requirements [2]. However, a major challenge for LIDAR-based systems is that their reliability depends on the distance and reflectivity of different objects.
Moreover, LIDAR often suffers from noise, making it difficult to distinguish between different kinds of objects (e.g., vehicles, pedestrians, roadway infrastructure, etc.). Sensor fusion techniques have been used for years to combine sensory data from disparate sources. Haselhoff et al. [7] use radar as one of the sensing modalities. LIDAR is used in [8, 9], where detection is performed in LIDAR space, followed by object classification in CV space. In all of this existing work, the distance and dimension information is provided only by the range sensor, and the computer vision image is used only for object classification. In the following sections, this kind of sensor fusion system is called the classic LIDAR-camera fusion system. In our research, we propose a tightly coupled LIDAR/CV system, where the LIDAR scanning points are used both for hypothesizing regions of interest and for providing error correction to the classifier, while the vision image provides object classification information. LIDAR object points are first transformed into image space, and regions of interest are generated. An Adaboost classifier based on computer vision
systems is then used to detect vehicles in the image space. Dimensions and distance information of the detected vehicles are then known in body-frame coordinates. This approach uses information from both LIDAR and CV, thus providing a more complete and accurate map of surrounding vehicles than either sensor used separately. One of the key features of this technique is that it uses LIDAR data to correct the Adaboost classification pattern. Moreover, the Adaboost algorithm is utilized both for vehicle detection and for vehicle distance and dimension extraction, and the classification results provide compensatory information to the LIDAR measurements.

This paper is organized as follows: an overview of the vehicle detection system is given in Section II, followed by a description of the LIDAR system. Section III describes the vision-based system. The vehicle detection algorithm is introduced in Section IV. Experimental results are provided in Section V, followed by conclusions and future work in Section VI.

II. OVERVIEW OF THE VEHICLE DETECTION SYSTEM

Vehicle detection is the first and fundamental step for any DAS or automated driving application. With sensors mounted on a moving platform, the detected data change rapidly, making it difficult to extract objects of interest. In our approach, we compensate for the spatial motion of the moving platform.

A. Multi-Module Architecture

The input of our vehicle detection system comes from two LIDAR sensors and a single camera, as shown in Fig. 1. A pair of IBEO ALASCA XT LIDAR sensors is mounted on the front bumper. Each LIDAR has four planes of detection and provides range information at 25 Hz, with a scanning range of up to 200 m, a 240° horizontal field of view, a 3.2° vertical field of view, and a depth resolution of 4 cm [10]. The areas covered by the two LIDAR sensors overlap with each other. The camera is placed inside the vehicle, behind the rear-view mirror.
The field of view of this camera is fully covered by the LIDAR ranging space. Fig. 2 shows the flow chart of the proposed vehicle detection system. It consists of four subsystems: a LIDAR-based subsystem, a coordinate transformation subsystem, a vision-based subsystem, and the vehicle detection subsystem. The LIDAR-based subsystem and the coordinate transformation subsystem are introduced in this section; the vision-based subsystem (with its offline training) and the vehicle detection subsystem are introduced in the following sections.

Fig. 2. Flow chart of the mobile sensing system. (The chart comprises the LIDAR-based subsystem: LIDAR data acquisition, pre-filtering and clustering, feature extraction, and ROI generation in LIDAR coordinates; the coordinate transformation subsystem: LIDAR-to-image and image-to-LIDAR coordinate transformations; the offline vision subsystem: image capture, training preprocessing, Haar training, and the classifier; and the vehicle detection subsystem: tentative Adaboost detection, error correction, and final vehicle detection.)

B. LIDAR-Based Subsystem

The LIDAR Data Acquisition module uses the IBEO external control unit (ECU) to communicate with, collect, and combine data from the two LIDAR sensors.

The Pre-filtering and Clustering module transforms the scan data from distances and angles to positions, and clusters the incoming data into segments using a Point-Distance-Based Methodology (PDBM) [11]. Any segment that consists of fewer than three points and whose distance exceeds a discard threshold is considered noise, and the segment is discarded.

The Feature Extraction module extracts the primary features of each cluster. The main feature information in a segment is its geometrical representation, such as lines and circles. One advantage of these geometrical features is that they occupy far less space than storing all the scanned points. A vehicle may appear in any orientation.
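As an illustration of the pre-filtering and clustering module, the following sketch converts one scan plane from polar to Cartesian coordinates, groups consecutive points by point-to-point distance, and discards distant segments with fewer than three points. The `gap` and `discard_range` thresholds are assumed values for illustration; the paper does not state them.

```python
import math

def cluster_scan(ranges, angles, gap=0.5, discard_range=30.0):
    """Point-distance-based clustering of one LIDAR scan plane.

    ranges/angles: polar measurements of one scan.
    gap: max distance (m) between consecutive points of a segment.
    discard_range: segments farther than this need >= 3 points.
    (Both thresholds are illustrative, not the paper's values.)
    """
    # Polar -> Cartesian position in the LIDAR frame.
    pts = [(r * math.cos(a), r * math.sin(a)) for r, a in zip(ranges, angles)]

    segments, current = [], [pts[0]]
    for prev, cur in zip(pts, pts[1:]):
        if math.dist(prev, cur) <= gap:   # close enough: same segment
            current.append(cur)
        else:                             # large jump: start a new segment
            segments.append(current)
            current = [cur]
    segments.append(current)

    # Pre-filtering: distant segments with < 3 points are treated as noise.
    def seg_range(seg):
        return min(math.hypot(x, y) for x, y in seg)

    return [s for s in segments
            if len(s) >= 3 or seg_range(s) <= discard_range]
```

For example, a scan containing two three-point clusters and one isolated far-away return keeps the two clusters and drops the isolated point as noise.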
The contour of a vehicle is constructed from four sides: front, back, left and right. The LIDAR sensor can capture one side, or two neighboring sides, as shown in Fig. 3.

Fig. 3. The orientation of the vehicle significantly changes its appearance in the scan data frame. The blue rectangle marks the position of the LIDAR.

When the object is close to the probe vehicle, the extracted features provide enough information for object classification. However, if the target is far away, it may be represented by only one or two scanning points. For objects with only a few LIDAR points, it is difficult to obtain reliable size, location, or orientation information from the scan data alone. Note that the computer vision image also contains size and orientation information, which can be extracted by the object classification technique. Therefore, the LIDAR scan data and the Adaboost output are complementary to each other.

The ROI Generation module calculates the position of the ROI bounding boxes in LIDAR coordinates. It is worth mentioning that the ROI bound is not defined by the LIDAR points alone, since the scan points on one target may not represent its full dimensions. In our algorithm, the width and length of an ROI are defined by both the LIDAR data points and the maximum dimensions of a potential vehicle. Each ROI is defined as a rectangular area in the image. The bottom of the rectangle is the flat ground, and the top is set to the maximum height of a car. The left
and right edges are obtained from the furthest left and right scanning points in a cluster, as well as the typical width of a car.

C. Coordinate Transformation Subsystem

The LIDAR to Image Coordinate Transformation module transforms each of the LIDAR points into the image frame. The relative position and orientation between the sensors must be obtained for this transformation. We proposed a unique multi-planar LIDAR and CV calibration algorithm in [12], which calculates the geometric transformation matrix between the multiple invisible beams of the LIDAR sensors and the camera. This calibration approach is applied in our test.

There are several coordinate systems in the overall system: the camera, image, and LIDAR reference systems. A camera can be represented by the pinhole model. A 3D point in the camera coordinate system is denoted as P_c = [X_c Y_c Z_c]^T, and is projected to a pixel p = [u v]^T in the image coordinate frame. The pinhole model is given in [13]:

    s p = A [R t] P_c    (1)

where s is an arbitrary scale factor, A is the camera intrinsic matrix, and (R, t) are the extrinsic parameters. The LIDAR sensor reports the distance and direction of each scan point in LIDAR coordinates; the scanning output is denoted by P_l = [X_l Y_l Z_l]^T. Suppose a point P is denoted by P_c = [X_c Y_c Z_c]^T in camera coordinates and P_l = [X_l Y_l Z_l]^T in LIDAR coordinates. The transformation from LIDAR coordinates to camera coordinates is given as:

    P_c = R_l^c P_l + t_l^c    (2)

where (R_l^c, t_l^c) are the rotation and translation parameters that relate the LIDAR coordinate system to the camera coordinate system. In the multi-sensor calibration process, the rotation and translation coefficients (R_l^c, t_l^c) in equation (2) are obtained off-line. This calibration method requires the LIDAR and camera to observe a checkerboard simultaneously. A few checkerboard poses are observed and recorded.
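Once A and (R_l^c, t_l^c) are available from calibration, equations (1) and (2) reduce to a few matrix operations. A minimal sketch (the matrix values used in the test are placeholders, not our calibration results):

```python
import numpy as np

def lidar_to_pixel(P_l, A, R_cl, t_cl):
    """Project a LIDAR point into the image plane.

    P_l:  3-vector [X_l, Y_l, Z_l] in LIDAR coordinates.
    A:    3x3 camera intrinsic matrix.
    R_cl: 3x3 rotation, t_cl: 3-vector translation, mapping the
          LIDAR frame to the camera frame (from off-line calibration).
    Returns pixel (u, v), or None if the point is behind the camera.
    """
    P_c = R_cl @ P_l + t_cl      # equation (2): LIDAR -> camera frame
    if P_c[2] <= 0:              # point not in front of the camera
        return None
    uvs = A @ P_c                # equation (1), with scale s = Z_c
    return uvs[0] / uvs[2], uvs[1] / uvs[2]
```

Each scan point is mapped through equation (2) first and then projected with the intrinsics; the homogeneous scale s is eliminated by dividing by the third component.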
The calibration approach has two stages: the first obtains a closed-form solution, and the second refines the parameters by the maximum likelihood criterion [12]. During the road test, each LIDAR detection point P_l is transformed into camera coordinates as P_c; P_c is then transformed into a point p in the image plane by solving equation (1). Fig. 4 illustrates the LIDAR scan points and their transformation into the image reference system.

Fig. 4. The LIDAR scan points. Points in the LIDAR coordinate system: each green dot is one scan point. These points are transformed to the image frame: each blue star in the image is one LIDAR point. Vehicles in the red circles are enlarged views of the detected areas.

After the LIDAR-to-image transformation coefficients are calculated, the ROIs generated in the LIDAR-based subsystem are converted into the image frame. A 10% larger ROI is used to account for the inaccuracy of the transformation from LIDAR data to image data. Images with LIDAR points and ROIs are shown in Fig. 5.

Fig. 5. Complete view of several objects captured by the LIDAR and the video camera, with ROIs. LIDAR data show that the distance between the sensors and the vehicle in the red circle is greater than 90 meters; there is only one reflection point from this vehicle.

The Image to LIDAR Coordinate Transformation module is called to correct the Adaboost classification result; more details will be given in Sections III and IV. The transformation coefficients are obtained by solving equations (1) and (2).

III. VISION-BASED SYSTEM

Object classification from the hypothesized ROIs is required for vehicle detection. The feature representations are used to build object classifiers with an Adaboost algorithm [14]. Viola et al. proposed in [14] that the object is detected by a boosted cascade of feature classifiers, which performs feature extraction and combines simple weak feature classifiers into a strong one.
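The weak-to-strong combination at the heart of Adaboost can be illustrated with a toy version that boosts simple threshold "stumps" on scalar features. This is a generic sketch of the boosting principle only, not the Haar-feature cascade used in our system:

```python
import math

def adaboost_train(X, y, rounds=5):
    """Boost threshold stumps into a strong classifier.

    X: list of feature vectors; y: labels in {-1, +1}.
    Returns a list of weighted weak classifiers (alpha, feature, thresh, sign).
    """
    n = len(X)
    w = [1.0 / n] * n                       # sample weights
    strong = []
    for _ in range(rounds):
        best = None
        # Exhaustively pick the stump with the lowest weighted error.
        for f in range(len(X[0])):
            for t in sorted({x[f] for x in X}):
                for s in (1, -1):
                    preds = [s if x[f] >= t else -s for x in X]
                    err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, s, preds)
        err, f, t, s, preds = best
        err = max(err, 1e-10)               # avoid division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        strong.append((alpha, f, t, s))
        # Re-weight: emphasize the samples this weak classifier got wrong.
        w = [wi * math.exp(-alpha * p * yi) for wi, p, yi in zip(w, preds, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return strong

def adaboost_predict(strong, x):
    """Sign of the weighted vote of all weak classifiers."""
    score = sum(a * (s if x[f] >= t else -s) for a, f, t, s in strong)
    return 1 if score >= 0 else -1
```

In the real system the weak classifiers are Haar-like feature thresholds, and the trained stages are chained into a cascade so that background regions are rejected early.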
The Adaboost classifier is trained off-line using target as well as non-target images. In our system, the target images, i.e., rear views of vehicles, are called positive samples, while the non-vehicle images are the negative samples. Fig. 6 illustrates some of the positive and negative samples in the training datasets. The training samples are taken from both the Caltech vehicle image dataset [17] and video collected by our test vehicle. The positive samples include passenger cars, vans, trucks and trailers. The negative samples include roads, traffic signs, buildings, plants and pedestrians.

A. Training Preprocessing

Both the positive and the negative samples passed to the vision system are color images. To remove the effects of varying illumination conditions and camera differences, grey-level transformation is required as a preprocessing step. In our system, the grey-level normalization
method is applied to the whole image, which transforms the grey levels of the image into the [0, 1] domain. The image is transformed by [7]:

    I'(x, y) = (I(x, y) - I_min) / (I_max - I_min)    (3)

where I(x, y) is the intensity of pixel (x, y), I_min and I_max are the minimum and maximum intensity values in the image, and I'(x, y) is the normalized grey-level value.

Fig. 6. Positive and negative samples used in Adaboost training.

The second step is to normalize the size of all the positive samples. This step is implemented before training, since different resolutions would cause different numbers of features to be counted. The size of the normalized positive samples determines the minimum size of object that can be detected [15]. In our test, the positive samples are normalized to a fixed size in pixels.

B. Haar Training

In the vision-based system, Haar-like features are used to classify objects. This approach combines increasingly complex classifiers in a cascade, which quickly discards background regions while spending more computation on promising areas [14]. More specifically, we use 14 feature prototypes for the Haar training [16]. These features represent characteristic properties such as edges, lines, and symmetry. The feature prototypes can be grouped into three sets:

1. Edge features: the difference between the sums of the pixels within two rectangular regions. Black areas have negative weights, and white areas have positive weights.

2. Line features: the sum within two outside rectangular regions subtracted from the sum in the center rectangle.

3. Center-surround features: the difference between the sum of the center rectangle and the outside area.

After the weak classifiers have been trained at each stage, the stage classifier is able to detect almost all targets of interest while rejecting a certain fraction of non-target objects. A cascade of such stage classifiers is then generated to form a decision tree. In our training process, there are 15 stages in total. Each stage was trained to eliminate 60% of the non-vehicle patterns while maintaining a high per-stage hit rate. Therefore, the total false alarm rate of this cascade classifier is approximately 0.4^15 ≈ 1.1e-06, and the overall hit rate remains high.

IV. MOVING VEHICLE DETECTION SYSTEM

The Adaboost algorithm produces a strong classifier that can detect multiple objects in a given image. However, there is no guarantee that this strong classifier is optimal for object detection. In contrast to the classic Adaboost setting, in our test there is at most one vehicle in each ROI defined by the LIDAR clustering algorithm. Therefore, it is not necessary for the Adaboost algorithm to report several possible targets in one ROI. The classification correction technique is proposed to use the LIDAR scanning data to reduce the redundancy in the Adaboost detection results.

There are two kinds of redundancy errors in the classification results; Fig. 7 gives examples of both. In both cases more than one object is detected, while the ground truth is that there is only one vehicle. In the first kind of error, Adaboost detects two possible targets, with the area of the smaller one almost entirely covered by the larger one. In the second, all the detected areas belong to the same object, while none of them covers the full body of the target.

Fig. 7. Two kinds of redundancy errors in Adaboost detection.

Suppose that in the i-th ROI there exists a LIDAR point cluster R_i with the following features in the LIDAR coordinate system: c_i^LIDAR, the center of the cluster; w_i^LIDAR, the width of the object; and l_i^LIDAR, the possible length of the vehicle. On the image side, there are n detected target candidates denoted d_1, d_2, ..., d_n. Initially,
d_1, d_2, ..., d_n are transformed from image coordinates to camera coordinates, and then to LIDAR coordinates, using equations (1) and (2). Suppose the resulting points in LIDAR coordinates are denoted D_1, D_2, ..., D_n. The j-th candidate D_j has center c_{i,j}^DETECT, width w_{i,j}^DETECT and height h_{i,j}^DETECT. In the LIDAR coordinate frame, two vectors are then defined: m_i^LIDAR = (c_i^LIDAR, w_i^LIDAR) and m_i^DETECT = (c_{i,1}^DETECT, w_{i,1}^DETECT, ..., c_{i,n}^DETECT, w_{i,n}^DETECT). Here m_i^LIDAR is the information measured by the LIDAR, and m_i^DETECT consists of measurements in the LIDAR coordinate system that were transformed from the image reference system. The coefficient vector w_i mapping the multiple target areas to the LIDAR information satisfies:

    w_i = argmin || m_i^LIDAR - sum_{j=1..n} w_{i,j} m_{i,j}^DETECT ||    (4)

where ||.|| is the Euclidean norm. w_i is then used as a weight vector to recalculate the detected area as a combination of all the detected objects in one ROI. The LIDAR scanning points and the Adaboost results are then combined to generate a complete map of vehicles. A summary of the vehicle detection process:

    Given a LIDAR point cluster R_i with features (c_i^LIDAR, w_i^LIDAR, l_i^LIDAR), and detected target candidates d_1, d_2, ..., d_n in the image;
    if no object is detected in the ROI then
        enlarge the ROI and search again
    else
        define m_i^LIDAR = (c_i^LIDAR, w_i^LIDAR)
        transform d_1, ..., d_n from the image coordinate frame to the LIDAR reference frame
        define m_i^DETECT = (c_{i,1}^DETECT, w_{i,1}^DETECT, ...)
        calculate the weight vector that minimizes || m_i^LIDAR - sum_{j=1..n} w_{i,j} m_{i,j}^DETECT ||
    end if
    The detected vehicle is located at sum_{j=1..n} w_{i,j} c_{i,j}^DETECT.

The vehicle detection system proposed in this paper can be summarized as:

1) The Adaboost classifier is trained offline with both positive and negative samples.

2) ROIs are defined by the LIDAR scan data. No more than one vehicle is assumed to exist in each ROI.

3) The Adaboost classifier makes a preliminary vehicle detection.
4) LIDAR data are used to correct the Adaboost redundancy errors and to merge the detected areas in each ROI.

5) The Adaboost detected area (in LIDAR coordinates) and the LIDAR output are combined to generate a complete vehicle distance and dimension map.

V. EXPERIMENTAL RESULTS

To evaluate the performance of the proposed system, a dataset of 377 images was used for training and testing. These images are taken from both the Caltech vehicle image dataset [17] and the video samples collected by the probe vehicle. The dataset contains 43 vehicles and 75 negative samples. Another test dataset, consisting of 526 images with synchronized scanning data, is used for performance evaluation. The test dataset was recorded in a local parking lot on different days during different seasons.

The hit rate (HR), false alarm rate (FAR) and region detection accuracy (RDA) are used to evaluate the performance of the system. HR is the number of detected vehicles over the total number of vehicles. RDA denotes the percentage of accurately detected vehicles, where most of the vehicle's area is covered by the rectangle and only one rectangle covers the object. A target that is hit may therefore not be region-detected, while a region-detected object is always a hit. The higher the RDA, the more accurate the detection result.

Fig. 8 illustrates several cases of hit, false alarm, and region detection. In this figure, rectangle 1 represents the target region in the image, and rectangle 2 is the region detected by the classifier.

Fig. 8. Three target detection cases. The first row is region-detected, the second row is hit but not region-detected, and the third row is a false alarm.

Table I gives the detection performance of the Adaboost classifier (detection with camera only), the classic LIDAR-camera sensor fusion system, and our LIDAR-CV detection, error-correction and combination approach. In both the classic sensor fusion technique and our approach, LIDAR data are utilized for ROI generation.
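The minimization in equation (4) is an ordinary linear least-squares problem over the stacked measurement vectors. A minimal sketch, in which the vector layout (center coordinates plus width) is illustrative and any constraints the full method may impose on the weights are ignored:

```python
import numpy as np

def correction_weights(m_lidar, m_detect):
    """Solve equation (4): weights w_i that best map the n transformed
    detection measurements onto the LIDAR measurement.

    m_lidar:  vector such as [cx, cy, width] measured by the LIDAR.
    m_detect: n vectors of the same layout, one per detected candidate.
    """
    M = np.column_stack(m_detect)                 # one column per candidate
    w, *_ = np.linalg.lstsq(M, m_lidar, rcond=None)
    return w

def merged_center(w, centers):
    """Weighted combination of the candidate centers: sum_j w_j * c_j."""
    return np.column_stack(centers) @ w
```

The weights are then applied to the candidate centers to obtain the merged vehicle position, which replaces the redundant Adaboost detections in that ROI.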
The difference is that in our approach the LIDAR data also help correct the classification result. Table I shows that our approach both improved the hit rate and reduced the false alarm rate in comparison with the Adaboost classifier. Compared with the classic LIDAR-camera fusion system, our approach improves the region detection accuracy from 84.85% to 89.32%, since for each hit-but-not-accurately-covered object the LIDAR scanning data help recompute the position of the target. Most of the overlapping or partial target detection areas are merged during the LIDAR correction process. Fig. 9 shows the final results of our vehicle detection system: the left column is the LIDAR scan points, and the right column illustrates the detection result with information from the sensor fusion system. In Fig. 9, all the vehicles are detected and are marked
with a red rectangle. Fig. 9 also shows a case in which the classifier found only two vehicles, which are bounded in red rectangles. The other two ROIs (shown in blue rectangles) are classified as containing non-vehicle objects. In fact, one of them is a trash can; the other is a vehicle too far away, and too small in the image, for the classifier to recognize.

TABLE I
DETECTION RESULTS

Type                           HR       FAR      RDA
Adaboost                       84.7%    3.27%    78.00%
Classic LIDAR-camera fusion    91.33%   1.78%    84.85%
Our approach                   91.33%   1.78%    89.32%

Fig. 9. LIDAR scan points (left) and the final vehicle detection results (right).

During the test, we found that the hit rate decreases as the distance between our probe vehicle and the target vehicles increases. In this paper, targets are detected frame by frame; a target vehicle may therefore fail to be recognized by the classifier in a given frame even if it was recognized in the previous one. Vehicle tracking techniques help solve this problem: by running a Kalman-filter-based tracking algorithm, the target is detected in the initial frame (or in several initial frames) and then tracked in the following frames. This approach both improves the detection accuracy and reduces the required amount of computation. HR and RDA will be further improved by vehicle tracking.

VI. CONCLUSIONS AND FUTURE WORK

In this paper, we have proposed a novel vehicle detection system based on tightly integrating LIDAR and camera images. Distances to the objects are measured by the LIDAR sensors, and the objects are classified from the image data. The data from these two complementary sensors are then combined for classifier correction and vehicle detection. The experimental results indicate that, compared with image-based and classic sensor-fusion-based vehicle detection systems, our approach has a higher hit rate and a lower false alarm rate. The system is useful for modeling and predicting traffic conditions over a variety of roadways, and thus may be used in future autonomous navigation systems. Our future work will concentrate on modeling the confidence area of the ROIs and updating the vehicle dimension information during the tracking process. The occlusion problem and side-view vehicle detection will also be considered in future research.

REFERENCES

[1] M. Y. Kim, H. Cho and H. Lee, "An Active Trinocular Vision System for Sensing Mobile Robot Navigation Environments," Proc. IEEE International Conference on Intelligent Robots and Systems.
[2] Y.-K. Jung and Y.-S. Ho, "Traffic Parameter Extraction Using Video-based Vehicle Tracking," IEEE/IEEJ/JSAI International Conference on Intelligent Transportation Systems, 1999.
[3] I. Gat, M. Benady and A. Shashua, "A Monocular Vision Advance Warning System for the Automotive Aftermarket," pp. 1-8.
[4] A. Khammari, F. Nashashibi, Y. Abramson and C. Laurgeau, "Vehicle Detection Combining Gradient Analysis and Adaboost Classification," Proc. 8th International IEEE Conference on Intelligent Transportation Systems, Sep. 2005.
[5] Y. Du and N. Papanikolopoulos, "Real-time Vehicle Following Through a Novel Symmetry-based Approach," IEEE International Conference on Robotics and Automation, vol. 4, 1997.
[6] C. Hoffmann, T. Dang and C. Stiller, "Vehicle Detection Fusing 2D Visual Features," IEEE Intelligent Vehicles Symposium.
[7] A. Haselhoff, A. Kummert and G. Schneider, "Radar-Vision Fusion with an Application to Car-Following Using an Improved Adaboost Detection Algorithm," IEEE Intelligent Transportation Systems Conference, 2007.
[8] C. Premebida, G. Monteiro, U. Nunes and P. Peixoto, "A Lidar and Vision-based Approach for Pedestrian and Vehicle Detection and Tracking," IEEE Intelligent Transportation Systems Conference, 2007.
[9] S. Wender, S. Clemen, N. Kaempchen and K. Dietmayer, "Vehicle Detection with Three Dimensional Object Models," IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems.
[10] IBEO, The Laser Scanner Product Overview.
[11] Q. Zhang and R. Pless, "Extrinsic Calibration of a Camera and Laser Range Finder (improves camera calibration)," Proc. IEEE International Conference on Intelligent Robots and Systems, Sep. 2004.
[12] L. Huang and M. Barth, "A Novel Multi-layer LIDAR and Computer Vision Calibration Procedure Using 2D Patterns for Automatic Navigation," IEEE Intelligent Vehicles Symposium, 2009.
[13] Z. Zhang, "A Flexible New Technique for Camera Calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000.
[14] P. Viola and M. J. Jones, "Rapid Object Detection Using a Boosted Cascade of Simple Features," Proc. IEEE Computer Vision and Pattern Recognition, 2001.
[15] Tutorial: OpenCV Haartraining.
[16] R. Lienhart, A. Kuranov and V. Pisarevsky, "Empirical Analysis of Detection Cascades of Boosted Classifiers for Rapid Object Detection," 25th Pattern Recognition Symposium, Sep. 2003.
[17] Caltech Vehicle Image Dataset.


Detection and Classification of Vehicles Detection and Classification of Vehicles Gupte et al. 2002 Zeeshan Mohammad ECG 782 Dr. Brendan Morris. Introduction Previously, magnetic loop detectors were used to count vehicles passing over them. Advantages

More information

RADAR-VISION FUSION FOR VEHICLE DETECTION BY MEANS OF IMPROVED HAAR-LIKE FEATURE AND ADABOOST APPROACH

RADAR-VISION FUSION FOR VEHICLE DETECTION BY MEANS OF IMPROVED HAAR-LIKE FEATURE AND ADABOOST APPROACH RADAR-VISION FUSION FOR VEHICLE DETECTION BY MEANS OF IMPROVED HAAR-LIKE FEATURE AND ADABOOST APPROACH Anselm Haselhoff 1, Anton Kummert 1, and Georg Schneider 2 1 Faculty of Electrical, Information, and

More information

A Hybrid Face Detection System using combination of Appearance-based and Feature-based methods

A Hybrid Face Detection System using combination of Appearance-based and Feature-based methods IJCSNS International Journal of Computer Science and Network Security, VOL.9 No.5, May 2009 181 A Hybrid Face Detection System using combination of Appearance-based and Feature-based methods Zahra Sadri

More information

Computer Vision with MATLAB MATLAB Expo 2012 Steve Kuznicki

Computer Vision with MATLAB MATLAB Expo 2012 Steve Kuznicki Computer Vision with MATLAB MATLAB Expo 2012 Steve Kuznicki 2011 The MathWorks, Inc. 1 Today s Topics Introduction Computer Vision Feature-based registration Automatic image registration Object recognition/rotation

More information

RGBD Face Detection with Kinect Sensor. ZhongJie Bi

RGBD Face Detection with Kinect Sensor. ZhongJie Bi RGBD Face Detection with Kinect Sensor ZhongJie Bi Outline The Existing State-of-the-art Face Detector Problems with this Face Detector Proposed solution to the problems Result and ongoing tasks The Existing

More information

A System for Real-time Detection and Tracking of Vehicles from a Single Car-mounted Camera

A System for Real-time Detection and Tracking of Vehicles from a Single Car-mounted Camera A System for Real-time Detection and Tracking of Vehicles from a Single Car-mounted Camera Claudio Caraffi, Tomas Vojir, Jiri Trefny, Jan Sochman, Jiri Matas Toyota Motor Europe Center for Machine Perception,

More information

Spatio temporal Segmentation using Laserscanner and Video Sequences

Spatio temporal Segmentation using Laserscanner and Video Sequences Spatio temporal Segmentation using Laserscanner and Video Sequences Nico Kaempchen, Markus Zocholl and Klaus C.J. Dietmayer Department of Measurement, Control and Microtechnology University of Ulm, D 89081

More information

Face and Nose Detection in Digital Images using Local Binary Patterns

Face and Nose Detection in Digital Images using Local Binary Patterns Face and Nose Detection in Digital Images using Local Binary Patterns Stanko Kružić Post-graduate student University of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture

More information

Pedestrian Detection Using Multi-layer LIDAR

Pedestrian Detection Using Multi-layer LIDAR 1 st International Conference on Transportation Infrastructure and Materials (ICTIM 2016) ISBN: 978-1-60595-367-0 Pedestrian Detection Using Multi-layer LIDAR Mingfang Zhang 1, Yuping Lu 2 and Tong Liu

More information

Object Category Detection: Sliding Windows

Object Category Detection: Sliding Windows 04/10/12 Object Category Detection: Sliding Windows Computer Vision CS 543 / ECE 549 University of Illinois Derek Hoiem Today s class: Object Category Detection Overview of object category detection Statistical

More information

FAST HUMAN DETECTION USING TEMPLATE MATCHING FOR GRADIENT IMAGES AND ASC DESCRIPTORS BASED ON SUBTRACTION STEREO

FAST HUMAN DETECTION USING TEMPLATE MATCHING FOR GRADIENT IMAGES AND ASC DESCRIPTORS BASED ON SUBTRACTION STEREO FAST HUMAN DETECTION USING TEMPLATE MATCHING FOR GRADIENT IMAGES AND ASC DESCRIPTORS BASED ON SUBTRACTION STEREO Makoto Arie, Masatoshi Shibata, Kenji Terabayashi, Alessandro Moro and Kazunori Umeda Course

More information

Mobile Human Detection Systems based on Sliding Windows Approach-A Review

Mobile Human Detection Systems based on Sliding Windows Approach-A Review Mobile Human Detection Systems based on Sliding Windows Approach-A Review Seminar: Mobile Human detection systems Njieutcheu Tassi cedrique Rovile Department of Computer Engineering University of Heidelberg

More information

Pedestrian Detection Using Correlated Lidar and Image Data EECS442 Final Project Fall 2016

Pedestrian Detection Using Correlated Lidar and Image Data EECS442 Final Project Fall 2016 edestrian Detection Using Correlated Lidar and Image Data EECS442 Final roject Fall 2016 Samuel Rohrer University of Michigan rohrer@umich.edu Ian Lin University of Michigan tiannis@umich.edu Abstract

More information

Real-Time Human Detection using Relational Depth Similarity Features

Real-Time Human Detection using Relational Depth Similarity Features Real-Time Human Detection using Relational Depth Similarity Features Sho Ikemura, Hironobu Fujiyoshi Dept. of Computer Science, Chubu University. Matsumoto 1200, Kasugai, Aichi, 487-8501 Japan. si@vision.cs.chubu.ac.jp,

More information

FACE DETECTION AND RECOGNITION OF DRAWN CHARACTERS HERMAN CHAU

FACE DETECTION AND RECOGNITION OF DRAWN CHARACTERS HERMAN CHAU FACE DETECTION AND RECOGNITION OF DRAWN CHARACTERS HERMAN CHAU 1. Introduction Face detection of human beings has garnered a lot of interest and research in recent years. There are quite a few relatively

More information

2 Cascade detection and tracking

2 Cascade detection and tracking 3rd International Conference on Multimedia Technology(ICMT 213) A fast on-line boosting tracking algorithm based on cascade filter of multi-features HU Song, SUN Shui-Fa* 1, MA Xian-Bing, QIN Yin-Shi,

More information

INTELLIGENT transportation systems have a significant

INTELLIGENT transportation systems have a significant INTL JOURNAL OF ELECTRONICS AND TELECOMMUNICATIONS, 205, VOL. 6, NO. 4, PP. 35 356 Manuscript received October 4, 205; revised November, 205. DOI: 0.55/eletel-205-0046 Efficient Two-Step Approach for Automatic

More information

Detection of a Single Hand Shape in the Foreground of Still Images

Detection of a Single Hand Shape in the Foreground of Still Images CS229 Project Final Report Detection of a Single Hand Shape in the Foreground of Still Images Toan Tran (dtoan@stanford.edu) 1. Introduction This paper is about an image detection system that can detect

More information

Vehicle Dimensions Estimation Scheme Using AAM on Stereoscopic Video

Vehicle Dimensions Estimation Scheme Using AAM on Stereoscopic Video Workshop on Vehicle Retrieval in Surveillance (VRS) in conjunction with 2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance Vehicle Dimensions Estimation Scheme Using

More information

Out-of-Plane Rotated Object Detection using Patch Feature based Classifier

Out-of-Plane Rotated Object Detection using Patch Feature based Classifier Available online at www.sciencedirect.com Procedia Engineering 41 (2012 ) 170 174 International Symposium on Robotics and Intelligent Sensors 2012 (IRIS 2012) Out-of-Plane Rotated Object Detection using

More information

Calibration of a rotating multi-beam Lidar

Calibration of a rotating multi-beam Lidar The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems October 18-22, 2010, Taipei, Taiwan Calibration of a rotating multi-beam Lidar Naveed Muhammad 1,2 and Simon Lacroix 1,2 Abstract

More information

Measurement of Pedestrian Groups Using Subtraction Stereo

Measurement of Pedestrian Groups Using Subtraction Stereo Measurement of Pedestrian Groups Using Subtraction Stereo Kenji Terabayashi, Yuki Hashimoto, and Kazunori Umeda Chuo University / CREST, JST, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan terabayashi@mech.chuo-u.ac.jp

More information

Robotics Programming Laboratory

Robotics Programming Laboratory Chair of Software Engineering Robotics Programming Laboratory Bertrand Meyer Jiwon Shin Lecture 8: Robot Perception Perception http://pascallin.ecs.soton.ac.uk/challenges/voc/databases.html#caltech car

More information

Object Detection Design challenges

Object Detection Design challenges Object Detection Design challenges How to efficiently search for likely objects Even simple models require searching hundreds of thousands of positions and scales Feature design and scoring How should

More information

Vision-based Frontal Vehicle Detection and Tracking

Vision-based Frontal Vehicle Detection and Tracking Vision-based Frontal and Tracking King Hann LIM, Kah Phooi SENG, Li-Minn ANG and Siew Wen CHIN School of Electrical and Electronic Engineering The University of Nottingham Malaysia campus, Jalan Broga,

More information

Vehicle Detection Using Android Smartphones

Vehicle Detection Using Android Smartphones University of Iowa Iowa Research Online Driving Assessment Conference 2013 Driving Assessment Conference Jun 19th, 12:00 AM Vehicle Detection Using Android Smartphones Zhiquan Ren Shanghai Jiao Tong University,

More information

Vision. OCR and OCV Application Guide OCR and OCV Application Guide 1/14

Vision. OCR and OCV Application Guide OCR and OCV Application Guide 1/14 Vision OCR and OCV Application Guide 1.00 OCR and OCV Application Guide 1/14 General considerations on OCR Encoded information into text and codes can be automatically extracted through a 2D imager device.

More information

AUTOMATIC PARKING OF SELF-DRIVING CAR BASED ON LIDAR

AUTOMATIC PARKING OF SELF-DRIVING CAR BASED ON LIDAR AUTOMATIC PARKING OF SELF-DRIVING CAR BASED ON LIDAR Bijun Lee a, Yang Wei a, I. Yuan Guo a a State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University,

More information

LIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION

LIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION F2008-08-099 LIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION 1 Jung, Ho Gi*, 1 Kim, Dong Suk, 1 Kang, Hyoung Jin, 2 Kim, Jaihie 1 MANDO Corporation, Republic of Korea,

More information

3D-2D Laser Range Finder calibration using a conic based geometry shape

3D-2D Laser Range Finder calibration using a conic based geometry shape 3D-2D Laser Range Finder calibration using a conic based geometry shape Miguel Almeida 1, Paulo Dias 1, Miguel Oliveira 2, Vítor Santos 2 1 Dept. of Electronics, Telecom. and Informatics, IEETA, University

More information

Depth Estimation Using Monocular Camera

Depth Estimation Using Monocular Camera Depth Estimation Using Monocular Camera Apoorva Joglekar #, Devika Joshi #, Richa Khemani #, Smita Nair *, Shashikant Sahare # # Dept. of Electronics and Telecommunication, Cummins College of Engineering

More information

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION Panca Mudjirahardjo, Rahmadwati, Nanang Sulistiyanto and R. Arief Setyawan Department of Electrical Engineering, Faculty of

More information

CSc Topics in Computer Graphics 3D Photography

CSc Topics in Computer Graphics 3D Photography CSc 83010 Topics in Computer Graphics 3D Photography Tuesdays 11:45-1:45 1:45 Room 3305 Ioannis Stamos istamos@hunter.cuny.edu Office: 1090F, Hunter North (Entrance at 69 th bw/ / Park and Lexington Avenues)

More information

Face Detection on OpenCV using Raspberry Pi

Face Detection on OpenCV using Raspberry Pi Face Detection on OpenCV using Raspberry Pi Narayan V. Naik Aadhrasa Venunadan Kumara K R Department of ECE Department of ECE Department of ECE GSIT, Karwar, Karnataka GSIT, Karwar, Karnataka GSIT, Karwar,

More information

Face Detection Using Look-Up Table Based Gentle AdaBoost

Face Detection Using Look-Up Table Based Gentle AdaBoost Face Detection Using Look-Up Table Based Gentle AdaBoost Cem Demirkır and Bülent Sankur Boğaziçi University, Electrical-Electronic Engineering Department, 885 Bebek, İstanbul {cemd,sankur}@boun.edu.tr

More information

Human Motion Detection and Tracking for Video Surveillance

Human Motion Detection and Tracking for Video Surveillance Human Motion Detection and Tracking for Video Surveillance Prithviraj Banerjee and Somnath Sengupta Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur,

More information

Automatic Initialization of the TLD Object Tracker: Milestone Update

Automatic Initialization of the TLD Object Tracker: Milestone Update Automatic Initialization of the TLD Object Tracker: Milestone Update Louis Buck May 08, 2012 1 Background TLD is a long-term, real-time tracker designed to be robust to partial and complete occlusions

More information

Adaptive Feature Extraction with Haar-like Features for Visual Tracking

Adaptive Feature Extraction with Haar-like Features for Visual Tracking Adaptive Feature Extraction with Haar-like Features for Visual Tracking Seunghoon Park Adviser : Bohyung Han Pohang University of Science and Technology Department of Computer Science and Engineering pclove1@postech.ac.kr

More information

Image processing techniques for driver assistance. Razvan Itu June 2014, Technical University Cluj-Napoca

Image processing techniques for driver assistance. Razvan Itu June 2014, Technical University Cluj-Napoca Image processing techniques for driver assistance Razvan Itu June 2014, Technical University Cluj-Napoca Introduction Computer vision & image processing from wiki: any form of signal processing for which

More information

Panoramic Vision and LRF Sensor Fusion Based Human Identification and Tracking for Autonomous Luggage Cart

Panoramic Vision and LRF Sensor Fusion Based Human Identification and Tracking for Autonomous Luggage Cart Panoramic Vision and LRF Sensor Fusion Based Human Identification and Tracking for Autonomous Luggage Cart Mehrez Kristou, Akihisa Ohya and Shin ichi Yuta Intelligent Robot Laboratory, University of Tsukuba,

More information

Sensory Augmentation for Increased Awareness of Driving Environment

Sensory Augmentation for Increased Awareness of Driving Environment Sensory Augmentation for Increased Awareness of Driving Environment Pranay Agrawal John M. Dolan Dec. 12, 2014 Technologies for Safe and Efficient Transportation (T-SET) UTC The Robotics Institute Carnegie

More information

Window based detectors

Window based detectors Window based detectors CS 554 Computer Vision Pinar Duygulu Bilkent University (Source: James Hays, Brown) Today Window-based generic object detection basic pipeline boosting classifiers face detection

More information

On Road Vehicle Detection using Shadows

On Road Vehicle Detection using Shadows On Road Vehicle Detection using Shadows Gilad Buchman Grasp Lab, Department of Computer and Information Science School of Engineering University of Pennsylvania, Philadelphia, PA buchmag@seas.upenn.edu

More information

Face Alignment Under Various Poses and Expressions

Face Alignment Under Various Poses and Expressions Face Alignment Under Various Poses and Expressions Shengjun Xin and Haizhou Ai Computer Science and Technology Department, Tsinghua University, Beijing 100084, China ahz@mail.tsinghua.edu.cn Abstract.

More information

Object Category Detection: Sliding Windows

Object Category Detection: Sliding Windows 03/18/10 Object Category Detection: Sliding Windows Computer Vision CS 543 / ECE 549 University of Illinois Derek Hoiem Goal: Detect all instances of objects Influential Works in Detection Sung-Poggio

More information

Unconstrained License Plate Detection Using the Hausdorff Distance

Unconstrained License Plate Detection Using the Hausdorff Distance SPIE Defense & Security, Visual Information Processing XIX, Proc. SPIE, Vol. 7701, 77010V (2010) Unconstrained License Plate Detection Using the Hausdorff Distance M. Lalonde, S. Foucher, L. Gagnon R&D

More information

Person Detection in Images using HoG + Gentleboost. Rahul Rajan June 1st July 15th CMU Q Robotics Lab

Person Detection in Images using HoG + Gentleboost. Rahul Rajan June 1st July 15th CMU Q Robotics Lab Person Detection in Images using HoG + Gentleboost Rahul Rajan June 1st July 15th CMU Q Robotics Lab 1 Introduction One of the goals of computer vision Object class detection car, animal, humans Human

More information

Skin and Face Detection

Skin and Face Detection Skin and Face Detection Linda Shapiro EE/CSE 576 1 What s Coming 1. Review of Bakic flesh detector 2. Fleck and Forsyth flesh detector 3. Details of Rowley face detector 4. Review of the basic AdaBoost

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

Real Time Stereo Vision Based Pedestrian Detection Using Full Body Contours

Real Time Stereo Vision Based Pedestrian Detection Using Full Body Contours Real Time Stereo Vision Based Pedestrian Detection Using Full Body Contours Ion Giosan, Sergiu Nedevschi, Silviu Bota Technical University of Cluj-Napoca {Ion.Giosan, Sergiu.Nedevschi, Silviu.Bota}@cs.utcluj.ro

More information

Generic Face Alignment Using an Improved Active Shape Model

Generic Face Alignment Using an Improved Active Shape Model Generic Face Alignment Using an Improved Active Shape Model Liting Wang, Xiaoqing Ding, Chi Fang Electronic Engineering Department, Tsinghua University, Beijing, China {wanglt, dxq, fangchi} @ocrserv.ee.tsinghua.edu.cn

More information

Detecting People in Images: An Edge Density Approach

Detecting People in Images: An Edge Density Approach University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 27 Detecting People in Images: An Edge Density Approach Son Lam Phung

More information

Text Block Detection and Segmentation for Mobile Robot Vision System Applications

Text Block Detection and Segmentation for Mobile Robot Vision System Applications Proc. of Int. Conf. onmultimedia Processing, Communication and Info. Tech., MPCIT Text Block Detection and Segmentation for Mobile Robot Vision System Applications Too Boaz Kipyego and Prabhakar C. J.

More information

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009 Learning and Inferring Depth from Monocular Images Jiyan Pan April 1, 2009 Traditional ways of inferring depth Binocular disparity Structure from motion Defocus Given a single monocular image, how to infer

More information

Study of Viola-Jones Real Time Face Detector

Study of Viola-Jones Real Time Face Detector Study of Viola-Jones Real Time Face Detector Kaiqi Cen cenkaiqi@gmail.com Abstract Face detection has been one of the most studied topics in computer vision literature. Given an arbitrary image the goal

More information

Real Time Face Detection using Geometric Constraints, Navigation and Depth-based Skin Segmentation on Mobile Robots

Real Time Face Detection using Geometric Constraints, Navigation and Depth-based Skin Segmentation on Mobile Robots Real Time Face Detection using Geometric Constraints, Navigation and Depth-based Skin Segmentation on Mobile Robots Vo Duc My Cognitive Systems Group Computer Science Department University of Tuebingen

More information

Recognition of a Predefined Landmark Using Optical Flow Sensor/Camera

Recognition of a Predefined Landmark Using Optical Flow Sensor/Camera Recognition of a Predefined Landmark Using Optical Flow Sensor/Camera Galiev Ilfat, Alina Garaeva, Nikita Aslanyan The Department of Computer Science & Automation, TU Ilmenau 98693 Ilmenau ilfat.galiev@tu-ilmenau.de;

More information

Subject-Oriented Image Classification based on Face Detection and Recognition

Subject-Oriented Image Classification based on Face Detection and Recognition 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050

More information

Traffic Sign Localization and Classification Methods: An Overview

Traffic Sign Localization and Classification Methods: An Overview Traffic Sign Localization and Classification Methods: An Overview Ivan Filković University of Zagreb Faculty of Electrical Engineering and Computing Department of Electronics, Microelectronics, Computer

More information

Hand Posture Recognition Using Adaboost with SIFT for Human Robot Interaction

Hand Posture Recognition Using Adaboost with SIFT for Human Robot Interaction Hand Posture Recognition Using Adaboost with SIFT for Human Robot Interaction Chieh-Chih Wang and Ko-Chih Wang Department of Computer Science and Information Engineering Graduate Institute of Networking

More information

Depth Measurement and 3-D Reconstruction of Multilayered Surfaces by Binocular Stereo Vision with Parallel Axis Symmetry Using Fuzzy

Depth Measurement and 3-D Reconstruction of Multilayered Surfaces by Binocular Stereo Vision with Parallel Axis Symmetry Using Fuzzy Depth Measurement and 3-D Reconstruction of Multilayered Surfaces by Binocular Stereo Vision with Parallel Axis Symmetry Using Fuzzy Sharjeel Anwar, Dr. Shoaib, Taosif Iqbal, Mohammad Saqib Mansoor, Zubair

More information

HOG-based Pedestriant Detector Training

HOG-based Pedestriant Detector Training HOG-based Pedestriant Detector Training evs embedded Vision Systems Srl c/o Computer Science Park, Strada Le Grazie, 15 Verona- Italy http: // www. embeddedvisionsystems. it Abstract This paper describes

More information

https://en.wikipedia.org/wiki/the_dress Recap: Viola-Jones sliding window detector Fast detection through two mechanisms Quickly eliminate unlikely windows Use features that are fast to compute Viola

More information

Flexible Calibration of a Portable Structured Light System through Surface Plane

Flexible Calibration of a Portable Structured Light System through Surface Plane Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured

More information

Fast and Stable Human Detection Using Multiple Classifiers Based on Subtraction Stereo with HOG Features

Fast and Stable Human Detection Using Multiple Classifiers Based on Subtraction Stereo with HOG Features 2011 IEEE International Conference on Robotics and Automation Shanghai International Conference Center May 9-13, 2011, Shanghai, China Fast and Stable Human Detection Using Multiple Classifiers Based on

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Stereo Image Rectification for Simple Panoramic Image Generation

Stereo Image Rectification for Simple Panoramic Image Generation Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,

More information

Count Vehicle Over Region of Interest Using Euclidean Distance

Count Vehicle Over Region of Interest Using Euclidean Distance Count Vehicle Over Region of Interest Using Euclidean Distance Bagus Priambodo Information System University of Mercu Buana Nur Ani Information System University of Mercu Buana Abstract This paper propose

More information

Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos

Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos Definition, Detection, and Evaluation of Meeting Events in Airport Surveillance Videos Sung Chun Lee, Chang Huang, and Ram Nevatia University of Southern California, Los Angeles, CA 90089, USA sungchun@usc.edu,

More information

CS 223B Computer Vision Problem Set 3

CS 223B Computer Vision Problem Set 3 CS 223B Computer Vision Problem Set 3 Due: Feb. 22 nd, 2011 1 Probabilistic Recursion for Tracking In this problem you will derive a method for tracking a point of interest through a sequence of images.

More information

Model-based Visual Tracking:

Model-based Visual Tracking: Technische Universität München Model-based Visual Tracking: the OpenTL framework Giorgio Panin Technische Universität München Institut für Informatik Lehrstuhl für Echtzeitsysteme und Robotik (Prof. Alois

More information

Viola-Jones with CUDA

Viola-Jones with CUDA Seminar: Multi-Core Architectures and Programming Viola-Jones with CUDA Su Wu: Computational Engineering Pu Li: Information und Kommunikationstechnik Viola-Jones Face Detection Overview Application Of

More information

Progress Report of Final Year Project

Progress Report of Final Year Project Progress Report of Final Year Project Project Title: Design and implement a face-tracking engine for video William O Grady 08339937 Electronic and Computer Engineering, College of Engineering and Informatics,

More information

Face Recognition Pipeline 1

Face Recognition Pipeline 1 Face Detection Face Recognition Pipeline 1 Face recognition is a visual pattern recognition problem A face recognition system generally consists of four modules as depicted below 1 S.Li and A.Jain, (ed).

More information

A Two-Stage Template Approach to Person Detection in Thermal Imagery

A Two-Stage Template Approach to Person Detection in Thermal Imagery A Two-Stage Template Approach to Person Detection in Thermal Imagery James W. Davis Mark A. Keck Ohio State University Columbus OH 432 USA {jwdavis,keck}@cse.ohio-state.edu Abstract We present a two-stage

More information

Stereo Vision Based Advanced Driver Assistance System

Stereo Vision Based Advanced Driver Assistance System Stereo Vision Based Advanced Driver Assistance System Ho Gi Jung, Yun Hee Lee, Dong Suk Kim, Pal Joo Yoon MANDO Corp. 413-5,Gomae-Ri, Yongin-Si, Kyongi-Do, 449-901, Korea Phone: (82)31-0-5253 Fax: (82)31-0-5496

More information

A robust method for automatic player detection in sport videos

A robust method for automatic player detection in sport videos A robust method for automatic player detection in sport videos A. Lehuger 1 S. Duffner 1 C. Garcia 1 1 Orange Labs 4, rue du clos courtel, 35512 Cesson-Sévigné {antoine.lehuger, stefan.duffner, christophe.garcia}@orange-ftgroup.com

More information

Jo-Car2 Autonomous Mode. Path Planning (Cost Matrix Algorithm)

Jo-Car2 Autonomous Mode. Path Planning (Cost Matrix Algorithm) Chapter 8.2 Jo-Car2 Autonomous Mode Path Planning (Cost Matrix Algorithm) Introduction: In order to achieve its mission and reach the GPS goal safely; without crashing into obstacles or leaving the lane,

More information

Detecting Pedestrians Using Patterns of Motion and Appearance

Detecting Pedestrians Using Patterns of Motion and Appearance Detecting Pedestrians Using Patterns of Motion and Appearance Paul Viola Michael J. Jones Daniel Snow Microsoft Research Mitsubishi Electric Research Labs Mitsubishi Electric Research Labs viola@microsoft.com

More information

Automatic camera to laser calibration for high accuracy mobile mapping systems using INS

Automatic camera to laser calibration for high accuracy mobile mapping systems using INS Automatic camera to laser calibration for high accuracy mobile mapping systems using INS Werner Goeman* a+b, Koen Douterloigne a, Sidharta Gautama a a Department of Telecommunciations and information processing

More information

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press, ISSN

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press,   ISSN ransactions on Information and Communications echnologies vol 6, 996 WI Press, www.witpress.com, ISSN 743-357 Obstacle detection using stereo without correspondence L. X. Zhou & W. K. Gu Institute of Information

More information

Project Report for EE7700

Project Report for EE7700 Project Report for EE7700 Name: Jing Chen, Shaoming Chen Student ID: 89-507-3494, 89-295-9668 Face Tracking 1. Objective of the study Given a video, this semester project aims at implementing algorithms

More information

CS 231A Computer Vision (Fall 2012) Problem Set 3

CS 231A Computer Vision (Fall 2012) Problem Set 3 CS 231A Computer Vision (Fall 2012) Problem Set 3 Due: Nov. 13 th, 2012 (2:15pm) 1 Probabilistic Recursion for Tracking (20 points) In this problem you will derive a method for tracking a point of interest

More information

Obstacle Detection From Roadside Laser Scans. Research Project

Obstacle Detection From Roadside Laser Scans. Research Project Obstacle Detection From Roadside Laser Scans by Jimmy Young Tang Research Project Submitted to the Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, in partial

More information

Detecting Pedestrians Using Patterns of Motion and Appearance

Detecting Pedestrians Using Patterns of Motion and Appearance International Journal of Computer Vision 63(2), 153 161, 2005 c 2005 Springer Science + Business Media, Inc. Manufactured in The Netherlands. Detecting Pedestrians Using Patterns of Motion and Appearance

More information