Vision-based Frontal Vehicle Detection and Tracking


King Hann LIM, Kah Phooi SENG, Li-Minn ANG and Siew Wen CHIN
School of Electrical and Electronic Engineering
The University of Nottingham Malaysia Campus, Jalan Broga, Semenyih, 43500, Selangor
Tel: 03-89248350, Fax: 03-89248071
E-mail: {keyx7khl, jasmine.seng, Kenneth.Ang, keyx8csw}@nottingham.edu.my

ABSTRACT

This paper presents a vision-based driver assistance system composed of vehicle detection using a knowledge-based method and vehicle tracking using Kalman filtering. First, a preceding vehicle is localized by the proposed detection scheme, which consists of shadow detection and brake-light detection. Second, the possible vehicle region is extracted for verification. Symmetry analyses of the contour and the brake lights are performed, followed by an asymmetry contour analysis, in order to obtain the vehicle's center. The center of the vehicle is then tracked continuously using Kalman filtering within a predicted sub-window in consecutive frames. This reduces the scanning process and maximizes the computational speed of vehicle detection. Simulation results demonstrate the good performance of the proposed system.

Keywords: Vehicle Detection, Tracking, Intelligent Vehicle and Driver Assistance System

1.0 INTRODUCTION

Vehicle detection and tracking have become increasingly important for driver assistance systems, leading towards the advancement of autonomous vehicular technology. A preceding vehicle that brakes suddenly, slows down, or travels at uniform speed must be responded to immediately by the driver to prevent casualties. Due to the decreasing cost of optical sensors and the increasing speed of microprocessors, vision-based vehicle detection is a feasible way to improve driving safety and to make more efficient use of driving space. Additionally, vehicle tracking can further reduce the processing time of vehicle detection and lessen interference from environmental noise. Therefore, a fast and effective vehicle scanning technique is required to automatically locate a moving vehicle in the frontal view of a traffic scene, a prerequisite for autonomous vehicle driving.

Various vehicle detection and tracking techniques have been reported over the past decade. One approach proposed a vehicle detection scheme based on voting for the global contour symmetry axis together with noise filtering; vehicle segmentation then extracted the leading vehicle with a bounding box, and the vehicle was tracked continuously with local symmetry axis detection. Another detected vehicles in a traffic scene by their high vertical symmetry using a multiresolution method, computing symmetries on binarized Sobel modules, almost-vertical edges and almost-horizontal edges. Others localized preceding vehicles by constructing a geometric model and an energy function encoding the vehicle's shape and symmetry. However, detection techniques that rely solely on symmetry may not generate good results, and searching the entire image for the symmetry axis is time-consuming. A further work proposed a preceding vehicle tracking and detection system comprising road area finding, vehicle footprint extraction and vehicle bounding box extraction, while another implemented a multi-sensor fusion approach for vehicle detection, in which shadow and symmetry detection were performed and their information was combined for vehicle tracking.
Another method detected preceding vehicles based on multi-characteristics fusion: the preceding vehicles are first identified using their bottom shadows, and the region of interest is then verified through the fusion of vehicle margin characteristics, texture characteristics and vertical edge symmetry; the vehicle location is finally acquired from the vehicle margin information. Nonetheless, the shadow segmentation reported in these papers gave poor results. Moreover, one approach recognized preceding vehicles on highways using an edge-correlation method; it required the difference between the current image and several previous frames, so computing the correlation image was time-consuming. Another presented an integrated framework for on-road vehicle detection through knowledge fusion of appearance, geometry and motion information over image sequences, but it required prior training of vehicle appearance. Furthermore, one method detected vehicles using gradient-based methods and AdaBoost classification, which required an off-line learning process for the vehicle classifier. Yet another performed on-road vehicle motion analysis on a traffic scene in two steps, incoming vehicle detection and vehicle motion analysis, but it produced false detections when the preceding vehicle kept a constant speed or halted.

Figure 1: The proposed vehicle detection and tracking system.

In this paper, a preceding vehicle detection and tracking system is proposed, as shown in Figure 1. The proposed system consists of three main modules: (i) vehicle detection based on shadow and brake-light detection, (ii) vehicle verification, and (iii) vehicle tracking using Kalman filtering. In the vehicle detection module, a frontal vehicle is detected by first localizing the shadow edge and then verifying it with red components, which usually correspond to the brake lights. The possible regions are then extracted with a bounding box for vehicle verification. During verification, symmetry axes for the contour and the brake lights are detected first, followed by an asymmetry contour analysis. Eventually, the center of the vehicle is updated using Kalman filtering to predict a sub-window, which limits the scanning area of vehicle detection. Once the tracking mode is turned on, the proposed system performs closed-loop vehicle detection, which reduces the computational time. If the vehicle is mis-tracked, vehicle detection is reprocessed on the entire image.

2.0 VEHICLE DETECTION

Vehicle detection plays a critical role in automatically localizing a vehicle in the preceding view of an image. Its purpose is to improve vehicle safety by keeping a certain distance from the frontal vehicle to avoid a severe crash. The proposed vehicle detection method is divided into three stages: horizon localization, shadow detection and brake-light detection.

2.1 Horizon Localization

A horizon line is localized using the vertical mean distribution to split a traffic scene image I(x, y) into a sky region and a road region, where x and y denote the row and column of the original image. The vertical mean distribution, shown in Figure 2(b), is measured by averaging the gray values of each row of I(x, y). The threshold value for the horizon line is obtained through a minimum search along the vertical mean curve, where the first minimum from the top of the curve is taken as the regional dividing line. A road image (R_roi), shown in Figure 2(a), is generated by applying the horizon line threshold and discarding the sky region above it. This stage is only executed when the condition <Track = 0> is true.

Figure 2: (a) R_roi, (b) Vertical mean distribution.
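The following is a minimal sketch of the horizon search in Python with NumPy. The first-local-minimum scan follows the description above, while the fallback value and the array conventions are assumptions for illustration.

```python
import numpy as np

def localize_horizon(gray):
    """Split a traffic scene into sky and road via the vertical mean distribution.

    gray: 2-D uint8 array holding one grayscale frame I(x, y).
    Returns the horizon row index and the road region R_roi below it.
    """
    # Vertical mean distribution: average gray value of each image row.
    row_means = gray.mean(axis=1)

    # The first minimum from the top of the curve is the regional dividing line.
    horizon = None
    for r in range(1, len(row_means) - 1):
        if row_means[r - 1] > row_means[r] <= row_means[r + 1]:
            horizon = r
            break
    if horizon is None:
        horizon = len(row_means) // 2  # fallback guess, not from the paper

    # Discard the sky region above the horizon line.
    return horizon, gray[horizon:, :]
```

In practice the mean curve can be noisy, so lightly smoothing `row_means` before the scan may help; the paper does not specify any smoothing here.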
2.2 Shadow Detection

A shadow is usually found underneath a vehicle in an image. In this stage, a shadow detection method is proposed to quickly scan for low intensity values in the image. The proposed method has three steps: edge detection, low intensity mapping and edge grouping.

2.2.1 Edge Detection

First, a mean filter is applied to the R_roi map to smooth pixel values and enlarge the shadow segments. Next, edge detection with a Sobel filter is performed on the blurred image to obtain the contour of the vehicle as well as its shadow. Subsequently, an edge threshold (T1) is applied to the gradient image to produce a binarized edge map BEM(x, y), as demonstrated in Figure 3.

Figure 3: Binarized edge map (BEM).
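A sketch of this edge step using OpenCV follows. The 5x5 mean-filter window and 3x3 Sobel kernel are assumptions, while T1 = 50 is the value used later in the experiments.

```python
import cv2
import numpy as np

def binarized_edge_map(r_roi, t1=50):
    """Produce BEM(x, y) from the road region (Sec. 2.2.1)."""
    # Mean filter: smooth pixel values and enlarge shadow segments.
    blurred = cv2.blur(r_roi, (5, 5))

    # Sobel gradients of the blurred image, combined into a magnitude.
    gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)

    # Edge threshold T1 binarizes the gradient image into BEM.
    return (magnitude > t1).astype(np.uint8)
```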

2.2.2 Low Intensity Mapping

Shadows always appear in an image with low intensity values. To produce a low intensity map LIM(x, y) automatically, the mean value (μ) and standard deviation (σ) of I(x, y) are first computed, and the map shown in Figure 4(a) is determined with equation (1):

LIM(x, y) = { 1, if I(x, y) < μ − σ
            { 0, otherwise                                  (1)

Subsequently, the possible shadow map PSM(x, y), illustrated in Figure 4(b), is obtained by merging BEM(x, y) and LIM(x, y) with an AND operation:

PSM(x, y) = { 1, if LIM(x, y) == 1 AND BEM(x, y) == 1
            { 0, otherwise                                  (2)

Figure 4: (a) LIM(x, y), (b) PSM(x, y).
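A compact sketch of Eqs. (1)-(2) in NumPy, reading Eq. (1) as a one-sided threshold one standard deviation below the mean (the reconstruction assumed above):

```python
import numpy as np

def shadow_maps(i_gray, bem):
    """Compute LIM(x, y) and PSM(x, y) from the frame and its edge map."""
    mu, sigma = i_gray.mean(), i_gray.std()

    # Eq. (1): pixels darker than (mu - sigma) are candidate shadow pixels.
    lim = (i_gray < mu - sigma).astype(np.uint8)

    # Eq. (2): AND the low intensity map with the binarized edge map.
    psm = ((lim == 1) & (bem == 1)).astype(np.uint8)
    return lim, psm
```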
2.2.3 Edge Grouping

After the acquisition of PSM(x, y), an edge grouping stage is performed to acquire the possible shadow edge map PSEM(x, y). First, a noise elimination technique groups connected non-zero edge pixels and discards groups with fewer than T2 pixels. To generate PSEM(x, y), only the bottom edge of each connected group is retained, since shadows always lie underneath a vehicle. The remaining shadow edges are then labeled accordingly, as shown in Figure 5.

Figure 5: Possible shadow edge map (PSEM).
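A sketch of the grouping step using OpenCV connected components; 8-connectivity and the per-column bottom-edge extraction are assumptions, while T2 = 12 is the experimental value.

```python
import cv2
import numpy as np

def possible_shadow_edge_map(psm, t2=12):
    """Group edges in PSM, drop small groups, keep each group's bottom edge."""
    num, labels, stats, _ = cv2.connectedComponentsWithStats(psm, connectivity=8)
    psem = np.zeros(psm.shape, np.int32)
    for lbl in range(1, num):                    # label 0 is the background
        if stats[lbl, cv2.CC_STAT_AREA] < t2:    # noise elimination threshold T2
            continue
        ys, xs = np.nonzero(labels == lbl)
        for col in np.unique(xs):
            # Bottom edge: keep the lowest pixel of the group in each column,
            # storing the label so each shadow edge stays identifiable.
            psem[ys[xs == col].max(), col] = lbl
    return psem
```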
2.3 Brake Light Detection

Brake lights contain far more red components than other parts of a vehicle. Figure 6(a) depicts the resulting red component map. PSEM(x, y) is checked by accumulating the number of red component pixels for each labeled edge. If an edge has more than T3 red component pixels, it is defined as a possible vehicle region (PVR), as shown in Figure 6(b).

Figure 6: (a) Red component map, (b) Possible vehicle region (PVR).

3.0 VEHICLE VERIFICATION

To detect the preceding vehicle, only the largest PVR is considered and verified for the tracking system. A bounding box is applied to the possible vehicle region based on the edge width obtained from PSEM(x, y). Figure 7 shows the extraction of the PVR with the largest area. The extracted region then undergoes symmetry and asymmetry analysis for PVR verification.

Figure 7: PVR extraction.

3.1 Symmetry Analysis

Two symmetry analyses are performed: contour symmetry and brake-light symmetry detection. Assume the extracted region has i rows and j columns. A binary contour (C_PVR) is first constructed using a Sobel filter, as shown in Figure 8(a). For each non-zero pixel (i, j1), the contour symmetry axis is voted with respect to every other contour pixel (i, j2) in the same row:

center = (j1 + j2) / 2
ContourSym(center)++  if C_PVR(i, j1) == 1 AND C_PVR(i, j2) == 1     (3)

C_sym = arg max_j ContourSym(j)                                      (4)

where ContourSym is an accumulator and C_sym is the contour symmetry axis at its maximum peak, shown in Figure 8(b). On the other hand, brake lights are always located symmetrically at the left and right corners of a vehicle. Figure 8(c) shows the binary brake lights (B_PVR) detected on the extracted region. The same symmetry voting scheme is applied to the brake-light symmetry analysis with an accumulator called BrakeSym. The brake-light symmetry axis is defined as:

B_sym = arg max_j BrakeSym(j)                                        (5)

where B_sym is the maximum peak of the accumulator, demonstrated in Figure 8(d). To verify the validity of the PVR, the following condition must be obeyed:

Track = { 1, if |B_sym − C_sym| ≤ ε
        { 0, otherwise                                               (6)

where ε is the allowed difference between the brake-light symmetry axis and the contour symmetry axis.

Figure 8: (a) Binary contour (C_PVR), (b) ContourSym, (c) Binary brake lights (B_PVR), (d) BrakeSym.

3.2 Asymmetry Analysis

As can be seen in Figure 8(a), the edges of the vehicle region are mostly vertical rather than horizontal. Instead of analyzing grayscale symmetry, the vehicle region is converted into an edge difference map by differencing every column against the first column of the PVR, as shown in Figure 9(a):

D_PVR(i, j) = G(i, j) − G(i, 1)                                      (7)

Next, a Sobel filter is applied to the difference map to obtain the horizontal contour (D_PVR), shown in Figure 9(b). For each non-zero pixel (i1, j), the contour asymmetry axis is voted with respect to every other contour pixel (i2, j) in the same column:

center = (i1 + i2) / 2
ContourAsym(center)++  if D_PVR(i1, j) == 1 AND D_PVR(i2, j) == 1    (8)

D_asym = arg max_i ContourAsym(i)                                    (9)

where ContourAsym is an accumulator and D_asym is the contour asymmetry axis. By combining the symmetry and asymmetry axes, the center coordinates of the PVR are found as (C_sym, D_asym).

Figure 9: (a) Edge difference map, (b) D_PVR, (c) ContourAsym.
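The voting of Eqs. (3)-(6) can be sketched as below, voting over all contour-pixel pairs in each row; the tolerance ε = 5 pixels is only an assumption. The same routine serves BrakeSym on B_PVR, and ContourAsym on the transposed D_PVR.

```python
import numpy as np

def symmetry_axis(binary_map):
    """Vote midpoints of contour-pixel pairs per row (Eqs. (3)-(4))."""
    rows, cols = binary_map.shape
    votes = np.zeros(cols)                         # the ContourSym accumulator
    for i in range(rows):
        js = np.nonzero(binary_map[i])[0]          # non-zero columns in this row
        for a in range(len(js)):
            for b in range(a + 1, len(js)):
                votes[(js[a] + js[b]) // 2] += 1   # Eq. (3): midpoint vote
    return int(np.argmax(votes))                   # Eq. (4): accumulator peak

def verify_pvr(b_sym, c_sym, eps=5):
    """Eq. (6): accept when the two symmetry axes agree within eps."""
    return 1 if abs(b_sym - c_sym) <= eps else 0

def asymmetry_axis(d_pvr):
    """Eqs. (8)-(9): the same voting over columns, via the transpose."""
    return symmetry_axis(d_pvr.T)
```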
4.0 VEHICLE TRACKING

Vehicle tracking supplements a driver assistance system by enabling fast vehicle detection. As shown in Figure 10, the vehicle tracking algorithm runs in a closed loop while <Track = 1> is true. Once the system loses track, vehicle detection is carried out again on the whole image.

Figure 10: Vehicle tracking system.

The Kalman filter is popular in vision tracking systems owing to its ability to predict from previous information. The linear state and measurement equations are defined as:

x_{k+1} = A_{k+1,k} x_k + B_{k+1,k} u_k + w_k                        (10)
y_k = H_k x_k + v_k                                                  (11)

where the state vector x contains the X-position and Y-position of the vehicle's center; A_{k+1,k} is the transition matrix bringing state x_k from time k to k+1; B_{k+1,k} is the matrix connecting the acceleration vector u_k to the state variables; w_k is the process noise; y_k is the measurement output; H_k is the observation model that maps the true state space to the observed space; and v_k is the measurement noise. The initial conditions for x̂_k and P_k are given as:

x̂_0 = E[x_0]                                                        (12)
P_0 = E[(x_0 − E[x_0])(x_0 − E[x_0])^T]                              (13)

In Figure 11, P_k^− is the a priori estimate error covariance and P_k is the a posteriori estimate error covariance. The estimated state x̂_k is
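A minimal predict/correct tracker for the vehicle center is sketched below (see also Figure 11 and the update step after it). The paper does not list A, B, H or the noise covariances, so a constant-velocity state with no control input and hand-picked covariances are assumed here.

```python
import numpy as np

class CenterTracker:
    """Kalman filter over the vehicle center, after Eqs. (10)-(13)."""

    def __init__(self, cx, cy, dt=1.0):
        self.x = np.array([cx, cy, 0.0, 0.0])     # state: position + velocity
        self.P = np.eye(4) * 10.0                 # Eq. (13)-style initial covariance
        self.A = np.array([[1, 0, dt, 0],         # constant-velocity transition
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0],          # observe position only
                           [0, 1, 0, 0]], float)
        self.Q = np.eye(4) * 0.01                 # process noise cov. (assumed)
        self.R = np.eye(2) * 4.0                  # measurement noise cov. (assumed)

    def predict(self):
        """A priori estimate; its (cx, cy) locates the next sub-window."""
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.x[:2]

    def update(self, measured_center):
        """Correction with the gain G_k, given the verified center."""
        innovation = np.asarray(measured_center, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        G = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain G_k
        self.x = self.x + G @ innovation
        self.P = (np.eye(4) - G @ self.H) @ self.P
```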
updated with the estimated gain G_k. The predicted state x_k^− is used to draw a sub-window on the next frame.

Figure 11: Kalman filtering method (prediction-correction loop).

5.0 RESULTS AND DISCUSSION

In this section, results are presented to evaluate the performance of the proposed algorithm. Two video sequences were captured under sunny conditions at around 12 p.m. using a 6-megapixel Canon IXUS 65. Video #1 was taken in a highway environment, while video #2 was taken in a rural area. The parameters were initialized to T1 = 50 and T2 = T3 = 12 in these experiments.

The estimation of the X-position and Y-position of the vehicle's center is shown in Figure 12. The desired point is the expected center of the vehicle region, the actual point is the position determined by vehicle verification, and the predicted point is the position forecast by the Kalman filter. As observed in Figure 12(a) for video #1, the actual X-position and Y-position follow the desired ones within a tolerance of ±5 pixels. The same performance was obtained in Figure 12(b) for video #2 with the proposed verification techniques. As shown in Figures 13 and 14, the proposed system performed well: the preceding vehicle was detected and tracked consecutively, with the verified center of the vehicle marked by a * symbol and the Kalman-predicted center marked by a second symbol.

As listed in Table 1, detection assisted by vehicle tracking was much faster (by roughly 90%) than detection alone over the traffic scene. The recommended safe distance is the sum of the mean tracking-time distance [~0.12 × speed] and the vehicle stopping distance [speed² / 20] at the current speed, as reported in (June, 2000).

Table 1: Time taken for vehicle detection and tracking

Video | Total Frames | Detection Time (s) | Detection + Tracking Time (s) | Mean Time (s)
#1    | 170          | 1.24~1.82          | 0.11~0.13                     | 0.12
#2    | 190          | 1.25~1.82          | 0.14~0.16                     | 0.15

6.0 CONCLUSION

A vision-based preceding vehicle detection and tracking system has been presented in this paper. The frontal vehicle is first detected from shadow and brake-light cues. A bounding box is applied to the possible vehicle region for symmetry verification. Brake-light symmetry and contour symmetry analyses are performed, followed by contour asymmetry analysis, to obtain the center of the possible vehicle region. This center is then passed to the tracking system to predict the movement of the vehicle. The advantages of this system are (i) reduced vehicle searching time and (ii) improved detection performance, since the search area is bounded. However, the method cannot be applied at night or in poor lighting conditions, nor to distant vehicles.

REFERENCES
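As a worked check of the safe-distance rule quoted above (the paper does not state the speed unit, so both terms are assumed to use the same one): at a speed of 60, the recommended distance is 0.12 × 60 + 60² / 20 = 7.2 + 180 = 187.2.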

Figure 12: Estimation of the X-position and Y-position for (a) Video #1, (b) Video #2.

Figure 13: Results of vehicle detection and tracking for video #1: (a) Frame 5, (b) Frame 50, (c) Frame 100, (d) Frame 140, (e) Frame 170.

Figure 14: Results of vehicle detection and tracking for video #2: (a) Frame 5, (b) Frame 50, (c) Frame 100, (d) Frame 140, (e) Frame 190.