Vision-based Frontal Vehicle Detection and Tracking
King Hann LIM, Kah Phooi SENG, Li-Minn ANG and Siew Wen CHIN
School of Electrical and Electronic Engineering
The University of Nottingham Malaysia Campus, Jalan Broga, Semenyih, 43500, Selangor
Tel: 03-89248350, Fax: 03-89248071
E-mail: {keyx7khl, jasmine.seng, Kenneth.Ang, keyx8csw}@nottingham.edu.my

ABSTRACT
This paper presents a vision-based driver assistance system comprising vehicle detection using a knowledge-based method and vehicle tracking using Kalman filtering. First, a preceding vehicle is localized by the proposed detection scheme, which consists of shadow detection and brake-light detection. Second, the possible vehicle region is extracted for verification. Symmetry analyses of the contour and the brake lights are performed, followed by an asymmetry contour analysis, in order to obtain the vehicle's center. The center of the vehicle is then tracked continuously with a Kalman filter within a predicted sub-window in consecutive frames. This reduces the scanning process and maximizes the computational speed of vehicle detection. Simulation results demonstrate the good performance of the proposed system.

Keywords: Tracking, Intelligent Driver Assistance System

1.0 INTRODUCTION
Vehicle detection and tracking have become increasingly important in driver assistance systems, leading towards the advancement of autonomous vehicular technology. Sudden braking, slowing down, or uniform driving of a preceding vehicle must be responded to immediately by the driver to prevent casualties. Owing to the decreasing cost of optical sensors and the increasing speed of microprocessors, vision-based vehicle detection is a feasible way to improve driving safety and to make more efficient use of driving space. Additionally, the assistance of vehicle tracking can further reduce the processing time of vehicle detection and lessen the interference of environmental noise.
Therefore, a fast and effective vehicle scanning technique is required to automatically locate a moving vehicle in the frontal view of a traffic scene, towards the goal of autonomous vehicle driving. Various vehicle detection and tracking techniques have been reported over the past decade. One scheme detected vehicles through a voting approach for the global contour symmetry axis together with noise filtering; vehicle segmentation then extracted the leading vehicle with a bounding box, and the vehicle was tracked continuously using local symmetry axis detection. Other work detected vehicles in a traffic scene according to high vertical symmetry using a multiresolution method, computing symmetries on binarized Sobel modules, almost-vertical edges and almost-horizontal edges. Another method localized preceding vehicles by constructing a geometric model and an energy function describing the vehicle's shape and symmetry. However, such techniques, which depend solely on symmetry detection, may not generate good detection results, and searching the entire image for the symmetry axis is time-consuming. A further approach proposed a preceding vehicle detection and tracking system comprising road area finding, vehicle footprint extraction and vehicle bounding box extraction. A multi-sensor fusion approach has also been implemented for vehicle detection, in which shadow and symmetry detection were performed and the information combined for vehicle tracking. Another method detected preceding vehicles based on multi-characteristics fusion, first identifying candidates using the bottom shadow and then verifying the region of interest through the fusion of vehicle margin characteristics, texture characteristics and vertical edge symmetry; the vehicle location was finally acquired from the margin information. Nonetheless, the shadow segmentation given in these papers provided poor segmentation results.
Moreover, an edge-correlation method was used to recognize preceding vehicles on highways. It required the difference between the current image and several previous frames, and hence the computation of the correlation image was time-consuming. Another work presented an integrated framework for on-road vehicle detection through knowledge fusion of appearance, geometry and motion information over image sequences, which required prior training of vehicle appearance. Furthermore, vehicles have been detected with gradient-based methods and AdaBoost classification, which requires an off-line learning process. Finally, an on-road vehicle motion analysis of a traffic scene was performed in two steps, i.e. incoming vehicle detection and vehicle motion analysis; it produced false detections when the preceding vehicle maintained a constant speed or halted.
Figure 1: The proposed vehicle detection and tracking system. (Flowchart: when <Track = 0>, image sequences pass through horizon localization, shadow edge detection with low intensity mapping and edge grouping, brake-light verification and region extraction; when <Track = 1>, subsequent frames are handled by Kalman filtering, sub-window prediction and drawing.)

In this paper, a preceding vehicle detection and tracking system is proposed, as shown in Figure 1. The proposed system comprises three main modules: (i) vehicle detection based on shadow and brake-light detection, (ii) vehicle verification, and (iii) vehicle tracking using Kalman filtering. In the vehicle detection module, a frontal vehicle is detected by first localizing the shadow edge and verifying it with red components, which usually correspond to the brake lights. The possible regions are then extracted with a bounding box for vehicle verification. During the verification process, symmetry axes for the contour and the brake lights are first detected, followed by an asymmetry contour analysis. Eventually, the center of the vehicle is updated using Kalman filtering in order to predict a sub-window, which limits the scanning area of vehicle detection. Once the tracking mode is turned on, the proposed system undergoes closed-loop vehicle detection, which reduces the computational time. If the vehicle is mis-tracked, vehicle detection is reprocessed on the entire image.

2.0 VEHICLE DETECTION
Vehicle detection plays a critical role in automatically localizing a vehicle in the frontal view of an image. Its purposes are to improve vehicle safety and to keep a certain distance from the frontal vehicle so as to avoid a severe crash. The proposed vehicle detection method is divided into three stages, i.e. horizon localization, shadow detection, and brake-light detection.

2.1 Horizon Localization
A horizon line is localized using the vertical mean distribution to split a traffic scene image I(x, y) into a sky region and a road region, where x and y denote the row and column of the original image. The vertical mean distribution, shown in Figure 2(b), is measured by averaging the gray values of each row of I(x, y). The threshold for the horizon line is obtained through a minimum search along the vertical mean curve: the first minimum from the upper part of the curve is taken as the regional dividing line. A road image (R_roi), shown in Figure 2(a), is generated after applying the horizon line threshold, where all vertical coordinates below the threshold are discarded. This stage is applied only when the condition <Track = 0> is true.

Figure 2: (a) R_roi, (b) Vertical mean distribution.

2.2 Shadow Detection
A shadow is usually found underneath a vehicle in an image. In this stage, a shadow detection method is proposed to quickly scan for the low intensity values in the image. The proposed shadow detection method has three steps, i.e. edge detection, low intensity mapping and edge grouping.

2.2.1 Edge Detection
First, a mean filter is applied to the R_roi map to smooth the pixel values and enlarge shadow segments. Next, edge detection with a Sobel filter is performed on the blurred image to obtain the contour of the vehicle as well as its shadow. Subsequently, an edge threshold (T_1) is applied to the gradient edge image to produce a binarized edge map BEM(x, y), as demonstrated in Figure 3.

Figure 3: Binarized edge map (BEM).

2.2.2 Low Intensity Mapping
Shadows always appear in an image with low intensity values. To produce a low intensity map LIM(x, y)
automatically, the mean value (μ) and standard deviation (σ) of I(x, y) are first computed, and the map is then determined as shown in Figure 4(a) using equation (1):

    LIM(x, y) = 1, if I(x, y) < μ − σ
                0, otherwise                                  (1)

Subsequently, the possible shadow map PSM(x, y), illustrated in Figure 4(b), is obtained by merging BEM(x, y) and LIM(x, y) with an AND operation:

    PSM(x, y) = 1, if LIM(x, y) == 1 AND BEM(x, y) == 1
                0, otherwise                                  (2)

Figure 4: (a) LIM(x, y), (b) PSM(x, y).

2.2.3 Edge Grouping
After the acquisition of PSM(x, y), an edge grouping stage is performed to acquire the possible shadow edge map PSEM(x, y). First, a noise elimination technique groups connected non-zero edges using a pixel count threshold (T_2) and, at the same time, discards connected components having fewer than T_2 pixels. To generate PSEM(x, y), only the bottom edge of each connected component is retained, since shadows always exist underneath a vehicle. The remaining shadow edges are then labeled accordingly, as shown in Figure 5.

Figure 5: Possible shadow edge map (PSEM).

2.3 Brake Lights
Brake lights have far more red components than other parts of a vehicle. Figure 6(a) depicts the resulting red component map. PSEM(x, y) is checked by accumulating the number of red-component pixels for each labeled edge. If an edge has more than T_3 red-component pixels, it is defined as a possible vehicle region (PVR), as shown in Figure 6(b).

Figure 6: (a) Red component map, (b) Possible vehicle region (PVR).

3.0 VEHICLE VERIFICATION
In order to detect a preceding vehicle, only the largest PVR is considered and verified for the tracking system. A bounding box is applied to the possible vehicle region based on the edge width obtained from PSEM(x, y). Figure 7 shows the extraction of the PVR with the largest area. The extracted region then undergoes symmetry and asymmetry analysis for PVR verification.

Figure 7: PVR extraction.

3.1 Symmetry Analysis
Two symmetry analyses are performed, i.e. contour symmetry and brake-light symmetry detection. Assume that the extracted region has i rows and j columns. A binary contour (C_PVR) is first constructed using a Sobel filter, as shown in Figure 8(a). For each non-zero pixel (i, j1), the contour symmetry axis is computed with respect to every other non-zero pixel (i, j2) on the same row:

    center = (j1 + j2) / 2,
    ContourSym(center) = ContourSym(center) + 1,
        if C_PVR(i, j1) == 1 AND C_PVR(i, j2) == 1            (3)

    C_sym = arg max_j ContourSym(j)                           (4)

where ContourSym is an accumulator and C_sym is the contour symmetry axis, at which the maximum peak occurs in Figure 8(b). On the other hand, brake lights are always located symmetrically at the left and right corners of a vehicle. Figure 8(c) shows the detected binary brake lights (B_PVR) in the extracted region. The same symmetry voting scheme is applied to the brake-light symmetry analysis with an accumulator called BrakeSym. The brake-light symmetry axis is defined as:

    B_sym = arg max_j BrakeSym(j)                             (5)
where B_sym is the maximum peak obtained in the accumulator, as demonstrated in Figure 8(d). To verify the validity of the PVR, the following condition must be obeyed:

    Track = 1, if |B_sym − C_sym| ≤ ε
            0, otherwise                                      (6)

where ε is the allowable difference between the brake-light symmetry axis and the contour symmetry axis.

Figure 8: (a) Binary contour (C_PVR), (b) ContourSym, (c) Binary brake lights (B_PVR), (d) BrakeSym.

3.2 Asymmetry Analysis
As can be seen in Figure 8(a), the edges of the vehicle region are displayed mostly vertically rather than horizontally. Instead of analyzing the grayscale symmetry, the vehicle region is turned into an edge difference map by differencing every column against the first column of the PVR, as shown in Figure 9(a):

    D_PVR(i, j) = G(i, j) − G(i, 1)                           (7)

Next, a Sobel filter is applied to the difference map to obtain the horizontal contour (D_PVR), as shown in Figure 9(b). For each non-zero pixel (i1, j), the contour asymmetry axis is computed with respect to every other non-zero pixel (i2, j) in the same column:

    center = (i1 + i2) / 2,
    ContourAsym(center) = ContourAsym(center) + 1,
        if D_PVR(i1, j) == 1 AND D_PVR(i2, j) == 1            (8)

    D_asym = arg max_i ContourAsym(i)                         (9)

where ContourAsym is an accumulator and D_asym is the contour asymmetry axis. By combining the symmetry and asymmetry axes, the center coordinate of the PVR is found as (C_sym, D_asym).

Figure 9: (a) Edge difference map, (b) D_PVR, (c) ContourAsym.

4.0 VEHICLE TRACKING
Vehicle tracking supplements a driver assistance system for fast vehicle detection. As shown in Figure 10, the vehicle tracking algorithm runs in a closed loop while <Track = 1> is true. Once the system loses track, vehicle detection is carried out on the whole image.

Figure 10: The proposed vehicle tracking system. (Flowchart: while <Track = 1>, state verification of the center (X, Y), Kalman filter parameter update, sub-window prediction and region extraction; return to full-image detection when <Track = 0>.)
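For illustration, the symmetry voting of Section 3.1 (Eqs. 3–5) and the verification rule of Eq. (6) can be sketched as below. This is a minimal hypothetical NumPy implementation, not the authors' code; the function names and the default tolerance value are assumptions for the sketch.

```python
import numpy as np

def symmetry_axis(binary_map):
    """Vote for a vertical symmetry axis in a binary map (C_PVR or B_PVR).

    For every pair of non-zero pixels (i, j1) and (i, j2) on the same row,
    one vote is cast for the column midway between them (Eq. 3); the axis
    is the column with the most votes (Eqs. 4 and 5).
    """
    rows, cols = binary_map.shape
    votes = np.zeros(cols)  # accumulator, e.g. ContourSym or BrakeSym
    for i in range(rows):
        (js,) = np.nonzero(binary_map[i])
        for a in range(len(js)):
            for b in range(a + 1, len(js)):
                center = (js[a] + js[b]) // 2
                votes[center] += 1
    return int(np.argmax(votes)), votes

def verify(b_sym, c_sym, eps=5):
    """Eq. (6): accept the PVR only when both axes agree within eps pixels."""
    return 1 if abs(b_sym - c_sym) <= eps else 0
```

The same `symmetry_axis` routine serves both the contour and the brake-light maps, which is why the paper can reuse one voting scheme with two accumulators.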
The Kalman filter is popular in vision tracking systems due to its ability to predict from previous information. The linear state and measurement equations are defined as:

    x_{k+1} = A_{k+1,k} x_k + B_{k+1,k} u_k + w_k             (10)
    y_k = H_k x_k + v_k                                       (11)

where the state vector x contains the X-position and Y-position of the vehicle's center; A_{k+1,k} is the transition matrix bringing state x_k from time k to k+1; B_{k+1,k} is the matrix connecting the acceleration vector u_k to the state variables; w_k is the process noise; y_k is the measurement output; H_k is the observation model that maps the true state space to the observed space; and v_k is the measurement noise. The initial conditions for x̂_k and P_k are given as:

    x̂_0 = E[x_0]                                             (12)
    P_0 = E[(x_0 − E[x_0])(x_0 − E[x_0])^T]                   (13)

In Figure 11, P_k^− is the a priori estimate error covariance and P_k is the a posteriori estimate error covariance. The estimated state x̂_k is
updated with the estimated gain G_k. The predicted state x̂_k^− is used to draw a sub-window on the next frame.

Figure 11: Kalman filtering method (prediction and correction steps).

5.0 RESULTS AND DISCUSSION
In this section, results are presented to evaluate the performance of the proposed algorithm. Two video sequences were captured under sunny conditions at around 12 p.m. using a 6-megapixel Canon IXUS 65. Video #1 was taken in a highway environment, while video #2 was taken in a rural area. The parameters were initialized as T_1 = 50 and T_2 = T_3 = 12 in these experiments. The estimation of the X-position and Y-position of the vehicle's center is demonstrated in Figure 12. Briefly, the desired point is the expected center of the vehicle region, the actual point is the position determined by the vehicle verification, and the predicted point is the position forecast by the Kalman filter. As can be observed in Figure 12(a) for video #1, the actual X-position and Y-position follow the desired ones within a tolerance of ±5 pixels. The same performance is obtained in Figure 12(b) for video #2 with the proposed vehicle verification techniques. As shown in Figures 13 and 14, the proposed vehicle detection and tracking system performs well, since the preceding vehicle is detected and tracked consecutively. The detected center of the vehicle is marked with a * symbol, while another symbol denotes the predicted center tracked with the Kalman filter. As listed in Table 1, the detection time taken with the assistance of vehicle tracking was much faster (~90%) than vehicle detection alone on the traffic scene. The recommended safe distance is the sum of the mean-tracking-time distance [~0.12 × speed] and the vehicle stopping distance [speed^2/20] with respect to the current speed, as reported in (June, 2000).
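The prediction–correction recursion of Eqs. (10)–(13) that drives the sub-window can be sketched as below. This is a minimal illustrative implementation, not the paper's calibrated tracker: it assumes a constant-velocity state [cx, cy, vx, vy], omits the acceleration input B u_k for simplicity, and the noise covariances Q and R are made-up values.

```python
import numpy as np

dt = 1.0  # frame interval (illustrative)
# Transition matrix A_{k+1,k} for a constant-velocity model
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
# Observation model H_k: only the centre (cx, cy) is measured
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)  # process-noise covariance (w_k), assumed
R = 4.0 * np.eye(2)   # measurement-noise covariance (v_k), assumed

def kalman_step(x, P, z):
    """One predict/correct cycle; z is the measured centre from verification."""
    # Prediction: a priori state x_k^- and error covariance P_k^-
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Correction: gain G_k, a posteriori state and covariance
    G = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + G @ (z - H @ x_pred)
    P_new = (np.eye(4) - G @ H) @ P_pred
    return x_new, P_new, x_pred  # x_pred[:2] centres the next sub-window
```

The predicted position x̂_k^−[:2] would define the centre of the search sub-window on the next frame, bounding the detection scan as described in Section 4.0.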
Table 1: Time taken for vehicle detection and tracking

Video | Total Frames | Detection + Tracking Time (s) | Mean Time (s)
#1    | 170          | 1.24~1.82                     | 0.11~0.13 (mean 0.12)
#2    | 190          | 1.25~1.82                     | 0.14~0.16 (mean 0.15)

6.0 CONCLUSION
A vision-based preceding vehicle detection and tracking system has been presented in this paper. The frontal vehicle is first detected based on shadow and brake-light detection. A bounding box extraction is applied to the possible vehicle region for symmetry verification. Brake-light symmetry and contour symmetry analyses are performed, followed by contour asymmetry analysis, in order to obtain the center of the possible vehicle region. This center is then passed to the tracking system to predict the movement of the vehicle. The advantages of this vehicle detection and tracking system are (i) the reduction of vehicle searching time, and (ii) improved detection performance, since the search area is bounded. However, the method cannot be applied at night or in poor lighting conditions, nor to the detection of distant vehicles.

REFERENCES
Figure 12: The estimation of X-position and Y-position for (a) Video #1, (b) Video #2.

Figure 13: Results of vehicle detection and tracking for video #1: (a) Frame 5, (b) Frame 50, (c) Frame 100, (d) Frame 140, (e) Frame 170.

Figure 14: Results of vehicle detection and tracking for video #2: (a) Frame 5, (b) Frame 50, (c) Frame 100, (d) Frame 140, (e) Frame 190.