Preceding Vehicle Detection and Distance Estimation for Lane Change Warning System

U. Iqbal, M.S. Sarfraz
Computer Vision Research Group (COMVis), Department of Electrical Engineering, COMSATS Institute of Information Technology, M.A. Jinnah Campus, Defense Road off Raiwind Road, Lahore, Pakistan.
E-mail: {uiqbal, ssarfraz}@ciitlahore.edu.pk

Abstract

Vision-based preceding vehicle detection systems are very promising for driver assistance in intelligent vehicles. In this paper, a vision-based preceding vehicle detection approach capable of distance estimation is presented that is relatively unaffected by shadows and lighting variations. The system acquires the rear view using two cameras mounted on the side mirrors of a vehicle. The Hough transform is used to detect the near lane marking, and neighbouring vehicles are detected robustly using gray-level statistics. Approaching vehicles are then detected efficiently using an adaptive region of interest. Finally, the distance between the approaching and host vehicles is estimated using a perspective camera model. The distance estimation results of the presented system are precise enough to help avoid vehicle collisions and accidents.

Index Terms: vehicle detection, neighbouring/side/closing vehicles, approaching vehicle, distance estimation, driver assistance, lane change, warning system.

I. INTRODUCTION

Recently, extensive research has been carried out in the field of driver assistance systems to improve the safety and efficiency of transportation. Vision-based assistance systems take several visual cues of vehicles as criteria to distinguish vehicles from other objects and provide information for lane change warning systems, front vehicle collision warning systems, night-time light-beam controlling systems and auxiliary night vision systems [6,11].

In practice, when drivers change lane, the only available resources for judging neighbouring and approaching vehicles are the side mirrors. Human negligence and the side-vision blind spot are among the main causes of car accidents and collisions. To reduce the number of such accidents, this paper focuses on the detection of preceding vehicles and the estimation of their distance for lane change warning systems, as these play a vital role in driver safety assistance.

Various methods have been proposed to detect lane markings and preceding vehicles for lane change warning systems, using cameras mounted at different positions on the vehicle. Broggi et al. [1] utilized the characteristics of symmetry, bounding-box shape and road-region constraints to differentiate vehicles from other objects in the scene. Huan Shen et al. [2] performed a five-step scheme to locate lane markings under bad road scenes: a k-means clustering based algorithm is employed to localize the lane marking, followed by edge detection, matching, searching and linking steps. Krips, Velten and Kummert [3] used a shadow-based classification algorithm and an adaptive template matching method for vehicle detection; the method is characterized by a self-adjusting template and a matching score that allows false targets to be rejected. Bing-Fei Wu et al. [4] used Sobel edge pixels and gray intensity for their lane marking and vehicle detection approaches, and an image-coordinate model is utilized to estimate the distance to the detected vehicles. Sun et al. [5] used a Gabor filter bank for feature extraction in vehicle detection; to enhance detection accuracy, they optimized these Gabor filters with genetic algorithms.
Miyoshi et al. [6] presented a temporal median filter to estimate the latest background image; the technique was applied to video obtained from a handheld camera and detects moving objects. Chang and Cho [7] developed a real-time vision-based vehicle detection system using an online boosting algorithm. Kim and Cohn [8] used pure camera vision to detect vehicles between two consecutive images, analyzing the feature vector to locate the moving vehicle. Wen and Kou [9] detect features, match the same features between two consecutive frames and calculate feature vectors using Speeded Up Robust Features (SURF); these feature vectors are then analyzed to determine whether side moving vehicles are present.

In this paper, a vision-based approach is proposed for detecting side and approaching vehicles, providing distance estimation and warning alerts to the driver for assistance during lane changes. The reliability and speed of vehicle detection are improved by segmenting the road region and by avoiding the detection of multiple lane markings on the road. The proposed technique is also capable of detecting neighbouring vehicles when no lane marking information is available or when the lane markings are covered by another vehicle.

This paper is organized as follows. The overall system architecture is described in Section II. Section III illustrates the detection algorithm, including lane marking detection, vehicle detection and the technique for distance estimation. Experimental results of vehicle detection and distance estimation are presented in Section IV, followed by discussion and conclusion in Section V.

II. SYSTEM ARCHITECTURE

In this paper, a preceding vehicle detection approach is proposed using two CCD cameras mounted on the side mirrors of a vehicle, as shown in Fig. 1, where θ1 and θ2 are the angles between the vehicle and the cameras.

Fig. 1 CCD cameras mounted on the side mirrors of the vehicle.

Other than human negligence, side-vision blind spots are also one of the major causes of car accidents. Whenever a driver looks in a mirror there is a side-vision blind spot [9], as shown in Fig. 2. The lighter area in Fig. 2 can be viewed through both side mirrors, while the darker area is the blind-spot region. If a vehicle is moving in the blind-spot area at the moment the driver changes lane, the driver cannot see it in either mirror and a collision may occur. This blind-spot area should therefore also be covered by the cameras; for this purpose the angles θ1 and θ2 between the vehicle and the cameras in Fig. 1 must be set appropriately. Sample images captured by the right-side camera are shown in Fig. 3, where the moving objects are neighbouring and approaching vehicles.

Fig. 2 Side-vision blind spot area.

Fig. 3 Sample images taken by the right CCD camera: (a) a neighbouring vehicle; (b) an approaching vehicle.

III. DETECTION ALGORITHM

The flow chart of the proposed lane marking and vehicle detection is illustrated in Fig. 4. In any lane change warning system, the first step is to detect the nearest lane marking. After the near lane marking has been classified as a true candidate, the system determines whether a closing vehicle is present. If there is a closing vehicle, the driver is warned not to change lane. If there is no closing vehicle in the neighbouring lane, an adaptive region of interest (ROI) is set to detect approaching vehicles, without detecting the far lane markings, and their distance is estimated.

Fig. 4 The flow chart of lane marking and vehicle detection (near lane marking detection → true candidate check → closing vehicle detection and warning, or approaching vehicle detection → distance estimation).

A. Near Lane Marking Detection

At the beginning of detection, a ROI is defined for lane marking detection, as previously done in [4]; it is labelled as the area ABCD in Fig. 5. Within this ROI, the inner boundary of the near lane marking is located from the vertical edge pixels using the Hough transform for line detection, as shown in Fig. 6. In many cases, however, the near lane marking is covered by a closing vehicle (Fig. 7), so relying only on the near lane marking is not enough for an efficient system, and it is also probable that the edges of the closing vehicle will be classified as the lane. It is therefore necessary to check the detected lane marking candidate to decide whether it really is a lane marking. If the near lane marking is classified as a true candidate, the system starts searching for closing vehicles; if it is not classified as a true candidate, that region is classified as a closing vehicle.

Fig. 5 The region of interest for lane marking detection.

Fig. 6 Detected inner boundary of the lane.

Fig. 7 Near lane marking covered by a closing vehicle.

B. Closing Vehicle Detection in the Neighbouring Lane

The closing vehicle in the neighbouring lane is detected using a gray-level comparison similar to [4]. When there is a neighbouring vehicle, a large amount of gray intensity that differs from the road appears. Therefore, to estimate the gray-intensity histogram of the road surface, a ROI for the road is arranged, shown as ROI-road in Fig. 8. To avoid the influence of lane markings, the width of ROI-road is taken larger than its height. The gray level with the highest count is denoted I_roadmax (Fig. 9), and the gray-intensity range [I_roadmax − α, I_roadmax + α] is considered the gray-intensity region of the road surface.

Fig. 8 The check regions for closing vehicle detection.

Fig. 9 The gray-intensity histogram of the road surface.

Two ROIs, RN1 and RN2, are taken (Fig. 8) to detect the neighbouring vehicle. The number of pixels P in RN1 and RN2 not belonging to the road-intensity region is counted to obtain the ratio

ρ = P / N    (1)

where N is the total number of pixels in RN1 and RN2, respectively. If the ratio ρ_RN1 or ρ_RN2 is greater than a certain threshold τ1, a significant amount of the gray intensity differs from the road and a neighbouring vehicle is present.

As mentioned in Section III-A, the near lane markings are sometimes covered by the closing vehicle, in which case the edges of the closing vehicle may be classified as the lane marking. To differentiate the lane marking from the edges of a closing vehicle, an adaptive ROI (RN3) is taken whose diagonal is equal to the length of the detected lane marking candidate, as shown in Fig. 10. The ratio ρ is calculated for RN3 using Eq. (1). If ρ_RN3 ≤ τ2, the candidate is classified as a true lane marking; otherwise it is considered a closing vehicle and the driver is warned not to change lane.

Fig. 10 ROI for the detection of the true lane marking candidate.
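As an illustration of the lane-marking step of Section III-A, the following Python sketch shows how the ROI-restricted edge extraction and Hough line detection could be realised. It is not the authors' implementation (which was written in Matlab): OpenCV is assumed, Canny edges stand in for the vertical-edge extraction described in the text, and the ROI rectangle and Hough thresholds are illustrative placeholders.

```python
import cv2
import numpy as np

def detect_near_lane_marking(gray, roi_rect):
    """Locate the inner boundary of the near lane marking inside the
    lane-detection ROI using edge pixels and the Hough transform.

    gray     : single-channel image from the side-mirror camera
    roi_rect : (x, y, w, h) of the ABCD search area (placeholder values)
    Returns the strongest line segment as (x1, y1, x2, y2) in full-image
    coordinates, or None if no candidate is found.
    """
    x, y, w, h = roi_rect
    roi = gray[y:y + h, x:x + w]

    # Edge map; Canny stands in here for the vertical-edge extraction
    # described in the paper.
    edges = cv2.Canny(roi, 50, 150)

    # Probabilistic Hough transform; the thresholds are tuning
    # parameters, not values reported in the paper.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=int(0.4 * h), maxLineGap=10)
    if lines is None:
        return None

    # Keep the longest segment as the lane-marking candidate.
    x1, y1, x2, y2 = max(lines[:, 0, :],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return (x1 + x, y1 + y, x2 + x, y2 + y)
```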
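The gray-level road model and the ratio test of Eq. (1) used in Section III-B can be sketched as follows. This is a minimal NumPy rendering rather than the authors' code: α = 30 and τ1 = 0.65 are the values reported in Section IV, while the function names and region handling are assumptions.

```python
import numpy as np

def road_intensity_range(roi_road, alpha=30):
    """Gray-level model of the road surface from ROI-road: the histogram
    peak I_roadmax defines the range [I_roadmax - alpha, I_roadmax + alpha]."""
    hist = np.bincount(roi_road.ravel(), minlength=256)
    i_roadmax = int(np.argmax(hist))
    return i_roadmax - alpha, i_roadmax + alpha

def non_road_ratio(region, road_range):
    """Eq. (1): rho = P / N, the fraction of pixels in a check region
    (RN1, RN2 or RN3) whose intensity falls outside the road range."""
    low, high = road_range
    p = np.count_nonzero((region < low) | (region > high))
    return p / region.size

def closing_vehicle_present(rn1, rn2, road_range, tau1=0.65):
    """Declare a closing vehicle when either check region deviates
    sufficiently from the road surface (tau1 as reported in Section IV)."""
    return (non_road_ratio(rn1, road_range) > tau1 or
            non_road_ratio(rn2, road_range) > tau1)
```

The same non_road_ratio helper can be applied to RN3 with the threshold τ2 to validate the lane-marking candidate.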

C. Approaching Vehicle Detection

If there is no closing vehicle, the system starts the detection of approaching vehicles without detecting the far lane marking. The approaching vehicle is detected using an adaptive ROI (ROI-approach), as shown in Fig. 11. The ROI is set using the information of the near lane marking: at the start of the detection procedure, the width (in pixels) of ROI-approach is equal to 80% of the distance of the pixel lying on the lane marking from the first pixel of the row, and the height of the ROI is set to 15 pixels. The system then checks the amount of gray intensity in ROI-approach that differs from the intensity of the road surface using Eq. (1). If ρ_ROI-approach > τ3, there is an approaching vehicle (Fig. 11). If no approaching vehicle is detected, ROI-approach moves upward and its width is decreased by 3%. Since the relevant information about vehicles only appears in the bottom half of the image, ROI-approach keeps moving upward until half of the image height, unless it detects a vehicle. An example of the image area covered by ROI-approach is shown in Fig. 12.

Fig. 11 Detected approaching vehicle.

Fig. 12 Region covered by ROI-approach.

D. Distance Estimation

If an approaching vehicle is detected, the system estimates the distance between the host and the approaching vehicle. To estimate the distance, a perspective camera model is applied [11], as shown in Fig. 13. The distance between the vehicles is projected onto the image plane at horizontal and vertical coordinates (u, v). Only the vertical mapping model is used, as only the vertical model matters for this application; it assumes that the road is flat. The following parameters are used for distance estimation:

D : distance from the preceding vehicle
D_cam : distance of the preceding vehicle from the camera
D_rear : distance between the camera and the rear end of the host vehicle
height_cam : height of the camera above the road surface
Height_y : elevation of the detected portion of the preceding vehicle above the road
v : vertical image coordinate (pixels)
HEIGHT : vertical size of the CCD image (pixels)
F_v : vertical focal length (pixels)
θ : incident angle of the detected portion of the preceding vehicle in the camera, relative to the vehicle pitch axis

Fig. 13 Vertical road and mapping geometry.

As illustrated in Fig. 13, the vertical mapping geometry depends mainly on the height of the camera above the road surface (height_cam) and the elevation of the detected portion of the vehicle above the ground (Height_y), both in metres. The pitch angle θ relative to the local tangential plane, for each scan line, is

θ = atan(v / F_v)    (2)

Using the value evaluated in Eq. (2), the distance between the approaching vehicle and the camera corresponding to v is obtained as

D_cam = (height_cam − Height_y) / tan(θ)    (3)

After applying a coordinate change (so that v is measured from the top of the image rather than from the optical centre), the distance equation becomes

D_cam = (height_cam − Height_y) / ((2v − HEIGHT) / (2·F_v))    (4)

Finally, the actual distance D between the rear end of the host vehicle and the approaching vehicle is obtained as

D = D_cam − D_rear    (5)

where D_rear is the distance of the camera from the rear end of the host vehicle.
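A possible rendering of the upward-scanning ROI-approach loop of Section III-C is given below. It is a sketch under assumptions: the paper does not specify the vertical step of the scan, so the ROI is moved up by one band height per iteration; τ3 = 0.4 and the 15-pixel band height are the values from Section IV; everything else (function names, the lane_x_at_row callable) is illustrative.

```python
import numpy as np

def detect_approaching_vehicle(gray, lane_x_at_row, road_range,
                               tau3=0.4, roi_height=15):
    """Upward-scanning adaptive ROI of Section III-C.

    gray          : single-channel image from the side-mirror camera
    lane_x_at_row : callable row -> column of the near lane marking
                    (e.g. from the Hough line of Section III-A)
    road_range    : (low, high) road-intensity range from ROI-road
    Returns the bottom row of the band in which an approaching vehicle
    was found, or None if the scan reaches half the image height.
    """
    low, high = road_range
    h, _ = gray.shape
    bottom = h                       # start at the bottom of the image
    width_factor = 0.80              # initial width: 80% of the lane-marking offset
    while bottom - roi_height >= h // 2:
        lane_x = int(lane_x_at_row(bottom - 1))
        roi_w = max(int(width_factor * lane_x), 1)
        roi = gray[bottom - roi_height:bottom, :roi_w]
        # Eq. (1) applied to ROI-approach: fraction of non-road pixels.
        rho = np.count_nonzero((roi < low) | (roi > high)) / roi.size
        if rho > tau3:
            return bottom            # approaching vehicle detected in this band
        bottom -= roi_height         # move the ROI upward by one band (assumed step)
        width_factor *= 0.97         # and shrink its width by 3%
    return None
```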
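The distance computation of Eqs. (2)-(5) follows directly from the definitions above. The sketch below uses the parameter values reported in Section IV (camera height 1 m, Height_y = 0.8 m, D_rear = 1.83 m); the vertical focal length in pixels is camera dependent and must come from calibration, so it is left as an argument.

```python
import math

def estimate_distance(v, image_height, f_v,
                      height_cam=1.0, height_y=0.8, d_rear=1.83):
    """Perspective-model distance estimation, Eqs. (2)-(5).

    v            : image row (pixels, measured from the top) of the
                   detected portion of the approaching vehicle
    image_height : vertical image size HEIGHT (pixels)
    f_v          : vertical focal length in pixels (from calibration;
                   the paper only reports a 6 mm lens)
    Defaults follow Section IV: camera 1 m above the road, detected
    head lights 0.8 m above the road, D_rear = 1.83 m.
    Returns D in metres, measured from the rear end of the host vehicle.
    """
    # Coordinate change implied by Eq. (4): measure the row from the
    # optical centre, v_c = v - HEIGHT/2.
    v_c = v - image_height / 2.0
    if v_c <= 0:
        raise ValueError("detected point must lie below the horizon row")

    # Eq. (2): pitch angle of the scan line.
    theta = math.atan(v_c / f_v)

    # Eq. (3): flat-road mapping from pitch angle to camera distance.
    d_cam = (height_cam - height_y) / math.tan(theta)

    # Eq. (5): distance from the rear end of the host vehicle.
    return d_cam - d_rear
```

As a worked example with hypothetical calibration values F_v = 800 px and HEIGHT = 480 px, a detection at row v = 260 gives D_cam = 0.2 / ((2·260 − 480) / (2·800)) = 8 m by Eq. (4), so D ≈ 8 − 1.83 ≈ 6.2 m.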

IV. EXPERIMENTAL RESULTS

The system was implemented on an HP Intel(R) Core(TM) 2 Duo 2.4 GHz computer using Matlab 2009b. In our system, the focal length of both cameras is 6 mm, the height of the camera above the road surface (height_cam) is 1 m and α is set to 30. Height_y is taken to be 0.8 m, as the proposed system mostly detects the head lights of the preceding vehicle. Based on experiments, τ1 is set to 0.65, and τ2 and τ3 are set to 0.4. The distance between the camera and the rear end of the car is 6 ft, so D_rear is equal to 1.83 m. The vehicle detection and distance estimation time of the presented approach is short enough for real-time operation. The estimated distances are shown in the top left of the output images in Fig. 14. The results show that the proposed algorithm is robust against shadows and under bad road scenes. Table 1 compares the real-world distances between the host and approaching vehicles with the distances estimated from image space.

Fig. 14 Approaching vehicle detection and distance estimation results (a)-(c).

Table 1 Comparison between real and estimated distance (metres)

No.  Estimated Distance  Real Distance
1    29.26               31.3
2    22.28               26.3
3    24.19               21.8
4    18.56               17.2
5    8.30                8.0

V. CONCLUSION AND FUTURE WORK

In this paper, we have presented a lane change warning system that detects preceding vehicles and estimates their distance from the host vehicle. The system is robust against shadows and against disturbance due to lane markings on the road surface, and it detects closing and neighbouring vehicles efficiently. The image processing time is fast enough for real-time application and the estimated distances are precise enough to help avoid collisions. The results are encouraging in daytime conditions. In future work, we plan to extend the algorithm to a night-time lane change warning system, which would help reduce the occurrence of accidents at night as well.

REFERENCES

[1] A. Broggi et al., "Visual perception of obstacles and vehicles for platooning," IEEE Trans. Intell. Transport. Syst., vol. 1, no. 3, pp. 164-176, Sept. 2000.
[2] Huan Shen et al., "Intelligent vehicles oriented lane detection approach under bad road scene," IEEE Ninth International Conf. on Computer and Information Technology, Oct. 2009.
[3] M. Krips, J. Velten, A. Kummert, "AdTM tracking for blind spot collision avoidance," in Proc. IEEE Intelligent Vehicles Symposium, pp. 544-548, Jun. 2004.
[4] B.-F. Wu, W.-H. Chen, C.-W. Chang, C.-J. Chen, M.-W. Chung, "A new vehicle detection with distance estimation for lane change warning systems," in Proc. IEEE Intelligent Vehicles Symposium, pp. 698-703, Jun. 2007.
[5] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection using evolutionary Gabor filter optimization," IEEE Trans. Intell. Transport. Syst., vol. 6, no. 2, pp. 125-137, 2005.
[6] M. Miyoshi, J.-K. Tan and S. Ishikawa, "Extracting moving objects from a video by sequential background detection employing a local correlation map," IEEE International Conference on Systems, Man and Cybernetics, 2008.
[7] W.-C. Chang and C.-W. Cho, "Online boosting for vehicle detection," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 3, June 2010.
[8] Z.-W. Kim and T.-E. Cohn, "Pseudoreal-time activity detection for railroad grade-crossing safety," IEEE Transactions on Intelligent Transportation Systems, vol. 5, no. 4, Dec. 2004.
[9] W.-C. Chung, K.-J. Hsu, "Vision based side vehicle detection from a moving vehicle," International Conference on System Science and Engineering, pp. 553-558, Jul. 2010.
[10] D. Duan, M. Xie, Q. Mo, Z. Han, Y. Wan, "An improved Hough transform for line detection," International Conference on Computer Application and System Modeling, vol. 2, pp. 354-357, Oct. 2010.
[11] P.F. Alcantarilla et al., "Night time vehicle detection for driving assistance lightbeam controller," IEEE Intelligent Vehicles Symposium, pp. 291-296, Jun. 2008.
[12] E.D. Dickmanns and B.D. Mysliwetz, "Recursive 3-D road and relative ego-state recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, Feb. 1992.