Lane Markers Detection based on Consecutive Threshold Segmentation

ISSN 1746-7659, England, UK
Journal of Information and Computing Science, Vol. 6, No. 3, 2011, pp. 207-212

Huan Wang +, Mingwu Ren, Sulin Shao
School of Computer Science, Nanjing University of Science & Technology, Nanjing 210094, P.R. China
(Received March 29, 2011, accepted April 20, 2011)
+ Corresponding author. E-mail address: wanghuan_ywzq@tom.com.

Abstract. This paper proposes a simple and robust lane markers detection method for intelligent vehicle navigation. It does not require computing an inverse perspective map. The method uses multiple threshold segmentation instead of single threshold segmentation, and straight and curved lane markers are extracted directly from Run-Length Accumulation (RLA) images. It performs well in various complex conditions and costs less than 50 ms for a 352 by 288 image. Experiments on many kinds of real complex image sequences demonstrate the effectiveness and efficiency of the proposed method.

Keywords: automatic land vehicle; lane markers detection; weighted Hough transformation

1. Introduction

Lane markers detection is a key technique for automatic land vehicle (ALV) navigation, intelligent traffic, driving safety and so on. Most previous research has focused on detecting lane markers on structured roads such as highways. However, lane detection on non-structured or semi-structured roads is a more challenging task due to parked and moving vehicles, poor-quality lines, shadows cast from buildings, trees and other vehicles, sharp curves, irregular lane shapes, emerging and merging lanes, sun glare, writing and other markings on the road (e.g. pedestrian crosswalks), different pavement materials, and different slopes. The method proposed in Ref. [1] is based on generating a bird's-eye view of the road by inverse perspective mapping (IPM), which requires camera calibration. Ref. [2] uses a three-feature based method to detect and track lane markers, which can only adapt to slowly varying lanes. Color segmentation is applied to preprocess and highlight lane pixels in [3, 4], but it is sensitive to changes in the color of the ambient light.

In this paper, we propose a simple but robust lane markers detection method for intelligent vehicle navigation that does not generate an inverse perspective map. The method uses multiple threshold segmentation instead of single threshold segmentation [9]. Straight and curved lane markers are extracted directly from Run-Length Accumulation (RLA) images. The method performs well in various complex conditions and costs less than 50 ms for a 352 by 288 image. Experiments on many kinds of real complex image sequences demonstrate its effectiveness and efficiency.

2. Proposed algorithm

The flowchart of the proposed algorithm is shown in Fig. 1.

Fig. 1 Flowchart of the proposed algorithm.
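As a reading aid, the following Python/NumPy sketch outlines the pipeline of Fig. 1. The function names are illustrative and correspond to the steps sketched in the remainder of Section 2; they are assumptions for exposition, not code published with the paper.

import numpy as np

def detect_lane_markers(frame_rgb, roi_box):
    # roi_box = (x0, y0, x1, y1): rectangular ROI excluding sky and vehicle hood (Section 2).
    x0, y0, x1, y1 = roi_box
    roi = frame_rgb[y0:y1, x0:x1]
    gray = to_gray_rg(roi)                               # Eq. (2): I = (R + G) / 2
    t1, t2 = gray.mean(), gray.mean() + 3 * gray.std()   # Eq. (4): threshold range
    rla = rla_image(gray, t1, t2)                        # Section 2.1: run-length accumulation image
    lines = select_lines(rla)                            # Section 2.2: weighted Hough + sequential selection
    # Section 2.2 also grows the curved far end of each line; Section 2.3 checks knowledge constraints.
    return post_verify(rla, lines, t1, t2, y_top=0)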

We aim to detect all potential lane markers that exist in a road image. Before detection, suitable preprocessing is needed to suppress distractions and improve processing speed. As shown in Fig. 2, we first select a rectangular region of interest (ROI) in order to remove the sky region and the region corresponding to the front part of the vehicle. All subsequent processing is done within this ROI; unless stated otherwise, we refer to this region as the original image. Second, we transform the color ROI image into a gray ROI image using Equation (1):

I = 0.299*R + 0.587*G + 0.114*B (1)

There are generally both white and yellow lane markers in road scenes, and our algorithm is designed to detect them in a unified way. In RGB color space, white corresponds to (255, 255, 255) and yellow to (255, 255, 0), so we ignore the blue channel and use Equation (2) instead of Equation (1) to make yellow markers appear white. Therefore, we only need to consider white lane markers:

I = (R + G)/2 (2)

Fig. 2 ROI extraction; the green rectangle is the ROI region.

2.1. Consecutive threshold segmentation

To detect lane markers, one would usually look for an optimal threshold that separates lanes and background perfectly, and use it to segment the original image so that background points are clustered into one class and lane marker points into the other. Because of uneven illumination and cast shadows, it is hard to select such an optimal threshold that captures all lane marker points. Fortunately, if we binarize the original image with multiple thresholds over a certain intensity range, each binary image corresponding to one threshold may include part of the lane marker points. Fig. 3 gives such an example: (a) is an original image, and the three binary images of (a) corresponding to thresholds 94, 128 and 194 are shown in (b), (c) and (d) respectively. Different parts of the two lane markers clearly appear in different binary images. Therefore, by using a set of consecutive thresholds, we can collect all parts of the lane marker points and obtain an integrated lane marker. In this section, we use consecutive threshold segmentation to detect lane marker points: instead of selecting a single threshold, it uses multiple thresholds and collects useful information from each segmented image.

Fig. 3 (b), (c) and (d) are binary results of original image (a) with binary thresholds 94, 128 and 194 respectively.

Given a threshold t, we binarize the gray original image I into a binary image B:

B(x, y) = 255 if I(x, y) > t, and 0 otherwise (3)

Suppose the gray intensities of all lane markers vary from g1 to g2. If we select a threshold t with t > g2, all lanes are included in the background; if g1 < t < g2, we obtain part of the lane marker points; if t < g1, all lane marker points are obtained. However, more and more background distractions are also segmented as t decreases, which can hinder the extraction of lanes.
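As a concrete illustration of Equations (2) and (3), the following Python/NumPy sketch converts an ROI to gray and produces one binary image per threshold; the threshold step of 4 gray levels is an illustrative assumption, not a value from the paper.

import numpy as np

def to_gray_rg(roi_rgb):
    # Eq. (2): I = (R + G) / 2, so yellow markers become as bright as white ones.
    r = roi_rgb[..., 0].astype(np.float32)
    g = roi_rgb[..., 1].astype(np.float32)
    return ((r + g) / 2.0).astype(np.uint8)

def consecutive_binaries(gray, t1, t2, step=4):
    # One binary image per threshold t in [t1, t2]; Eq. (3): B = 255 where I > t, else 0.
    for t in range(int(t1), int(t2) + 1, int(step)):
        yield t, np.where(gray > t, 255, 0).astype(np.uint8)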

We therefore look for a high threshold t2 and a low threshold t1, letting t2 approximate g2 and letting t1 be a little lower than g1. This expectation is based on histogram profile analysis (see Fig. 4); we experimentally set

t1 = u, t2 = u + 3s (4)

where u and s are the mean and the standard deviation of the original image.

Fig. 4 Consecutive threshold segmentation. (a) ROI image transformed from RGB to gray; (b) histogram of (a) with t1 and t2 marked; (c) RLA image of (a).

Another key problem is how to select the lane marker points that exist in each binary image. To fully exploit the structural information of lane markers, we introduce the Run-Length (RL) description: an RL is a white line segment in a row or a column of an image. An RL is selected from a binary image if its length satisfies a certain length range. Hence, for each binary image, we collect all horizontal and vertical RLs whose length lies within the interval [l1, l2]. We allocate a black image (all pixel intensities initially set to zero) of the same size as the original image; whenever an RL is selected, the intensity of all pixels of the corresponding RL is progressively increased in this accumulation image. We call the accumulation image the RL accumulation (RLA) image. Fig. 4 shows an example of an RLA image. The RLA image is essentially a saliency map that clearly shows the probability of a pixel being a lane marker point. It can withstand illumination changes and shadow effects, and the length constraint on the RLs removes markings such as writing and stop lines on the road.

2.2. Weighted Hough Transformation

The Hough transformation maps the spatial positions of points in the image plane into the parameter space of a specific model, whose dimensionality depends on the selected model. The equation of a straight line in the x-y plane is y = ax + b with parameters a and b; alternatively, we often use its parametric model in polar form:

ρ = x cosθ + y sinθ (5)

This parameter space works with θ and ρ, where θ is the angle between the x axis and the perpendicular from the origin to the line, and ρ is the perpendicular distance of the line from the origin.

As shown in the last section, the brighter a pixel of the RLA image is, the more likely it is a lane marker point. We therefore apply a straight-line-model Hough transformation directly in the RLA image to find lane markers. Generally, the accumulation process in the parameter space of the Hough transformation counts the number of points falling into each parameter bin. Since different points have different saliency, we take this into account in the accumulation process and adopt a weighted Hough transformation, which directly accumulates the intensity of each pixel of the RLA image:

A(ρ, θ) = A(ρ, θ) + RLA(x, y) (6)

where A is the 2D array of the parameter space, x, y, ρ, θ satisfy Equation (5), and RLA(x, y) is the intensity of pixel (x, y) in the RLA image. We select all local maxima among the accumulated values of all bins in the parameter space; each local maximum corresponds to a possible lane marker. We then aim to find, among these maxima, the valid ones that correspond to true lane markers.
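A minimal sketch of the RLA construction and the weighted accumulation of Equations (5) and (6), in Python/NumPy. The run-length interval [l1, l2], the threshold step and the 1-degree/1-pixel parameter grid are illustrative assumptions; for brevity only horizontal runs are accumulated (vertical runs would be handled the same way on the transposed image).

import numpy as np

def rla_image(gray, t1, t2, l1=3, l2=40, step=4):
    # Section 2.1: accumulate runs of white pixels whose length lies in [l1, l2]
    # over the binary images produced by consecutive thresholds.
    rla = np.zeros(gray.shape, dtype=np.float32)
    for t in range(int(t1), int(t2) + 1, step):
        b = gray > t
        for y in range(b.shape[0]):
            x = 0
            while x < b.shape[1]:
                if b[y, x]:
                    x0 = x
                    while x < b.shape[1] and b[y, x]:
                        x += 1
                    if l1 <= x - x0 <= l2:       # length constraint on the run
                        rla[y, x0:x] += 1.0      # progressively increase intensity
                else:
                    x += 1
    return rla

def weighted_hough(rla, n_theta=180):
    # Eq. (6): each pixel votes with its RLA intensity instead of a unit count.
    h, w = rla.shape
    rho_max = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * rho_max + 1, n_theta), dtype=np.float32)
    thetas = np.deg2rad(np.arange(n_theta))
    ys, xs = np.nonzero(rla)
    for y, x in zip(ys, xs):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)  # Eq. (5)
        acc[rhos + rho_max, np.arange(n_theta)] += rla[y, x]
    return acc, rho_max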
We use a sequential selection strategy to extract the possible lane markers one by one from the accumulation image (see Fig. 5): the most salient lane marker is extracted first, then the second most salient, then the third, and so on. One point to note is that the support region of the selected lane must be removed before the next selection.
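A sketch of this selection loop, building on the weighted_hough sketch above; the maximum number of lines, the minimum vote mass and the support-region half-width are illustrative assumptions.

import numpy as np

def remove_support_region(rla, rho, theta, half_width=6):
    # Zero all RLA pixels within half_width of the line x*cos(theta) + y*sin(theta) = rho.
    ys, xs = np.mgrid[0:rla.shape[0], 0:rla.shape[1]]
    dist = np.abs(xs * np.cos(theta) + ys * np.sin(theta) - rho)
    rla[dist <= half_width] = 0.0

def select_lines(rla, max_lines=4, min_votes=50.0):
    # Repeatedly take the strongest line in the weighted Hough accumulator,
    # then erase its support region from the RLA image before the next pass.
    lines, work = [], rla.copy()
    for _ in range(max_lines):
        acc, rho_max = weighted_hough(work)
        r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
        if acc[r_idx, t_idx] < min_votes:
            break
        rho, theta = r_idx - rho_max, np.deg2rad(t_idx)
        lines.append((rho, theta))
        remove_support_region(work, rho, theta)
    return lines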

Here, the support region is defined as the region bounded by two parallel lines with a suitable interval and by the boundaries of the processing region of the weighted Hough transformation.

Fig. 5 Line detection process by the weighted Hough transformation: (a) original image; (b) RLA image of (a); (c) the first selected line; (d) the RLA image after removing the first selected line; (e) the second selected line; (f) the RLA image after removing the second selected line.

The straight line model is adequate for most straight road scenes. However, lane markers may curve on turning roads; in this case the far end of a lane marker is often a curve, and many methods use complex curve models [5, 8] to approximate the curved markers. In our method, we use a local search and growing strategy to solve this problem. It is based on the straight line detected above and only processes the far end (the curved part) of a lane, since the near end is approximated well by the straight line model. This strategy is not time consuming, performs well in our test sequences, and is a good complement to the straight line model. In our method, the straight-line-model weighted Hough transformation is first applied to the whole image to detect straight lanes, and the support region of each detected lane is removed. The weighted Hough transformation is then applied again to the lower 1/3 region of the image; if a lane is detected there as well, the search and growing process is triggered. Fig. 6 uses an RLA image to illustrate the search and growing process. In Fig. 6, only the lower 1/3 region of the RLA image is considered (the region below the white line), and the weighted Hough transformation is applied to find straight lines near the car head in this region (the yellow line). A circular window is centered at the far end point of a detected line (red circle), and we accumulate the sums of the RLA intensity along its radial directions from 0 to 180 degrees; the direction with the maximum sum is the direction in which the circular window should shift, namely the growing direction of the lane marker (the red line). The center of the circular window is shifted along the maximum-sum direction until it arrives at the corresponding point on the original circular window. For the new window after the shift (the green circle), we accumulate the RLA intensity sums again to find the maximum-sum direction (the green line). The process is repeated until the maximum sum falls below a threshold or the circular window reaches the image boundary.

Fig. 6 An illustration of the search and growing process.
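A sketch of the circular-window search-and-growing step in Python/NumPy; the window radius, the angular step and the stopping threshold are illustrative assumptions (the paper gives no numerical values), and image coordinates with the origin at the top-left are assumed.

import numpy as np

def grow_curve(rla, start, radius=12, angle_step=5, stop_thresh=20.0, max_steps=100):
    # Follow the bright RLA ridge from the far end of a detected straight line.
    # start: (x, y) far-end point of the straight segment.
    h, w = rla.shape
    cx, cy = start
    path = [(cx, cy)]
    for _ in range(max_steps):
        if not (0 <= cx < w and 0 <= cy < h):
            break                                 # stop at the image boundary
        best_sum, best_dir = -1.0, 0.0
        # Sum the RLA intensity along each radial direction from 0 to 180 degrees.
        for deg in range(0, 180, angle_step):
            a = np.deg2rad(deg)
            rs = np.arange(1, radius + 1)
            xs = np.clip(np.round(cx + rs * np.cos(a)).astype(int), 0, w - 1)
            ys = np.clip(np.round(cy - rs * np.sin(a)).astype(int), 0, h - 1)
            s = rla[ys, xs].sum()
            if s > best_sum:
                best_sum, best_dir = s, a
        if best_sum < stop_thresh:                # stop when the ridge fades out
            break
        # Shift the window center one radius along the best direction.
        cx = int(round(cx + radius * np.cos(best_dir)))
        cy = int(round(cy - radius * np.sin(best_dir)))
        path.append((cx, cy))
    return path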

2.3. Post-verifying by Knowledge Constraints

We measure the confidence of each lane marker in order to keep true detections and remove false ones, considering both the confidence of the lane marker itself and its geometric relations with other lane markers. For a single lane marker, we use the mean intensity u of its support region in the RLA image as its confidence. For the geometric relations of any two lane markers, we use a vanishing point constraint and a road area constraint: the vanishing point constraint requires the intersection point of any two lanes to lie outside the ROI, and the road area constraint requires the area of the region enclosed by the two lanes and the four boundaries of the ROI to be larger than a threshold. In summary, we apply the following three constraints to remove false detections:

(i) The confidence of a lane marker should be larger than the threshold t = (t1 + t2)/4, where t1 and t2 are the thresholds presented in Section 2.1.

(ii) The area of the region bounded by two lane markers and the top and bottom boundaries of the ROI should be larger than 1/4 of the area of the whole ROI. If this condition is not satisfied, the lane marker with the lower confidence is removed.

(iii) The row coordinate of the intersection point of any two lane markers should be smaller than Ytop, the row coordinate of the top boundary of the ROI. If this condition is not satisfied, the lane marker with the lower confidence is also removed.

Finally, we determine the lane marker type (left, middle or right) and color (white or yellow) from the direction of the lane marker and the mean intensity of its support region in the blue component of the original image. For dashed lanes, temporal blurring [6] can be applied to enhance the current RLA image by blurring the RLA images of the last N frames, and a Kalman filter [7] can be used to limit the search range of the weighted Hough transformation and to remove inconsistencies of the detection results across frames.
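A sketch of constraints (i) and (iii) in Python/NumPy, with lines in the (ρ, θ) form of Equation (5); constraint (ii), the area test, is omitted for brevity, and coordinates are assumed to be relative to the ROI (so the top boundary corresponds to y_top = 0). This illustrates the checks under those assumptions; it is not the paper's implementation.

import numpy as np

def line_intersection(l1, l2):
    # Lines are (rho, theta) with x*cos(theta) + y*sin(theta) = rho (Eq. (5)).
    (r1, a1), (r2, a2) = l1, l2
    A = np.array([[np.cos(a1), np.sin(a1)], [np.cos(a2), np.sin(a2)]])
    if abs(np.linalg.det(A)) < 1e-9:
        return None                               # parallel lines never intersect
    x, y = np.linalg.solve(A, np.array([r1, r2]))
    return x, y

def confidence(rla, line, half_width=6):
    # Mean RLA intensity over the line's support region.
    rho, theta = line
    ys, xs = np.mgrid[0:rla.shape[0], 0:rla.shape[1]]
    mask = np.abs(xs * np.cos(theta) + ys * np.sin(theta) - rho) <= half_width
    return float(rla[mask].mean()) if mask.any() else 0.0

def post_verify(rla, lines, t1, t2, y_top=0):
    # Constraint (i): per-line confidence must exceed (t1 + t2) / 4.
    kept = [l for l in lines if confidence(rla, l) > (t1 + t2) / 4.0]
    kept.sort(key=lambda l: confidence(rla, l), reverse=True)
    # Constraint (iii): intersections of kept lines must lie above the top of the ROI.
    result = []
    for l in kept:
        ok = True
        for m in result:
            p = line_intersection(l, m)
            if p is not None and p[1] >= y_top:   # intersection row falls inside the ROI
                ok = False                        # drop the lower-confidence line l
                break
        if ok:
            result.append(l)
    return result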

3. Experiments

Fig. 7 Lane markers detection results.

All the experimental data in this paper were captured by a WAT1000E camera mounted at the front of our automatic land vehicle. The image resolution is 352 by 288 and the capture rate is 25 Hz. Some typical detection results and their analysis are given in this section. In each result image, the green rectangle represents the ROI; left, middle and right lane markers are marked in red, yellow and blue respectively. In Fig. 7 (a), (b), cast shadows seriously affect the detection of one of the lane markers, making its intensity inconsistent. This is very difficult for single-threshold segmentation algorithms, whereas in our method the affected lane markers can still be extracted thanks to consecutive threshold segmentation. In Fig. 7 (c), (d), the markers are occluded by other vehicles or contaminated by grain or dust. The Hough transformation is robust to line breaks, and the weighted Hough transformation further boosts this robustness by using the uncovered marker points to estimate the whole lane marker. Fig. 7 (e), (f) show two turning scenes in which the lane markers have a large slope; the local search and growing strategy approximates the turning lane markers well. These results show clearly that our method is competent in such complex cases. Fig. 8 shows further detection results. Finally, we also measured the average time cost of our method over all test sequences (more than ten scenes and twenty thousand frames). The average cost is 48.94 ms, which shows that the proposed method is time efficient.

Fig. 8 Other lane markers detection results.

4. Conclusions

This paper proposes a robust and real-time lane markers detection method in which consecutive threshold segmentation alleviates the distractions of uneven illumination and shadows, the weighted Hough transformation further improves the accuracy of marker localization, and the local search and growing approach realizes curve approximation in turning road scenes. The method needs no camera calibration and shows good performance in many complex scenes at low time cost.

5. References

[1] A. Borkar, M. Hayes, M. T. Smith. Detecting lane markers in complex urban environments. IEEE 7th International Conference on Mobile Ad-hoc and Sensor Systems, 2010.
[2] Yong Uk Yim, Se-Young Oh. Three-feature based automatic lane detection algorithm for autonomous driving. IEEE Transactions on Intelligent Transportation Systems. 2003, 4(4): 219-224.
[3] C. G. Rotaru, T. J. Zhang. Extracting road feature from color image using a cognitive approach. IEEE Intelligent Vehicles Symposium. 2004, pp. 298-303.
[4] T. Y. Sun, S. J. Tsai, V. Chan. HSI color model based lane-marking detection. IEEE Intelligent Transportation Systems Conference. 2006, pp. 1168-1172.
[5] Mohamed Aly. Real time detection of lane markers in urban streets. IEEE Intelligent Vehicles Symposium. 2008, pp. 7-12.
[6] A. Borkar, M. Hayes, M. T. Smith, S. Pankanti. A layered approach to robust lane detection at night. IEEE Workshop on Computational Intelligence in Vehicles and Vehicular Systems, 2009.
[7] A. Borkar, M. Hayes, M. T. Smith. Robust lane detection and tracking with RANSAC and Kalman filter. IEEE International Conference on Image Processing. 2009, pp. 3261-3264.
[8] Yue Wang, Eam Khwang Teoh, Dinggang Shen. Lane detection and tracking using B-Snake. Image and Vision Computing. 2004, 22: 269-280.
[9] Kuo-Yu Chiu, Sheng-Fuu Lin. Lane detection using color-based segmentation. IEEE Intelligent Vehicles Symposium. 2005, pp. 706-711.