LANE MARKING DETECTION AND TRACKING


Adrian Soon Bee Tiong 1, Soon Chin Fhong 1, Rosli Omar 1, Mohammed I. Awad 2 and Tee Kian Sek 1
1 Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Batu Pahat, Johor, Malaysia
2 Mechanical and Mechatronics Engineering, Faculty of Engineering, Ain Shams University, Cairo, Egypt
E-Mail: adriansbt@yahoo.com

ABSTRACT
Visually detecting and tracking lane markings on the road is a difficult task. The challenges faced on the road include varying lighting, rain, poor lane markings, and objects on the road. This paper presents a different method to detect and track lane markings. The system carefully selects regions of interest from the road image for image processing and recognition, avoiding the need to process a large part of the image. The feature extraction is based on a modified local threshold that is robust to different lighting conditions. The line recognition is based on template matching that imitates the orientation columns of the visual cortex. A lane marking is detected by scanning the road image for a line pattern, and the line is then traced to complete the detection. When detection is complete, the system tracks the lane marking with a simple alignment algorithm. The lane marking detection and tracking system was tested on the road at speeds of up to 70 km/h. The system is able to detect lane markings under cast shadows, variable ambient lighting and rainy conditions, with low processing power.

Keywords: lane marking detection, vision, local threshold.

INTRODUCTION
Lane markings are among the most important road information for drivers to drive safely and efficiently. One of the present challenges of computer vision in autonomous vehicles is the ability to recognize lane markings under different lighting conditions in a fast and robust manner. Some methods are capable of detecting straight lines, while others can detect curved lines at the expense of high computation.
Lane marking detection generally consists of feature extraction and an algorithm to interpret the detected features, with few exceptions such as (Kluge and Lakshmanan, 1995), which applies a deformable template model directly to the intensity gradient of the input image. Various lane marking feature extraction methods have been used, such as edge detection (Miao, Li and Shen, 2012), (Dickmanns and Mysliwetz, 1992), local thresholding (Lipski et al., 2008), (Shi, Kong and Zheng, 2009), (Pollard, Gruyer, Tarel, Ieng and Cord, 2011), (Revilloud, Gruyer and Pollard, 2013), ridgeness (López, Serrat, Cañero, Lumbreras and Graf, 2010), the Hough transform (Meuter et al., 2009), and contextual features (Gopalan, Hong, Shneier and Chellappa, 2012). Some works require certain image processing, such as perspective transformation (Lipski et al., 2008), before the features are extracted.

The algorithm used to analyse or interpret the extracted features depends on the type of feature extraction or approach used. A deformable template applied to the intensity gradient of the input image goes through an iterative process until it meets the road line in the image (Kluge and Lakshmanan, 1995). Other methods used are lane modelling and fitting (López et al., 2010), (Revilloud et al., 2013), voting (Meuter et al., 2009), and the K-means clustering algorithm (Miao et al., 2012). For tracking the lane markings, the algorithms used are the Kalman filter with the Interacting Multiple Models algorithm (Meuter et al., 2009) and the particle filter (Gopalan et al., 2012). A few works imitate concepts from biological vertebrate vision, such as (Dickmanns and Mysliwetz, 1992), which identifies the edge size and orientation of lane markings in the initial stage of visual processing. Hubel and Wiesel (1962) showed that the primary visual cortex has orientation columns; the neurons in this region are excited by visual line stimuli of varying angles.
This concept is applied in the current work to match each detected candidate object against line templates of different orientations. In this paper, the feature extraction of lane markings is based on a local threshold. Unlike previous works, we extract features only from selected regions of interest. The features are used to determine whether a candidate is a possible lane marking. The features are matched with a collection of line templates of varying orientation, consisting of areas of excitatory and inhibitory response, similar to the orientation columns in the visual cortex. This matching determines whether the feature is a lane marking and what the lane marking's orientation is. Our contributions include 1) the development of a modified local threshold algorithm specifically for road lane marking extraction, 2) line recognition imitating the orientation columns, and 3) a different processing flow.

LANE MARKING DETECTION AND TRACKING
The hardware used in this system is a webcam and a laptop. The program runs mainly in MATLAB. The acquired video image resolution is 432x240, at a frame rate of 30 frames per second. Figure-1 shows the processing structure of the lane marking detection system, consisting of three levels: the low level is the feature extraction based on local thresholding; the middle level is the recognition of lane markings and their orientation using template matching; and the high level carries out the tracing and tracking of lane

markings. At the start of the system, a 24x24-pixel region of interest (ROI) scans through the bottom rows of the image to search for a possible lane marking. When a possible lane marking is identified and its orientation known, the rest of the lane marking is traced. After both the left and right lane markings are fully detected, the system keeps track of the lane markings as the vehicle drives on the road.

Figure-1. Processing structure.

Feature extraction
To extract features from the ROI, the ROI image goes through a few stages of analysis, to avoid inaccurate extraction and to ensure successful feature extraction from both low- and high-contrast images. If a possible lane marking is detected, the ROI is aligned before being processed at the line recognition stage. Compared to other local threshold algorithms (Lipski et al., 2008), (Shi et al., 2009), this algorithm is designed specifically for extracting lane markings on the road, avoids extracting noise on the road surface, and uses less computing time.

Firstly, the local contrast V_diff, the difference between the maximal and minimal pixel values, is determined. If V_diff is smaller than the threshold T_contrast, the image has very low contrast, as shown in Figure-2(a); in this case the threshold T_pixel for the pixel value of the possible lane marking is set to the maximal pixel value V_max. If V_diff exceeds T_contrast, the image has high contrast, as shown in Figure-2(b).

Figure-2. (a) Road line with low contrast. (b) Road line with high contrast.

The following stage is to determine the threshold T_pixel for a high-contrast ROI. The number of pixels within a certain level of the minimum pixel value, i.e. between V_min and (V_min + V_diff/2), is counted. When this count is lower than T_ROI % of the total number of pixels in the ROI, the possible lane marking lies in the higher pixel value range, and T_pixel is set to a value slightly lower than V_max. On the other hand, when the count is above T_ROI % of the total number of pixels in the ROI, the possible lane marking lies in the lower pixel value range, and T_pixel is set to a value much lower than V_max. The third stage extracts the possible lane marking from the ROI using the threshold T_pixel: pixels above T_pixel are set to white. The algorithm flow is shown in Figure-3.

Figure-3. Feature extraction algorithm flow (for an m x m ROI: V_diff = V_max - V_min; N_min is the number of pixels < V_min + V_diff/2 and N_max the number of pixels > V_max - V_diff/2; if V_diff <= T_contrast, percentage = 100; if V_diff > T_contrast and N_min < m x m x 0.9, percentage = 100 - V_diff/255 x 45, otherwise percentage = 100 - V_diff/255 x 80; T_pixel = V_max x percentage/100; pixels with V_pixel > T_pixel are labelled white, the rest black).

Line recognition
The line recognition is based on template matching. This process determines whether the feature is a line and in what orientation the line lies. Knowing the orientation of the line, the next section of the line can be estimated for tracing purposes, as explained in a later section. The extracted feature image is matched with a collection of line templates of different orientations, shown in Figure-4. This method imitates some of the concepts of the orientation columns of the visual cortex. Figure-5 shows an example of the receptive field of one of the templates; this is similar to the simple cortical receptive fields of Hubel and Wiesel (1962). The white area gives the excitatory response while the dark area gives the inhibitory response. The extracted feature needs to activate enough response from both areas in order to achieve a successful match.

Figure-4. Collection of line templates of different orientations.

Figure-5. Receptive fields of a template. The white area gives the excitatory response; the dark area gives the inhibitory response.

Line tracing
The system processes the image by selecting various regions of interest (ROI) throughout each input frame. It begins by scanning for possible left and right lane markings in the road image. Once a marking is detected (ROI 1 in Figure-6), the line orientation is identified and the position of the next section of the line (2nd ROI in Figure-6) can be estimated, as shown in Figure-6. The first left road line is traced to the visible near end of the road (n-th ROI in Figure-6). When both the left and right lane markings have been searched and traced, the system proceeds to track the lines as the vehicle moves along the road. After a lane marking is successfully traced, the system stores the positions of its ROIs, which are later used for tracking. The regions of interest are square and vary in size according to the distance of the lane from the vehicle: the ROI becomes smaller as it moves further from the vehicle, as shown in Figure-7.

Figure-6. Left lane marking detection and tracing.

Figure-7. Changes in ROI size according to the distance.
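The authors implemented the system in MATLAB. Purely as an illustrative sketch of the thresholding stages above, the same logic can be written in Python/NumPy; note that T_contrast, T_ROI and the 45/80 scaling factors are placeholders inferred from the text and Figure-3, not the authors' tuned settings:

```python
import numpy as np

def extract_lane_feature(roi, t_contrast=20, t_roi=0.9):
    """Sketch of the modified local threshold for an m x m ROI.

    t_contrast, t_roi and the 45/80 factors are illustrative values,
    not the paper's tuned parameters.
    """
    roi = roi.astype(np.float32)
    v_max, v_min = roi.max(), roi.min()
    v_diff = v_max - v_min                      # local contrast

    if v_diff < t_contrast:
        # Low-contrast ROI: accept only the brightest pixels.
        t_pixel = v_max
    else:
        # High-contrast ROI: count pixels near the dark end.
        n_min = np.sum(roi < v_min + v_diff / 2)
        if n_min < t_roi * roi.size:
            # Marking occupies the upper range: threshold slightly below v_max.
            percentage = 100 - (v_diff / 255) * 45
        else:
            # Marking occupies the lower range: threshold well below v_max.
            percentage = 100 - (v_diff / 255) * 80
        t_pixel = v_max * percentage / 100

    return (roi >= t_pixel).astype(np.uint8)    # binary feature image
```

A bright 4-pixel-wide stripe on a dark 24x24 ROI, for instance, is binarised to exactly that stripe, while a nearly uniform ROI falls into the low-contrast branch and yields almost no white pixels.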
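Similarly, the excitatory/inhibitory template matching of the line recognition stage can be sketched as follows; the template size, bar width and 15-degree orientation step are illustrative choices, not taken from the paper:

```python
import numpy as np

def make_line_template(angle_deg, size=24, width=3):
    """Hypothetical line template: +1 (excitatory) on a bar of the given
    orientation through the centre, -1 (inhibitory) elsewhere."""
    ys, xs = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    # Perpendicular distance of each pixel from the oriented centre line.
    dist = np.abs(xs * np.sin(theta) - ys * np.cos(theta))
    return np.where(dist <= width / 2.0, 1.0, -1.0)

def recognise_orientation(feature, templates, min_score=0.0):
    """Match a binary feature image against all templates: overlap with the
    excitatory area raises the score, stray pixels in the inhibitory area
    lower it. Returns (best_angle, score), or (None, score) on a weak match."""
    scores = {a: float((feature * t).sum()) for a, t in templates.items()}
    best = max(scores, key=scores.get)
    if scores[best] > min_score:
        return best, scores[best]
    return None, scores[best]
```

With templates built every 15 degrees, a vertical stripe of white feature pixels scores highest against the 90-degree template, mimicking how one orientation column responds most strongly to its preferred stimulus.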
Tracking
Tracking of the lane markings is carried out after the left and right lane markings are successfully detected and traced. To track the lane markings as the vehicle moves on the road, the system uses the positions of the ROIs of the traced lane markings to check whether the line in each ROI has shifted position in the new image frame. This process uses the same feature extraction mentioned earlier to extract the line feature in each of the selected ROIs, and a simple alignment algorithm is used to detect changes of the line position in each ROI. The system then computes and updates the new positions of the ROIs of the lane markings, and tracking continues throughout the frames using the updated positions. Using the positions of the ROIs of the left and right lane markings, the system is able to detect whether the vehicle is within the lane or has departed from it.

EXPERIMENTAL RESULTS
Feature extraction
The feature extraction gave promising results, performing well in shadowed areas and in low-contrast situations. Low contrast occurs when the lane marking is further ahead and becomes blurred, when the marking colour is faded, when the road is affected by shadow or bright sunlight reflection, or during rain. Figure-8 shows the line extraction of a road line with low contrast in a shadowed region: the left image (a) shows the location of the ROI on the road, the middle image (b) is the enlarged image of the ROI, and the right image (c) is the extracted line image. Despite the low contrast, the algorithm is able to give a sufficient extraction. Figure-9 shows another line extraction from a low-contrast ROI taken outside of the shadowed region.

Figure-8. (a) ROI with low contrast under a shadowed region. (b) Enlarged image of the ROI. (c) Line extraction of the ROI.

Figure-9. (a) ROI with low contrast under a brightly lit region. (b) Enlarged image of the ROI. (c) Line extraction of the ROI.

Line recognition
The line matching method gives good results for a single line (Figure-10) and a double line (Figure-11). Both figures show the image of the selected ROI, the extracted line feature and the result of the recognition. In the case of a double line, the accuracy of the orientation recognition is slightly affected.

Figure-10. Line recognition of a single line: (a) image of the ROI. (b) Extracted line feature. (c) Result of the line recognition.

Figure-11. Line recognition of a double line: (a) image of the ROI. (b) Extracted line feature. (c) Result of the line recognition.

Tracing
Figure-12 shows sample results of the tracing of the left and right lane markings under a few different conditions. Tracing is successful for continuous or broken single lines and for double lines under good weather conditions. When it rains, the road line becomes less visible, and reflections of lights and rain droplets on the windscreen affect the feature extraction. The movement of the vehicle's wiper has little effect on the performance of the system.

Figure-12. Sample results under different situations: (a) good weather condition, double line lane marking; (b) good weather condition, broken single line lane marking; (c) raining condition.

Tracking
The system is able to track the lane markings as the vehicle moves within the lane or out of it. However, the result is affected during rain due to the reflection of rain water all over the road. Road markings or objects that come within the ROIs slightly affect the lane marking tracking. The system will not be able to track the lane marking if the vehicle makes a sudden swerve off the lane.

CONCLUSIONS
In this paper we presented a different method of lane marking detection and tracking using a camera and a conventional laptop. The use of selective ROIs and the modified local threshold algorithm gives good feature extraction under different lighting situations. It also requires less processing power by processing only the necessary parts of the road image. Imitating the visual cortex's orientation columns for line recognition shows good recognition results, even when the line is not clear or broken. The line recognition and tracing method is able to detect the left and right lane markings without the need to

process a large part of the image. The simple tracking method is able to track the lane markings as the vehicle swerves from the lane. The system has been tested on the road at an average vehicle speed of 70 km/h. The current system needs to be fine-tuned for better detection and tracking in rainy conditions, and the high-level processing stage can be improved for better detection and tracking of lane markings in different conditions.

REFERENCES
Dickmanns, E. D. and Mysliwetz, B. D. 1992. Recursive 3-D road and relative ego-state recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2), 199-213.

Gopalan, R., Hong, T., Shneier, M. and Chellappa, R. 2012. A learning approach towards detection and tracking of lane markings. IEEE Transactions on Intelligent Transportation Systems, 13(3), 1088-1098.

Hubel, D. H. and Wiesel, T. N. 1962. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1), 106.

Kluge, K. and Lakshmanan, S. 1995. A deformable-template approach to lane detection. In Proceedings of the Intelligent Vehicles '95 Symposium (pp. 54-59).

Lipski, C., Scholz, B., Berger, K., Linz, C., Stich, T. and Magnor, M. 2008. A fast and robust approach to lane marking detection and lane tracking. In IEEE Southwest Symposium on Image Analysis and Interpretation, SSIAI 2008 (pp. 57-60).

López, A., Serrat, J., Cañero, C., Lumbreras, F. and Graf, T. 2010. Robust lane markings detection and road geometry computation. International Journal of Automotive Technology, 11(3), 395-407.

Meuter, M., Müller-Schneiders, S., Mika, A., Hold, S., Nunn, C. and Kummert, A. 2009. A novel approach to lane detection and tracking. In 12th International IEEE Conference on Intelligent Transportation Systems, ITSC 2009 (pp. 1-6).

Miao, X., Li, S. and Shen, H. 2012. On-board lane detection system for intelligent vehicle based on monocular vision. International Journal on Smart Sensing and Intelligent Systems, 5(4), 957-972.

Pollard, E., Gruyer, D., Tarel, J.-P., Ieng, S.-S. and Cord, A. 2011. Lane marking extraction with combination strategy and comparative evaluation on synthetic and camera images. In 14th International IEEE Conference on Intelligent Transportation Systems, ITSC 2011 (pp. 1741-1746).

Revilloud, M., Gruyer, D. and Pollard, E. 2013. An improved approach for robust road marking detection and tracking applied to multi-lane estimation. In IEEE Intelligent Vehicles Symposium (IV), 2013 (pp. 783-790).

Shi, X., Kong, B. and Zheng, F. 2009. A new lane detection method based on feature pattern. In 2nd International Congress on Image and Signal Processing, CISP 2009 (pp. 1-5).