Self-calibration of an On-Board Stereo-vision System for Driver Assistance Systems

Juan M. Collado, Cristina Hilario, Arturo de la Escalera and Jose M. Armingol
Intelligent Systems Lab, Department of Systems Engineering and Automation, Universidad Carlos III de Madrid, 28911 Leganés, Madrid, Spain
E-mail: {Cristina.Hilario,JuanManuel.Collado,JoseMaria.Armingol,Arturo.delaEscalera}@uc3m.es

Abstract: Vision-based Driver Assistance Systems need to establish a correspondence between the position of objects on the road and their projection in the image. Although the intrinsic parameters can be calibrated before installation, the extrinsic parameters can only be calibrated with the cameras mounted in the vehicle. This paper presents the self-calibration system of the IVVI (Intelligent Vehicle based on Visual Information) project, which aims to ease the installation of the system in commercial vehicles. The system is able to self-calibrate a stereo-vision system using only basic road infrastructure: all that is required is a frame captured on a straight and flat stretch of road with marked lanes. Road lines are extracted with the Hough Transform and used as a calibration pattern; a genetic algorithm then finds the height, pitch and roll of the vision system. The user must run this algorithm only once, unless the vision system is replaced or reinstalled in a different position.

1 Introduction

Companies are showing an increasing interest in Driver Assistance Systems (DAS), and many of them have declared their interest in commercializing vision-based DAS in the short term. Vision-based DAS need to establish a correspondence between the position of objects on the road and their projection in the image, so a calibration process is essential in order to obtain the parameters of the vision system. A vision system has intrinsic and extrinsic parameters. Intrinsic parameters are those related to the camera-optics assembly, and may be pre-calibrated at the factory or before the system is installed on board the vehicle. The extrinsic parameters (the position and orientation of the stereo system, as shown in figure 2(a)), however, can only be calibrated after installation. Calibration of the extrinsic parameters can be done at the factory, by technical operators, or afterwards, by the user. The algorithm proposed in this paper is useful in both cases.

Obtaining the extrinsic parameters poses several problems. On the one hand, due to the variability of vehicles, the space of possible positions and orientations is too large to rely on initial guesses. On the other hand, any calibration process needs a calibration pattern, and when the stereo system is already installed on board, the pattern must be visible from the cameras. Most current systems use a calibration pattern that is painted on the road [1], painted on the hood of the vehicle [2], or attached to a moving object [3]. The algorithm presented here eases the installation of stereo-vision systems by taking advantage of road structure, i.e., by using the structured, and partially known, road environment as a calibration pattern. In short, the main goal of this work is to design and implement a self-calibration algorithm for stereo-vision systems mounted on vehicles that:
1. obtains the main extrinsic parameters needed by the Driver Assistance System of the IvvI, namely height, pitch and roll;
2. uses exclusively objects of the basic road infrastructure;
3. requires minimum user participation.
The designed algorithm uses the road lane boundaries as a calibration pattern, and a genetic algorithm (GA) [4] as the optimization technique. The user only needs to drive the vehicle along a straight and flat stretch of road with marked lanes, and instruct the system to start the auto-calibration process. This needs to be done only once, unless the vision system is changed or reinstalled in a different position. The algorithm has been designed to support the other capabilities of a DAS.

1.1 Previous work

Calibrating a monocular system usually requires simplifying the problem, or some prior information about the environment. In [5] the calibration problem is reduced to finding the height and pitch of the vision system. For the pitch it is enough to detect the horizon line; however, in order to make distance measurements in the image, the user must provide the algorithm with the true distance between two points. Similarly, a supervised calibration method is described in [6]. It requires the user to supply the width of the lane, and to select, among the lines detected in the image, which of them are lane boundaries. As in [5], only the pitch angle is considered. In [7], the orientation of a moving camera is calculated by analyzing sequences of two or three images. The camera is assumed to keep the same orientation during the movement, and the algorithm needs to know the height of the camera and the displacement of the vehicle between two frames.

Other systems use a pattern painted on the floor to calibrate a binocular system. In [1] the pattern is used to calibrate each camera separately; the equations are linearized and solved with a recursive least-squares technique, and the algorithm outputs the displacement and rotation of the stereo system with respect to the world. Other works have a supervised stage where, once the grid pattern has been captured by the binocular system, the user selects the intersections of the grid lines [2]. An unsupervised stage follows, in which the parameters are refined iteratively under the assumption that the images captured by both cameras come from a flat textured surface. This technique is also used to correct small drifts of the extrinsic parameters due to the movement of the vehicle but, instead of a flat textured surface, it uses several markers placed on the hood of the vehicle that can be easily detected by the cameras. In [8], no special calibration pattern is needed, and the v-disparity algorithm is used to sequentially estimate the roll, pitch and yaw of the vehicle.

This paper presents a new approach, able to estimate simultaneously the roll, pitch and height of a stereo-vision system, without the need for an artificial calibration pattern.

2 Algorithm Description

The main idea of this algorithm is to use the road lane boundaries as a calibration pattern. First, we start from a straight and flat stretch of road (figure 1(1)). Then, we capture an image with both cameras (figure 1(2)). Next, pixels belonging to road lanes are detected, and lines are extracted with the Hough Transform. After that, assuming a possible set of extrinsic parameters, the perspective transformation is reversed (figure 1(3)). If the extrinsic parameters coincide with the true ones, the images projected from both cameras onto the road plane should be identical; if the parameters are wrong, the road lines will neither match nor be parallel.

Fig. 1. Algorithm description: (1) road scene; (2) images captured by the left and right cameras; (3) reprojection onto the road plane

A genetic algorithm tries to find the set of extrinsic parameters that makes the bird's-eye views obtained from the right and left cameras coherent, in other words, a set under which:
1. all the road lines appear completely parallel;
2. the road lines projected from the right camera match the lines projected from the other one.

The genetic algorithm therefore evaluates the fitness of the candidate solutions according to the two criteria above. The fitness function is detailed in subsection 2.3.
2.1 Stereo-vision System Model

The perspective transformation requires a mathematical model of the complete vision system. This model consists of two parallel cameras joined by a rigid rod, so that they are separated by a distance b. Thanks to the rectification of the images [9], both cameras can be considered identical and perfectly aligned. The origin of the reference system is located at the middle point of the rod (figure 2(b)). The projection of any point of the road onto each camera can be calculated using homogeneous matrices, which allow advanced transformations to be represented with a single matrix. To obtain the matrix that represents the perspective transformation between the road plane and the camera plane, several steps are followed.

Fig. 2. Vision system model: (a) extrinsic parameters (height, pitch, roll, yaw); (b) stereo-vision model with baseline b

First, the coordinate system is raised a height H, and rotated by an angle β (roll) and α (pitch) about the OY and OX axes, respectively. Second, the coordinate system is displaced by b/2 or −b/2 (for the right or left camera, respectively) along the OX axis. Next, the point is projected along the OY axis, according to the pin-hole model. Finally, the coordinates are converted from millimeters to pixels, and the origin is translated to the optical center of the CCD. Multiplying these matrices, the global transformation matrix is obtained:

T = \begin{pmatrix} K_u f & C_u & 0 & \pm K_u f b/2 \\ 0 & 1 & 0 & 0 \\ 0 & C_v & K_v f & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} C_β & S_β S_α & S_β C_α & S_β C_α H \\ 0 & C_α & -S_α & -S_α H \\ -S_β & C_β S_α & C_β C_α & C_β C_α H \\ 0 & 0 & 0 & 1 \end{pmatrix}   (1)

where S and C stand for the sine and cosine functions, f is the focal distance, K_u and K_v refer to the pixel size, expressed in pixels/mm, and C_u and C_v locate the optical center in pixels. The matrix dimensions can be reduced to 3×3 by eliminating the second row (which corresponds to the depth coordinate, irrelevant in the image) and the third column (which corresponds to the height coordinate, always zero if the world is flat). The resulting matrix M_persp stores all the information necessary to perform the perspective transformation:

\begin{pmatrix} u s \\ v s \\ s \end{pmatrix} = M_persp \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}   (2)

where s is a scale factor.
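To make the model concrete, the following numpy sketch builds M_persp along the lines of Eqs. (1)-(2). It is not the authors' code: the function name, the sign conventions of the rotations and of the baseline shift, and the numeric values in the example (loosely based on Table I) are assumptions of this illustration.

```python
import numpy as np

def persp_matrix(H, alpha, beta, f, Ku, Kv, Cu, Cv, b, right=True):
    """Road-plane-to-image homography M_persp of Eqs. (1)-(2) (sketch).

    Axes as in Section 2.1: X lateral, Y depth (the projection axis),
    Z height; the road is the plane Z = 0. Angles are in radians,
    lengths in millimeters, Ku and Kv in pixels/mm.
    """
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    # Extrinsics: raise the frame a height H, rotate beta (roll) about OY
    # and alpha (pitch) about OX (sign conventions assumed).
    G = np.array([[cb,  sb * sa, sb * ca, sb * ca * H],
                  [0.0, ca,     -sa,     -sa * H],
                  [-sb, cb * sa, cb * ca, cb * ca * H],
                  [0.0, 0.0,     0.0,     1.0]])
    # Pin-hole projection along OY with the +/- b/2 baseline shift,
    # mm-to-pixel scaling (Ku, Kv) and optical center (Cu, Cv).
    shift = b / 2.0 if right else -b / 2.0
    A = np.array([[Ku * f, Cu,  0.0,    Ku * f * shift],
                  [0.0,    1.0, 0.0,    0.0],   # depth row, removed below
                  [0.0,    Cv,  Kv * f, 0.0],
                  [0.0,    1.0, 0.0,    0.0]])  # homogeneous scale s
    T = A @ G
    # Drop the depth row (2nd) and the height column (3rd): flat-road model.
    return np.delete(np.delete(T, 1, axis=0), 2, axis=1)

# A road point 10 m ahead seen by the right camera, and its bird's-eye
# reprojection through the inverse homography (illustrative numbers only).
M = persp_matrix(H=1500.0, alpha=0.02, beta=0.0, f=6.28,
                 Ku=82.0, Kv=126.0, Cu=342.8, Cv=261.43, b=148.1)
u, v, s = M @ np.array([0.0, 10000.0, 1.0])
print(u / s, v / s)                    # pixel coordinates
x, y, w = np.linalg.inv(M) @ np.array([u, v, s])
print(x / w, y / w)                    # back to road coordinates (mm)
```

The inverse of M_persp is the reprojection used in figure 1(3): applying it to image points yields the bird's-eye view that the genetic algorithm scores.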

2.2 Extraction of the calibration pattern

Extraction of the calibration pattern is done in three steps. First, in order to obtain the horizontal gradient, the image is correlated with the kernel [-1, 0, 1]. As road lines are white stripes over a darker background, they can be recognized because, if the gradient image is scanned row by row, two opposite peaks must appear at a distance equal to the line width. So, when a positive peak can be matched with a corresponding negative peak, the positive peak is marked as a pixel that belongs to a road line. Several trials were carried out marking the middle point (equidistant from the positive and negative peaks) instead, but the results were not satisfactory: due to perspective effects, the middle point of the horizontal section does not coincide exactly with the center of the line.

In the second step, the marked pixels are used by the Hough Transform to detect the right and left road lines. The Hough Transform used here has been improved as explained in [10]. The usual ρ-θ parameterization is used, and the range of the θ parameter goes from −π to π, allowing ρ to be negative, so that the left and right lane borders have different signs in θ (figure 3(a)). The Hough Transform then selects the best line in the region of negative angles, and the best line in the region of positive angles.

These detected lines are not yet valid as a calibration pattern. Some experiments were carried out trying to compare two lines by their parameters, without success. For this reason, what the algorithm compares is two sets of points extracted from the road lines, obtained by evaluating the detected lines at different heights in the image. The first height is a fixed fraction of the vanishing-point height; the subsequent heights are spaced ten pixels apart, until the bottom of the image is reached.
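As an illustration of the first step, here is a minimal numpy sketch of the row-wise peak pairing. It is one plausible reading of the procedure rather than the authors' implementation, and the gradient threshold and admissible line widths (thresh, min_w, max_w) are invented for the example, since the paper does not state them.

```python
import numpy as np

def mark_lane_pixels(gray, min_w=2, max_w=40, thresh=30.0):
    """Mark pixels belonging to road lines (sketch of Section 2.2, step 1).

    In the horizontal gradient, a white stripe on darker asphalt gives a
    positive peak on its left edge and a negative peak one line-width to
    the right; the positive peak is marked as a road-line pixel.
    """
    g = gray.astype(float)
    grad = np.zeros_like(g)
    grad[:, 1:-1] = g[:, 2:] - g[:, :-2]          # correlation with [-1, 0, 1]
    marks = np.zeros(gray.shape, dtype=bool)
    for r in range(gray.shape[0]):                # scan the image row by row
        pos = np.flatnonzero(grad[r] > thresh)    # candidate left edges
        neg = np.flatnonzero(grad[r] < -thresh)   # candidate right edges
        for p in pos:
            # keep the positive peak only if an opposite peak lies at a
            # plausible line width to its right
            if np.any((neg > p + min_w) & (neg < p + max_w)):
                marks[r, p] = True
    return marks
```

The marked pixels would then feed the Hough Transform of the second step; with OpenCV, for instance, cv2.HoughLines applied to marks.astype(np.uint8) * 255 returns (ρ, θ) candidates from which the best line of each θ sign can be kept.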
2.3 Fitness function

Each individual of the genetic algorithm's population represents a position and orientation of the stereo-vision system. The GA should evaluate that: (a) the projections of the pattern from both cameras onto the road are identical, and (b) the road lines are parallel. Accordingly, these requirements are evaluated by an error function composed of two terms.

The first term evaluates whether the two pairs of lines match. The algorithm compares the two sets of points extracted from the road lines, as already explained.

Fig. 3. Calibration pattern: (a) Hough parameterization (ρ-θ); (b) calibration pattern, where (+) represents the pattern from the right camera and (o) the pattern from the left camera

Each point projected onto the road (through the inverse perspective transformation) from the right camera should coincide with its projection from the left camera. The discrepancy between the two sets of lines can be evaluated by the sum of the squared distances between each point and its correspondent from the other camera:

E_1 = \sum_i \| x_{i,left} − x_{i,right} \|^2   (3)

where x_{i,left} and x_{i,right} are the sets of points extracted from the left and right camera, respectively.

The second term evaluates the parallelism between the right and left borders of the lane. In Eq. (4) and Eq. (5), ϑ is the angle between the line and the horizontal axis; subscripts 1 and 2 denote the first and second line in the image (i.e., the right and left lane border), while subscripts right and left denote the right and left camera, respectively:

E_2 = |ϑ_{right,1} − ϑ_{right,2}| + |ϑ_{left,1} − ϑ_{left,2}|   (4)

E_2 = min(|ϑ_{right,1} − ϑ_{right,2}|, |ϑ_{left,1} − ϑ_{left,2}|)   (5)

Equation (4) does not work: experiments using it show that it yields a very high error even when the parameters are close to the true ones. In contrast, Eq. (5) gives better results; it expresses that at least one pair of lines is well aligned.

Both terms E_1 and E_2 should be zero (or nearly zero) in the perfect case. A fitness function must now be built from them. Experience has proved that it is indispensable for the fitness function to give both terms the same importance, and to have a high resolution close to the true solution. Hence, E_1 and E_2 are normalized and transformed into F_1 and F_2, as in Eq. (6). K is a constant that defines an upper limit (1/K), the same for both terms, so that they have the same importance; the use of the inverse increases the resolution close to the global maximum:

F_1 = 1/(E_1 + K),  F_2 = 1/(E_2 + K)   (6)

The global fitness function cannot be a simple sum of F_1 and F_2, because the genetic algorithm then tends to maximize only one of them at the expense of the other. Thus the fitness function has been defined as the minimum of the two terms:

Fitness = min(F_1, F_2)   (7)

In the experiments, the constant K has been empirically set to 10^-4, so that the fitness is constrained to the interval [0, 10^4].
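In code, the error and fitness of Eqs. (3)-(7) reduce to a few lines. The sketch below follows the paper's definitions, but the function names and the data layout (an N×2 array of road-plane points per camera, and one border angle per line and per camera) are choices of this illustration.

```python
import numpy as np

K = 1e-4  # empirical constant of Section 2.3; the fitness lies in [0, 1/K]

def error_terms(pts_left, pts_right, th_left, th_right):
    """E1 (Eq. 3): summed squared distances between corresponding points
    projected onto the road plane from each camera.
    E2 (Eq. 5): angle difference of the best-aligned pair of lane borders."""
    E1 = np.sum((pts_left - pts_right) ** 2)
    E2 = min(abs(th_right[0] - th_right[1]), abs(th_left[0] - th_left[1]))
    return E1, E2

def fitness(pts_left, pts_right, th_left, th_right, k=K):
    """Eqs. (6)-(7): inverse-normalize both error terms and keep the minimum,
    so that neither term can be maximized at the expense of the other."""
    E1, E2 = error_terms(pts_left, pts_right, th_left, th_right)
    return min(1.0 / (E1 + k), 1.0 / (E2 + k))
```

A genetic algorithm then maximizes this fitness over candidate (H, α, β) triples, each candidate being used to reproject the pattern before scoring; for a quick experiment, minimizing the negated fitness with scipy.optimize.differential_evolution over bounds on the three parameters would play a role similar to the authors' real-valued GA.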

3 Results

Tests have been carried out in both synthetic and real environments. The behaviour of the algorithm in a real environment has been tested on the IvvI platform (figure 4). IvvI is a research platform for the implementation of systems based on computer vision, with the goal of building an Advanced Driver Assistance System (ADAS). However, a real environment does not allow the estimation to be evaluated, because it is difficult to measure the true parameters. Synthetic images (figure 5) have therefore been used to evaluate the performance of the algorithm. These images have been generated through a perspective transformation with the same intrinsic parameters as the cameras mounted on the IvvI (Table I).

Fig. 4. (left) IVVI vehicle; (top-right) detail of the trinocular vision system; (bottom-right) detail of the processing system

Fig. 5. Road projected onto the cameras of the stereo-vision system: (a) road model; (b) left camera; (c) right camera

TABLE I. INTRINSIC PARAMETERS
Image width: 640 pixels
Image height: 480 pixels
Focal distance: 6.28 mm
CCD width: 7.78 mm
CCD height: 3.8 mm
CCD X center: 342.8 pixels
CCD Y center: 261.43 pixels
Distance between cameras: 148.1 mm

Table II shows the parameters of the genetic algorithm (GA), and Table III shows the output of the GA and its comparison with the true parameters. For the first set of parameters it also includes the error of the estimation and the variance of the estimated parameters, calculated from ten executions of the algorithm.

TABLE II. GA PARAMETERS
Population representation: real-valued
Population: 1
Number of parents: % of the population
Number of children: % of the population
Crossover probability: 7%
Mutation probability: 1%
Max. number of generations:

TABLE III. RESULTS OF THE GA
Columns: Height (mm); Pitch (rad) (deg); Roll (rad) (deg)
True: 1.17.74 -. -.1
Estimated: 148.16.6 -.3 -.
Variance: 1.34 1 11 2 1 7
Error: 42..1..3.2
True: 1.17.74..16
Estimated: 1484.168.62.81 4.6
True: 11.17.74 -. -.16
Estimated: 16.168.64 -.1 -.74

The variance of the result is about 10^-2 in height, 10^-8 in pitch, and 10^-10 in roll; that is, the convergence of the algorithm is quite robust. Fig. 6 shows an example of an execution. The plot on the left represents the error of the solution and the convergence of the GA; the plot on the right shows the correspondence between the points projected from the right camera (crosses) and from the left camera (circles). It also shows (with dots) the points projected onto the road using the true parameters. In other words, the dots represent the true position of the lane on the road, while the crosses and circles represent its supposed position under the estimated parameters. The error is about 1 meter at 30 meters ahead (3%), and about 50 centimeters at 40 meters (1.3%).

Fig. 6. GA execution example; (+) points projected from the right camera; (o) points projected from the left camera; (·) points projected using the true parameters

In order to recreate a real environment, and to study the sensitivity of the algorithm to errors in the pattern extracted from the image, Gaussian noise has been added to the synthetic images.

TABLE IV. RESULTS WITH GAUSSIAN NOISE ADDED TO THE ORIGINAL IMAGES
Columns: Height (mm); Pitch (rad) (deg); Roll (rad) (deg)
True: 1.17.74 -. -.16
5% noise: 14.16.6 -. -.16
10% noise: 1677.17.72 -.42 -2.42
20% noise: 1663.17.71 -.4 -2.28

Fig. 7. Results with noise added to the original images, for (a) 5%, (b) 10% and (c) 20% noise; (left) convergence of the GA and error of the solution; (right) projection of the pattern in world coordinates, where + are points projected from the right camera, o are points projected from the left camera, and · are points projected using the true parameters

Table IV and figure 7 show the results of an experiment with different amounts of noise. The experiments show that noise mainly affects the height and roll estimation, while the pitch remains nearly unaffected. With 5% noise, the algorithm remains unaffected: the error is about 1 meter at 30 meters ahead (3%), and about 50 centimeters at 40 meters (1.3%). With over 10% noise, the error is about 40 centimeters at 4 meters ahead (10%), and 5 meters at 30 meters ahead (16%). It can also be seen that the algorithm always converges in fewer than 20 generations.

Table V shows the results obtained from a sequence in a real environment, while figure 8 shows the first frame of the sequence and the output of the preprocessing steps.

Fig. 8. Lines detected and used to generate the calibration pattern: (a) left camera; (b) right camera; (c), (d) markings detected; (e), (f) lines detected

In this sequence the vehicle has just finished a curve to the left, so it is difficult to separate the variability that comes from the algorithm from the variability that comes from the inertial movements of the vehicle. The results are coherent with the location and orientation of the stereo system on board the vehicle.

4 Conclusion and Perspectives

A self-calibration algorithm that estimates the height, pitch and roll of a stereo-vision system has been presented. Precision is very good with noise below 10%, and the location of objects is estimated with less than 3% error 30 meters ahead. With noise over 10% the error increases, because the extraction of the road lines is not accurate enough; if the image is highly degraded, only the pitch is estimated accurately, and the position error of objects is about 16%. The tests in a real environment have proved the consistency and convergence of the algorithm. Future work involves improving the extraction of the road lines, in order to increase the fitting precision so that the algorithm can deal with highly degraded images. Also, fitting a curved road model will make the algorithm work not only in straight sections of road, but in curved ones too.

TABLE V. RESULTS FROM A SEQUENCE OF TEN CONSECUTIVE FRAMES CAPTURED FROM A CAR DRIVING AT 80 KM/H
Columns: Frame; Height (m); Pitch (rad) (deg); Roll (rad) (deg)
1.1  -.2  -.11  -.32  -2.24
2.  -.32  -.1  -.367  -2.1
3.26  -.72  -.41  -.24  -1.6
4.7  -.74  -.43  -.312  -1.7
1.2  -.131  -.7  -.273  -1.7
6  1.47  -.131  -.7  -.27  -1.6
7  1.28  -.1  -.2  -.26  -1.7
8  1.6  -.  -.31  -.272  -1.6
1.34  -.78  -.4  -.34  -1.8
1.86  -.78  -.4  -.243  -1.3
Avg.  .7  -.76  -.44  -.37  -1.76
Std.  .3  .36  .274  .47  .2688

Acknowledgments

This work was supported in part by the Spanish Government through the CICYT project ASISTENTUR (Grant TRA2004-07441-C03-01).

References

[1] S. Ernst, C. Stiller, J. Goldbeck, and C. Roessig, "Camera calibration for lane and obstacle detection," in Proceedings IEEE/IEEJ/JSAI International Conference on Intelligent Transportation Systems, Tokyo, Japan, 5-8 Oct. 1999, pp. 356-361.
[2] A. Broggi, M. Bertozzi, and A. Fascioli, "Self-calibration of a stereo vision system for automotive applications," in Proceedings 2001 IEEE International Conference on Robotics and Automation (ICRA), vol. 4, Seoul, South Korea, 21-26 May 2001, pp. 3698-3703.
[3] T. Dang and C. Hoffmann, "Stereo calibration in vehicles," in IEEE Intelligent Vehicles Symposium, Parma, Italy, 14-17 June 2004, pp. 268-273.
[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, 1989.
[5] T. Bücher, "Measurement of distance and height in images based on easy attainable calibration parameters," in Proceedings IEEE Intelligent Vehicles Symposium, Detroit, USA, 2000, pp. 314-319.
[6] B. Southall and C. Taylor, "Stochastic road shape estimation," in 8th IEEE International Conference on Computer Vision (ICCV), vol. 1, 7-14 July 2001, pp. 205-212.
[7] H.-J. Lee and C.-T. Deng, "Determining camera models from multiple frames," Journal of Information Science and Engineering, vol. 12, no. 2, pp. 193-214, June 1996.
[8] R. Labayrade and D. Aubert, "A single framework for vehicle roll, pitch, yaw estimation and obstacles detection by stereovision," in Proceedings IEEE Intelligent Vehicles Symposium, 9-11 June 2003, pp. 31-36.
[9] J.-Y. Bouguet, "Camera calibration toolbox for Matlab," last visited April 2006. [Online]. Available: http://www.vision.caltech.edu/bouguetj/calib_doc
[10] J. M. Collado, C. Hilario, A. de la Escalera, and J. M. Armingol, "Detection and classification of road lanes with a frequency analysis," in IEEE Intelligent Vehicle Symposium, Las Vegas, Nevada, USA, 6-8 June 2005, pp. 78-83.