Vision Based Tangent Point Detection Algorithm, Evaluation and Validation

MVA2009 IAPR Conference on Machine Vision Applications, May 20-22, 2009, Yokohama, Japan

Vision Based Tangent Point Detection Algorithm, Evaluation and Validation

Romain Gallen (romain.gallen@lcpc.fr), Sébastien Glaser (sebastien.glaser@lcpc.fr)
LIVIC - INRETS/LCPC, 14 route de la Minière, 78000 Versailles, France

Abstract

In this article, we show how to find the Tangent Point (TP) and compute the Tangent Point Angle (TPA) in real-time imaging with a monocular in-vehicle camera. To achieve this goal, we first need a ground truth against which to compare the algorithms used to find the TP in real-time road imaging. We propose a method to compute this ground truth based on localization information obtained through the use of an RTK (Real Time Kinematic) GPS and a precise map of the road track. We then present our vision based algorithm, which depends on a lane markings detection system. We detail the needs and requirements of our vision based method, compare it against the ground truth, and give our conclusions on reliability and performance.

1 Introduction

The Tangent Point (TP) is the rightmost point visible on the inside of a left turn and the leftmost point visible on the inside of a right turn (see Fig. 2). Free locomotion and locomotion using vehicles are essentially different: in the first case one may move the body freely, moving different parts (legs, chest, head) independently, whereas in the second case the body is in a vehicle and is mainly still, while only the head can move. According to Land and Lee [4][5], drivers use the TP as a fixation point in order to adapt the steering to the curvature of the road. Salvucci and Gray [7] have proposed a trajectory control model based on the use of specific reference points like the TP, also supporting the fixation point theory. Though Land [6] and Fajen and Warren [1] debate the necessity of optical flow, raw optical flow or fixation points, they consider that the instantaneous heading of the driver (the heading of his chest) is a major piece of information in locomotor tasks and is identical to the yaw angle of the vehicle (as the driver's chest always rests against the seat). This information is different from the direction of the head or of the gaze, but it seems to be useful in locomotor tasks and especially in driving situations. In this framework, identifying precisely the Tangent Point Angle (TPA) with regard to the instantaneous heading of the driver and the vehicle may help in understanding the impact of this position on trajectory management.

We have developed two different approaches. The first one is based on the use of our RTK GPS and a precise map of the road. This method is a post-processing of the data and has been developed in order to provide a ground truth against which to compare the second method, a real-time imaging method that can easily be embedded in a car. Our goal is essentially to find the position of the TP in the image, but also to determine the angle between the yaw and the TP, which we call the Tangent Point Angle (TPA). We will see in section 2 the requirements for computing a ground truth. Then we will see in sections 3 and 4 the tools and the algorithm we use to compute the TP and the TPA in real-time road imaging.

2 Calculation of the ground truth

2.1 Equipments

As the TP strongly depends on the viewing point (i.e. on the path), our ground truth must be computed on the same path as the one corresponding to the images. First, we designed an algorithm that works offline and computes a ground truth TPA for a given path. We dispose of a map of the markings that we gathered using an RTK GPS.
The track is sampled with a point every 10 cm. The track on which we conducted the experiments is 3.5 km long; it is outdoors, so vehicles driving on it are under natural conditions (atmospheric and lighting conditions). During the experiments, the vehicle was equipped with an RTK GPS giving its precise location at a sample frequency of 20 Hz. The quality of the positioning has been validated in [2]. Using the centimetric map of the track and a centimetric positioning of the car, we will expose how to calculate the ground truth TPA that will subsequently be compared with the vision algorithm.

2.2 Denoising yaw estimation

The TPA is an angle measured relative to the yaw. An error on yaw estimation therefore directly leads to a biased estimation of the TPA. In order to precisely compute the TPA from positioning information, we need only three pieces of information: the position of the car on the track, the yaw of the vehicle and a map of the track. We will first compute the yaw angle. In order to do so, we use the successive positions given by the GPS. Though the GPS accuracy is centimetric, it is not exempt from noise, and computing the yaw (i.e. the derivative of the position, see Eq. (1)) from noisy data results in even noisier yaw angle estimations.

Yaw(t) = Position(t) - Position(t-1)    (1)

Fig. 1 shows that the instantaneous yaw calculation is very noisy. We chose to deal with yaw noise by smoothing the trajectory of the vehicle: we perform a 2nd-order polynomial regression around each position, using the information on the path between t - 0.25 s and t + 0.25 s. The yaw is obtained by taking the tangent to the polynomial at the position at time t. This polynomial fitting can only be done offline or with a delay, but it is only needed in the validation process, when computing the ground truth. This process does not eliminate all noise: it removes much of the noise on the yaw angle, but not the positioning error.

[Figure 1: Noise on position / noise on yaw, and denoised yaw estimation]
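As an illustration of this smoothing step, here is a minimal Python sketch (not the authors' code): it fits a 2nd-order polynomial to the GPS positions in a ±0.25 s window around each sample and takes the yaw as the direction of the tangent at the window centre. The 20 Hz sample rate comes from the paper; the array layout and window handling are assumptions.

```python
import numpy as np

def denoised_yaw(positions, t, rate_hz=20.0, half_window_s=0.25):
    """Yaw at sample index t from a local 2nd-order polynomial fit.

    positions: (N, 2) array of (x, y) RTK GPS positions sampled at rate_hz.
    Fits x(s) and y(s) over [t - 0.25 s, t + 0.25 s] and returns the
    direction of the tangent at time t, in radians.
    """
    k = int(half_window_s * rate_hz)             # samples per side (5 at 20 Hz)
    lo = max(0, t - k)
    window = positions[lo:t + k + 1]
    s = np.arange(len(window), dtype=float)      # local parameter along the path
    px = np.polyfit(s, window[:, 0], 2)          # x(s), 2nd-order polynomial
    py = np.polyfit(s, window[:, 1], 2)          # y(s)
    s0 = float(t - lo)                           # position of time t in the window
    dx = np.polyval(np.polyder(px), s0)          # tangent components dx/ds, dy/ds
    dy = np.polyval(np.polyder(py), s0)
    return np.arctan2(dy, dx)
```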

2.3 Finding the Tangent Point

In order to build the ground truth that will be compared with our camera algorithms, we must take into account the real position of the camera inside the vehicle relative to the exact RTK GPS position. The lateral position of the camera in the vehicle impacts the estimation of the TPA by creating an offset. Using our yaw signal, the position of the car, the position of the camera inside the car and the precise location of the markings, we can look for the TP on the map and compute the reference TPA for a given path. The map of the road, though obtained through the use of an RTK GPS, is also noisy.

As exposed in Fig. 2, we compute the angle between the yaw and the side points of the road. Given two functions, the angle to the right side, α_Ri, and the angle to the left side, α_Li, if a TP is present, one of those functions will present a maximum or a minimum at its location. The side containing the TP, and the TP itself, can thus be retrieved as the first zero crossing of the derivative of α_Ri and α_Li (see Fig. 2). The positioning of the side points being noisy, we may find erroneous TPs by simply differentiating the angles to the sides (particularly if one specific point of the track is badly located).

[Figure 2: Methodology used to find the TP: the first zero-crossing of the derivative of the side point angles belongs to the left border and occurs between points L5 and L6. The real TP is between points L5 and L6 (||L_i - L_{i+1}|| ≈ 10 cm). Finally, the TP is set to L5.]
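A minimal sketch of the zero-crossing search described above, under assumed conventions (not the authors' code): the side points are expressed in vehicle-centred coordinates, x pointing forward along the yaw axis and y lateral, ordered by increasing distance from the vehicle.

```python
import numpy as np

def first_tangent_point(side_points):
    """Index of the TP on one road side, or None if no extremum is found.

    side_points: (N, 2) array of marking points in vehicle coordinates
    (x forward along the yaw axis, y lateral), ordered by distance.
    """
    # Angle between the yaw axis and each side point (alpha_Ri or alpha_Li).
    angles = np.arctan2(side_points[:, 1], side_points[:, 0])
    d = np.diff(angles)                # discrete derivative of the angle function
    s = np.sign(d)
    # The TP is the first zero crossing of the derivative, i.e. the first
    # extremum of the angle function along the side.
    for i in range(len(s) - 1):
        if s[i] != 0 and s[i + 1] != 0 and s[i] != s[i + 1]:
            return i + 1
    return None
```

On noisy map points this naive point-to-point derivative is fragile, which is precisely why the paper also evaluates a larger derivative step and a local polynomial smoothing of the side, as described next.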

We tried different ways of computing the TP from the GPS information: taking a simple derivative, taking a large derivative, and finally roughly estimating the position of the TP with a large derivative, then approximating the side near this point with a 2nd-order polynomial and calculating the tangent to this polynomial passing through the camera optical center. As we can see in Fig. 3, the natural derivative is very sensitive to noise, while the large derivative and the large derivative + side smoothing are almost equally efficient (considering the distance to the TP in Fig. 3, the TPAs estimated with the last two methods are almost equivalent).

[Figure 3: TP detection in a curve and on a straight line. Two plots compare the TPs computed with the different methods (natural derivative, large derivative, large derivative + side smoothed) against the yaw, on a curved road and on a straight road.]

On the digital map of the track we can look several kilometers ahead of the car if we need to, but obviously a driver never has such visibility. In order to be coherent with reality, we do not look for a TP further than 50 meters away, for multiple reasons. First, fixation point theories are mostly invoked for curved trajectories, so this information is of lesser use on straight lines. Second, the angles to points further than 50 meters ahead are very low (on the order of 0.1 rad) and hardly differentiated by the camera because of the CCD resolution or limited atmospheric visibility. Finally, the operational range of our road following algorithm is usually less than 50 m.

We have computed this ground truth for comparison with the TPA value estimated using the camera. The RTK GPS positions of the car, logged at 20 Hz, were timestamped, as were the images taken by the camera, which were timestamped when registered. We compare the TPA estimation from the camera with the ground truth TPA corresponding to the position closest in time to the image.

3 Tools

3.1 Lane markings detector

The lane detection system of Labayrade and Douret [3] has a few working steps. First, primitive features of lane markings are detected. Those primitives are detected using two different algorithms: one uses a laterally consistent detection on the two sides of the markings, and the other relies on longitudinal coherence along each side of the markings. Those detections are then confirmed by a higher level process that combines the outputs of the low-level algorithms and, using a Kalman filter, can predict the position of the lanes in the following image, thus accelerating the detections and making them more robust.

[Figure 4: Lane detection algorithm]

Though robust, this road following system is based on a frontal monocular camera and therefore has limitations. If the image is hardly exploitable due to harsh lighting conditions or atmospheric disturbances, or if the markings are unusable (quality, quantity, particularly unfavourable position), the lane markings following algorithm will provide wrong results. In those situations, the input of the TP detection algorithm is wrong, resulting in a false TP estimation.

3.2 Geometric calibration of the camera

Some information, such as the focal length and the position of the optical center of the camera, is necessary in order to compute the TPA precisely from geometric features extracted from the image. Given the lateral position of a marking in the image, as we are only interested in the angle between the yaw and this marking (i.e. the lateral component of the 3D angle between the optical axis and the marking), we use the following equation:

TPA = arctan((X_c - X_0) / α)    (2)

with X_c the lateral position of the marking (column number), X_0 the lateral position of the optical center of the camera in pixels, and α the focal length in pixels.
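Eq. (2) is a one-line conversion from an image column to a lateral angle. The sketch below uses the optical-centre column reported by the authors' calibration (390 px for the 576x768 PAL image); the focal length in pixels is a placeholder, since the paper does not state its value.

```python
import math

def tpa_from_column(x_c, x_0=390.0, alpha_px=800.0):
    """Eq. (2): TPA = arctan((X_c - X_0) / alpha).

    x_c: column of the TP marking in the image (pixels).
    x_0: column of the optical centre (390 px, from the paper's calibration).
    alpha_px: focal length in pixels (placeholder, not stated in the paper).
    """
    return math.atan((x_c - x_0) / alpha_px)   # signed lateral angle in radians
```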
If the manufacturer's information on the focal length is wrong, we may find angle values that are dilated or compressed. But the position of the optical center of the camera is never given by the manufacturer, and it is just as important as the focal length: any shift in its position leads to an offset on the estimation of any angle computed from pixel locations. In our case, for a PAL image of 576x768 px, we estimated the position of the optical center using Jean-Yves Bouguet's Camera Calibration Toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/) and found it to be at 231x390 px (compared with a theoretical position of 288x384 px).

4 Vision based detection

We have tested different methods to find the TP in images. All those methods take as input the positions coming from the lane marking detection algorithm. For example, we tried the same approach as in the ground truth computation (derivative of the positions of the marking points), but it lacked efficiency because of the small number of marking points detected, and because of their sampling compared with the number of GPS points of the track at our disposal. We now present the best method we have, which is based on prototypes of turns. This method computes the TP only when approaching a turn or in a turn.

We proceed in two steps: first finding the model of the road ahead (straight line or turn), using only the points detected and confirmed as markings; then, if we detect a turn, looking for the TP and computing the TPA on the right lane if we are in a right turn and on the left lane if we are in a left turn. In order to find the model of the road, we define prototypes of turns according to the beginnings and the ends of the markings detected (see Fig. 5). Those prototypes are used to determine which side is likely to contain the TP. If the evolution of the beginnings and the ends of the markings is typical of a straight path (i.e. if both sides tend to converge toward one point), we do not try to find a TP.

[Figure 5: Prototypes of turn]

Once the side containing the TP is known, we compute all possible angles between the yaw axis and the markings of this side and we keep the lowest one. The value of this angle in radians is obtained using Eq. (2) with the geometric parameters of the camera obtained through the calibration process.
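The two-step logic can be sketched as follows, with loud caveats: the road-model label and the per-side marking columns are assumed inputs (the paper obtains them from the prototype matching of Fig. 5 and the lane detector of [3], neither of which is reproduced here), and the sign conventions are ours. Only the selection of the extremal marking angle and the Eq. (2) conversion are taken from the text.

```python
import math

def vision_tpa(left_cols, right_cols, road_model, x_0=390.0, alpha_px=800.0):
    """Two-step vision-based TPA estimation (sketch, not the authors' code).

    left_cols, right_cols: image columns of confirmed marking points per side,
    as provided by a lane marking detector.
    road_model: 'straight', 'left_turn' or 'right_turn', assumed to come from
    the prototype-of-turns classification.
    Returns the TPA in radians, or None when the road is classified straight.
    """
    if road_model == 'straight':
        return None                          # no TP is sought on a straight road
    # The TP lies on the inside of the turn: left markings in a left turn,
    # right markings in a right turn.
    cols = left_cols if road_model == 'left_turn' else right_cols
    angles = [math.atan((c - x_0) / alpha_px) for c in cols]   # Eq. (2)
    # Keep the angle of smallest magnitude ("the lowest one"): the rightmost
    # left marking in a left turn, the leftmost right marking in a right turn.
    return max(angles) if road_model == 'left_turn' else min(angles)
```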

[Figure 6: TP detection in a curve]

We present in Fig. 7 our results for the estimation of the TPA in a series of left turns. The error is below 0.03 rad whenever both the ground truth TPA and the vision-estimated TPA are defined. Zero values occur when no TP has been found, whether because the prototype of the road ahead has been classified as a straight road (no TP is sought) or, sometimes, because the road following algorithm no longer provides confirmed markings.

[Figure 7: Tangent Point Angle in left turns. Comparison of the ground truth TPA with the real-time vision based estimation of the TPA; y-axis: Tangent Point Angle in rad, x-axis: indices of track points (3500-4100).]

5 Conclusion

In this work we have shown a method that computes the Tangent Point Angle in real time with an in-vehicle monocular camera. One difficult part we overcame was to build a ground truth algorithm that could estimate the TPA for a given run. The results we obtain are very close to the ground truth and the false positive rate is low. With the road prototypes method, interruptions of the algorithm are rare, and we could filter them in real time by using 2 or 3 buffered images. That would introduce a small delay (40-80 ms) but would make the output continuous in turns. We have validated the monocamera algorithm and shown that it can be used in any car, provided that we calibrate the camera beforehand, take its position in the car into account, and that our road following algorithm provides us with the positions of the markings. We have already provided a version of this algorithm to psychology and cognition researchers, who will use it in combination with an embedded oculometer in order to better understand what makes a secure trajectory.

References

[1] B.R. Fajen and W.H. Warren: Go with the flow, Trends in Cognitive Sciences, 4(10):369-370, 2000.
[2] S. Glaser, S. Mammar, M. Netto and B. Lusetti: Experimental time to line crossing validation, Intelligent Transportation Systems Conference, Austria, 2005.
[3] R. Labayrade, J. Douret, J. Laneurit and R. Chapuis: A reliable and robust lane detection system based on the parallel use of three algorithms for driving safety assistance, IEICE Trans. Inf. Syst., E89-D(7), 2006.
[4] M.F. Land and D.N. Lee: Where we look when we steer, Nature, 1994.
[5] M.F. Land: Vision and Action, chapter "The visual control of steering", pages 163-180, Cambridge University Press, 1998.
[6] M.F. Land: Motion Vision - computational, neural, and ecological constraints, chapter "Does steering a car involve perception of the velocity flow field?", Springer Verlag, Berlin Heidelberg, New York, 2001.
[7] D.D. Salvucci and R. Gray: A two-point visual control model of steering, Perception, 33(10):1233-1248, 2004.