Sensory Augmentation for Increased Awareness of Driving Environment

Pranay Agrawal
John M. Dolan

Dec. 12, 2014

Technologies for Safe and Efficient Transportation (T-SET) UTC
The Robotics Institute
Carnegie Mellon University
Pittsburgh, Pennsylvania 15213

DISCLAIMER The contents of this report reflect the views of the authors, who are responsible for the facts and the accuracy of the information presented herein. This document is disseminated under the sponsorship of the U.S. Department of Transportation's University Transportation Centers Program, in the interest of information exchange. The U.S. Government assumes no liability for the contents or use thereof.

Table of Contents
Problem
Approach/Methodology
Findings
Conclusions/Recommendations
Works Cited

Problem

The goal of this project was to detect road boundaries and stationary obstacles on the road using low-cost automotive-grade LiDAR scanners, for the purpose of lateral positioning of the vehicle along the road. For autonomous driving, this information is especially useful on roads without proper lane markings or in regions with poor GPS reception, such as tunnels, dense urban environments, or under a heavy tree canopy. For manual driving, it can similarly be used as input to lane-centering assist systems.

Approach/Methodology

For road boundary detection, we use the 6 IBEO LiDAR scanners installed in the GM-CMU Autonomous Driving Lab's Cadillac SRX vehicle, which give 360-degree coverage around the car [1]. Figure 1 shows the location of the LiDAR scanners and radar in the car, and Figure 2 shows the range and field of view of these scanners along with their blind spots.

Figure 1: Sensor installation
Figure 2: Perception system coverage with all sensors integrated. a) Maximum ranges b) Sensor field-of-view (FoV) and coverage close to the car

Using these data, we model the road boundaries as a clothoid [2], which defines the road curvature as a function of (longitudinal) road length l:

    κ(l) = c₀ + c₁·l,   β(l) = β₀ + ∫₀ˡ κ(ς) dς

where c₀ is the initial curvature and c₁ is the curvature derivative. The boundaries are numerically represented by their 3rd-order Taylor series expansion

    x(y) = x₀ + β·y + (c₀/2)·y² + (c₁/6)·y³

where x₀ is the lateral offset between the boundary and the center of the vehicle, β is the heading angle with respect to the vehicle's driving direction, c₀ is the curvature, and c₁ is the curvature rate. Here, x is the vehicle's lateral direction and y is the vehicle's longitudinal direction. The road is thus assumed to be curved in x as a cubic function of y.
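To make the boundary model concrete, the sketch below evaluates this cubic for a given parameter vector. The function name and the sample parameter values are illustrative assumptions, not values from the report.

```python
import numpy as np

def boundary_lateral_offset(y, x0, beta, c0, c1):
    """Evaluate the clothoid boundary model x(y) = x0 + beta*y + (c0/2)*y^2 + (c1/6)*y^3.

    y    : longitudinal distance(s) ahead of the vehicle [m]
    x0   : lateral offset of the boundary from the vehicle center [m]
    beta : heading angle of the boundary w.r.t. the driving direction [rad]
    c0   : curvature [1/m]
    c1   : curvature rate [1/m^2]
    """
    y = np.asarray(y, dtype=float)
    return x0 + beta * y + 0.5 * c0 * y**2 + (c1 / 6.0) * y**3

# Hypothetical example: a boundary 3.5 m to the left, curving gently
ys = np.linspace(0.0, 50.0, 11)   # sample the next 50 m of road
xs = boundary_lateral_offset(ys, x0=3.5, beta=0.01, c0=-1e-3, c1=1e-5)
print(np.column_stack([ys, xs]))
```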

Prior work on road edge detection using LiDAR, and LiDAR in combination with vision, used downward-looking scanners whose scan planes make a large angle with the road [3, 4]. These scans are then integrated over time into a 3-D rolling window from which the road boundary is derived. In our system, the scanning planes are almost parallel to the ground, and each LiDAR scanner has 4 scanning planes with a total vertical field-of-view of 3.2 degrees. Because of this, we are able to scan a long section of the road boundary with a single LiDAR scan, as shown in Figure 3, and obtain its shape. These road boundaries can then be tracked over time to account for inaccuracies in the LiDAR sensing and the car's odometry, yielding a robust road boundary even when the correct boundary cannot be obtained in every frame. A limitation of our sensing capability is that we are unable to detect curbs lower than ~8 inches in height, as we do not get the number of points required for a good detection. Most high curbs, guardrails, Jersey barriers, and tunnel walls are detected well.

Figure 4 shows the steps taken to obtain the road boundaries from each frame and track them over time. To obtain the road boundaries from each LiDAR frame, we first preprocess the IBEO data to obtain a database of all points that could potentially be associated with the road boundary. This filtering is based on each point's position relative to the car, the sensor from which the reading comes, and the IBEO's internal filtering.
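A minimal sketch of this preprocessing step is shown below, assuming points arrive as an N×3 array in the vehicle frame together with a per-point sensor ID. The thresholds and parameter names are illustrative assumptions, not values from the report.

```python
import numpy as np

def preprocess_points(points, sensor_ids, valid_sensors,
                      max_lateral=20.0, max_range=80.0):
    """Keep only points that could plausibly belong to a road boundary.

    points        : (N, 3) array of [x, y, z] in the vehicle frame
                    (x lateral, y longitudinal)
    sensor_ids    : (N,) array identifying the originating LiDAR scanner
    valid_sensors : scanner IDs considered for boundary detection
    """
    x, y = points[:, 0], points[:, 1]
    keep = (
        (np.abs(x) < max_lateral)                    # plausible lateral band
        & (y > 0.0) & (y < max_range)                # ahead of the car, in range
        & np.isin(sensor_ids, list(valid_sensors))   # from a relevant scanner
    )
    return points[keep]
```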

Figure 3. Raw IBEO scan at a single time with visible road boundary (a) inside a tunnel (b) on a highway

With the set of preprocessed points, we want to group the points into clusters such that all the points representing a particular boundary fall into a single cluster. To do so, we find the direction of maximum variance of the data set using Principal Component Analysis (PCA); this direction is assumed to correspond to the direction of the road. The points are then compressed along the PCA direction, and k-means clustering is applied with k = αn clusters, where n is the total number of points and α is a suitable constant.
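The sketch below illustrates this grouping step with numpy and scikit-learn. The compression factor and the value of α are assumptions for illustration, not values from the report.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_boundary_points(points_xy, alpha=0.02, compress=0.1):
    """Cluster 2-D LiDAR points so each road boundary tends to form one cluster.

    points_xy : (N, 2) array of [x, y] points in the vehicle frame
    alpha     : cluster-count constant (k = alpha * n); illustrative value
    compress  : shrink factor along the road direction before k-means
    """
    # Direction of maximum variance ~ road direction.
    pca = PCA(n_components=2).fit(points_xy)
    aligned = pca.transform(points_xy)   # first column: along-road axis

    # Compress along the road so k-means separates boundaries laterally.
    aligned[:, 0] *= compress

    k = max(1, int(alpha * len(points_xy)))
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(aligned)
    return labels
```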

Figure 4(a) shows a set of clustered preprocessed points. Following that, we fit a cubic polynomial to each cluster using a RANSAC search. Using RANSAC rather than a plain curve-fitting technique ensures that the curve fits a section of the points very precisely while ignoring outliers. Clusters that fail to give a good cubic fit, along with the outliers of the other clusters, are removed from the dataset, giving us a filtered set of points that may correspond to a road boundary.

Figure 4. (a) Raw IBEO scan grouped into different clusters, shown by different colors. (b) Line fits generated using RANSAC for each cluster, shown in red. Of these, 3 lie on the left of the road, 4 on the right of the road, and 1 inside the road. (c) Final cubic road boundaries after the UKF, shown in blue and red, which take into account the boundaries over previous time intervals and are more robust to sudden changes. The green and black lines are the current-time-instant cubic fits.

On this set of filtered points, we again apply the PCA compression and k-means clustering, rotate the points so that the road's direction is y, and fit lines to each new cluster, as shown in Figure 4(b). This is done to identify the clusters that give us the most probable road boundary, as in many cases we get several good curve fits on both sides of the road; a sketch of the per-cluster RANSAC fit follows below.
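As an illustration of the per-cluster fitting step, here is a minimal RANSAC cubic fit in numpy. The iteration count and inlier threshold are assumptions, not values from the report.

```python
import numpy as np

def ransac_cubic_fit(x, y, n_iters=200, inlier_tol=0.15, min_inliers=20, rng=None):
    """Fit x = a0 + a1*y + a2*y^2 + a3*y^3 robustly with RANSAC.

    x, y       : point coordinates (x lateral, y longitudinal)
    inlier_tol : max lateral residual [m] for a point to count as an inlier
    Returns (coeffs, inlier_mask), or (None, None) if no good fit is found.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_coeffs, best_mask, best_count = None, None, 0
    for _ in range(n_iters):
        idx = rng.choice(len(y), size=4, replace=False)  # 4 points define a cubic
        coeffs = np.polyfit(y[idx], x[idx], deg=3)       # highest order first
        residuals = np.abs(np.polyval(coeffs, y) - x)
        mask = residuals < inlier_tol
        if mask.sum() > best_count:
            # Refine on all inliers of the best hypothesis so far.
            coeffs = np.polyfit(y[mask], x[mask], deg=3)
            best_coeffs, best_mask, best_count = coeffs, mask, mask.sum()
    if best_count < min_inliers:
        return None, None
    return best_coeffs, best_mask
```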

Out of this set of new lines, we rate the probability that each corresponds to the actual road boundary based on its distance to the car's center, its slope, the range and density of points in the cluster, the number of points representing the curve, etc., and we find the lines on both sides of the car with the highest probabilities. The clusters to which those fits belong are kept, while the others are removed from the dataset of LiDAR points. We then transform these points back to the ego-car perspective, find the best cubic fits for both sides of the road using RANSAC, and determine their probability of representing the actual road boundary. If that probability is high, we designate those curves as the road boundaries for that time step.

To track the left and right boundaries of the road over time, we use an unscented Kalman filter (UKF). The UKF uses an unscented transformation to model nonlinear dynamics [5], which often works better than an extended Kalman filter, which approximates nonlinear dynamics by linearizing with the Jacobian. Here, the state is the road boundary expressed as x = [x₀, β, c₀, c₁], and the current road boundary, expressed as a set of points {(xᵢ, yᵢ)}, i = 1, …, n, is taken as the observation. Figure 4(c) shows the road boundary after implementing the UKF tracking.

For detecting static obstacles on the road, such as parked cars, trailers, etc., we pick out all the points that lie inside the current road boundaries in the original dataset of LiDAR scans. Among these points, we find high-density clusters and fit lines perpendicular to the road boundaries. These clusters and lines are then tracked over time to determine their velocities relative to the car. The obstacles that appear to be stationary are then included in the static road map.
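One way to realize this tracker, sketched below with the filterpy library (the report does not name a library), is to take as the observation the boundary's lateral offset at a few fixed look-ahead distances. The sample distances, noise levels, and the random-walk process model are assumptions for illustration.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

Y_SAMPLES = np.array([5.0, 15.0, 30.0, 45.0])  # fixed look-ahead distances [m] (assumed)

def fx(state, dt):
    # Process model (assumed): boundary parameters evolve as a slow random walk
    # between frames; the process noise Q absorbs vehicle motion.
    return state

def hx(state):
    # Measurement model: predicted lateral offsets at Y_SAMPLES from the cubic
    # x(y) = x0 + beta*y + (c0/2)*y^2 + (c1/6)*y^3.
    x0, beta, c0, c1 = state
    y = Y_SAMPLES
    return x0 + beta * y + 0.5 * c0 * y**2 + (c1 / 6.0) * y**3

sigmas = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=len(Y_SAMPLES), dt=0.1,
                            fx=fx, hx=hx, points=sigmas)
ukf.x = np.array([3.5, 0.0, 0.0, 0.0])      # initial boundary state (assumed)
ukf.Q = np.diag([0.05, 1e-3, 1e-5, 1e-7])   # process noise (assumed)
ukf.R = np.eye(len(Y_SAMPLES)) * 0.1**2     # measurement noise (assumed)

# Per frame: z = lateral offsets of the detected cubic sampled at Y_SAMPLES.
# ukf.predict(); ukf.update(z)
```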

Findings

In this work, our goal was to reliably detect road boundaries on highways and in tunnels using relatively inexpensive automotive-grade LiDAR sensors, as opposed to high-cost, high-capability sensors such as the $70,000 Velodyne spinning laser sensor used to obtain a high-density 3-D range point cloud. Such a sensor is not practical for production automobiles from a cost or appearance/integration standpoint.

Figure 5. Final road boundaries shown in blue and red, with green and black lines being the cubic fits for that time instant, for (a) a tunnel and (c) a highway with guardrails; the corresponding visual data are shown in (b) and (d).

Figure 5(a) shows an example of a road boundary detected inside a tunnel, with the corresponding camera view in (b). Due to the lack of noise inside the tunnel and the good data points from the high curb, we are able to obtain accurate road boundaries. Figure 5(c) shows an example of a road boundary detected on a highway with guardrails (d). Due to the lack of a good boundary on the right, we are not able to identify it accurately.

More generally, in regions where the road boundary is high and not very far from the car (<10 meters on either side), such as inside a tunnel or on a highway with high Jersey barriers, the road boundary is accurately detected at every time step and smoothly tracked over time. In regions with no boundary clearly observable by the LiDAR, such as a short guardrail or a low curb, the boundaries are detected only approximately 65% of the time, and even with the UKF they cannot be tracked well most of the time.

Conclusions/Recommendations

This project lays the groundwork for the detection of road boundaries along highways and tunnels using relatively inexpensive automotive-grade LiDAR. This is very useful for lateral positioning of the vehicle along the road, especially on roads without proper lane markings or in regions with poor GPS reception. We are expanding this work toward detecting and mapping stationary obstacles on the road, including parked cars, and building a static map of the road.

Future work should include:
- Refining the road-boundary detection and using adaptive algorithms that deal with varying scenarios
- Understanding the various failure scenarios and modifying our approach to handle them better

- Using radar along with LiDAR to track vehicles along the road and differentiate parked cars from temporarily stopped cars
- Building a static road map by combining the road boundaries and the stationary obstacles inside the road

Works Cited

[1] J. Wei et al., "Towards a viable autonomous driving research platform," in IEEE Intelligent Vehicles Symposium (IV), 2013.
[2] E. D. Dickmanns and B. D. Mysliwetz, "Recursive 3-D road and relative ego-state recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 199-213, 1992.
[3] W. Zhang, "LIDAR-based road and road-edge detection," in IEEE Intelligent Vehicles Symposium (IV), 2010.
[4] B. Qin et al., "Road detection and mapping using 3D rolling window," in IEEE Intelligent Vehicles Symposium (IV), 2013.
[5] S. J. Julier and J. K. Uhlmann, "An extension of the Kalman filter to nonlinear systems," in Proceedings of SPIE Signal Processing, Sensor Fusion, and Target Recognition, 1997.