Collision Risk Assessment to Improve Driving Safety

Christian Laugier, Igor E. Paromtchik, Christopher Tay, Kamel Mekhnacha, Gabriel Othmezouri, Hiromichi Yanagihara
INRIA Grenoble Rhône-Alpes, Saint Ismier, France
ProBayes, Saint Ismier, France
Toyota Motor Europe, B-1930 Zaventem, Belgium

Abstract

Robust analysis of dynamic scenes in urban traffic environments is needed to estimate and predict the collision risk level during vehicle driving. Risk estimation relies on monitoring the traffic environment of the vehicle by means of on-board lidars and a stereo camera. Collision risks are treated as stochastic variables. A Hidden Markov Model and Gaussian processes are used to estimate and predict collision risks and the likely behaviors of multiple dynamic agents in road scenes. The proposed approach to risk estimation is tested in a virtual environment with human-driven vehicles and during highway driving. The obtained results demonstrate the feasibility of our approach to assisting the driver in avoiding potentially dangerous situations.

Index Terms: Collision risk, urban road, driver assistance, Hidden Markov Model, Gaussian process

I. INTRODUCTION

A. Problem statement

The urban traffic environment with multiple participants contains risks of potential collision and damage. Vehicle safety technologies (e.g. seat belts, airbags, safety glass, energy-absorbing frames) mitigate the effects of accidents. Advanced technologies will be capable of monitoring the environment to estimate and predict collision risks during vehicle driving, in order to help reduce the likelihood of accidents occurring. Risk management by traffic participants is an efficient way to improve traffic safety toward zero-collision driving. The key problem is to correctly interpret the traffic scene by processing information from a variety of sensors [1]. A collision risk level can be predicted a few seconds ahead to warn the driver about unnoticed potential risks.
The estimated risk of collision can also be used to select a trajectory that minimizes the risks for an autonomous vehicle. The estimation of collision risk relies on sensor information about the surrounding environment and on the driver's behavior. The obtained risk values must be interpreted by the dedicated application. The following set of processed sensor inputs is assumed to be available.

Road geometry. The road width and curvature are obtained by processing raw information from camera images and lidars or, alternatively, from a Geographic Information System (GIS) with a pre-built map and a localization device such as the Global Positioning System (GPS).

Object tracking. The detection and tracking of moving objects is accomplished by dedicated algorithms, i.e. the positions and velocities of the objects are available.

Auxiliary sensors. Information about the signal light status of other vehicles is an indicator of motion intention. Additional virtual sensors are capable of detecting the distances between the vehicles and the lane borders, which might indicate an intention to perform a lane change.

We use the term ego-vehicle to distinguish our vehicle from other vehicles. The ego-vehicle is assumed to be equipped with the appropriate sensors for obtaining the set of inputs mentioned above. The estimated risk is a numerical value which quantitatively expresses the collision risk of the ego-vehicle with another vehicle during the next few seconds. Estimating the future collision risk requires models which describe vehicle motion within the sensor visibility range of the ego-vehicle. Such a model must be capable of reasonably predicting the future states of a vehicle in terms of probability. We present a fully probabilistic model of the evolution of vehicle motion for obtaining and inferring beliefs on the future states of vehicles in an urban traffic environment.
Consequently, the estimated risk of collision is obtained from the models in terms of probability in a theoretically consistent manner.

B. Related work

Current commercially available crash warning systems aim at preventing front, rear, or side collisions. Such systems are usually equipped with radar-based sensors at the front, rear, or sides to measure the velocity of and distance to obstacles. The algorithms for determining the risk of collision are based on variants of time-to-collision (TTC) [2], giving the time remaining before one vehicle collides with another one, assuming that both vehicles maintain their linear velocities. Some systems are capable of directly controlling the brakes, and possibly the steering, to perform the necessary corrective actions. Systems based on TTC use observations made at a reasonably high frequency in order to adapt to a dynamic environment. Current commercial systems work reasonably well on highways or straight sections of city roads. However, the linearity assumption does not hold on curved roads, as shown in figure 1, where the risk level tends to be overestimated. Several research projects overcome this problem by taking into account the structure of the environment, especially at intersections, where the rate of accidents is higher. These projects aim at providing collision warning systems which use wireless communication either between the vehicles or between the vehicle and the road infrastructure, such as traffic

lights [3], [4], [5], [6], [7]. These systems are equipped with a pair of detectors (radars or laser scanners) at the left and right front corners of the vehicle in order to detect cross-traffic vehicles at intersections. The obtained speeds of the objects and the respective TTC are then evaluated to determine the collision risk. Although the environmental structure is taken into consideration, the collision risk calculation still assumes straight motion. The time horizon of risk prediction is short, and crucial environmental information and sensor data are not fully employed.

Figure 1: An example of a false collision alarm triggered by the invalid linearity assumption of TTC-based systems on curved roads.

C. Outline of the approach

The knowledge that an object is at a certain location at a specific time does not provide sufficient information to assess its impact on the safety of the ego-vehicle. In addition, environmental constraints should be taken into account, especially on urban roads. We propose a framework for understanding the behaviors of other vehicles and present our approach in the next section. The relevant sensors include stereo vision, lidars, an inertial measurement unit (IMU) combined with GPS, and odometry. The local environment is represented by a grid. The fusion of sensor data is accomplished by means of the Bayesian Occupancy Filter (BOF) [8], [9], which assigns probabilities of cell occupancy and cell velocity to each cell in the grid. The collision risks are considered as stochastic variables. A Hidden Markov Model (HMM) and Gaussian processes (GP) are used to estimate and predict collision risks and the likely behaviors of multiple dynamic agents in road scenes [10], [11]. Consider vehicle A and ego-vehicle B traveling in the same direction on adjacent lanes, as shown in figure 2. The collision risk must be estimated for vehicle B.
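As a concrete illustration of the TTC criterion discussed in the related work above, the following sketch computes the constant-velocity time-to-collision for two vehicles in one lane. The function name and the one-dimensional setting are illustrative, not part of the cited systems:

```python
def time_to_collision(p_follow, v_follow, p_lead, v_lead):
    """Constant-velocity time-to-collision along one lane (SI units).

    Returns the time in seconds until the gap closes, or None if the
    follower is not closing in on the leader under the linear model.
    """
    gap = p_lead - p_follow            # longitudinal distance, m
    closing_speed = v_follow - v_lead  # m/s, positive when closing
    if closing_speed <= 0.0:
        return None
    return gap / closing_speed

# Follower at 30 m/s, leader 60 m ahead at 20 m/s: the gap closes
# at 10 m/s, so TTC = 6 s.
print(time_to_collision(0.0, 30.0, 60.0, 20.0))
```

A warning system would compare this value against a threshold tied to the driver's reaction time; the linearity assumption criticized above is visible in the constant closing speed.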
From the driver's viewpoint, the road structure is implicitly described by such maneuvers as: move straight, turn right/left, or change lanes. These maneuvers are referred to as behaviors, and a set of possible behaviors is predefined. However, some behaviors are unavailable in certain instances, e.g. it might be impossible to turn left at an intersection because of the road geometry. Lane following for a given behavior is represented by means of a GP, i.e. a probability distribution over the possible future realizations of the paths, with the mean corresponding to exact following of the lane middle. The GP samples for the lane change and move straight behaviors are depicted in figure 2, where the dotted lines represent paths sampled from the GP. For a turn on a curved road, the GP is adapted to reflect the road geometry. The set of GPs for each feasible behavior, in combination with the probability of vehicle A executing a certain behavior, gives a probabilistic model of the future evolution of vehicle A in the scene.

Figure 2: Collision risk estimation for vehicle B relies on predicting the path of vehicle A by sampling from the GP for two possible behaviors in this example: moving straight or changing lanes. The collision risk is obtained as a weighted sum of paths leading to collision.

Similar to the TTC approach, the evaluation of collision risk is performed for vehicle B against vehicle A. In contrast to the TTC's linearity assumption about the future paths of the vehicles, we evaluate the collision risk of the intended path of vehicle B against all possible paths to be taken by vehicle A. The weights are assigned according to the probabilistic model of the behavior evolution of vehicle A.

II. COLLISION RISK ESTIMATION

The overall architecture of the risk estimation module is shown in figure 3. It comprises three sub-modules: driving behavior recognition, driving behavior realization, and collision risk estimation [11], [12].
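The weighted-sum computation of figure 2 can be sketched as follows. The sampled paths, behavior probabilities, and collision radius below are illustrative stand-ins for the quantities produced by the GP and HMM modules:

```python
import numpy as np

def collision_risk(behavior_probs, paths_per_behavior, ego_path, radius=2.0):
    """Risk = sum_b P(b) * (fraction of b's sampled paths hitting ego_path).

    paths_per_behavior: dict behavior -> array (n_samples, T, 2) of sampled
    (x, y) positions; ego_path: array (T, 2). A sample "collides" if it comes
    within `radius` meters of the ego path at the same time step.
    """
    risk = 0.0
    for b, p_b in behavior_probs.items():
        paths = paths_per_behavior[b]
        dists = np.linalg.norm(paths - ego_path[None], axis=-1)  # (n, T)
        frac = np.mean(dists.min(axis=1) < radius)
        risk += p_b * frac
    return risk

# Toy example: "lane_change" paths drift onto the ego lane, "straight" stay
# 3.5 m away (one lane width), so only the lane change contributes risk.
T = 5
ego = np.stack([np.arange(T, dtype=float), np.zeros(T)], axis=1)
straight = np.tile(np.stack([np.arange(T, dtype=float),
                             np.full(T, 3.5)], axis=1), (10, 1, 1))
change = np.tile(np.stack([np.arange(T, dtype=float),
                           np.linspace(3.5, 0.0, T)], axis=1), (10, 1, 1))
probs = {"straight": 0.7, "lane_change": 0.3}
print(collision_risk(probs, {"straight": straight, "lane_change": change}, ego))
```

With all ten lane-change samples colliding and none of the straight ones, the risk equals the lane-change probability, 0.3.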
Driving behavior recognition. Behavior recognition aims at estimating the probability distribution over feasible behaviors, e.g. P(turn_left) represents the probability of the vehicle turning left. The behaviors provide an implicit high-level representation of the road structure, which carries semantics. The probability distribution over behaviors is obtained with an HMM. Our current model includes the following four behaviors: move straight, turn left, turn right, and overtake.

Driving behavior realization. Collision risk evaluation requires the road geometry. Driving behavior realization takes the form of a GP, i.e. a probabilistic representation of a possible evolution of the vehicle motion for a given behavior [10].

Figure 3: Architecture of the risk assessment module.

The adaptation of the GP according to the behavior is based on the geometric transformation known as the Least Squares Conformal Map (LSCM) [13].

Collision risk estimation. A complete probabilistic model of the possible future motion is given by the probability distribution over behaviors from the driving behavior recognition together with the driving behavior realization. The collision risk is calculated from this model. Intuitively, the result can be explained as the collision risk for a few seconds ahead. However, the precise mathematical definition of risk depends on the meaning and interpretation of risk [11].

Behavior recognition and modeling

Behavior recognition aims at assigning a label and a probability measure to sequential data, i.e. observations from the sensors. Examples of sensor values are: distance to lane borders, signaling light status, or proximity to an intersection. The objective is to obtain the probability values over behaviors, i.e. the behaviors are hidden variables. The behavior model contains two hierarchical layers. The upper layer is a single HMM, whose hidden states represent high-level behaviors, such as: move straight, turn left, turn right, and overtake. For each hidden state, or each behavior, in the upper-layer HMM, there is a corresponding HMM in the lower layer to represent the sequence of finer state transitions within a single behavior, as depicted in figure 4. We define the following hidden-state semantics in the lower-layer HMMs for each behavior of the higher-layer HMM:

Move straight (1 hidden state): move forward.

Turn left or turn right (3 hidden states): decelerate before the turn, execute the turn, and resume cruise speed.

Overtake (4 hidden states): change lanes, accelerate (while overtaking a vehicle), change lanes to return to the original lane, and resume cruise speed.
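One filtering step of the upper-layer HMM can be sketched as below. The transition matrix and the per-behavior observation likelihoods (which in the actual system come from the lower-layer HMMs) are illustrative values, not trained parameters:

```python
import numpy as np

BEHAVIORS = ["straight", "turn_left", "turn_right", "overtake"]

# Illustrative upper-layer transition matrix: behaviors are "sticky",
# i.e. a vehicle tends to continue its current behavior (rows sum to 1).
A = np.full((4, 4), 0.02) + np.eye(4) * 0.92

def forward_step(belief, obs_likelihood, A):
    """One forward-filtering step of the upper-layer HMM.

    belief: P(behavior | observations so far), shape (4,).
    obs_likelihood: per-behavior likelihood of the new observation,
    as produced by the four lower-layer HMMs.
    """
    predicted = A.T @ belief            # propagate through transitions
    posterior = predicted * obs_likelihood
    return posterior / posterior.sum()  # normalize

belief = np.full(4, 0.25)  # uniform prior over the four behaviors
# Lower-layer HMMs report that the new observation fits "overtake" best.
belief = forward_step(belief, np.array([0.1, 0.05, 0.05, 0.6]), A)
print(BEHAVIORS[int(belief.argmax())])
```

Iterating this step over a sensor sequence yields the probability distribution over behaviors that the risk module consumes.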
In order to infer the behaviors, we maintain a probability distribution over the behaviors represented by the hidden states of the upper-layer HMM. The observations of vehicles (i.e. sensor data) interact with the HMMs in the lower layer, and the information is then propagated to the upper layer.

Figure 4: Layered HMM, where each lower-layer HMM's observation likelihood is the upper-layer HMM's observation.

Driving behavior realization

A behavior is an abstract representation of the vehicle motion. For a given behavior, a probability distribution over the physical realization of the vehicle motion is indispensable for risk estimation. The GP provides this probability distribution under the assumption that usual driving amounts to lane following without drifting too far off to the lane sides. On a straight road, this is a canonical GP with the mean corresponding to the lane middle. To deal with variations of lane curvature or with such behaviors as turn left or turn right, we propose an adaptation procedure, where the canonical GP serves as a basis and is deformed according to the road geometry. The deformation method is based on the LSCM. Its advantage is a compact and flexible representation of the road geometry. The canonical GP can be calculated once and then reused for different situations, resulting in better computational efficiency. An example for a curved road is shown in figure 5.

Figure 5: Deformed GP model example for a lane turning left.
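Sampling paths from a canonical GP over a straight lane can be sketched as follows, with a squared-exponential kernel. The length scale, variance, and look-ahead range are assumed values, and the actual model in [10] is richer:

```python
import numpy as np

def se_kernel(x1, x2, length=15.0, var=0.5):
    """Squared-exponential covariance on longitudinal position (meters)."""
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 60.0, 61)       # look-ahead positions along the lane
mean = np.zeros_like(xs)              # canonical GP: mean = lane middle
cov = se_kernel(xs, xs) + 1e-8 * np.eye(xs.size)  # jitter for stability

# Each sample is one plausible lateral-offset profile, i.e. a future path
# that follows the lane without drifting too far off to the sides.
paths = rng.multivariate_normal(mean, cov, size=100)
print(paths.shape)
```

Each row is one candidate path for a vehicle; the LSCM deformation described above would then warp these straight-lane samples onto the actual road geometry.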

Collision risk estimation

The layered HMM approach assigns a probability distribution over behaviors at each time instance, and a GP gives the probability distribution over the physical realization of each behavior. Because the behavioral semantics are propagated from the layered HMM down to the physical level, it is now possible to assign semantics to risk values. One should note that the definition of risk can take a variety of forms, largely dependent on how the risk output is going to be used. A scalar risk value might be sufficient for a crash warning system, or an application might require risk values against each vehicle in the traffic scene. The risk calculation is performed by first sampling paths from the GP. The fraction of samples in collision gives the risk of collision corresponding to the behavior represented by the GP. A general risk value is obtained by marginalizing over behaviors, based on the probability distribution over behaviors obtained from the layered HMM. It is possible to calculate the risk of taking a certain path, the risk of a certain behavior, or a general risk value of a certain vehicle against another vehicle. A systematic framework for the evaluation of different types of risk can be found in [11].

III. EXPERIMENTS

A. Driving simulation

The simulation of crash situations in a virtual environment is used instead of dealing with them in real experiments. The virtual environment is a 3D geometric model of a road network with vehicles, where each vehicle is driven by a human driver. The simulator was developed by Toyota Motor Europe (TME). Each human driver controls his or her virtual vehicle by means of a steering wheel and the acceleration and brake pedals. Recording a scenario with multiple vehicles, which are driven concurrently, requires a large number of human drivers.
An alternative is to generate the scenario iteratively, adding one human-driven vehicle at a time, with a replay of the previously recorded human-driven vehicles. The resulting virtual environment allows us to simulate crash situations safely. The layered HMM evaluates the behavior of every vehicle in the scene, except the ego-vehicle, for different time horizons. The training data are obtained by collecting sequences for a series of human-driven cases, where each driver uses the steering wheel as an interface to the virtual environment of the simulator. The driving sequences are then annotated manually by means of an annotation tool from ProBayes. Subsequently, the annotated data are used to train the layered HMM. The TME simulator provides a 3D road view for the driver and a 2D view of the road network, as shown in figure 6. The collision risk is calculated for the yellow vehicle, while the other vehicles are shown as red rectangles. The relevant area of the scene is inside the large yellow circle. Right-hand traffic is assumed. The trail behind the yellow vehicle in the 2D view indicates the previously estimated risk levels. At each instant, the probabilities of the possible behaviors of the nearest neighbor (a red vehicle) are estimated by the layered HMM and displayed as vertical white bars. The speed of the yellow vehicle is shown in the 3D view, where the right-side vertical bar shows the risk encoded by color from low (green) to high (red). The left-side vertical bar in the 3D view indicates the current risk value for the yellow vehicle.

Figure 6: Virtual environment of the TME simulator.

A speed warning in the case of potential danger of a frontal collision is available in most commercial systems. In addition to this functionality, our algorithm evaluates risk at intersections, where the linearity assumption about vehicle motion would result in underestimated values of collision risk.
The combination of behavior estimation by the layered HMM and the use of semantics (e.g. turn right or move straight) at the geometric level allows us to obtain appropriate risk values. The training data for the layered HMM were collected with ten human drivers who were asked to demonstrate different driving behaviors. The collected data were split uniformly into training data and test data (the latter holding 30% of the examples). The behavior recognition is trained on the training data and evaluated against the test data. Figure 7 summarizes the recognition performance of the layered HMM. The results are presented as a confusion matrix, where the columns correspond to the true class and the rows to the estimated class. The diagonal values of the confusion matrix give the percentage of correctly predicted examples per class, while the off-diagonal values show the percentage of mislabeling for each class. The highest recognition rate is obtained for the move straight behavior (91.9%), followed by turn right and turn left (82.5% and 81.1%, respectively). The overtake behavior has a relatively low recognition rate of 61.6%. Intuitively, this lower rate can be explained by the composite structure of the overtaking maneuver, which consists of such behaviors as accelerating, changing lanes, returning to the original lane, and resuming cruise speed. Consequently, it also takes longer than the three-second period of the current prediction horizon to complete an overtaking maneuver. The approach to risk assessment is illustrated by figure 8, where the probability of collision is estimated for a period of three seconds ahead of each collision for ten different traffic scenarios. A rapid increase in the probability of collision and in its certainty is observed as the collision instant approaches.
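The column-normalized confusion matrix used in figure 7 can be computed as follows; the label sequences here are toy data, not the experimental recordings:

```python
import numpy as np

def confusion_matrix(true_labels, pred_labels, classes):
    """Column-normalized confusion matrix, matching figure 7's convention:
    columns are the true class, rows the estimated class, and each entry
    is the percentage of that true class given the estimated label."""
    k = len(classes)
    idx = {c: i for i, c in enumerate(classes)}
    m = np.zeros((k, k))
    for t, p in zip(true_labels, pred_labels):
        m[idx[p], idx[t]] += 1          # row = estimated, column = true
    return 100.0 * m / m.sum(axis=0, keepdims=True)

classes = ["straight", "overtaking", "turning_left", "turning_right"]
# Toy labels: 7 of 8 straight examples and 3 of 4 overtakes are correct.
true_seq = (["straight"] * 8 + ["overtaking"] * 4
            + ["turning_left"] * 2 + ["turning_right"] * 2)
pred_seq = (["straight"] * 7 + ["overtaking"] + ["overtaking"] * 3
            + ["straight"] + ["turning_left"] * 2 + ["turning_right"] * 2)
m = confusion_matrix(true_seq, pred_seq, classes)
print(m[0, 0], m[1, 1])
```

Each column sums to 100%, so the diagonal entry is directly the per-class recognition rate reported in the text.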

Figure 7: Performance summary of behavior recognition with the layered HMM, shown as a confusion matrix (filter mode); columns are the true class, rows the estimated class.

                 straight   overtaking   turning_left   turning_right
  straight        91.9%      15.2%        13.9%          13.6%
  overtaking       2.2%      61.6%         3.3%           2.0%
  turning_left     2.4%      10.9%        81.1%           2.0%
  turning_right    3.5%      12.3%         1.7%          82.5%

Figure 8: Example of collision risk assessment (probability of collision versus time before collision) for ten human-driven scenarios and a three-second prediction horizon.

Figure 9: Example of a highway scenario where a vehicle on the middle lane performs a lane change to the right.

B. Behavior estimation on a highway sequence

The first phase is to gather experimental data while driving on a highway in order to estimate the behaviors of other vehicles. The experiments have been conducted jointly by TME and ProBayes. Data acquisition was performed for four scenarios on a highway, each scenario lasting approximately ten minutes, with the sensor data (stereo camera images, vehicle odometry, and GPS information) being recorded. The behaviors to be estimated are: move straight, lane change to the left, and lane change to the right. An example of behavior estimation on a highway is shown in figure 9. The detection of vehicles is performed by clustering the disparity points obtained from a stereo camera mounted behind the windshield. The clustering is performed in image areas indicated by image-based detection using support vector machines (SVMs). The positions of the vehicles are tracked on the road plane by means of the BOF [8], [9]. The observation variables for behavior recognition include the vehicle's speed, the distances to the lane borders, and information about the presence of other vehicles on the adjacent lanes.
In order to obtain the observation variables in a global reference frame, a particle filter is used to localize the vehicle on a highway map obtained from the Geographic Information System (GIS). The particle filter allows us to estimate the position and direction of the vehicle at each time instant by combining the observations from stereo vision (lane detection), GPS, and vehicle odometry. A similar approach is used for the training phase, when the acquired data are divided into training and evaluation sets, annotated manually to indicate the current behavior for each time instance of the acquired data. In figure 9, the positions of the tracked vehicles are projected onto the image plane and represented by rectangles. The probability distribution of the estimated behaviors is shown by the height of the color bars above the vehicles; e.g. the lane change to the right behavior of the vehicle on the middle lane and the move straight behavior of the two vehicles on the left lane are evaluated correctly. These results illustrate the validity of the proposed approach for behavior estimation, although they are currently preliminary. Our ongoing work involves experimenting with different probability decompositions of the observation variables and with observation variable selection, as well as evaluating the reactivity of the behavior estimation for the purposes of risk evaluation. This will allow us to generalize the results and evaluate them quantitatively.
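The localization step can be sketched as a standard predict-weight-resample particle filter. The noise levels, the purely positional state, and the use of GPS alone as the weighting observation are simplifying assumptions here; the actual system also fuses lane detections from stereo vision and estimates heading:

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_step(particles, weights, odom, gps, gps_sigma=3.0, odom_sigma=0.3):
    """One predict-weight-resample cycle of a particle filter.

    particles: (n, 2) planar positions; odom: displacement from odometry;
    gps: noisy absolute position measurement (meters).
    """
    n = len(particles)
    # Predict: apply the odometry displacement with diffusion noise.
    particles = particles + odom + rng.normal(0.0, odom_sigma, particles.shape)
    # Weight: Gaussian likelihood of the GPS reading for each particle.
    d2 = ((particles - gps) ** 2).sum(axis=1)
    weights = np.exp(-0.5 * d2 / gps_sigma**2)
    weights /= weights.sum()
    # Resample proportionally to the weights; weights reset to uniform.
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)

particles = rng.normal(0.0, 10.0, (500, 2))   # broad initial belief
weights = np.full(500, 1.0 / 500)
for step in range(20):
    true_pos = np.array([1.0, 0.0]) * (step + 1)  # vehicle moves 1 m/step
    particles, weights = pf_step(particles, weights,
                                 odom=np.array([1.0, 0.0]),
                                 gps=true_pos + rng.normal(0.0, 3.0, 2))
print(round(float(np.linalg.norm(particles.mean(axis=0) - true_pos)), 2))
```

The particle mean converges toward the true position despite the 3 m GPS noise; in the real system the resulting pose is what maps the per-vehicle observations into the GIS frame.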

IV. CONCLUSION

Collision risk estimation and prediction will be mandatory for future vehicles. A fraction of a second of the driver's reaction time can help save human lives. Our data processing approach, sensor models, and software modules allow us to monitor the urban traffic environment. The analysis and interpretation of traffic scenes rely on the evaluation of driving behaviors as stochastic variables to estimate and predict collision risks for a short period ahead. Our initial experiments on behavior estimation during vehicle driving allowed us to verify the validity of the approach. Our future work will deal with its integration and evaluation on a Lexus vehicle equipped with sensors, shown in figure 10.

Figure 10: Experimental platform on a Lexus LS600h equipped with two IBEO Lux lidars, a TYZX stereo camera, and an Xsens MTi-G inertial sensor/GPS.

REFERENCES

[1] I. E. Paromtchik, C. Laugier, M. Perrollaz, A. Nègre, M. Yong, and C. Tay, "The ArosDyn project: Robust analysis of dynamic scenes," in Int. Conf. on Control, Automation, Robotics, and Vision, Singapore, December.
[2] D. N. Lee, "A theory of visual control of braking based on information about time-to-collision," Perception, vol. 5, no. 4.
[3] J. Pierowicz, E. Jocoy, M. Lloyd, A. Bittner, and B. Pirson, "Intersection collision avoidance using ITS countermeasures," Tech. Rep. DOT HS, NHTSA, U.S. DOT.
[4] "Vehicle-based countermeasures for signal and stop sign violation," Tech. Rep. DOT HS, NHTSA, U.S. DOT.
[5] K. Fuerstenberg and J. Chen, "New European approach for intersection safety - results of the EC project INTERSAFE," in Proc. International Forum on Advanced Microsystems for Automotive Application.
[6] S. Lefèvre, J. Ibanez-Guzmán, and C. Laugier, "Context-based prediction of vehicle destination at road intersections," in Proc. of the IEEE Symp. on Computational Intelligence in Vehicles and Transportation Systems, France.
[7] S. Lefèvre, J. Ibanez-Guzmán, and C. Laugier, "Exploiting map information for driver intention estimation at road intersections," in Proc. of the IEEE Intelligent Vehicles Symp., Germany.
[8] C. Coué, C. Pradalier, C. Laugier, T. Fraichard, and P. Bessière, "Bayesian occupancy filtering for multitarget tracking: An automotive application," Int. J. Robotics Research, no. 1.
[9] M. K. Tay, K. Mekhnacha, C. Chen, M. Yguel, and C. Laugier, "An efficient formulation of the Bayesian occupation filter for target tracking in dynamic environments," Int. J. Autonomous Vehicles, vol. 6, no. 1-2.
[10] C. Tay and C. Laugier, "Modelling smooth paths using Gaussian processes," in Proc. of the Int. Conf. on Field and Service Robotics, 2007.
[11] C. Tay, Analysis of Dynamics Scenes: Application to Driving Assistance. PhD Thesis, INRIA.
[12] C. Laugier et al., "Vehicle or traffic control method and system," patent application, August.
[13] B. Lévy, S. Petitjean, N. Ray, and J. Maillot, "Least squares conformal maps for automatic texture atlas generation," in ACM SIGGRAPH conference proceedings.


More information

Proc. 14th Int. Conf. on Intelligent Autonomous Systems (IAS-14), 2016

Proc. 14th Int. Conf. on Intelligent Autonomous Systems (IAS-14), 2016 Proc. 14th Int. Conf. on Intelligent Autonomous Systems (IAS-14), 2016 Outdoor Robot Navigation Based on View-based Global Localization and Local Navigation Yohei Inoue, Jun Miura, and Shuji Oishi Department

More information

Intersection Safety using Lidar and Stereo Vision sensors

Intersection Safety using Lidar and Stereo Vision sensors Intersection Safety using Lidar and Stereo Vision sensors Olivier Aycard, Qadeer Baig, Siviu Bota, Fawzi Nashashibi, Sergiu Nedevschi, Cosmin Pantilie, Michel Parent, Paulo Resende, Trung-Dung Vu University

More information

Grid-based Localization and Online Mapping with Moving Objects Detection and Tracking: new results

Grid-based Localization and Online Mapping with Moving Objects Detection and Tracking: new results Grid-based Localization and Online Mapping with Moving Objects Detection and Tracking: new results Trung-Dung Vu, Julien Burlet and Olivier Aycard Laboratoire d Informatique de Grenoble, France firstname.lastname@inrialpes.fr

More information

SHRP 2 Safety Research Symposium July 27, Site-Based Video System Design and Development: Research Plans and Issues

SHRP 2 Safety Research Symposium July 27, Site-Based Video System Design and Development: Research Plans and Issues SHRP 2 Safety Research Symposium July 27, 2007 Site-Based Video System Design and Development: Research Plans and Issues S09 Objectives Support SHRP2 program research questions: Establish crash surrogates

More information

Acoustic/Lidar Sensor Fusion for Car Tracking in City Traffic Scenarios

Acoustic/Lidar Sensor Fusion for Car Tracking in City Traffic Scenarios Sensor Fusion for Car Tracking Acoustic/Lidar Sensor Fusion for Car Tracking in City Traffic Scenarios, Daniel Goehring 1 Motivation Direction to Object-Detection: What is possible with costefficient microphone

More information

Integration of ADAS algorithm in a Vehicle Prototype

Integration of ADAS algorithm in a Vehicle Prototype Integration of ADAS algorithm in a Vehicle Prototype Jerome Lussereau, Procópio Stein, Jean-Alix David, Lukas Rummelhard, Amaury Negre, Christian Laugier, Nicolas Vignard, Gabriel Othmezouri To cite this

More information

Designing a software framework for automated driving. Dr.-Ing. Sebastian Ohl, 2017 October 12 th

Designing a software framework for automated driving. Dr.-Ing. Sebastian Ohl, 2017 October 12 th Designing a software framework for automated driving Dr.-Ing. Sebastian Ohl, 2017 October 12 th Challenges Functional software architecture with open interfaces and a set of well-defined software components

More information

CONTRIBUTION TO THE INVESTIGATION OF STOPPING SIGHT DISTANCE IN THREE-DIMENSIONAL SPACE

CONTRIBUTION TO THE INVESTIGATION OF STOPPING SIGHT DISTANCE IN THREE-DIMENSIONAL SPACE National Technical University of Athens School of Civil Engineering Department of Transportation Planning and Engineering Doctoral Dissertation CONTRIBUTION TO THE INVESTIGATION OF STOPPING SIGHT DISTANCE

More information

Fundamental Technologies Driving the Evolution of Autonomous Driving

Fundamental Technologies Driving the Evolution of Autonomous Driving 426 Hitachi Review Vol. 65 (2016), No. 9 Featured Articles Fundamental Technologies Driving the Evolution of Autonomous Driving Takeshi Shima Takeshi Nagasaki Akira Kuriyama Kentaro Yoshimura, Ph.D. Tsuneo

More information

3D LIDAR Point Cloud based Intersection Recognition for Autonomous Driving

3D LIDAR Point Cloud based Intersection Recognition for Autonomous Driving 3D LIDAR Point Cloud based Intersection Recognition for Autonomous Driving Quanwen Zhu, Long Chen, Qingquan Li, Ming Li, Andreas Nüchter and Jian Wang Abstract Finding road intersections in advance is

More information

AUTOMATED GENERATION OF VIRTUAL DRIVING SCENARIOS FROM TEST DRIVE DATA

AUTOMATED GENERATION OF VIRTUAL DRIVING SCENARIOS FROM TEST DRIVE DATA F2014-ACD-014 AUTOMATED GENERATION OF VIRTUAL DRIVING SCENARIOS FROM TEST DRIVE DATA 1 Roy Bours (*), 1 Martijn Tideman, 2 Ulrich Lages, 2 Roman Katz, 2 Martin Spencer 1 TASS International, Rijswijk, The

More information

Conditional Monte Carlo Dense Occupancy Tracker

Conditional Monte Carlo Dense Occupancy Tracker Conditional Monte Carlo Dense Occupancy Tracker Lukas Rummelhard, Amaury Negre, Christian Laugier To cite this version: Lukas Rummelhard, Amaury Negre, Christian Laugier. Conditional Monte Carlo Dense

More information

Vehicle Localization. Hannah Rae Kerner 21 April 2015

Vehicle Localization. Hannah Rae Kerner 21 April 2015 Vehicle Localization Hannah Rae Kerner 21 April 2015 Spotted in Mtn View: Google Car Why precision localization? in order for a robot to follow a road, it needs to know where the road is to stay in a particular

More information

LIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION

LIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION F2008-08-099 LIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION 1 Jung, Ho Gi*, 1 Kim, Dong Suk, 1 Kang, Hyoung Jin, 2 Kim, Jaihie 1 MANDO Corporation, Republic of Korea,

More information

Option Driver Assistance. Product Information

Option Driver Assistance. Product Information Product Information Table of Contents 1 Overview... 3 1.1 Introduction... 3 1.2 Features and Advantages... 3 1.3 Application Areas... 4 1.4 Further Information... 5 2 Functions... 5 3 Creating the Configuration

More information

Vehicle Ego-localization by Matching In-vehicle Camera Images to an Aerial Image

Vehicle Ego-localization by Matching In-vehicle Camera Images to an Aerial Image Vehicle Ego-localization by Matching In-vehicle Camera Images to an Aerial Image Masafumi NODA 1,, Tomokazu TAKAHASHI 1,2, Daisuke DEGUCHI 1, Ichiro IDE 1, Hiroshi MURASE 1, Yoshiko KOJIMA 3 and Takashi

More information

This chapter explains two techniques which are frequently used throughout

This chapter explains two techniques which are frequently used throughout Chapter 2 Basic Techniques This chapter explains two techniques which are frequently used throughout this thesis. First, we will introduce the concept of particle filters. A particle filter is a recursive

More information

Scale Invariant Detection and Tracking of Elongated Structures

Scale Invariant Detection and Tracking of Elongated Structures Scale Invariant Detection and Tracking of Elongated Structures Amaury Nègre, James L. Crowley, Christian Laugier To cite this version: Amaury Nègre, James L. Crowley, Christian Laugier. Scale Invariant

More information

Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization

Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization Marcus A. Brubaker (Toyota Technological Institute at Chicago) Andreas Geiger (Karlsruhe Institute of Technology & MPI Tübingen) Raquel

More information

Advanced Driver Assistance Systems: A Cost-Effective Implementation of the Forward Collision Warning Module

Advanced Driver Assistance Systems: A Cost-Effective Implementation of the Forward Collision Warning Module Advanced Driver Assistance Systems: A Cost-Effective Implementation of the Forward Collision Warning Module www.lnttechservices.com Table of Contents Abstract 03 Introduction 03 Solution Overview 03 Output

More information

High Level Sensor Data Fusion for Automotive Applications using Occupancy Grids

High Level Sensor Data Fusion for Automotive Applications using Occupancy Grids High Level Sensor Data Fusion for Automotive Applications using Occupancy Grids Ruben Garcia, Olivier Aycard, Trung-Dung Vu LIG and INRIA Rhônes Alpes Grenoble, France Email: firstname.lastname@inrialpes.fr

More information

A multilevel simulation framework for highly automated harvest processes enabled by environmental sensor systems

A multilevel simulation framework for highly automated harvest processes enabled by environmental sensor systems A multilevel simulation framework for highly automated harvest processes enabled by environmental sensor systems Jannik Redenius, M.Sc., Matthias Dingwerth, M.Sc., Prof. Dr. Arno Ruckelshausen, Faculty

More information

Jo-Car2 Autonomous Mode. Path Planning (Cost Matrix Algorithm)

Jo-Car2 Autonomous Mode. Path Planning (Cost Matrix Algorithm) Chapter 8.2 Jo-Car2 Autonomous Mode Path Planning (Cost Matrix Algorithm) Introduction: In order to achieve its mission and reach the GPS goal safely; without crashing into obstacles or leaving the lane,

More information

Sensor Modalities. Sensor modality: Different modalities:

Sensor Modalities. Sensor modality: Different modalities: Sensor Modalities Sensor modality: Sensors which measure same form of energy and process it in similar ways Modality refers to the raw input used by the sensors Different modalities: Sound Pressure Temperature

More information

A Longitudinal Control Algorithm for Smart Cruise Control with Virtual Parameters

A Longitudinal Control Algorithm for Smart Cruise Control with Virtual Parameters ISSN (e): 2250 3005 Volume, 06 Issue, 12 December 2016 International Journal of Computational Engineering Research (IJCER) A Longitudinal Control Algorithm for Smart Cruise Control with Virtual Parameters

More information

Neural Networks for Obstacle Avoidance

Neural Networks for Obstacle Avoidance Neural Networks for Obstacle Avoidance Joseph Djugash Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 josephad@andrew.cmu.edu Bradley Hamner Robotics Institute Carnegie Mellon University

More information

Probabilistic Vehicle Trajectory Prediction over Occupancy Grid Map via Recurrent Neural Network

Probabilistic Vehicle Trajectory Prediction over Occupancy Grid Map via Recurrent Neural Network Probabilistic Vehicle Trajectory Prediction over Occupancy Grid Map via Recurrent Neural Network ByeoungDo Kim, Chang Mook Kang, Jaekyum Kim, Seung Hi Lee, Chung Choo Chung, and Jun Won Choi* Hanyang University,

More information

Spring Localization II. Roland Siegwart, Margarita Chli, Martin Rufli. ASL Autonomous Systems Lab. Autonomous Mobile Robots

Spring Localization II. Roland Siegwart, Margarita Chli, Martin Rufli. ASL Autonomous Systems Lab. Autonomous Mobile Robots Spring 2016 Localization II Localization I 25.04.2016 1 knowledge, data base mission commands Localization Map Building environment model local map position global map Cognition Path Planning path Perception

More information

Fusion Between Laser and Stereo Vision Data For Moving Objects Tracking In Intersection Like Scenario

Fusion Between Laser and Stereo Vision Data For Moving Objects Tracking In Intersection Like Scenario Fusion Between Laser and Stereo Vision Data For Moving Objects Tracking In Intersection Like Scenario Qadeer Baig, Olivier Aycard, Trung Dung Vu and Thierry Fraichard Abstract Using multiple sensors in

More information

SILAB A Task Oriented Driving Simulation

SILAB A Task Oriented Driving Simulation SILAB A Task Oriented Driving Simulation Hans-Peter Krueger, Martin Grein, Armin Kaussner, Christian Mark Center for Traffic Sciences, University of Wuerzburg Roentgenring 11 D-97070 Wuerzburg, Germany

More information

Bayesian Occupancy Filtering for Multi-Target Tracking: an Automotive Application

Bayesian Occupancy Filtering for Multi-Target Tracking: an Automotive Application Bayesian Occupancy Filtering for Multi-Target Tracking: an Automotive Application Christophe Coué, Cédric Pradalier, Christian Laugier, Thierry Fraichard and Pierre Bessière Inria Rhône-Alpes & Gravir

More information

SYSTEM FOR MONITORING CONDITIONS ON DANGEROUS ROAD SECTIONS

SYSTEM FOR MONITORING CONDITIONS ON DANGEROUS ROAD SECTIONS SYSTEM FOR MONITORING CONDITIONS ON DANGEROUS ROAD SECTIONS Miha Ambrož Jernej Korinšek Ivan Prebil University of Ljubljana Faculty of Mechanical Engineering Chair of Modelling in Engineering Sciences

More information

Vision-based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy

Vision-based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy Vision-based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy Gideon P. Stein Ofer Mano Amnon Shashua MobileEye Vision Technologies Ltd. MobileEye Vision Technologies Ltd. Hebrew University

More information

The Bayesian Occupation Filter

The Bayesian Occupation Filter The Bayesian Occupation Filter Christopher Tay, Kamel Mekhnacha, Manuel Yguel, Christophe Coue, Cédric Pradalier, Christian Laugier, Thierry Fraichard, Pierre Bessiere To cite this version: Christopher

More information

Where s the Boss? : Monte Carlo Localization for an Autonomous Ground Vehicle using an Aerial Lidar Map

Where s the Boss? : Monte Carlo Localization for an Autonomous Ground Vehicle using an Aerial Lidar Map Where s the Boss? : Monte Carlo Localization for an Autonomous Ground Vehicle using an Aerial Lidar Map Sebastian Scherer, Young-Woo Seo, and Prasanna Velagapudi October 16, 2007 Robotics Institute Carnegie

More information

Turning an Automated System into an Autonomous system using Model-Based Design Autonomous Tech Conference 2018

Turning an Automated System into an Autonomous system using Model-Based Design Autonomous Tech Conference 2018 Turning an Automated System into an Autonomous system using Model-Based Design Autonomous Tech Conference 2018 Asaf Moses Systematics Ltd., Technical Product Manager aviasafm@systematics.co.il 1 Autonomous

More information

Naturalistic observations to investigate conflicts between drivers and VRU in the PROSPECT project

Naturalistic observations to investigate conflicts between drivers and VRU in the PROSPECT project Naturalistic observations to investigate conflicts between drivers and VRU in the PROSPECT project Marie-Pierre Bruyas, Sébastien Ambellouis, Céline Estraillier, Fabien Moreau (IFSTTAR, France) Andrés

More information

A Real-Time Navigation Architecture for Automated Vehicles in Urban Environments

A Real-Time Navigation Architecture for Automated Vehicles in Urban Environments A Real-Time Navigation Architecture for Automated Vehicles in Urban Environments Gang Chen and Thierry Fraichard Inria Rhône-Alpes & LIG-CNRS Laboratory, Grenoble (FR) Submitted to the Int. Conf. on Field

More information

Spring Localization II. Roland Siegwart, Margarita Chli, Juan Nieto, Nick Lawrance. ASL Autonomous Systems Lab. Autonomous Mobile Robots

Spring Localization II. Roland Siegwart, Margarita Chli, Juan Nieto, Nick Lawrance. ASL Autonomous Systems Lab. Autonomous Mobile Robots Spring 2018 Localization II Localization I 16.04.2018 1 knowledge, data base mission commands Localization Map Building environment model local map position global map Cognition Path Planning path Perception

More information

Introduction to Mobile Robotics

Introduction to Mobile Robotics Introduction to Mobile Robotics Olivier Aycard Associate Professor University of Grenoble Laboratoire d Informatique de Grenoble http://membres-liglab.imag.fr/aycard 1/29 Some examples of mobile robots

More information

Building Reliable 2D Maps from 3D Features

Building Reliable 2D Maps from 3D Features Building Reliable 2D Maps from 3D Features Dipl. Technoinform. Jens Wettach, Prof. Dr. rer. nat. Karsten Berns TU Kaiserslautern; Robotics Research Lab 1, Geb. 48; Gottlieb-Daimler- Str.1; 67663 Kaiserslautern;

More information

Creating Affordable and Reliable Autonomous Vehicle Systems

Creating Affordable and Reliable Autonomous Vehicle Systems Creating Affordable and Reliable Autonomous Vehicle Systems Shaoshan Liu shaoshan.liu@perceptin.io Autonomous Driving Localization Most crucial task of autonomous driving Solutions: GNSS but withvariations,

More information

Real time trajectory prediction for collision risk estimation between vehicles

Real time trajectory prediction for collision risk estimation between vehicles Real time trajectory prediction for collision risk estimation between vehicles Samer Ammoun, Fawzi Nashashibi To cite this version: Samer Ammoun, Fawzi Nashashibi. Real time trajectory prediction for collision

More information

Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History

Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History Simon Thompson and Satoshi Kagami Digital Human Research Center National Institute of Advanced

More information

MULTI-MODAL MAPPING. Robotics Day, 31 Mar Frank Mascarich, Shehryar Khattak, Tung Dang

MULTI-MODAL MAPPING. Robotics Day, 31 Mar Frank Mascarich, Shehryar Khattak, Tung Dang MULTI-MODAL MAPPING Robotics Day, 31 Mar 2017 Frank Mascarich, Shehryar Khattak, Tung Dang Application-Specific Sensors Cameras TOF Cameras PERCEPTION LiDAR IMU Localization Mapping Autonomy Robotic Perception

More information

Pedestrian Detection with Radar and Computer Vision

Pedestrian Detection with Radar and Computer Vision Pedestrian Detection with Radar and Computer Vision camera radar sensor Stefan Milch, Marc Behrens, Darmstadt, September 25 25 / 26, 2001 Pedestrian accidents and protection systems Impact zone: 10% opposite

More information

Pedestrian Detection Using Correlated Lidar and Image Data EECS442 Final Project Fall 2016

Pedestrian Detection Using Correlated Lidar and Image Data EECS442 Final Project Fall 2016 edestrian Detection Using Correlated Lidar and Image Data EECS442 Final roject Fall 2016 Samuel Rohrer University of Michigan rohrer@umich.edu Ian Lin University of Michigan tiannis@umich.edu Abstract

More information

OBSTACLE DETECTION USING STRUCTURED BACKGROUND

OBSTACLE DETECTION USING STRUCTURED BACKGROUND OBSTACLE DETECTION USING STRUCTURED BACKGROUND Ghaida Al Zeer, Adnan Abou Nabout and Bernd Tibken Chair of Automatic Control, Faculty of Electrical, Information and Media Engineering University of Wuppertal,

More information

Research on the Measurement Method of the Detection Range of Vehicle Reversing Assisting System

Research on the Measurement Method of the Detection Range of Vehicle Reversing Assisting System Research on the Measurement Method of the Detection Range of Vehicle Reversing Assisting System Bowei Zou and Xiaochuan Cui Abstract This paper introduces the measurement method on detection range of reversing

More information

Epipolar geometry-based ego-localization using an in-vehicle monocular camera

Epipolar geometry-based ego-localization using an in-vehicle monocular camera Epipolar geometry-based ego-localization using an in-vehicle monocular camera Haruya Kyutoku 1, Yasutomo Kawanishi 1, Daisuke Deguchi 1, Ichiro Ide 1, Hiroshi Murase 1 1 : Nagoya University, Japan E-mail:

More information

A System for Real-time Detection and Tracking of Vehicles from a Single Car-mounted Camera

A System for Real-time Detection and Tracking of Vehicles from a Single Car-mounted Camera A System for Real-time Detection and Tracking of Vehicles from a Single Car-mounted Camera Claudio Caraffi, Tomas Vojir, Jiri Trefny, Jan Sochman, Jiri Matas Toyota Motor Europe Center for Machine Perception,

More information

2 OVERVIEW OF RELATED WORK

2 OVERVIEW OF RELATED WORK Utsushi SAKAI Jun OGATA This paper presents a pedestrian detection system based on the fusion of sensors for LIDAR and convolutional neural network based image classification. By using LIDAR our method

More information

Sensor Data Fusion for Active Safety Systems

Sensor Data Fusion for Active Safety Systems Sensor Data Fusion for Active Safety Systems 2010-01-2332 Published 10/19/2010 Jorge Sans Sangorrin, Jan Sparbert, Ulrike Ahlrichs and Wolfgang Branz Robert Bosch GmbH Oliver Schwindt Robert Bosch LLC

More information

Pedestrian Detection Using Multi-layer LIDAR

Pedestrian Detection Using Multi-layer LIDAR 1 st International Conference on Transportation Infrastructure and Materials (ICTIM 2016) ISBN: 978-1-60595-367-0 Pedestrian Detection Using Multi-layer LIDAR Mingfang Zhang 1, Yuping Lu 2 and Tong Liu

More information

Semantic Place Classification and Mapping for Autonomous Agricultural Robots

Semantic Place Classification and Mapping for Autonomous Agricultural Robots Semantic Place Classification and Mapping for Autonomous Agricultural Robots Ulrich Weiss and Peter Biber Abstract Our work focuses on semantic place classification and on navigation of outdoor agricultural

More information

Integrated Vehicle and Lane Detection with Distance Estimation

Integrated Vehicle and Lane Detection with Distance Estimation Integrated Vehicle and Lane Detection with Distance Estimation Yu-Chun Chen, Te-Feng Su, Shang-Hong Lai Department of Computer Science, National Tsing Hua University,Taiwan 30013, R.O.C Abstract. In this

More information

Safe Prediction-Based Local Path Planning using Obstacle Probability Sections

Safe Prediction-Based Local Path Planning using Obstacle Probability Sections Slide 1 Safe Prediction-Based Local Path Planning using Obstacle Probability Sections Tanja Hebecker and Frank Ortmeier Chair of Software Engineering, Otto-von-Guericke University of Magdeburg, Germany

More information

Calibration of a rotating multi-beam Lidar

Calibration of a rotating multi-beam Lidar The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems October 18-22, 2010, Taipei, Taiwan Calibration of a rotating multi-beam Lidar Naveed Muhammad 1,2 and Simon Lacroix 1,2 Abstract

More information

Sensory Augmentation for Increased Awareness of Driving Environment

Sensory Augmentation for Increased Awareness of Driving Environment Sensory Augmentation for Increased Awareness of Driving Environment Pranay Agrawal John M. Dolan Dec. 12, 2014 Technologies for Safe and Efficient Transportation (T-SET) UTC The Robotics Institute Carnegie

More information

Vehicle Detection Using Android Smartphones

Vehicle Detection Using Android Smartphones University of Iowa Iowa Research Online Driving Assessment Conference 2013 Driving Assessment Conference Jun 19th, 12:00 AM Vehicle Detection Using Android Smartphones Zhiquan Ren Shanghai Jiao Tong University,

More information

Next Generation Infotainment Systems

Next Generation Infotainment Systems Next Generation Infotainment Systems John Absmeier Director, Silicon Valley Innovation Center Digital gadgets more important for car buyers January 11, 2013 Smartphones in the Driver s Seat Never before

More information

Pattern Recognition for Autonomous. Pattern Recognition for Autonomous. Driving. Freie Universität t Berlin. Raul Rojas

Pattern Recognition for Autonomous. Pattern Recognition for Autonomous. Driving. Freie Universität t Berlin. Raul Rojas Pattern Recognition for Autonomous Pattern Recognition for Autonomous Driving Raul Rojas Freie Universität t Berlin FU Berlin Berlin 3d model from Berlin Partner Freie Universitaet Berlin Outline of the

More information

On Road Vehicle Detection using Shadows

On Road Vehicle Detection using Shadows On Road Vehicle Detection using Shadows Gilad Buchman Grasp Lab, Department of Computer and Information Science School of Engineering University of Pennsylvania, Philadelphia, PA buchmag@seas.upenn.edu

More information

A survey on motion prediction and risk assessment for intelligent vehicles

A survey on motion prediction and risk assessment for intelligent vehicles Lefèvre et al. ROBOMECH Journal 2014, 1:1 REVIEW A survey on motion prediction and risk assessment for intelligent vehicles Stéphanie Lefèvre 1,2*, Dizan Vasquez 1 and Christian Laugier 1 Open Access Abstract

More information

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of

More information

Real-Time Detection of Road Markings for Driving Assistance Applications

Real-Time Detection of Road Markings for Driving Assistance Applications Real-Time Detection of Road Markings for Driving Assistance Applications Ioana Maria Chira, Ancuta Chibulcutean Students, Faculty of Automation and Computer Science Technical University of Cluj-Napoca

More information

DETECTION AND ROBUST ESTIMATION OF CYLINDER FEATURES IN POINT CLOUDS INTRODUCTION

DETECTION AND ROBUST ESTIMATION OF CYLINDER FEATURES IN POINT CLOUDS INTRODUCTION DETECTION AND ROBUST ESTIMATION OF CYLINDER FEATURES IN POINT CLOUDS Yun-Ting Su James Bethel Geomatics Engineering School of Civil Engineering Purdue University 550 Stadium Mall Drive, West Lafayette,

More information

Emergency Response: How dedicated short range communication will help in the future. Matthew Henchey and Tejswaroop Geetla, University at Buffalo

Emergency Response: How dedicated short range communication will help in the future. Matthew Henchey and Tejswaroop Geetla, University at Buffalo Emergency Response: How dedicated short range communication will help in the future. 1.0 Introduction Matthew Henchey and Tejswaroop Geetla, University at Buffalo Dedicated short range communication (DSRC)

More information

Last update: May 6, Robotics. CMSC 421: Chapter 25. CMSC 421: Chapter 25 1

Last update: May 6, Robotics. CMSC 421: Chapter 25. CMSC 421: Chapter 25 1 Last update: May 6, 2010 Robotics CMSC 421: Chapter 25 CMSC 421: Chapter 25 1 A machine to perform tasks What is a robot? Some level of autonomy and flexibility, in some type of environment Sensory-motor

More information

Terrain Roughness Identification for High-Speed UGVs

Terrain Roughness Identification for High-Speed UGVs Proceedings of the International Conference of Control, Dynamic Systems, and Robotics Ottawa, Ontario, Canada, May 15-16 2014 Paper No. 11 Terrain Roughness Identification for High-Speed UGVs Graeme N.

More information

Autonomous Mobile Robots, Chapter 6 Planning and Navigation Where am I going? How do I get there? Localization. Cognition. Real World Environment

Autonomous Mobile Robots, Chapter 6 Planning and Navigation Where am I going? How do I get there? Localization. Cognition. Real World Environment Planning and Navigation Where am I going? How do I get there?? Localization "Position" Global Map Cognition Environment Model Local Map Perception Real World Environment Path Motion Control Competencies

More information

A Reactive Bearing Angle Only Obstacle Avoidance Technique for Unmanned Ground Vehicles

A Reactive Bearing Angle Only Obstacle Avoidance Technique for Unmanned Ground Vehicles Proceedings of the International Conference of Control, Dynamic Systems, and Robotics Ottawa, Ontario, Canada, May 15-16 2014 Paper No. 54 A Reactive Bearing Angle Only Obstacle Avoidance Technique for

More information

Predictive Autonomous Robot Navigation

Predictive Autonomous Robot Navigation Predictive Autonomous Robot Navigation Amalia F. Foka and Panos E. Trahanias Institute of Computer Science, Foundation for Research and Technology-Hellas (FORTH), Heraklion, Greece and Department of Computer

More information

Sensor Fusion-Based Parking Assist System

Sensor Fusion-Based Parking Assist System Sensor Fusion-Based Parking Assist System 2014-01-0327 Jaeseob Choi, Eugene Chang, Daejoong Yoon, and Seongsook Ryu Hyundai & Kia Corp. Hogi Jung and Jaekyu Suhr Hanyang Univ. Published 04/01/2014 CITATION:

More information

Accelerating solutions for highway safety, renewal, reliability, and capacity. Connected Vehicles and the Future of Transportation

Accelerating solutions for highway safety, renewal, reliability, and capacity. Connected Vehicles and the Future of Transportation Accelerating solutions for highway safety, renewal, reliability, and capacity Regional Operations Forums Connected Vehicles and the Future of Transportation ti Session Overview What are connected and automated

More information

Advanced Driver Assistance: Modular Image Sensor Concept

Advanced Driver Assistance: Modular Image Sensor Concept Vision Advanced Driver Assistance: Modular Image Sensor Concept Supplying value. Integrated Passive and Active Safety Systems Active Safety Passive Safety Scope Reduction of accident probability Get ready

More information

차세대지능형자동차를위한신호처리기술 정호기

차세대지능형자동차를위한신호처리기술 정호기 차세대지능형자동차를위한신호처리기술 008.08. 정호기 E-mail: hgjung@mando.com hgjung@yonsei.ac.kr 0 . 지능형자동차의미래 ) 단위 system functions 운전자상황인식 얼굴방향인식 시선방향인식 졸음운전인식 운전능력상실인식 차선인식, 전방장애물검출및분류 Lane Keeping System + Adaptive Cruise

More information