Using the disparity space to compute occupancy grids from stereo-vision


The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 18-22, 2010, Taipei, Taiwan

Using the disparity space to compute occupancy grids from stereo-vision

Mathias Perrollaz, John-David Yoder, Anne Spalanzani and Christian Laugier

Abstract - The occupancy grid is a popular tool for probabilistic robotics, used for a variety of applications. Such grids are typically based on data from range sensors (e.g. laser, ultrasound), and the computation process is well known [1]. The use of stereo-vision in this framework is less common, and typically treats the stereo sensor as a distance sensor, or fails to account for the uncertainties specific to vision. In this paper, we propose a novel approach to compute occupancy grids from stereo-vision, for the purpose of intelligent vehicles. Occupancy is initially computed directly in the stereoscopic sensor's disparity space, using the sensor's pixel-wise precision during the computation process and allowing the handling of occlusions in the observed area. It is also computationally efficient, since it uses the u-disparity approach to avoid processing a large point cloud. In a second stage, this disparity-space occupancy is transformed into a Cartesian-space occupancy grid to be used by subsequent applications. In this paper, we present the method and show results obtained with real road data, comparing this approach with others.

I. INTRODUCTION

Creating (or having a priori) a model of the local environment has always been one of the requirements for successful implementation of a mobile robot. One common approach has been to build an occupancy grid. These grids are often created based on data from one or more range sensors, such as laser or ultrasound. Because of the imperfections in such sensors, it is current practice to create the grid using a probabilistic approach [1][2]. While stereo camera pairs have been widely used on mobile robots, their use for the creation of occupancy grids has been somewhat less common.
In some cases, researchers have used the stereo cameras purely as a distance sensor, and used the same approach as with a laser sensor [3]. Others have used stereo-specific methods, but have not completely considered the nature of the stereoscopic data. In [4] or [5], the authors only consider the first detected object for each column and suppose that it occludes the field ahead.

This paper presents a novel approach for the construction of occupancy grids using a stereo camera pair, specifically created for the application of an on-road intelligent vehicle. As such, the occupancy grid created will be a plane representing the area in front of the vehicle. The idea is not to map the environment at a global scale, but rather to estimate the free space in front of the vehicle. Also, processing time is critical for this application, as the system will be required to execute in real-time. Finally, it is important to see all obstacles, even those which may be behind another obstacle, such as a pedestrian stepping from behind a car. For example, if a car is parked on the curb, it may present no risk to the ego-vehicle. But a person stepping out from behind the car would present a risk - and by the time the person is no longer occluded behind the car, it may be too late to safely stop the vehicle. One of the critical advantages of cameras over 2-D laser sensors for this application is precisely the ability to see such obstacles. A 3D laser sensor has similar advantages, but is much more expensive and slower than a vision sensor. The method presented here provides a formal probabilistic model to calculate the probability of occupancy based on the disparity space of the stereoscopic sensor.

M. Perrollaz (mathias.perrollaz@inrialpes.fr), A. Spalanzani (anne.spalanzani@inrialpes.fr) and C. Laugier (christian.laugier@inrialpes.fr) are with INRIA Grenoble. J-D. Yoder (j-yoder@onu.edu) is with Ohio Northern University, currently invited professor at INRIA Grenoble.
The method also formally considers the visibility of the different regions of the image in the calculation, and thus can deal with partially-occluded objects. Because of its use of the u-disparity space, it is computationally efficient. Finally, the method formally considers the reduction in accuracy of the disparity image with distance from the sensor. The paper will detail the methodology, and show results with data from real images obtained on the road. Section II provides a brief review of the use of the disparity space, with specifics related to the intelligent vehicle application. Section III explains the overall methodology. Section IV shows results, and compares the method with other approaches. Finally, Section V discusses future work and conclusions.

II. THE DATA IN THE DISPARITY SPACE

A. Geometrical considerations

In this paper the stereoscopic sensor is considered as perfectly rectified. Cameras are supposed identical and classically represented by a pinhole model, (αu, αv, u0, v0) being the intrinsic parameters. Pixel coordinates in the left and right cameras are respectively named (ul, v) and (ur, v). The length of the stereo baseline is bs. It should be noted that this application only calls for the visual sensor to create an instantaneous occupancy grid of the area in front of the vehicle. A world coordinate system is denoted Rw. Each point P of coordinates Xw = (xw, yw, zw) can be projected onto the left and right image planes, respectively at positions (ul, v) and (ur, v). Consequently, in the disparity space associated with the stereoscopic sensor, the coordinates of P are U = (u, d, v), with u = ul and d = ul - ur, namely the disparity value of the pixel. The u, d and v axes define the disparity coordinate system Rd. The transform U = F(Xw) is invertible, so the coordinates in Rw can be retrieved from image coordinates through a reconstruction function.

978-1-4244-6676-4/10/$25.00 ©2010 IEEE

For simplicity in notation, and without loss of generality, the yaw, pitch and roll angles of the camera, relative to Rw, are set to zero. Assuming that the center of the stereo baseline is situated at position (xo, yo, zo), the transform from the world coordinate system to disparity space is given by [6]:

    u = u0 + αu (xw - xo - bs/2) / (yw - yo)
    v = v0 + αv (zw - zo) / (yw - yo)                         (1)
    d = αu bs / (yw - yo)

B. The u-disparity approach

1) The idea: The u-disparity approach [7] is a complement to the v-disparity originally described in [8]. The idea is to project the pixels of the disparity map along the columns, with accumulation. The resulting image is similar to a bird's eye view representation of the scene in the disparity space, as illustrated in figure 1. In this image, the lower lines correspond to close areas, while the upper lines contain information about long distances. Vertical objects (i.e. with constant disparity, e.g. the pedestrians) appear as portions of straight lines.

Fig. 1. Left image from a stereo pair (top), the associated disparity image (middle) and u-disparity representation of the scene (bottom).

2) The detection plane: For occupancy grid computation, we have to consider a detection plane PD, that is the support for the grid. As shown in figure 2, PD is chosen to be parallel to the plane defined by the baseline and the optical axes. This constraint is possible, since the objective is to build an occupancy grid in the vehicle's frame. A coordinate system RD(OD, xd, yd) is associated with the detection plane. For a given point P of the space, xd = xw and yd = yw. Arbitrarily, one can decide to set the detection plane to zw = 0.

Fig. 2. Simplified geometrical configuration of the stereoscopic sensor, and reference to the common coordinate system RD.

Considering equation 1, it appears that an orthogonal projection on PD is equivalent to an orthogonal projection in Rd on any plane of constant v. Therefore, since computation of u-disparity images is not costly, this approach directly implements the vertical projection on PD of the observed points from the scene. Moreover, it is equivalent to process the data in the u-disparity plane or in PD. For the remainder of this paper, we will call UD the coordinates of a point in the u-disparity plane and XD its coordinates in the detection plane. The transform between UD and XD is given by the observation function GD:

    GD: R² → R²
        UD ↦ XD                                              (2)

with:

    xd = xo + bs/2 + bs (u - u0) / d
    yd = yo + αu bs / d                                      (3)

3) The alignment of rays: Another advantage of the u-disparity approach for occupancy grid computation is that, contrary to the Cartesian representation, the rays of light that pass through the camera matrix appear as parallel. Indeed, a set of vertically aligned rays is represented by a column in the u-disparity image. In a similar way, the lines of the v-disparity image correspond to sets of horizontally aligned rays. This behavior is promising since it allows easy estimation of occluded areas.

C. Road-obstacle separation

For the remainder of the paper, it is assumed that pixels can be distinguished as being from the road surface or from the obstacles. There are several methods to do this, such as estimating the road surface and thresholding the height of the pixels, as in [3]. We choose to use a double correlation framework, which exploits different matching hypotheses for vertical and horizontal objects, as described in [9] and detailed in [6]. It provides immediate classification of the pixels during the matching process. After this classification, we obtain two disparity images, Iobst and Iroad, and two
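To make the projection concrete, here is a minimal Python sketch of the u-disparity accumulation and of the observation function GD of equations (2)-(3). The array layout, function names, and default offsets are our own illustrative assumptions, not part of the paper:

```python
import numpy as np

def u_disparity(disp, d_max):
    """Accumulate a disparity map along its columns (the u-disparity
    projection). disp: HxW array of integer disparities, with 0 meaning
    'no match'. Returns a (d_max+1) x W accumulation image."""
    h, w = disp.shape
    ud = np.zeros((d_max + 1, w), dtype=np.int32)
    for v in range(h):
        for u in range(w):
            d = disp[v, u]
            if 0 < d <= d_max:
                ud[d, u] += 1     # one more pixel of column u at disparity d
    return ud

def G_D(u, d, u0, alpha_u, b_s, x_o=0.0, y_o=0.0):
    """Observation function G_D (eqs. 2-3): maps u-disparity coordinates
    (u, d) to detection-plane coordinates (x, y). Parameter names follow
    the paper; the zero default offsets are illustrative."""
    x = x_o + b_s / 2.0 + b_s * (u - u0) / d
    y = y_o + alpha_u * b_s / d
    return x, y
```

Note that lower disparities map to larger depths y, so rows of the u-disparity image correspond to slices of constant distance in the detection plane.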

u-disparity images, IUobst and IUroad, respectively containing pixels from the obstacles and from the road surface.

III. OCCUPANCY GRID COMPUTATION FROM THE DISPARITY SPACE

A. The approach

The approach presented here is to compute an occupancy grid directly in the u-disparity plane. This grid will later be transformed into a Cartesian grid, explicitly modeling the uncertainty of the stereoscopic sensor. There are two main advantages to this approach. First, it allows direct consideration of visible and occluded portions of the image. Second, it allows the use of equally-spaced measurement points to create the initial grid. By contrast, moving to a Cartesian space first would give a varying density of measurements.

B. Estimation of the occupancy of a cell

We seek to calculate P(OU), the probability that a cell U is occupied by an obstacle. This probability will depend on the visibility, VU, and on the confidence in the observation of an obstacle, CU. VU and CU are binary random variables (i.e. P(VU = 1) measures the probability that the cell U is visible, P(CU = 1) is the probability that an obstacle has been observed in cell U). Experience provides knowledge about the shape of the probability density function P(OU | VU, CU), that is, the probability of a cell being occupied, knowing VU and CU. Indeed, some boundary conditions of P(OU | VU, CU) are known. For example, if the cell is not visible, nothing is known about its occupancy, so:

    P(OU | VU = 0, CU = c) = 0.5,  ∀c ∈ {0, 1}               (4)

Similarly, if a cell is fully visible and there is full confidence that an obstacle was observed, then:

    P(OU | VU = 1, CU = 1) = 1 - PFP                         (5)

that is, the only way the cell is not occupied is in the event of a false positive. Also:

    P(OU | VU = 1, CU = 0) = PFN                             (6)

that is, a cell can only be occupied, when nothing is observed, if there was a false negative. PFP and PFN are respectively the probabilities that a false positive or a false negative occurs during the matching process.
These are assumed to be constant and arbitrarily set according to the sensor's features. Finally, the laws of probability are used to obtain the full decomposition of P(OU):

    P(OU) = Σv,c P(VU = v) P(CU = c) P(OU | VU = v, CU = c)  (7)

VU and CU being boolean variables, this means:

    P(OU) = P(VU = 1) P(CU = 1) (1 - PFP)
          + P(VU = 1) (1 - P(CU = 1)) PFN
          + (1 - P(VU = 1)) · 0.5                            (8)

So to compute the occupancy probability, it is necessary to estimate the values P(VU = 1) and P(CU = 1) with respect to the disparity data. Let us define the maximum detection height h. For a given cell U of the grid, we define the number of possible measurement points as:

    NP(U) = v0(d) - vh(d)                                    (9)

v0(d) and vh(d) being respectively the v-coordinates of the pixels situated on the ground (zw = 0) and at the maximum detection height (zw = h) for the value d of the disparity. Considering equation 1:

    NP(U) = (αv / αu) · (h · d / bs)                         (10)

The number of observed obstacle pixels for the cell is the number of possible pixels whose disparity value is d. It can be directly measured in the obstacle u-disparity image:

    NO(U) = IUobst(u, d)                                     (11)

1) Estimation of the visibility of a cell: To estimate the probability that a cell U is visible, we classify the pixels of the obstacle disparity image whose coordinates are (u, v ∈ [vh, v0]) (i.e. possible pixels):

- if Iobst(u, v) > d, the point (u, d, v) is occluded,
- if Iobst(u, v) = 0, there is no observation for the ray (u, v) (i.e. it is not visible),
- else the pixel (u, d, v) is said to be visible.

The number of visible pixels for any cell U is NV(U). The probability of visibility is estimated as the ratio between visible pixels and possible pixels:

    P(VU = 1) = NV(U) / NP(U)                                (12)

2) Estimation of the confidence of observation: We choose to express the confidence in the observation of an obstacle as a function of the ratio:

    rO(U) = NO(U) / NV(U)                                    (13)

This means that the more of the visible pixels are filled with an observation, the more confident we are that we have observed an obstacle.
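The per-cell counting rules above can be sketched in Python. The image layout and argument names here are our own assumptions (an HxW obstacle disparity image with 0 meaning "no observation"):

```python
import numpy as np

def visibility_and_ratio(disp_obst, u, d, v0_d, vh_d):
    """Sketch of eqs. (9)-(13) for one cell U = (u, d).
    disp_obst: HxW obstacle disparity image (0 = no observation).
    v0_d, vh_d: row indices of the ground and of the maximum detection
    height for disparity d (vh_d < v0_d). Names are illustrative."""
    column = disp_obst[vh_d:v0_d, u]              # the possible pixels
    n_possible = v0_d - vh_d                      # N_P(U), eq. 9
    occluded = np.count_nonzero(column > d)       # rays blocked nearer
    no_obs = np.count_nonzero(column == 0)        # rays with no match
    n_visible = n_possible - occluded - no_obs    # N_V(U)
    n_observed = np.count_nonzero(column == d)    # N_O(U), eq. 11
    p_visible = n_visible / n_possible            # P(V_U = 1), eq. 12
    r_o = n_observed / n_visible if n_visible else 0.0   # r_O(U), eq. 13
    return p_visible, r_o
```

For instance, a column whose possible pixels hold disparities [0, 5, 3, 3] evaluated at d = 3 has one occluded ray, one unobserved ray and two visible pixels, both observed at d, giving P(VU = 1) = 0.5 and rO = 1.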
An exponential function is used to represent the knowledge that the confidence should grow quickly with the number of observed pixels:

    P(CU = 1) = 1 - exp(-rO(U) / τO)                         (14)

where τO is a constant. Note that although CU is a binary variable (either true or false), this probability density function is continuous.

3) Resulting probability density function: Figure 3 illustrates equation 8, the probability density function of occupancy with respect to the visibility and to rO. For this plot, the parameters are set to: PFP = 0.01, PFN = 0.05 and τO = 0.15. These parameters were set manually.
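A minimal numeric sketch of equations (8) and (14), using the parameter values quoted in the text as defaults; the function name is our own:

```python
import math

def occupancy_probability(p_visible, r_o, p_fp=0.01, p_fn=0.05, tau_o=0.15):
    """Cell occupancy from visibility and observation confidence
    (eqs. 8 and 14). Defaults are the manually-set values in the text."""
    p_conf = 1.0 - math.exp(-r_o / tau_o)         # P(C_U = 1), eq. 14
    return (p_visible * p_conf * (1.0 - p_fp)     # visible, obstacle seen
            + p_visible * (1.0 - p_conf) * p_fn   # visible, nothing seen
            + (1.0 - p_visible) * 0.5)            # not visible: unknown
```

The boundary conditions of equations (4)-(6) fall out directly: an invisible cell returns 0.5, a fully visible cell with no observed pixels returns PFN = 0.05, and a fully visible, confidently observed cell approaches 1 - PFP = 0.99.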

Fig. 3. Probability density function P(OU).

C. Resulting grid in the u-disparity plane

Knowing the probability density function P(OU | VU, CU), the computation of the occupancy grid in the u-disparity image is straightforward. Figure 4 shows a detail of a scene and the associated occupancy grid in the u-disparity plane. Regions with obstacles appear in light color, while free space appears in dark color. It is noticeable that regions situated in front of obstacles are seen as free. Thanks to our approach, which formally considers the visibility, the pedestrian is detected even if it is partially occluded by the box.

Fig. 4. Detail of a scene (a), associated disparity map (b) and occupancy grid in u-disparity (c). The box and the pedestrian behind the box are both apparent in the u-disparity occupancy grid. This would not have been possible without considering the visibility.

D. Computation of the occupancy grid in Cartesian space

The Cartesian occupancy grid requires the calculation of which pixels from the occupancy grid in u-disparity have an influence on a given cell of the Cartesian grid. Let us define the surface SU(U) of a pixel U = (u, d) as the region of the u-disparity image delimited by the intervals [u - 0.5, u + 0.5[ and [d - 0.5, d + 0.5[. The area of influence of this pixel in the detection plane is: SX(U) = GD(SU(U)). To compute the occupancy grid, the occupancy probability of a pixel U is simply attributed to the area SX(U) of the detection plane. For short distances, several pixels can have an influence on the same cell of the metric grid. Therefore, it is necessary to estimate the occupancy according to this set of data. For this purpose, we choose to use a max estimator, which ensures a conservative estimate of the probability of occupancy:

    P(OX) = max{ P(OU) | X ∈ SX(U) }                         (15)

IV. EXPERIMENTAL RESULTS

A. Experimental setup

The algorithm has been evaluated on sequences from the French LOVe project [10], dealing with pedestrian detection.
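One way to sketch this projection step, approximating each pixel's footprint SX(U) by the transformed corners of SU(U), taking xo = yo = 0 in equation (3), and assuming a hypothetical grid geometry (origin, cell size):

```python
import numpy as np

def to_cartesian_grid(grid_ud, u0, alpha_u, b_s, x_min, y_min, cell, nx, ny):
    """Max-estimator projection (eq. 15): each u-disparity cell transfers
    its occupancy to every Cartesian cell its footprint covers. Grid
    geometry arguments (x_min, y_min, cell, nx, ny) are illustrative."""
    d_levels, width = grid_ud.shape
    cart = np.full((ny, nx), np.nan)              # NaN = not yet covered
    for d in range(1, d_levels):                  # d = 0 maps to infinity
        for u in range(width):
            p = grid_ud[d, u]
            # transform the four corners of [u-0.5, u+0.5[ x [d-0.5, d+0.5[
            xs, ys = [], []
            for uu in (u - 0.5, u + 0.5):
                for dd in (d - 0.5, d + 0.5):
                    xs.append(b_s / 2.0 + b_s * (uu - u0) / dd)
                    ys.append(alpha_u * b_s / dd)
            # bounding box of the footprint, clipped to the grid
            i0 = max(int((min(xs) - x_min) // cell), 0)
            i1 = min(int((max(xs) - x_min) // cell), nx - 1)
            j0 = max(int((min(ys) - y_min) // cell), 0)
            j1 = min(int((max(ys) - y_min) // cell), ny - 1)
            for j in range(j0, j1 + 1):           # max estimator, eq. 15
                for i in range(i0, i1 + 1):
                    c = cart[j, i]
                    cart[j, i] = p if np.isnan(c) else max(c, p)
    return np.where(np.isnan(cart), 0.5, cart)    # uncovered cells: unknown
```

Initializing uncovered cells apart from the max keeps the estimator conservative: a cell touched only by low-probability pixels reports that low probability, while a cell no pixel influences stays at the unknown value 0.5.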
The images are taken with a pair of SMAl CMOS cameras, which provide images every 30 ms. They are reduced to quarter-VGA resolution (320×240 pixels) before the matching process. The length of the stereo baseline is 43 cm. The observed region is -7.5 m < x < 7.5 m and 0 m < y < 35 m, the maximum height is h = 2.0 m, and the cell size is 0.25 m × 0.25 m. The correlation window measures 7 pixels in width and 19 pixels in height. The occupancy grid computation parameters are set to: PFP = 0.01, PFN = 0.05 and τO = 0.15.

B. Results

Figure 5 shows typical results. For each of the cases (a-f), the left camera's image is shown in the top left, the u-disparity occupancy grid in the bottom left, and the Cartesian occupancy grid on the right. For the occupancy grids, lighter colors indicate an increased likelihood of occupation, while the background color represents a probability of occupation of 0.5 (which means that there is no knowledge about the likelihood of occupation of that cell). In case a), the closest obstacle is the truck to the right. In both occupancy grids, we can see the truck to the right, and a variety of obstacles at a further distance in front and to the left. In case b), we see a more cluttered scene, and in the Cartesian grid, the various obstacles are visible. Left to right in the grid, note the oncoming car, the pedestrian further away, the stoplight pole in the center, the truck as a large obstacle further away, and the car on the far right. Case c) shows that obstacles are seen at longer distance. Note that the area between the camera and the other vehicles is seen as free space. Case d) shows one of the advantages of this method, as we can identify both the car turning in front (the bottom right of the Cartesian grid), and the sign behind it. This effect is easily seen in the u-disparity grid, on the right, where one can clearly see two obstacles. The algorithm also correctly notes that there is some likelihood that the area between the car and the sign is unoccupied.

Cases e) and f) show two scenes cluttered with pedestrian traffic. In the u-disparity grid, it is obvious that objects are

detected at a wide variety of distances. In the Cartesian grid, note the pedestrians, as well as a few of the vertical posts on the right. Because the obstacles are close to the vehicle, most of the occupancy grid in these cases remains unknown.

Fig. 5. Typical results obtained with a dataset from the French project LOVe. For each result: the left image of the stereo pair, the occupancy grid in the u-disparity plane and the occupancy grid in Cartesian space.

Figure 6 compares this approach to two other methods of using a stereoscopic sensor to create an occupancy grid, using case c) from figure 5. Method a) detects the maximum disparity of each column (as in [5]): the occupancy probability is PFN for higher disparity values, and 0.5 behind the detected object. This approach is very sensitive to noise in the disparity map and cannot perceive partially occluded parts of the scene. Method b) relies only on filling the grid based on knowledge of the road and obstacle pixels, as described in [6]. As such, much of the image has no information (leading to large areas of 0.5 occupancy probability), and the resulting grid tends to have high certainty. Finally, the method described in this paper is shown in c). The method formally takes into account the probability of false measurements, making it less susceptible to noise, and finds the obstacle behind the car whereas method a) found only the part of the obstacle in front of the car. The probabilistic model has also provided a more realistic variation in the occupancy probability values.

C. Computation time

On a standard laptop, without optimizations, the algorithm for the u-disparity occupancy grid computation runs in 10 ms. The conversion into a Cartesian grid runs in 40 ms. Considering that all the columns of the occupancy grid in u-disparity can be processed independently, many operations can be performed in parallel. Thus it can be efficiently implemented on a parallel architecture, e.g. a GPU, to further reduce computation time.
We are now working on such an implementation of the algorithm on GPU, using NVIDIA's CUDA. Preliminary tests showed that it is possible to run the complete algorithm at video framerate (25 fps), from the stereo matching to the construction of the Cartesian grid.

Fig. 6. Comparison of three approaches to compute the occupancy grid. a) detection of the maximum disparity of each column, b) exploitation of obstacle and road u-disparity images as measurement points, c) computation in the u-disparity plane.

V. CONCLUSION AND FUTURE WORK

The algorithm described in this paper offers several advantages over other approaches using stereo vision to create an occupancy grid. First, using the u-disparity as a starting point allows computationally efficient calculations. Second, computing the initial occupancy grid in the u-disparity space allows for simpler calculation of the visibility. It also allows the use of data with pixel-wise precision all along the computation process. Third, we have presented a formal means of considering the occupation as a function of the probability that an obstacle is visible, and of the confidence in the observation of the obstacle. In summary, the contribution of this paper is to calculate occupancy grids from stereo images in a computationally efficient way, which formally accounts for the probabilistic nature of the sensor.

Much work remains to be done. It is expected that this algorithm will soon be tested on a vehicle also equipped with dual laser range finders. The three sensors (2 lasers and stereo-vision) will all provide occupancy grids to a Bayesian Occupancy Filter [11]. Comparing and fusing these occupancy grids will provide another means of evaluating the output of this algorithm. Third, the method will be updated to formally consider the semi-occlusions in the stereoscopic images (i.e. pixels next to disparity discontinuities). We also hope to refine the approach by considering relationships between parameters in the system (for example, the probability of a false negative, or false positive, is related to the density of the disparity image). Finally, work is being done to filter the Cartesian grid based on the uncertainty of the stereo sensor, and to take advantage of pixels that are identified as road pixels.

ACKNOWLEDGMENT

This work has been done within the context of the Arosyn project. The authors would like to thank Dr. Pierre Bessiere for his review of this probabilistic model and Dr. Amaury Negre for his work on the GPU implementation. Dr. Yoder would like to thank INRIA and Ohio Northern University for allowing him to spend this year with the E-motion team at INRIA Grenoble.

REFERENCES

[1] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. The MIT Press, 2005.
[2] A. Elfes, "Using occupancy grids for mobile robot perception and navigation," IEEE Computer, vol. 22, no. 6, 1989.
[3] C. Braillon, C. Pradalier, K. Usher, J. Crowley, and C. Laugier, "Occupancy grids from stereo and optical flow data," in International Symposium on Experimental Robotics (ISER).
[4] L. Matthies and A. Elfes, "Integration of sonar and stereo range data using a grid-based representation," in IEEE International Conference on Robotics and Automation (ICRA), 1988.
[5] D. Murray and J. Little, "Using real-time stereo vision for mobile robot navigation," Autonomous Robots, vol. 8, January 2000.
[6] M. Perrollaz, A. Spalanzani, and D. Aubert, "A probabilistic representation of the uncertainty of stereo-vision and its application to obstacle detection," in IEEE Intelligent Vehicles Symposium (IV), 2010.
[7] Z. Hu, F. Lamosa, and K. Uchimura, "A complete u-v-disparity study for stereovision based 3D driving environment analysis," in International Conference on 3-D Digital Imaging and Modeling (3DIM), 2005.
[8] R. Labayrade, D. Aubert, and J. Tarel, "Real time obstacle detection on non flat road geometry through v-disparity representation," in IEEE Intelligent Vehicles Symposium (IV), 2002.
[9] P. Burt, L. Wixson, and G. Salgian, "Electronically directed focal stereo," in IEEE International Conference on Computer Vision (ICCV), 1995.
[10] LOVe project: http://love.univ-bpclermont.fr/.
[11] C. Coue, C. Pradalier, C. Laugier, T. Fraichard, and P. Bessiere, "Bayesian occupancy filtering for multi-target tracking: an automotive application," International Journal of Robotics Research (IJRR), vol. 25, January 2006.