Fuzzy Estimation and Segmentation for Laser Range Scans


12th International Conference on Information Fusion, Seattle, WA, USA, July 6-9, 2009

Fuzzy Estimation and Segmentation for Laser Range Scans

Stephan Reuter, Klaus C. J. Dietmayer
Institute of Measurement, Control and Microtechnology
University of Ulm, Germany
stephan.reuter@uni-ulm.de

Abstract: In tracking applications objects are often regarded as point targets. For modern sensors the assumption that one object creates at most one measurement is often no longer fulfilled due to increasing sensor resolution. Thus, an estimate of the number of objects is necessary. Using threshold based segmentation for this estimate leads to hard decisions; a fuzzy estimation and segmentation avoids them. In the proposed fuzzy segmentation method the segments are weighted according to the confidence in the segment. The results of the fuzzy segmentation are compared with those of a threshold based segmentation. Further, extended objects are constructed from the fuzzy segments of two laser scanners.

Keywords: Fuzzy logic, estimation, segmentation, extended targets, laser scanner, tracking.

1 Introduction

Modern sensors such as laser range scanners often deliver more than one measurement per object due to their increasing resolution. This conflicts with the assumption, common in tracking applications, that one object generates at most one measurement. Thus, modern tracking algorithms, e.g. those based on the random finite set filters introduced by R. Mahler [1], cannot estimate the number of targets correctly without pre-processing of the sensor data. The measurement points of a laser scanner are often grouped into segments by threshold based approaches. These segmentation algorithms are normally based on a distance criterion, e.g. in [2], and hard decision thresholds are used to determine whether two measurement points belong to the same object.
In scenarios with well separated objects, this heuristic segmentation provides very good results. But if the objects are close to each other, the heuristic thresholds can lead to incorrect segmentation results. In image processing, segmentation methods based on fuzzy logic have been proposed, e.g. in [3], [4]. There it is shown that in images with shading and noise it is not possible to determine thresholds which lead to satisfying segmentation results, whereas a fuzzy segmentation yields satisfying results for the same image. In this contribution we adapt the fuzzy segmentation to measurements of laser scanners. The results of the fuzzy segmentation will be compared with a threshold based segmentation.

In section 2 the sensor setup is introduced. Then the filtering of measurement points from static and dynamic objects is described. The drawbacks of a standard threshold based segmentation are shown in section 4. The proposed fuzzy estimation and segmentation is introduced in section 5. Then, in section 6, a fusion approach for the fuzzy segments of two scanners is proposed.

2 Scenario and Sensor Setup

In the Transregional Collaborative Research Center SFB/TRR 62 "Companion-Technology for Cognitive Technical Systems" the communication between technical systems and human users is investigated. It particularly focuses on so-called companion features: properties like individuality, adaptivity, accessibility, co-operativity, trustworthiness, and the ability to react to the user's emotions appropriately and individually. Further, a companion system is able to completely adapt its functionality to the skills, preferences, requirements and emotions of an individual user as well as to the current situation. The objective of the SFB/TRR 62 is to develop a technology which makes a systematic construction of companion systems possible.
In order to provide these companion features, the cognitive system needs a complete description of the environment and of the mental state of the user. The demonstration scenario for the companion system is a large room with high pedestrian density. The pedestrians around the system have to be detected and tracked with uncertainties in state and existence. The tracking results are further used to generate regions of interest for other parts of the project, e.g. facial expression or gesture recognition. In this contribution we concentrate on the detection and estimation of the number of pedestrians. We use two multilayer laser scanners, synchronized by an electronic control unit, to observe the environment. Each sensor has a horizontal field of view of up to 110°. The scan frequency is chosen as 12.5 Hz in this work, and we use a constant angular resolution of 0.25°. The scanners are mounted in two corners of the laboratory. Further, a standard web cam is used for documentation purposes.

3 Filtering

In the neighborhood of a companion system a large number of static objects (e.g. walls) and dynamic objects (e.g. pedestrians) is expected. Because we are not interested in the static objects, we want to remove their measurements. One approach to removing the measurement points of static objects would be to take a reference measurement of the environment of the companion system without any dynamic objects; this reference measurement can then be used to discard the measurements of the static objects. The disadvantage of this approach is that the system cannot adapt to a changing environment.

One possibility for an adaptive filtering of the static points is the use of an occupancy grid. The implementation used here is related to the online occupancy grid mapping shown in [5]. The occupancy grid is calculated separately for each laser scanner, and since the scanners are mounted at fixed positions, we can use a polar grid map instead of a Cartesian one. Thus, the size of the grid cells depends on the distance and the calculations are simplified. The field of view of the laser scanner is divided into grid cells c_i. At every time step t_k all measurement points are inserted into a measurement grid. In contrast to [5] we use smaller increments to avoid that standing or slowly moving pedestrians lead to new static objects. The occupancy likelihood of a grid cell c_i is given by

p(c_i \mid z_1, \dots, z_t) = \frac{S}{1 + S} \quad (1)

with

S = \frac{p(c_i \mid z_t)}{1 - p(c_i \mid z_t)} \cdot \frac{p(c_i \mid z_1, \dots, z_{t-1})}{1 - p(c_i \mid z_1, \dots, z_{t-1})} \quad (2)

where p(c_i | z_t) is the occupancy likelihood based on the current measurement grid and p(c_i | z_1, ..., z_{t-1}) is the occupancy likelihood based on the online map calculated from the previous measurements. In Fig. 1 an occupancy grid in polar coordinates is shown. Black grid cells correspond to cells which are occupied with very high probability; the brighter the cells, the smaller the occupancy probabilities.
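As a sketch, the odds-form recursive occupancy update described above might be implemented as follows (the function and argument names are mine, not from the paper):

```python
def occupancy_update(p_meas, p_map):
    """Recursive occupancy update in odds form.

    p_meas: occupancy likelihood p(c_i | z_t) from the current measurement grid
    p_map:  occupancy likelihood p(c_i | z_1, ..., z_{t-1}) from the online map
    Returns the updated likelihood p(c_i | z_1, ..., z_t).
    """
    # Product of the odds of the current measurement and of the previous map,
    # then conversion back from odds to a probability.
    s = (p_meas / (1.0 - p_meas)) * (p_map / (1.0 - p_map))
    return s / (1.0 + s)
```

A neutral measurement likelihood of 0.5 has odds 1 and leaves the map estimate unchanged, which is the expected fixed-point behavior of this update.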
Only points of a grid cell c_i with p(c_i | z_1, ..., z_t) > 0.999 are marked as static. This avoids assigning the static flag to measurement points which probably stem from a moving object. In Fig. 2 the measurement points are marked either as static or as dynamic using the occupancy grid shown in Fig. 1.

Figure 1: Occupancy grid in polar coordinates.

Figure 2: Measurement points marked with a blue star are static points; dynamic points are marked with a red square.

4 Standard Segmentation

Segmentation methods for laser range scans are normally based on the distance between the measurement points. A decision threshold is used to determine if the points belong to the same segment. Since the measurement points are ordered by the horizontal angle, the probability that two points belong to the same segment depends on the one hand on the gap between the horizontal angles and on the other hand on the radial distance of two succeeding scan points. The standard segmentation is based on a heuristic threshold d_max for the maximum distance of two points that belong to the same segment. Further, a threshold φ_max for the maximum angular distance is used. The disadvantage of this method is that, depending on d_max and φ_max, the system tends to return more or fewer segments than expected. Further, we do not get any information about the uncertainty of the number of segments or of the segments themselves. In Fig. 3 we show the segmentation results for different thresholds for the maximum radial distance d_max of two

succeeding points. The scan points are the echoes of two persons standing very close to each other. Thus, the correct segmentation is the one of Fig. 3(c) with d_max = 0.2 m. We observe that the segmentation returns either one, two or three segments when the distance threshold is varied by only a few centimeters. Because of these small differences, a threshold which leads to the right result in this case might lead to a wrong segmentation result in another case.

Figure 3: Segmentation results for the raw scan points (a) and for different values of d_max (b)-(d).

5 Fuzzy Segmentation

5.1 Fuzzy Connectedness

Because the measurement points arrive ordered by the horizontal angle, we only have to calculate the fuzzy connectedness of succeeding points. The connectedness of two succeeding points according to their angular difference is calculated by

\mu_\phi(m_i, m_{i+1}) = c^{\Delta\phi(m_i, m_{i+1}) / \phi_{res}} \quad (3)

where Δφ(m_i, m_{i+1}) is the distance between the horizontal angles of the measurements m_i and m_{i+1}, and φ_res is the angular resolution of the laser scanner. The variable c ∈ [0, 1] is a design parameter.

As for the angular difference, a connectedness of two succeeding points can also be calculated according to their radial distance Δr. Due to the measurement noise of the scanner, the connectedness has to be close to one for small distances. Further, using the size of the object type, we can determine an upper bound for the possible distance. Thus, the connectedness is calculated by

\mu_r(m_i, m_{i+1}) = 1 - S(\Delta r, \alpha, \delta) \quad (4)

where S is a sigmoid function defined by

S(\Delta r, \alpha, \delta) = \begin{cases} 0 & \text{if } \Delta r \le \alpha - \delta \\ 2 \left( \frac{\Delta r - \alpha + \delta}{2\delta} \right)^2 & \text{if } \alpha - \delta < \Delta r \le \alpha \\ 1 - 2 \left( \frac{\alpha - \Delta r + \delta}{2\delta} \right)^2 & \text{if } \alpha < \Delta r \le \alpha + \delta \\ 1 & \text{if } \Delta r \ge \alpha + \delta \end{cases} \quad (5)

α is the inflection point of the sigmoid function, and the gradient of S is influenced by the parameter δ. For near range pedestrian tracking, the connectedness values can be approximated as independent of each other. Thus, the combined fuzzy connectedness is given by

\mu(m_i, m_{i+1}) = \mu_\phi(m_i, m_{i+1}) \cdot \mu_r(m_i, m_{i+1}). \quad (6)

Since the equations in the next subsections depend only on the combined fuzzy connectedness, the results of the following subsections remain valid if the values are not independent of each other or if other relations were used to calculate the fuzzy connectedness.
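The fuzzy connectedness measures just defined can be sketched in a few lines. The parameter values below (c = 0.8, α = 0.3 m, δ = 0.15 m) are illustrative assumptions of mine, not values from the paper, and all function names are mine:

```python
def zadeh_s(x, alpha, delta):
    """Sigmoid S(x, alpha, delta): 0 below alpha - delta, 1 above alpha + delta,
    with a smooth quadratic transition whose inflection point is alpha."""
    if x <= alpha - delta:
        return 0.0
    if x <= alpha:
        return 2.0 * ((x - alpha + delta) / (2.0 * delta)) ** 2
    if x <= alpha + delta:
        return 1.0 - 2.0 * ((alpha - x + delta) / (2.0 * delta)) ** 2
    return 1.0

def mu_angular(dphi, phi_res, c=0.8):
    """mu_phi: angular connectedness, decaying with the gap between succeeding
    points. c in [0, 1] is the design parameter; the exact exponent is a
    reconstruction of the garbled formula."""
    return c ** (dphi / phi_res)

def mu_radial(dr, alpha=0.3, delta=0.15):
    """mu_r: radial connectedness, close to 1 for small radial distances and
    0 beyond alpha + delta (assumed example parameters)."""
    return 1.0 - zadeh_s(dr, alpha, delta)

def mu(dphi, dr, phi_res=0.25, c=0.8, alpha=0.3, delta=0.15):
    """Combined fuzzy connectedness of two succeeding points."""
    return mu_angular(dphi, phi_res, c) * mu_radial(dr, alpha, delta)
```

Two points one beam apart (Δφ = φ_res) with negligible radial gap then get connectedness c, and the connectedness decays smoothly, never jumping the way a hard threshold does.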

5.2 Path Strength

Using the combined fuzzy connectedness of the previous subsection, we are able to specify the confidence in the connectedness of all points. Analogous to [4], we call this confidence the strength of a path. The strength s of a path p = [m_1, m_2, ..., m_{l_p}] of length l_p, which contains l_p measurement points, is given by

s(p) = \min_{1 \le i < l_p} \mu(m_i, m_{i+1}). \quad (7)

This equation is similar to the one given in [4], with the difference that here there is only one possible path between the start and the end point. Thus, we do not have to calculate the maximum strength over all possible paths. Since μ ∈ [0, 1], the strength is bounded by

0 \le s(p) \le 1. \quad (8)

5.3 Estimation of the Number of Segments

In order to estimate the number of segments, we have to split the path p into two parts. The most likely point for splitting a given path p is the connection with μ(m_k, m_{k+1}) = s(p). Thus, we split the path p between these two measurements into

p_1 = [m_1, m_2, \dots, m_k] \quad (9)
p_2 = [m_{k+1}, m_{k+2}, \dots, m_{l_p}]. \quad (10)

If the connectedness of several connections equals s(p), we split the path at the first of these connections. With probability s(p) the path p is exactly one segment. On the other hand, the path p is split into the two paths p_1 and p_2 with probability 1 - s(p). Since we can split p_1 and p_2 again, we get the following recursive equation for the estimated number of segments:

\hat{n}(p) = s(p) + \big(1 - s(p)\big) \big(\hat{n}(p_1) + \hat{n}(p_2)\big). \quad (11)

Depending on the values of the fuzzy connectedness, we expect (11) to return values between 1 and the number of measurement points l_p of path p. In order to interpret the recursive equation (11), we assume without loss of generality that μ(m_i, m_{i+1}) = s(p) for all i. Further, we assume that always the first measurement point is separated from the rest of the points. With these assumptions, (11) simplifies to

\hat{n}(p) = \sum_{i=1}^{l_p} \binom{l_p}{i} (-1)^{i+1} s(p)^{i-1}. \quad (12)

Thus, the equation is no longer recursive and can be analyzed analytically. Obviously, equation (12) returns n̂ = l_p for s(p) = 0, because only the summand for i = 1 is non-zero. For s(p) > 0 we can rewrite (12) to

\hat{n}(p) = \sum_{i=1}^{l_p} \binom{l_p}{i} (-1)^{i+1} s(p)^{i-1} = -\frac{1}{s(p)} \left( \sum_{i=0}^{l_p} \binom{l_p}{i} \big(-s(p)\big)^i - 1 \right) = \frac{1 - \big(1 - s(p)\big)^{l_p}}{s(p)} \quad (13)

where we use the binomial series. Regarding (13), it is easy to see that for a fixed s(p) the estimated number of segments n̂(p) increases with l_p, because (1 - s(p)) < 1 for all s(p) ∈ (0, 1]. Further, we observe that

\lim_{l_p \to \infty} \hat{n}(p) = \frac{1 - 0}{s(p)} = \frac{1}{s(p)}. \quad (14)

Thus, we have an upper limit to which the estimated number of segments converges for an infinite number of measurements. The lower limit is given by

\min\big(\hat{n}(p)\big) = \frac{1 - \big(1 - s(p)\big)}{s(p)} = 1, \quad (15)

where we used l_p = 1. In Fig. 4 we show the estimated number of segments for 0 ≤ s(p) ≤ 1 and different values of l_p. The value for s(p) = 0 was calculated by the limit

\lim_{s(p) \to 0^+} \hat{n}(p) = \lim_{s(p) \to 0^+} \frac{1 - \big(1 - l_p \, s(p) + O(s(p)^2)\big)}{s(p)} = l_p, \quad (16)

where O(s(p)^2) abbreviates all terms which contain s(p) in orders higher than one. As expected, the curves approach the upper limit as the length of the path increases.

5.4 Estimation of the Number of Pedestrians

The fuzzy estimation shown in the previous subsection is not yet able to estimate the number of pedestrians, since all paths contribute to the number of segments according to the strength of the path. Thus, a path increases the estimated number of segments independent of its size, shape and other features. To be able to estimate the number of pedestrians, we have to weight each path with the probability that it is a measurement of a pedestrian. In order to get the probability for a path to contain a pedestrian, we need a model of a pedestrian. Since the laser scanners are mounted such that we get measurements of the upper part of the human body, the body can be assumed to have an elliptical shape if we ignore the arms.
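As a concrete illustration of the segment-number estimator of subsection 5.3, a short sketch (all names are mine): `n_segments` implements the recursive split at the weakest connection, and `n_segments_closed` the closed form for a path whose connections all share the same strength.

```python
def path_strength(mus):
    """s(p): minimum fuzzy connectedness over the succeeding connections."""
    return min(mus)

def n_segments(mus):
    """Recursive estimate of the number of segments for a path whose
    connection strengths are given in scan order in `mus`.
    With probability s(p) the path is one segment; otherwise it is split
    at the first weakest connection and both halves are estimated again."""
    if not mus:                      # a path of a single point is one segment
        return 1.0
    s = min(mus)
    k = mus.index(s)                 # split at the first weakest connection
    left, right = mus[:k], mus[k + 1:]
    return s + (1.0 - s) * (n_segments(left) + n_segments(right))

def n_segments_closed(s, l_p):
    """Closed form for l_p points whose connections all have strength s > 0."""
    return (1.0 - (1.0 - s) ** l_p) / s
```

For a fully disconnected pair (one connection of strength 0) the recursion returns 2 segments; for uniform strengths it reproduces the closed form.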

Table 1: Segmentation results.

method          mean(|Δn|)   var(Δn)
fuzzy           0.86         0.096
d_max = 0.35    0.696        0.262
d_max = 0.25    0.29         0.902
d_max = 0.2     0.29         0.593

Figure 4: Estimated number of segments n̂(p) for different path lengths l_p and 0 ≤ s(p) ≤ 1.

Depending on the rotation of the pedestrian, we measure different extensions. The minimum extension corresponds to the side view of a small person; the extension of a front or back view of a person is obviously much larger. Since we know neither the rotation of the person relative to the scanner nor whether it is a small or a big person, all path sizes between the minimum and the maximum can be measurements of a pedestrian. An approximate extension of a path is given by

w(p) = \arctan\big(\delta\phi(p)\big) \cdot r(p) \quad (17)

where δφ(p) is the angular difference between the first and the last point of a path p and r(p) is the mean radial distance of all points in p. Thus, the probability that path p is the measurement of a pedestrian can be calculated by

P_{ped}\big(w(p)\big) = S(w(p), l_{min}, \delta l) \cdot \Big(1 - S(w(p), l_{max}, \delta l)\Big) \quad (18)

where S is again a sigmoid function. The parameters l_min and l_max correspond to the minimum and maximum horizontal extension of a pedestrian, and the gradient of the sigmoid functions is influenced by δl. We use sigmoid functions here again to avoid a hard decision based on the measured size. For simplicity, these equations neglect the fact that the scanner probably observes only a part of the object due to occlusion. It would of course be possible to take other features into account, for example the smoothness or the shape of the path. Now we can combine the probability P_ped(w(p)) with the fuzzy estimation equation of the previous subsection.
Thus, the estimated number of pedestrians is given by

\hat{n}_{ped}(p) = s(p) \, P_{ped}\big(w(p)\big) + \big(1 - s(p)\big) \big(\hat{n}_{ped}(p_1) + \hat{n}_{ped}(p_2)\big) \quad (19)

where the only difference to (11) is that the path strength s(p) is multiplied by the probability P_ped(w(p)) of the path containing a pedestrian. Since 0 ≤ P_ped(w(p)) ≤ 1, the estimated number of objects can also be less than one.

In Fig. 5 we show the difference between the estimated number of pedestrians and ground truth, where Δn is defined as the algorithm result minus ground truth. For the estimation using a threshold based segmentation, three different distance thresholds have been evaluated. We observe that for scans with well separated objects (e.g. scans 35 to 50) the threshold based segmentation provides the correct results, and the fuzzy estimation is very close to them. At time steps where the objects are very close to each other or partially occluded, the result of the fuzzy estimation is in the majority of cases closer to the ground truth than the estimates based on the threshold segmentations. Table 1 shows the mean of the absolute difference between estimation and ground truth and the variance of the difference. The fuzzy estimation improves the variance of Δn, while the mean value is not improved, owing to the tiny differences between the fuzzy result and ground truth for well separated objects. For the threshold d_max = 0.2 the measurements are often split into too many tiny segments, which are rejected due to their small size. This leads to false sizes and center positions of the resulting segments and further to a loss of information, because measurement points may be thrown away.

5.5 Segment Construction

In this subsection we show how to determine segments based on the fuzzy connectedness. In the previous subsections we divided the path p, which contains all measurements, recursively into smaller paths.
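This recursive division can also enumerate every candidate segment together with its weight: each path is weighted by its pedestrian probability, its own strength, and the product of the split probabilities of all its parents. In the sketch below (names are mine), `p_ped` stands in for the pedestrian probability model, and treating a single point as a path of strength one is my assumption:

```python
def fuzzy_segments(points, mus, p_ped, parent_weight=1.0):
    """Recursively split `points` (with connection strengths `mus`) at the
    first weakest connection and return a list of (segment, weight) pairs,
    where weight = p_ped(segment) * s(segment) * prod(1 - s(parent))."""
    if len(points) <= 1 or not mus:
        # a single point cannot be split; its strength is taken as 1 (assumption)
        return [(points, p_ped(points) * parent_weight)]
    s = min(mus)
    k = mus.index(s)
    segments = [(points, p_ped(points) * s * parent_weight)]
    if s < 1.0:
        child_weight = parent_weight * (1.0 - s)
        segments += fuzzy_segments(points[:k + 1], mus[:k], p_ped, child_weight)
        segments += fuzzy_segments(points[k + 1:], mus[k + 1:], p_ped, child_weight)
    return segments
```

Called with a trivial pedestrian model `lambda pts: 1.0`, the whole path, both halves, and all further subdivisions appear in the output with weights that sum the segmentation hypotheses a hard threshold would have collapsed into one.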
Each of the paths is now considered as one segment. The probability of one segment p_i is given by

P(p_i) = P_{ped}\big(w(p_i)\big) \, s(p_i) \prod_{p_k \in p_{parent}} \big(1 - s(p_k)\big) \quad (20)

where p_parent contains all parent paths which had to be divided to obtain p_i. Further, the probability given by equation (20) can be interpreted as a measurement of the sensory existence probability for the path containing a pedestrian. This can then be integrated into the calculation of the existence probability in tracking approaches like the Joint Integrated Probabilistic Data Association (JIPDA) [6], which estimates not only the state but also the existence of a track.

Figure 5: Difference between ground truth and segmentation results for threshold based and fuzzy segmentation.

In Fig. 6 the result of the fuzzy estimation for the same measurement as in Fig. 3 is shown. The estimated number of pedestrians is n̂_ped = 1.925. The ellipses in Fig. 6 enclose the possible paths, and each ellipse is annotated with its estimated number of pedestrians. We observe that the most likely segmentation is that the points within the black ellipse on the right side are measurements of one pedestrian and the rest of the points belong to a second pedestrian (blue ellipse). The fuzzy segmentation contains all segmentation possibilities we can get with the threshold based segmentation. In contrast to a threshold based segmentation method, all segments are now weighted with a corresponding probability. Thus, the hard decisions of threshold approaches are avoided, and the decisions can be made by a tracking module.

Figure 6: Fuzzy segmentation of the same measurement as in Fig. 3.

6 Scanner Fusion

In the previous section the estimation of the number of pedestrians and the construction of the segments was done using only one of the laser scanners, so that the method can also be used in systems where only one sensor is available. In our scenario we can improve the performance of the system by combining the information of the two scanners. Therefore, we use a cross validation of the segments. In this validation step each segment is transformed into the

coordinate system of the other scanner. Then it is checked whether this segment corresponds to one or more segments of the other scanner, i.e. whether the boundary angles of the transformed segment lie within the boundary angles of segments of the other scanner. If we find at least one corresponding segment, the segment is validated. Only if a segment a_i of scanner 1 is validated by a segment b_j of scanner 2 and vice versa are a_i and b_j cross validated.

In Fig. 7 we show the cross validation for a scenario with three segments. Each segment has two boundary lines corresponding to the first and last measurement angle. If we take the measurement noise into account, segments 1 and 3 will be cross validated. For segments 1 and 2 the cross validation fails, because segment 2 is occluded by segment 3. In this case, segment 1 can still be used to determine the possible extension of the occluded segment.

Figure 7: Cross validation of segments: segments 1 and 3 are cross validated, segments 1 and 2 are not.

In the next step all measurement points of cross validated segments are used to build an extended object. We use principal component analysis (PCA) to fit an ellipse to the measurement points. To perform a PCA we first have to determine the center of the extended object. Two possible centers of the object are the crossing point of the bisecting lines of the boundary lines and the mean value of the measurement points. On the one hand, the mean value is not a good estimate, since we do not get measurements from all around the object; the estimated center would thus be closer to the scanners than expected. On the other hand, the crossing point leads to an overestimation of the object's size, and the center is moved away from the scanners. Hence, we use the midpoint between the crossing point and the mean value as the object's center.
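A minimal PCA ellipse fit around the chosen center might look like the sketch below; taking two standard deviations along each principal axis as the ellipse extents is my assumption, as are all names:

```python
import numpy as np

def fit_ellipse_pca(points, center):
    """Fit an ellipse to 2-D measurement points by PCA about a given center.

    Returns (length, width, orientation): the extents along the major and
    minor principal axes (taken here as 2 standard deviations, an assumption)
    and the orientation of the major axis in radians.
    """
    pts = np.asarray(points, dtype=float) - np.asarray(center, dtype=float)
    cov = pts.T @ pts / len(pts)            # second moments about the chosen center
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    width, length = 2.0 * np.sqrt(eigvals)  # minor extent first, major second
    major = eigvecs[:, 1]                   # eigenvector of the largest eigenvalue
    orientation = np.arctan2(major[1], major[0])
    return length, width, orientation
```

For a point cloud elongated along the x-axis this returns a major axis at orientation 0 (modulo π, since an ellipse axis has no sign).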
The principal components can be used as estimates of the length and width of the object as well as of its orientation. The probability of the object being a pedestrian is calculated according to the law of total probability. Because our confidence in the segments of each scanner is equal, this leads to

P(a_i, b_j) = \frac{1}{2} \big( P_{cv}(a_i) + P_{cv}(b_j) \big) \, P_{ped}(a_i, b_j) \quad (21)

with

P_{cv}(p_i) = \frac{P(p_i)}{P_{ped}\big(w(p_i)\big)} \quad (22)

where a_i and b_j are the cross validated segments of scanner one and scanner two, respectively. The probability P(a_i, b_j) is weighted, as in equation (20), with a probability P_ped(a_i, b_j) for the fused object being a pedestrian. Thus, we have to use P_cv(p_i) instead of P(p_i) in equation (21); otherwise the fused segment would be weighted twice because of its size. P_ped(a_i, b_j) is calculated using the estimated length and width of the cross validated segment in a two-dimensional model of the size of a pedestrian; for both dimensions we use a model as in (18) with different parameters.

In general, a segment can be cross validated with several segments of the other scanner. In this case we have to respect the number of cross validations of the considered segments in (21). This leads to

P(a_i, b_j) = \frac{1}{2} \left( \frac{P_{cv}(a_i)}{v_1} + \frac{P_{cv}(b_j)}{v_2} \right) P_{ped}(a_i, b_j) \quad (23)

where v_i is the number of cross validations of a segment of scanner i with segments of the other scanner. Obviously, equations (21) and (23) can easily be extended to support more than two sensors.

In Fig. 8 the result of the fuzzy segmentation using two laser scanners is shown. The measurement points represent a scene with three pedestrians. We observe that the fuzzy segmentation determines the pedestrians correctly, although two of them are so close to each other that they cannot be resolved by either of the two scanners.

7 Conclusion

A fuzzy estimation method to determine the number of pedestrians has been introduced.
Further, a fuzzy segmentation method which avoids the hard decisions of standard segmentations has been shown. Especially if persons are very close to each other, the fuzzy segmentation provides better results than the standard segmentation with fixed heuristic thresholds. The proposed fuzzy segmentation method can also be used for other objects by adapting the models used in this contribution. The weights of the determined segments can be used in modern tracking approaches in the calculation of the existence probability.

Figure 8: Segments determined by the fusion approach. The thickness of the ellipses corresponds to the probability P_ped.

Acknowledgement

This work is done within the Transregional Collaborative Research Center SFB/TRR 62 "Companion-Technology for Cognitive Technical Systems" funded by the German Research Foundation (DFG).

References

[1] R. Mahler, Statistical Multisource-Multitarget Information Fusion, Artech House, Boston, 2007.

[2] S. Wender, K. Fuerstenberg, and K. Dietmayer, "Object Tracking and Classification for Intersection Scenarios Using a Multilayer Laserscanner," Proceedings of the 11th World Congress on Intelligent Transportation Systems, Nagoya, Japan, 2004.

[3] B. Carvalho, C. Gau, G. Herman, and T. Kong, "Algorithms for Fuzzy Segmentation," Pattern Analysis & Applications, Springer-Verlag, London, 1999, pp. 73-81.

[4] J. Udupa and P. Saha, "Fuzzy Connectedness and Image Segmentation," Proceedings of the IEEE, vol. 91, no. 10, October 2003, pp. 1649-1669.

[5] T. Weiss, B. Schiele, and K. Dietmayer, "Robust Driving Path Detection in Urban and Highway Scenarios Using a Laser Scanner and Online Occupancy Grids," IEEE Intelligent Vehicles Symposium 2007, Istanbul, Turkey, June 13-15, 2007, pp. 184-189.

[6] D. Musicki and R. Evans, "Joint Integrated Probabilistic Data Association: JIPDA," IEEE Transactions on Aerospace and Electronic Systems, vol. 40, no. 3, July 2004.