Fuzzy Estimation and Segmentation for Laser Range Scans


12th International Conference on Information Fusion, Seattle, WA, USA, July 6-9, 2009

Stephan Reuter, Klaus C. J. Dietmayer
Institute of Measurement, Control and Microtechnology, University of Ulm, Germany

Abstract - In tracking applications, objects are often regarded as point targets. For modern sensors, the assumption that one object creates at most one measurement is often no longer fulfilled due to increasing sensor resolution. Thus, an estimation of the number of objects is necessary. Using threshold-based segmentation for this estimation leads to hard decisions; a fuzzy estimation and segmentation avoids them. In the proposed fuzzy segmentation method, the segments are weighted according to the confidence in the segment. The results of the fuzzy segmentation are compared with a threshold-based segmentation. Further, extended objects are constructed out of the fuzzy segments of two laser scanners.

Keywords: Fuzzy logic, estimation, segmentation, extended targets, laser scanner, tracking.

1 Introduction

Modern sensors like laser range scanners often deliver more than one measurement per object due to their increasing resolution. This conflicts with the assumption, often used in tracking applications, that one object generates at most one measurement. Thus, modern tracking algorithms, e.g. those based on the random finite set filters introduced by R. Mahler [1], will not be able to estimate the number of targets correctly without pre-processing of the sensor data. The measurement points of a laser scanner are often grouped into segments by threshold-based approaches. These segmentation algorithms are normally based on a distance criterion, e.g. in [2], and hard decision thresholds are used to determine whether two measurement points belong to the same object or not.
In scenarios with well separated objects, this heuristic segmentation provides very good results. But if the objects are close to each other, the heuristic thresholds can lead to incorrect segmentation results. In image processing, segmentation methods based on fuzzy logic have been proposed, e.g. in [3], [4]. It has been shown that in pictures with shading and noise it is not possible to determine thresholds which lead to satisfying segmentation results, while a fuzzy segmentation yields satisfying results for the same picture. In this contribution we adapt the fuzzy segmentation to measurements of laser scanners. The results of the fuzzy segmentation will be compared with a threshold-based segmentation.

In section 2 the sensor setup is introduced. Then the filtering of measurement points from static and dynamic objects is described. The drawbacks of a standard threshold-based segmentation are shown in section 4. The proposed fuzzy estimation and segmentation is introduced in section 5. Then, in section 6, a fusion approach for the fuzzy segments of two scanners is proposed.

2 Scenario and Sensor Setup

In the Transregional Collaborative Research Center SFB/TRR 62 "Companion-Technology for Cognitive Technical Systems" the communication between technical systems and human users is investigated. It particularly focuses on so-called companion features - properties like individuality, adaptivity, accessibility, co-operativity, trustworthiness, and the ability to react to the user's emotions appropriately and individually. Further, a companion system is able to completely adapt its functionality to the skills, preferences, requirements, and emotion of an individual user as well as to the current situation. The objective of the SFB/TRR 62 is to develop a technology which makes a systematic construction of companion systems possible.
In order to provide these companion features, the cognitive system needs a complete description of the environment and the mental state of the user. The demonstration scenario for the companion system is a large room with high pedestrian density. The pedestrians around the system have to be detected and tracked with uncertainties in state and existence. The results of the tracking are further used to generate regions of interest for other parts of the project, e.g. facial expression or gesture recognition. In this contribution we concentrate on the detection and estimation of the number of pedestrians.

We use two multilayer laser scanners, synchronized by an electronic control unit, to observe the environment. Each sensor has a wide horizontal field of view. The scan frequency is chosen as 2.5 Hz in this work and we use a constant angular resolution. The scanners are mounted in two corners of the laboratory. Further, a standard webcam is used for documentation purposes.

3 Filtering

In the neighborhood of a companion system, a large number of static objects (e.g. walls) and dynamic objects (e.g. pedestrians) is expected. Because we are not interested in the static objects, we want to remove their measurements. One approach to remove the measurement points of static objects would be a reference measurement of the environment of the companion system without any dynamic objects. This reference measurement can then be used to discard the measurements of the static objects. The disadvantage of this approach is that the system is not able to adapt to a changing environment.

One possibility for an adaptive filtering of the static points is the use of an occupancy grid. The implementation used here is related to the online occupancy grid mapping shown in [5]. The occupancy grid is calculated separately for each laser scanner, and since the scanners are mounted at fixed positions, we can use a polar grid map instead of a Cartesian one. Thus, the size of the grid cells depends on the distance and the calculations are simplified. The field of view of the laser scanner is divided into grid cells c_i. At every time step t_k all measurement points are inserted into a measurement grid. In contrast to [5] we use smaller increments to avoid that standing or slowly moving pedestrians lead to new static objects. The occupancy likelihood of a grid cell c_i is given by

    p(c_i | z_1,...,z_t) = S / (1 + S)                                                 (1)

with

    S = [p(c_i | z_t) / (1 - p(c_i | z_t))] * [p(c_i | z_1,...,z_{t-1}) / (1 - p(c_i | z_1,...,z_{t-1}))]   (2)

where p(c_i | z_t) is the occupancy likelihood based on the current measurement grid and p(c_i | z_1,...,z_{t-1}) is the occupancy likelihood based on the online map calculated from the previous measurements. In Fig. 1 an occupancy grid in polar coordinates is shown. Black grid cells correspond to cells which are occupied with very high probability; the brighter the cells, the smaller the occupancy probabilities.
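The odds-form update of Eqs. (1) and (2) can be sketched as follows; this is a minimal illustration, assuming the inverse sensor model value p(c_i | z_t) is already available (the function name and the example value 0.7 are not from the paper).

```python
# Sketch of the odds-form occupancy update of Eqs. (1) and (2),
# assuming an inverse sensor model value p(c_i | z_t) is given.
def update_occupancy(p_prior, p_meas):
    """Fuse the previous estimate p(c_i | z_1..z_{t-1}) with p(c_i | z_t)."""
    odds = (p_meas / (1.0 - p_meas)) * (p_prior / (1.0 - p_prior))  # Eq. (2)
    return odds / (1.0 + odds)                                      # Eq. (1)

# A cell repeatedly observed as occupied converges towards 1:
p = 0.5  # uninformed prior
for _ in range(5):
    p = update_occupancy(p, 0.7)
```

Using the smaller increments mentioned above corresponds to choosing p_meas close to 0.5, so that a pedestrian standing still for a few scans does not saturate a cell.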
Only points of a grid cell c_i whose occupancy likelihood p(c_i | z_1,...,z_t) exceeds a threshold are marked as static. This avoids assigning the static flag to measurement points which probably belong to a moving object. In Fig. 2 the measurement points are marked either as static or as dynamic by using the occupancy grid shown in Fig. 1.

Figure 1: Occupancy grid in polar coordinates.

Figure 2: Measurement points marked with a blue star are static points; dynamic points are marked with a red square.

4 Standard Segmentation

Segmentation methods for laser range scans are normally based on the distance between the measurement points. A decision threshold is used to determine whether the points belong to the same segment. Since the measurement points are ordered by the horizontal angle, the probability that two points belong to the same segment depends on the one hand on the gap between the horizontal angles and on the other hand on the radial distance of two succeeding scan points. The standard segmentation is based on a heuristic threshold d_max for the maximum distance of two points that belong to the same segment. Further, a threshold φ_max for the maximum angular distance is used. The disadvantage of this method is that, depending on d_max and φ_max, the system tends to return more or fewer segments than expected. Further, we do not get any information about the uncertainty of the number of segments or of the segments themselves. In Fig. 3 we show the segmentation results for different thresholds for the maximum radial distance d_max of two

succeeding points. The scan points are the echoes of two persons standing very close to each other; thus, the correct segmentation would be the one of Fig. 3(c) with d_max = 0.2 m. We observe that the segmentation returns either one, two, or three segments by varying the distance threshold only within a range of a few centimeters. Because of the small differences, the threshold which leads to the right result in this case might lead to a wrong segmentation result in another case.

Figure 3: Segmentation results for different values of d_max: (a) raw scan points, (b)-(d) increasing distance thresholds.

5 Fuzzy Segmentation

5.1 Fuzzy Connectedness

Because we get the measurement points ordered by the horizontal angle, we only have to calculate the fuzzy connectedness of succeeding points. The connectedness of two succeeding points according to the angular difference is calculated by

    µ_φ(m_i, m_{i+1}) = c^(Δφ(m_i, m_{i+1}) / φ_res)                                   (3)

where Δφ(m_i, m_{i+1}) is the distance between the horizontal angles of the measurements m_i and m_{i+1} and φ_res is the angular resolution of the laser scanner. The variable c ∈ [0,1] is a design parameter.

Like for the angular difference, a connectedness of two succeeding points can also be calculated according to the radial distance. Due to the measurement noise of the scanner, the connectedness has to be close to one for small distances. Further, using the size of the object type, we can determine an upper bound for the possible distance. Thus, the connectedness is calculated by

    µ_r(m_i, m_{i+1}) = 1 - S(Δr, α, δ)                                                (4)

where S is a sigmoid function defined by

    S(Δr, α, δ) = 0                                 if Δr ≤ α - δ
                  2 ((Δr - α + δ) / (2δ))^2         if α - δ < Δr ≤ α
                  1 - 2 ((α - Δr + δ) / (2δ))^2     if α < Δr ≤ α + δ
                  1                                 if Δr ≥ α + δ                      (5)

α is the inflection point of the sigmoid function and the gradient of S is influenced by the parameter δ. For near-range pedestrian tracking, the connectedness values can be approximated as independent of each other. Thus, the combined fuzzy connectedness is given by

    µ(m_i, m_{i+1}) = µ_φ(m_i, m_{i+1}) · µ_r(m_i, m_{i+1}).                           (6)

Since the equations in the next subsections depend only on the combined fuzzy connectedness, the results of the following subsections are also valid if the values are not independent of each other or if other relations were used to calculate the fuzzy connectedness.
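A minimal sketch of the fuzzy connectedness described in this subsection; the parameter values below (c, alpha, delta, in metres where applicable) are illustrative, not values from the paper.

```python
# Sketch of the fuzzy connectedness, Eqs. (3)-(6).
def zadeh_s(x, alpha, delta):
    """Increasing S-function, Eq. (5): 0 below alpha-delta, 1 above alpha+delta."""
    if x <= alpha - delta:
        return 0.0
    if x <= alpha:
        return 2.0 * ((x - alpha + delta) / (2.0 * delta)) ** 2
    if x <= alpha + delta:
        return 1.0 - 2.0 * ((alpha - x + delta) / (2.0 * delta)) ** 2
    return 1.0

def connectedness(dphi, dr, phi_res, c=0.95, alpha=0.4, delta=0.2):
    """Combined connectedness of two succeeding points (illustrative parameters)."""
    mu_phi = c ** (dphi / phi_res)          # Eq. (3): decays with the angular gap
    mu_r = 1.0 - zadeh_s(dr, alpha, delta)  # Eq. (4): close to 1 for small ranges
    return mu_phi * mu_r                    # Eq. (6)
```

Two adjacent points with a small radial gap thus get a connectedness near one, while a large gap in either angle or range drives it towards zero.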

5.2 Path Strength

Using the combined fuzzy connectedness of the previous subsection, we are able to specify the confidence about the connectedness of all points. Analogously to [4], we call this confidence the strength of a path. The strength s of a path p = [m_1, m_2, ..., m_{l_p}] which contains l_p measurement points is given by

    s(p) = min_{1 ≤ i < l_p} µ(m_i, m_{i+1}).                                          (7)

This equation is similar to the one given in [4], with the difference that we only have one possible path between the start and the end point. Thus, we do not have to calculate the maximum strength over all possible paths. Since µ ∈ [0,1], the strength is bounded by

    0 ≤ s(p) ≤ 1.                                                                      (8)

5.3 Estimation of the Number of Segments

In order to estimate the number of segments, we have to split the path p into two parts. The most likely point for splitting a given path p is the connection with µ(m_k, m_{k+1}) = s(p). Thus, we split the path p between these two measurements into

    p_1 = [m_1, m_2, ..., m_k]                                                         (9)
    p_2 = [m_{k+1}, m_{k+2}, ..., m_{l_p}].                                            (10)

If the connectedness of several connections equals s(p), we split the path at the first of these connections. With probability s(p) the path p is exactly one segment. On the other hand, the path p is split into the two paths p_1 and p_2 with probability 1 - s(p). Since we can split p_1 and p_2 again, we get the following recursive equation for the estimated number of segments:

    ˆn(p) = s(p) + (1 - s(p)) (ˆn(p_1) + ˆn(p_2)).                                     (11)

Depending on the values of the fuzzy connectedness, we expect (11) to return values between 1 and the number of measurement points l_p of path p. In order to interpret the recursive equation (11), we assume without loss of generality that µ(m_i, m_{i+1}) = s(p) for all i. Further, we assume that always the first measurement point is separated from the rest of the points. With these assumptions, (11) simplifies to

    ˆn(p) = Σ_{i=1}^{l_p} (l_p choose i) (-1)^{i+1} s(p)^{i-1}.                        (12)

Thus, the equation is not recursive any more and we can analyze it analytically. Obviously, equation (12) returns ˆn = l_p for s(p) = 0, because only the summand for i = 1 is non-zero. For s(p) > 0 we can rewrite (12), using the binomial series, to

    ˆn(p) = Σ_{i=1}^{l_p} (l_p choose i) (-1)^{i+1} s(p)^{i-1}
          = -(1/s(p)) Σ_{i=0}^{l_p} (l_p choose i) (-s(p))^i + 1/s(p)
          = -(1 - s(p))^{l_p} / s(p) + 1/s(p).                                         (13)

Regarding (13) it is easy to see that for a fixed s(p) the estimated number of segments ˆn(p) increases if l_p increases, because (1 - s(p)) ≤ 1 for all s(p) ∈ [0,1]. Further we observe that

    lim_{l_p → ∞} ˆn(p) = 0 + 1/s(p) = 1/s(p).                                         (14)

Thus, we have an upper limit to which the estimated number of segments converges for an infinite number of measurements. The lower limit is given by

    min(ˆn(p)) = -(1 - s(p)) / s(p) + 1/s(p) = 1,                                      (15)

where we used l_p = 1. In Fig. 4 we show the estimated number of segments for 0 ≤ s(p) ≤ 1 for different values of l_p. The value for s(p) = 0 was calculated by the limit

    lim_{s(p) → 0+} ˆn(p) = lim_{s(p) → 0+} [ -(1 - l_p s(p) + O(s(p)^2)) / s(p) + 1/s(p) ] = l_p,   (16)

where O(s(p)^2) is an abbreviation for all terms which contain s(p) in orders higher than one. As expected, we observe that the curves get closer to the upper limit if we increase the length of a path.

5.4 Estimation of the Number of Pedestrians

The fuzzy estimation shown in the previous subsection is so far not able to estimate the number of pedestrians, since all paths contribute to the number of segments according to the strength of the path. Thus, a path increases the estimated number of segments independently of its size, shape, and other features. To be able to estimate the number of pedestrians, we have to weight each path with the probability that it is a measurement of a pedestrian. In order to get the probability for a path to contain a pedestrian, we need a model of a pedestrian. Since the laser scanners are mounted in such a way that we get measurements of the upper part of the human body, the body can be assumed to have an elliptical shape if we ignore the arms.
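Returning to the segment-count estimate of Sec. 5.3: the special case analysed there (uniform connectedness, first point always split off) can be sketched and checked against the closed form; the function names are illustrative.

```python
# Sketch of the segment-count recursion, Eq. (11), for the special case
# where every connection has the same connectedness s and the first
# point is always split off, together with the closed form of Eq. (13).
def n_hat(l_p, s):
    """Estimated number of segments for a path of l_p points."""
    if l_p <= 1:
        return 1.0
    # one segment with probability s; otherwise split off the first point
    return s + (1.0 - s) * (1.0 + n_hat(l_p - 1, s))

def n_hat_closed(l_p, s):
    """Closed form of Eq. (13), valid for s > 0."""
    return (1.0 - (1.0 - s) ** l_p) / s
```

For s = 0 the recursion returns l_p (every point its own segment) and for s = 1 it returns 1, matching the limits (15) and (16).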

Table 1: Segmentation results: mean and variance of Δn for the fuzzy estimation and for the threshold-based segmentation with three different values of d_max.

Figure 4: Estimated number of segments ˆn(p) for different path lengths l_p and 0 ≤ s(p) ≤ 1.

Depending on the rotation of the pedestrian, we measure different extensions. The minimum extension corresponds to the side view of a small person. Obviously, the extension of a front or back view of a person is much larger. Since we neither know the rotation of the person in relation to the scanner nor whether it is a small or a big person, all path sizes between the minimum and the maximum can be measurements of a pedestrian. An approximate extension of a path is given by

    w(p) = arctan(δφ(p)) · r(p)                                                        (17)

where δφ(p) is the angular difference between the first and the last point of a path p and r(p) is the mean radial distance of all points in p. Thus, the probability that path p is the measurement of a pedestrian can be calculated by

    P_ped(w(p)) = S(w(p), l_min, δl) · (1 - S(w(p), l_max, δl))                        (18)

where S is again a sigmoid function as in (5). The parameters l_min and l_max correspond to the minimum and maximum horizontal extension of a pedestrian, and the gradient of the sigmoid functions is influenced by δl. We again use a sigmoid function to avoid a hard decision based on the measured size. For simplicity we neglect in these equations the fact that the scanner probably observes only a part of the object due to occlusion. It would of course be possible to take further features into account, for example the smoothness or the shape of the path.

Now we can combine the probability P_ped(w(p)) with the fuzzy estimation equation of the previous subsection. Thus, the estimated number of pedestrians is given by

    ˆn_ped(p) = s(p) P_ped(w(p)) + (1 - s(p)) (ˆn_ped(p_1) + ˆn_ped(p_2)),             (19)

where the only difference to (11) is that the path strength s(p) is multiplied by the probability P_ped(w(p)) of the path containing a pedestrian.
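The band-pass size model of Eq. (18) can be sketched as follows; the values for l_min, l_max and δl below (in metres) are illustrative assumptions, not the parameters used in the paper.

```python
# Sketch of the pedestrian size model, Eq. (18), using the increasing
# sigmoid of Eq. (5). l_min, l_max and dl are illustrative values.
def s_func(x, alpha, delta):
    """Increasing S-function of Eq. (5)."""
    if x <= alpha - delta:
        return 0.0
    if x <= alpha:
        return 2.0 * ((x - alpha + delta) / (2.0 * delta)) ** 2
    if x <= alpha + delta:
        return 1.0 - 2.0 * ((alpha - x + delta) / (2.0 * delta)) ** 2
    return 1.0

def p_ped(w, l_min=0.3, l_max=0.8, dl=0.1):
    """Probability that a path of extension w [m] is a pedestrian, Eq. (18)."""
    return s_func(w, l_min, dl) * (1.0 - s_func(w, l_max, dl))
```

Extensions well inside [l_min, l_max] yield a probability near one, while very small or very large paths are softly suppressed rather than rejected by a hard threshold.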
Since 0 ≤ P_ped(w(p)) ≤ 1, the estimated number of objects can also be less than one. In Fig. 5 we show the difference between the estimated number of pedestrians and ground truth; Δn is defined as the algorithm result minus ground truth. For the estimation based on a threshold segmentation, three different distance thresholds have been evaluated. We observe that for scans with well separated objects (e.g. scans 35 to 50) the threshold-based segmentation provides the correct results and the fuzzy estimation is very close to them. At time steps where the objects are very close to each other or partially occluded, the result of the fuzzy estimation is in the majority of cases closer to the ground truth than the estimates based on the threshold segmentations.

In Table 1 the mean of the absolute difference between estimation and ground truth, |Δn|, and the variance of the difference are shown. Obviously, the fuzzy estimation leads to an improvement concerning the variance of Δn, while the mean value is not improved due to the tiny differences between the fuzzy result and ground truth for well separated objects. For the smallest threshold d_max = 0.2 the measurements are often split into too many tiny segments, which are discarded due to their small size. This leads to false sizes and center positions of the resulting segments and further to a loss of information, because measurement points may be thrown away.

5.5 Segment Construction

In this subsection we show how to determine segments based on the fuzzy connectedness. In the previous subsections we recursively divided the path p which contains all measurements into smaller paths. Each of these paths is now considered as one segment. The probability of one segment p_i is given by

    P(p_i) = P_ped(w(p_i)) · s(p_i) · Π_{p_k ∈ p_parent} (1 - s(p_k))                  (20)

where p_parent denotes all parent paths which had to be divided to obtain p_i.
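Eq. (20) can be sketched directly: a segment's weight is its pedestrian probability times its own strength times the probability that every parent path was in fact split. The function name and the example numbers are illustrative.

```python
# Sketch of the segment probability, Eq. (20).
def segment_probability(p_ped_w, s_own, parent_strengths):
    """p_ped_w: P_ped(w(p_i)); s_own: s(p_i);
    parent_strengths: s(p_k) of all parents that were split to obtain p_i."""
    prob = p_ped_w * s_own
    for s_parent in parent_strengths:
        prob *= (1.0 - s_parent)  # each parent was split with prob. 1 - s(p_k)
    return prob
```

A deeply nested segment is thus discounted by every split that was needed to isolate it, which is exactly why the weights of all hypotheses sum up consistently.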
Further, the probability given by equation (20) can be interpreted as a measurement of the sensory existence probability for the path containing a pedestrian. This can then be integrated into the calculation of the existence probability in tracking approaches like the Joint Integrated Probabilistic Data Association (JIPDA) [6], which estimates not only the state but also the existence of a track.

Figure 5: Difference between ground truth and segmentation results for threshold-based and fuzzy segmentation.

In Fig. 6 the result of the fuzzy estimation for the same measurement as in Fig. 3 is shown. The estimated number of pedestrians is ˆn_ped = 1.925. The ellipses in Fig. 6 enclose the possible paths and each ellipse is annotated with the estimated number of pedestrians it contains. We observe that the most likely segmentation is that the points within the black ellipse on the right side are measurements of one pedestrian and the rest of the points belong to a second pedestrian (blue ellipse). The fuzzy segmentation method contains all segmentation possibilities we can get with the threshold-based segmentation. In contrast to a threshold-based segmentation method, all segments are now weighted with a corresponding probability. Thus, the hard decisions of threshold approaches are avoided and the decisions can be made by a tracking module.

Figure 6: Fuzzy segmentation of the same measurement as in Fig. 3.

6 Scanner Fusion

In the previous section the estimation of the number of pedestrians and the construction of the segments was done using only one of the laser scanners, in order to be able to use the method in systems where only one sensor is available. In our scenario we can improve the performance of the system by combining the information of the two scanners. Therefore, we use a cross validation of the segments. In this validation step each segment is transformed into the

coordinate system of the other scanner. Then it is checked whether this segment corresponds to one or more segments of the other scanner, i.e. whether the boundary angles of the transformed segment are within the boundary angles of segments of the other scanner. If we find at least one corresponding segment, the segment is validated. Only if a segment a_i of scanner 1 is validated by a segment b_j of scanner 2 and vice versa are a_i and b_j cross validated.

In Fig. 7 we show the cross validation for a scenario with three segments. Each segment has two boundary lines corresponding to the first and last measurement angle. If we take the measurement noise into account, segments 1 and 3 will be cross validated. For segments 1 and 2 the cross validation fails, because segment 2 is occluded by segment 3. In this case, the segment can still be used to determine the possible extension of the occluded segment.

Figure 7: Cross validation of segments: segments 1 and 3 are cross validated, segments 1 and 2 are not.

In the next step all measurement points of cross validated segments are used to build an extended object. We use the principal component analysis (PCA) to fit an ellipse into the measurement points. To perform a PCA we first have to determine the center of the extended object. Two possible centers of the object are the crossing point of the bisecting lines of the boundary lines and the mean value of the measurement points. On the one hand, the mean value is not a good estimate, since we do not get measurements all around the object; the center of the object would be closer to the scanners than expected. On the other hand, the crossing point leads to an overestimation of the object's size and the center is moved away from the scanners. Hence, we use the midpoint between the crossing point and the mean value as the object's center.
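The PCA ellipse fit described above can be sketched for the 2-D case with a closed-form eigen decomposition of the sample covariance; the centre is assumed to be pre-computed as in the text, and the factor 2 on the axes is an illustrative scaling choice.

```python
import math

# Sketch of fitting an ellipse to the fused measurement points with a PCA.
def fit_ellipse(points, center):
    """points: list of (x, y); center: (x, y) chosen as described in the text.
    Returns (minor_axis, major_axis, orientation) of the fitted ellipse."""
    n = len(points)
    cxx = sum((x - center[0]) ** 2 for x, y in points) / n
    cyy = sum((y - center[1]) ** 2 for x, y in points) / n
    cxy = sum((x - center[0]) * (y - center[1]) for x, y in points) / n
    # closed-form eigen decomposition of the 2x2 covariance matrix
    angle = 0.5 * math.atan2(2.0 * cxy, cxx - cyy)   # major-axis orientation
    root = math.hypot(cxx - cyy, 2.0 * cxy) / 2.0
    lam_major = (cxx + cyy) / 2.0 + root             # largest eigenvalue
    lam_minor = (cxx + cyy) / 2.0 - root             # smallest eigenvalue
    return 2.0 * math.sqrt(lam_minor), 2.0 * math.sqrt(lam_major), angle
```

The square roots of the eigenvalues give the extension along the principal axes, and the angle of the major eigenvector gives the object orientation.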
The principal components can be used as an estimate of the length and width of the object as well as its orientation. The probability for the object being a pedestrian is calculated according to the law of total probability. Because our confidence in the segments of each scanner is equal, this leads to

    P(a_i, b_j) = (1/2) (P_cv(a_i) + P_cv(b_j)) · P_ped(a_i, b_j)                      (21)

with

    P_cv(p_i) = P(p_i) / P_ped(w(p_i))                                                 (22)

where a_i and b_j are the cross validated segments of scanner one and scanner two, respectively. The probability P(a_i, b_j) is weighted, as in equation (20), with a probability P_ped(a_i, b_j) for the fused object being a pedestrian. Thus, we have to use P_cv(p_i) instead of P(p_i) in equation (21); otherwise, the fused segment would be weighted twice because of its size. P_ped(a_i, b_j) is calculated using the estimated length and width of the cross validated segment in a two-dimensional model of the size of a pedestrian; for both dimensions we use a model as in (18) with different parameters.

In general, a segment can be cross validated with several segments of the other scanner. In this case we have to account for the number of cross validations of the considered segments in (21). This leads to

    P(a_i, b_j) = (1/2) (P_cv(a_i)/v_1 + P_cv(b_j)/v_2) · P_ped(a_i, b_j)              (23)

where v_i is the number of cross validations of a segment of scanner i with segments of the other scanner. Obviously, equations (21) and (23) can easily be extended to support more than two sensors.

In Fig. 8 the result of the fuzzy segmentation using two laser scanners is shown. The measurement points in the figure represent a scene with three pedestrians. We observe that the fuzzy segmentation is able to determine the pedestrians correctly, although two of them are so close to each other that they cannot be resolved by either of the two scanners.

7 Conclusion

A fuzzy estimation method to determine the number of pedestrians has been introduced.
Further, a fuzzy segmentation method which avoids the hard decisions of standard segmentations has been shown. Especially if persons are very close to each other, the fuzzy segmentation provides better results than the standard segmentation with fixed heuristic thresholds. The proposed fuzzy segmentation method can also be used for other objects by adapting the models used in this contribution. The weights of the determined segments can be used in modern tracking approaches in the calculation of the existence probability.

Figure 8: Segments determined by the fusion approach. The thickness of the ellipses corresponds to the probability P_ped.

Acknowledgement

This work is done within the Transregional Collaborative Research Center SFB/TRR 62 "Companion-Technology for Cognitive Technical Systems" funded by the German Research Foundation (DFG).

References

[1] R. Mahler, Statistical Multisource-Multitarget Information Fusion, Artech House, Boston, 2007.

[2] S. Wender, K. Fuerstenberg, and K. Dietmayer, Object Tracking and Classification for Intersection Scenarios Using a Multilayer Laserscanner, Proceedings of the 11th World Congress on Intelligent Transportation Systems, Nagoya, Japan, 2004.

[3] B. Carvalho, C. Gau, G. Herman, and T. Kong, Algorithms for Fuzzy Segmentation, Pattern Analysis & Applications, Springer-Verlag, London, 1999.

[4] J. Udupa and P. Saha, Fuzzy Connectedness and Image Segmentation, Proceedings of the IEEE, vol. 91, no. 10, October 2003.

[5] T. Weiss, B. Schiele, and K. Dietmayer, Robust Driving Path Detection in Urban and Highway Scenarios Using a Laser Scanner and Online Occupancy Grids, IEEE Intelligent Vehicles Symposium 2007, Istanbul, Turkey, June 2007.

[6] D. Musicki and R. Evans, Joint Integrated Probabilistic Data Association: JIPDA, IEEE Transactions on Aerospace and Electronic Systems, vol. 40, no. 3, July 2004.


More information

OBJECT detection in general has many applications

OBJECT detection in general has many applications 1 Implementing Rectangle Detection using Windowed Hough Transform Akhil Singh, Music Engineering, University of Miami Abstract This paper implements Jung and Schramm s method to use Hough Transform for

More information

Intelligent Robotics

Intelligent Robotics 64-424 Intelligent Robotics 64-424 Intelligent Robotics http://tams.informatik.uni-hamburg.de/ lectures/2013ws/vorlesung/ir Jianwei Zhang / Eugen Richter Faculty of Mathematics, Informatics and Natural

More information

Locally Weighted Learning for Control. Alexander Skoglund Machine Learning Course AASS, June 2005

Locally Weighted Learning for Control. Alexander Skoglund Machine Learning Course AASS, June 2005 Locally Weighted Learning for Control Alexander Skoglund Machine Learning Course AASS, June 2005 Outline Locally Weighted Learning, Christopher G. Atkeson et. al. in Artificial Intelligence Review, 11:11-73,1997

More information

QUASI-3D SCANNING WITH LASERSCANNERS

QUASI-3D SCANNING WITH LASERSCANNERS QUASI-3D SCANNING WITH LASERSCANNERS V. Willhoeft, K. Ch. Fuerstenberg, IBEO Automobile Sensor GmbH, vwi@ibeo.de INTRODUCTION: FROM 2D TO 3D Laserscanners are laser-based range-finding devices. They create

More information

Construction Progress Management and Interior Work Analysis Using Kinect 3D Image Sensors

Construction Progress Management and Interior Work Analysis Using Kinect 3D Image Sensors 33 rd International Symposium on Automation and Robotics in Construction (ISARC 2016) Construction Progress Management and Interior Work Analysis Using Kinect 3D Image Sensors Kosei Ishida 1 1 School of

More information

2 OVERVIEW OF RELATED WORK

2 OVERVIEW OF RELATED WORK Utsushi SAKAI Jun OGATA This paper presents a pedestrian detection system based on the fusion of sensors for LIDAR and convolutional neural network based image classification. By using LIDAR our method

More information

This chapter explains two techniques which are frequently used throughout

This chapter explains two techniques which are frequently used throughout Chapter 2 Basic Techniques This chapter explains two techniques which are frequently used throughout this thesis. First, we will introduce the concept of particle filters. A particle filter is a recursive

More information

Processing of distance measurement data

Processing of distance measurement data 7Scanprocessing Outline 64-424 Intelligent Robotics 1. Introduction 2. Fundamentals 3. Rotation / Motion 4. Force / Pressure 5. Frame transformations 6. Distance 7. Scan processing Scan data filtering

More information

Fusing Radar and Scene Labeling Data for Multi-Object Vehicle Tracking

Fusing Radar and Scene Labeling Data for Multi-Object Vehicle Tracking 11 Fusing Radar and Scene Labeling Data for Multi-Object Vehicle Tracking Alexander Scheel, Franz Gritschneder, Stephan Reuter, and Klaus Dietmayer Abstract: Scene labeling approaches which perform pixel-wise

More information

[10] Industrial DataMatrix barcodes recognition with a random tilt and rotating the camera

[10] Industrial DataMatrix barcodes recognition with a random tilt and rotating the camera [10] Industrial DataMatrix barcodes recognition with a random tilt and rotating the camera Image processing, pattern recognition 865 Kruchinin A.Yu. Orenburg State University IntBuSoft Ltd Abstract The

More information

Segmentation of Images

Segmentation of Images Segmentation of Images SEGMENTATION If an image has been preprocessed appropriately to remove noise and artifacts, segmentation is often the key step in interpreting the image. Image segmentation is a

More information

Building Reliable 2D Maps from 3D Features

Building Reliable 2D Maps from 3D Features Building Reliable 2D Maps from 3D Features Dipl. Technoinform. Jens Wettach, Prof. Dr. rer. nat. Karsten Berns TU Kaiserslautern; Robotics Research Lab 1, Geb. 48; Gottlieb-Daimler- Str.1; 67663 Kaiserslautern;

More information

Propagating these values through the probability density function yields a bound on the likelihood score that can be achieved by any position in the c

Propagating these values through the probability density function yields a bound on the likelihood score that can be achieved by any position in the c Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2000. Maximum-Likelihood Template Matching Clark F. Olson Jet Propulsion Laboratory, California Institute of Technology 4800

More information

Basis Functions. Volker Tresp Summer 2017

Basis Functions. Volker Tresp Summer 2017 Basis Functions Volker Tresp Summer 2017 1 Nonlinear Mappings and Nonlinear Classifiers Regression: Linearity is often a good assumption when many inputs influence the output Some natural laws are (approximately)

More information

IMPROVED LASER-BASED NAVIGATION FOR MOBILE ROBOTS

IMPROVED LASER-BASED NAVIGATION FOR MOBILE ROBOTS Improved Laser-based Navigation for Mobile Robots 79 SDU International Journal of Technologic Sciences pp. 79-92 Computer Technology IMPROVED LASER-BASED NAVIGATION FOR MOBILE ROBOTS Muhammad AWAIS Abstract

More information

Real-Time Human Detection using Relational Depth Similarity Features

Real-Time Human Detection using Relational Depth Similarity Features Real-Time Human Detection using Relational Depth Similarity Features Sho Ikemura, Hironobu Fujiyoshi Dept. of Computer Science, Chubu University. Matsumoto 1200, Kasugai, Aichi, 487-8501 Japan. si@vision.cs.chubu.ac.jp,

More information

Time-to-Contact from Image Intensity

Time-to-Contact from Image Intensity Time-to-Contact from Image Intensity Yukitoshi Watanabe Fumihiko Sakaue Jun Sato Nagoya Institute of Technology Gokiso, Showa, Nagoya, 466-8555, Japan {yukitoshi@cv.,sakaue@,junsato@}nitech.ac.jp Abstract

More information

Gesture Recognition using Neural Networks

Gesture Recognition using Neural Networks Gesture Recognition using Neural Networks Jeremy Smith Department of Computer Science George Mason University Fairfax, VA Email: jsmitq@masonlive.gmu.edu ABSTRACT A gesture recognition method for body

More information

A Framework for Multiple Radar and Multiple 2D/3D Camera Fusion

A Framework for Multiple Radar and Multiple 2D/3D Camera Fusion A Framework for Multiple Radar and Multiple 2D/3D Camera Fusion Marek Schikora 1 and Benedikt Romba 2 1 FGAN-FKIE, Germany 2 Bonn University, Germany schikora@fgan.de, romba@uni-bonn.de Abstract: In this

More information

3D-2D Laser Range Finder calibration using a conic based geometry shape

3D-2D Laser Range Finder calibration using a conic based geometry shape 3D-2D Laser Range Finder calibration using a conic based geometry shape Miguel Almeida 1, Paulo Dias 1, Miguel Oliveira 2, Vítor Santos 2 1 Dept. of Electronics, Telecom. and Informatics, IEETA, University

More information

A Symmetry Operator and Its Application to the RoboCup

A Symmetry Operator and Its Application to the RoboCup A Symmetry Operator and Its Application to the RoboCup Kai Huebner Bremen Institute of Safe Systems, TZI, FB3 Universität Bremen, Postfach 330440, 28334 Bremen, Germany khuebner@tzi.de Abstract. At present,

More information

W4. Perception & Situation Awareness & Decision making

W4. Perception & Situation Awareness & Decision making W4. Perception & Situation Awareness & Decision making Robot Perception for Dynamic environments: Outline & DP-Grids concept Dynamic Probabilistic Grids Bayesian Occupancy Filter concept Dynamic Probabilistic

More information

6 y [m] y [m] x [m] x [m]

6 y [m] y [m] x [m] x [m] An Error Detection Model for Ultrasonic Sensor Evaluation on Autonomous Mobile Systems D. Bank Research Institute for Applied Knowledge Processing (FAW) Helmholtzstr. D-898 Ulm, Germany Email: bank@faw.uni-ulm.de

More information

3 No-Wait Job Shops with Variable Processing Times

3 No-Wait Job Shops with Variable Processing Times 3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select

More information

FPGA IMPLEMENTATION FOR REAL TIME SOBEL EDGE DETECTOR BLOCK USING 3-LINE BUFFERS

FPGA IMPLEMENTATION FOR REAL TIME SOBEL EDGE DETECTOR BLOCK USING 3-LINE BUFFERS FPGA IMPLEMENTATION FOR REAL TIME SOBEL EDGE DETECTOR BLOCK USING 3-LINE BUFFERS 1 RONNIE O. SERFA JUAN, 2 CHAN SU PARK, 3 HI SEOK KIM, 4 HYEONG WOO CHA 1,2,3,4 CheongJu University E-maul: 1 engr_serfs@yahoo.com,

More information

DATA EMBEDDING IN TEXT FOR A COPIER SYSTEM

DATA EMBEDDING IN TEXT FOR A COPIER SYSTEM DATA EMBEDDING IN TEXT FOR A COPIER SYSTEM Anoop K. Bhattacharjya and Hakan Ancin Epson Palo Alto Laboratory 3145 Porter Drive, Suite 104 Palo Alto, CA 94304 e-mail: {anoop, ancin}@erd.epson.com Abstract

More information

PEER Report Addendum.

PEER Report Addendum. PEER Report 2017-03 Addendum. The authors recommend the replacement of Section 3.5.1 and Table 3.15 with the content of this Addendum. Consequently, the recommendation is to replace the 13 models and their

More information

Small-scale objects extraction in digital images

Small-scale objects extraction in digital images 102 Int'l Conf. IP, Comp. Vision, and Pattern Recognition IPCV'15 Small-scale objects extraction in digital images V. Volkov 1,2 S. Bobylev 1 1 Radioengineering Dept., The Bonch-Bruevich State Telecommunications

More information

Filtering Images. Contents

Filtering Images. Contents Image Processing and Data Visualization with MATLAB Filtering Images Hansrudi Noser June 8-9, 010 UZH, Multimedia and Robotics Summer School Noise Smoothing Filters Sigmoid Filters Gradient Filters Contents

More information

Automatic 3D wig Generation Method using FFD and Robotic Arm

Automatic 3D wig Generation Method using FFD and Robotic Arm International Journal of Applied Engineering Research ISSN 0973-4562 Volume 12, Number 9 (2017) pp. 2104-2108 Automatic 3D wig Generation Method using FFD and Robotic Arm Md Saifur Rahman 1, Chulhyung

More information

Learning to Segment Document Images

Learning to Segment Document Images Learning to Segment Document Images K.S. Sesh Kumar, Anoop Namboodiri, and C.V. Jawahar Centre for Visual Information Technology, International Institute of Information Technology, Hyderabad, India Abstract.

More information

Scanner Parameter Estimation Using Bilevel Scans of Star Charts

Scanner Parameter Estimation Using Bilevel Scans of Star Charts ICDAR, Seattle WA September Scanner Parameter Estimation Using Bilevel Scans of Star Charts Elisa H. Barney Smith Electrical and Computer Engineering Department Boise State University, Boise, Idaho 8375

More information

REPRESENTATION OF BIG DATA BY DIMENSION REDUCTION

REPRESENTATION OF BIG DATA BY DIMENSION REDUCTION Fundamental Journal of Mathematics and Mathematical Sciences Vol. 4, Issue 1, 2015, Pages 23-34 This paper is available online at http://www.frdint.com/ Published online November 29, 2015 REPRESENTATION

More information

Panoramic Vision and LRF Sensor Fusion Based Human Identification and Tracking for Autonomous Luggage Cart

Panoramic Vision and LRF Sensor Fusion Based Human Identification and Tracking for Autonomous Luggage Cart Panoramic Vision and LRF Sensor Fusion Based Human Identification and Tracking for Autonomous Luggage Cart Mehrez Kristou, Akihisa Ohya and Shin ichi Yuta Intelligent Robot Laboratory, University of Tsukuba,

More information

FEATURE-BASED REGISTRATION OF RANGE IMAGES IN DOMESTIC ENVIRONMENTS

FEATURE-BASED REGISTRATION OF RANGE IMAGES IN DOMESTIC ENVIRONMENTS FEATURE-BASED REGISTRATION OF RANGE IMAGES IN DOMESTIC ENVIRONMENTS Michael Wünstel, Thomas Röfer Technologie-Zentrum Informatik (TZI) Universität Bremen Postfach 330 440, D-28334 Bremen {wuenstel, roefer}@informatik.uni-bremen.de

More information

Real Time Recognition of Non-Driving Related Tasks in the Context of Highly Automated Driving

Real Time Recognition of Non-Driving Related Tasks in the Context of Highly Automated Driving Real Time Recognition of Non-Driving Related Tasks in the Context of Highly Automated Driving S. Enhuber 1, T. Pech 1, B. Wandtner 2, G. Schmidt 2, G. Wanielik 1 1 Professorship of Communications Engineering

More information

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality

More information

Accurate 3D Face and Body Modeling from a Single Fixed Kinect

Accurate 3D Face and Body Modeling from a Single Fixed Kinect Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this

More information

Engineered Diffusers Intensity vs Irradiance

Engineered Diffusers Intensity vs Irradiance Engineered Diffusers Intensity vs Irradiance Engineered Diffusers are specified by their divergence angle and intensity profile. The divergence angle usually is given as the width of the intensity distribution

More information

Spring Localization II. Roland Siegwart, Margarita Chli, Martin Rufli. ASL Autonomous Systems Lab. Autonomous Mobile Robots

Spring Localization II. Roland Siegwart, Margarita Chli, Martin Rufli. ASL Autonomous Systems Lab. Autonomous Mobile Robots Spring 2016 Localization II Localization I 25.04.2016 1 knowledge, data base mission commands Localization Map Building environment model local map position global map Cognition Path Planning path Perception

More information

SUPPLEMENTARY FILE S1: 3D AIRWAY TUBE RECONSTRUCTION AND CELL-BASED MECHANICAL MODEL. RELATED TO FIGURE 1, FIGURE 7, AND STAR METHODS.

SUPPLEMENTARY FILE S1: 3D AIRWAY TUBE RECONSTRUCTION AND CELL-BASED MECHANICAL MODEL. RELATED TO FIGURE 1, FIGURE 7, AND STAR METHODS. SUPPLEMENTARY FILE S1: 3D AIRWAY TUBE RECONSTRUCTION AND CELL-BASED MECHANICAL MODEL. RELATED TO FIGURE 1, FIGURE 7, AND STAR METHODS. 1. 3D AIRWAY TUBE RECONSTRUCTION. RELATED TO FIGURE 1 AND STAR METHODS

More information

3.2 Level 1 Processing

3.2 Level 1 Processing SENSOR AND DATA FUSION ARCHITECTURES AND ALGORITHMS 57 3.2 Level 1 Processing Level 1 processing is the low-level processing that results in target state estimation and target discrimination. 9 The term

More information

Evaluation of Moving Object Tracking Techniques for Video Surveillance Applications

Evaluation of Moving Object Tracking Techniques for Video Surveillance Applications International Journal of Current Engineering and Technology E-ISSN 2277 4106, P-ISSN 2347 5161 2015INPRESSCO, All Rights Reserved Available at http://inpressco.com/category/ijcet Research Article Evaluation

More information

Nearest Neighbors Classifiers

Nearest Neighbors Classifiers Nearest Neighbors Classifiers Raúl Rojas Freie Universität Berlin July 2014 In pattern recognition we want to analyze data sets of many different types (pictures, vectors of health symptoms, audio streams,

More information

Cover Page. Abstract ID Paper Title. Automated extraction of linear features from vehicle-borne laser data

Cover Page. Abstract ID Paper Title. Automated extraction of linear features from vehicle-borne laser data Cover Page Abstract ID 8181 Paper Title Automated extraction of linear features from vehicle-borne laser data Contact Author Email Dinesh Manandhar (author1) dinesh@skl.iis.u-tokyo.ac.jp Phone +81-3-5452-6417

More information

Camera Calibration for Video See-Through Head-Mounted Display. Abstract. 1.0 Introduction. Mike Bajura July 7, 1993

Camera Calibration for Video See-Through Head-Mounted Display. Abstract. 1.0 Introduction. Mike Bajura July 7, 1993 Camera Calibration for Video See-Through Head-Mounted Display Mike Bajura July 7, 1993 Abstract This report describes a method for computing the parameters needed to model a television camera for video

More information

Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot

Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot Yoichi Nakaguro Sirindhorn International Institute of Technology, Thammasat University P.O. Box 22, Thammasat-Rangsit Post Office,

More information

Improving Vision-Based Distance Measurements using Reference Objects

Improving Vision-Based Distance Measurements using Reference Objects Improving Vision-Based Distance Measurements using Reference Objects Matthias Jüngel, Heinrich Mellmann, and Michael Spranger Humboldt-Universität zu Berlin, Künstliche Intelligenz Unter den Linden 6,

More information

arxiv: v1 [cs.ro] 26 Nov 2018

arxiv: v1 [cs.ro] 26 Nov 2018 Fast Gaussian Process Occupancy Maps Yijun Yuan, Haofei Kuang and Sören Schwertfeger arxiv:1811.10156v1 [cs.ro] 26 Nov 2018 Abstract In this paper, we demonstrate our work on Gaussian Process Occupancy

More information

A Feature Point Matching Based Approach for Video Objects Segmentation

A Feature Point Matching Based Approach for Video Objects Segmentation A Feature Point Matching Based Approach for Video Objects Segmentation Yan Zhang, Zhong Zhou, Wei Wu State Key Laboratory of Virtual Reality Technology and Systems, Beijing, P.R. China School of Computer

More information

Perimeter and Area Estimations of Digitized Objects with Fuzzy Borders

Perimeter and Area Estimations of Digitized Objects with Fuzzy Borders Perimeter and Area Estimations of Digitized Objects with Fuzzy Borders Nataša Sladoje,, Ingela Nyström, and Punam K. Saha 2 Centre for Image Analysis, Uppsala, Sweden {natasa,ingela}@cb.uu.se 2 MIPG, Dept.

More information

Using Layered Color Precision for a Self-Calibrating Vision System

Using Layered Color Precision for a Self-Calibrating Vision System ROBOCUP2004 SYMPOSIUM, Instituto Superior Técnico, Lisboa, Portugal, July 4-5, 2004. Using Layered Color Precision for a Self-Calibrating Vision System Matthias Jüngel Institut für Informatik, LFG Künstliche

More information

Pedestrian Detection Using Correlated Lidar and Image Data EECS442 Final Project Fall 2016

Pedestrian Detection Using Correlated Lidar and Image Data EECS442 Final Project Fall 2016 edestrian Detection Using Correlated Lidar and Image Data EECS442 Final roject Fall 2016 Samuel Rohrer University of Michigan rohrer@umich.edu Ian Lin University of Michigan tiannis@umich.edu Abstract

More information

Navigation and Metric Path Planning

Navigation and Metric Path Planning Navigation and Metric Path Planning October 4, 2011 Minerva tour guide robot (CMU): Gave tours in Smithsonian s National Museum of History Example of Minerva s occupancy map used for navigation Objectives

More information

Irradiance Gradients. Media & Occlusions

Irradiance Gradients. Media & Occlusions Irradiance Gradients in the Presence of Media & Occlusions Wojciech Jarosz in collaboration with Matthias Zwicker and Henrik Wann Jensen University of California, San Diego June 23, 2008 Wojciech Jarosz

More information

Scanning Real World Objects without Worries 3D Reconstruction

Scanning Real World Objects without Worries 3D Reconstruction Scanning Real World Objects without Worries 3D Reconstruction 1. Overview Feng Li 308262 Kuan Tian 308263 This document is written for the 3D reconstruction part in the course Scanning real world objects

More information

3D VISUALIZATION OF SEGMENTED CRUCIATE LIGAMENTS 1. INTRODUCTION

3D VISUALIZATION OF SEGMENTED CRUCIATE LIGAMENTS 1. INTRODUCTION JOURNAL OF MEDICAL INFORMATICS & TECHNOLOGIES Vol. 10/006, ISSN 164-6037 Paweł BADURA * cruciate ligament, segmentation, fuzzy connectedness,3d visualization 3D VISUALIZATION OF SEGMENTED CRUCIATE LIGAMENTS

More information

Non-Parametric Modeling

Non-Parametric Modeling Non-Parametric Modeling CE-725: Statistical Pattern Recognition Sharif University of Technology Spring 2013 Soleymani Outline Introduction Non-Parametric Density Estimation Parzen Windows Kn-Nearest Neighbor

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

HEURISTIC FILTERING AND 3D FEATURE EXTRACTION FROM LIDAR DATA

HEURISTIC FILTERING AND 3D FEATURE EXTRACTION FROM LIDAR DATA HEURISTIC FILTERING AND 3D FEATURE EXTRACTION FROM LIDAR DATA Abdullatif Alharthy, James Bethel School of Civil Engineering, Purdue University, 1284 Civil Engineering Building, West Lafayette, IN 47907

More information

Image Features: Local Descriptors. Sanja Fidler CSC420: Intro to Image Understanding 1/ 58

Image Features: Local Descriptors. Sanja Fidler CSC420: Intro to Image Understanding 1/ 58 Image Features: Local Descriptors Sanja Fidler CSC420: Intro to Image Understanding 1/ 58 [Source: K. Grauman] Sanja Fidler CSC420: Intro to Image Understanding 2/ 58 Local Features Detection: Identify

More information

Robotics Programming Laboratory

Robotics Programming Laboratory Chair of Software Engineering Robotics Programming Laboratory Bertrand Meyer Jiwon Shin Lecture 8: Robot Perception Perception http://pascallin.ecs.soton.ac.uk/challenges/voc/databases.html#caltech car

More information

Algorithm research of 3D point cloud registration based on iterative closest point 1

Algorithm research of 3D point cloud registration based on iterative closest point 1 Acta Technica 62, No. 3B/2017, 189 196 c 2017 Institute of Thermomechanics CAS, v.v.i. Algorithm research of 3D point cloud registration based on iterative closest point 1 Qian Gao 2, Yujian Wang 2,3,

More information

An Extended Line Tracking Algorithm

An Extended Line Tracking Algorithm An Extended Line Tracking Algorithm Leonardo Romero Muñoz Facultad de Ingeniería Eléctrica UMSNH Morelia, Mich., Mexico Email: lromero@umich.mx Moises García Villanueva Facultad de Ingeniería Eléctrica

More information

Occluded Facial Expression Tracking

Occluded Facial Expression Tracking Occluded Facial Expression Tracking Hugo Mercier 1, Julien Peyras 2, and Patrice Dalle 1 1 Institut de Recherche en Informatique de Toulouse 118, route de Narbonne, F-31062 Toulouse Cedex 9 2 Dipartimento

More information

Coarse-to-Fine Search Technique to Detect Circles in Images

Coarse-to-Fine Search Technique to Detect Circles in Images Int J Adv Manuf Technol (1999) 15:96 102 1999 Springer-Verlag London Limited Coarse-to-Fine Search Technique to Detect Circles in Images M. Atiquzzaman Department of Electrical and Computer Engineering,

More information

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22)

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22) Digital Image Processing Prof. P. K. Biswas Department of Electronics and Electrical Communications Engineering Indian Institute of Technology, Kharagpur Module Number 01 Lecture Number 02 Application

More information

1. Estimation equations for strip transect sampling, using notation consistent with that used to

1. Estimation equations for strip transect sampling, using notation consistent with that used to Web-based Supplementary Materials for Line Transect Methods for Plant Surveys by S.T. Buckland, D.L. Borchers, A. Johnston, P.A. Henrys and T.A. Marques Web Appendix A. Introduction In this on-line appendix,

More information

Model Based Perspective Inversion

Model Based Perspective Inversion Model Based Perspective Inversion A. D. Worrall, K. D. Baker & G. D. Sullivan Intelligent Systems Group, Department of Computer Science, University of Reading, RG6 2AX, UK. Anthony.Worrall@reading.ac.uk

More information

Effects Of Shadow On Canny Edge Detection through a camera

Effects Of Shadow On Canny Edge Detection through a camera 1523 Effects Of Shadow On Canny Edge Detection through a camera Srajit Mehrotra Shadow causes errors in computer vision as it is difficult to detect objects that are under the influence of shadows. Shadow

More information

Pedestrian counting in video sequences using optical flow clustering

Pedestrian counting in video sequences using optical flow clustering Pedestrian counting in video sequences using optical flow clustering SHIZUKA FUJISAWA, GO HASEGAWA, YOSHIAKI TANIGUCHI, HIROTAKA NAKANO Graduate School of Information Science and Technology Osaka University

More information

Instance-based Learning CE-717: Machine Learning Sharif University of Technology. M. Soleymani Fall 2015

Instance-based Learning CE-717: Machine Learning Sharif University of Technology. M. Soleymani Fall 2015 Instance-based Learning CE-717: Machine Learning Sharif University of Technology M. Soleymani Fall 2015 Outline Non-parametric approach Unsupervised: Non-parametric density estimation Parzen Windows K-Nearest

More information

Subpixel Corner Detection Using Spatial Moment 1)

Subpixel Corner Detection Using Spatial Moment 1) Vol.31, No.5 ACTA AUTOMATICA SINICA September, 25 Subpixel Corner Detection Using Spatial Moment 1) WANG She-Yang SONG Shen-Min QIANG Wen-Yi CHEN Xing-Lin (Department of Control Engineering, Harbin Institute

More information

Pattern Recognition. Kjell Elenius. Speech, Music and Hearing KTH. March 29, 2007 Speech recognition

Pattern Recognition. Kjell Elenius. Speech, Music and Hearing KTH. March 29, 2007 Speech recognition Pattern Recognition Kjell Elenius Speech, Music and Hearing KTH March 29, 2007 Speech recognition 2007 1 Ch 4. Pattern Recognition 1(3) Bayes Decision Theory Minimum-Error-Rate Decision Rules Discriminant

More information

Piecewise-Planar 3D Reconstruction with Edge and Corner Regularization

Piecewise-Planar 3D Reconstruction with Edge and Corner Regularization Piecewise-Planar 3D Reconstruction with Edge and Corner Regularization Alexandre Boulch Martin de La Gorce Renaud Marlet IMAGINE group, Université Paris-Est, LIGM, École Nationale des Ponts et Chaussées

More information

Loop detection and extended target tracking using laser data

Loop detection and extended target tracking using laser data Licentiate seminar 1(39) Loop detection and extended target tracking using laser data Karl Granström Division of Automatic Control Department of Electrical Engineering Linköping University Linköping, Sweden

More information

Spring Localization II. Roland Siegwart, Margarita Chli, Juan Nieto, Nick Lawrance. ASL Autonomous Systems Lab. Autonomous Mobile Robots

Spring Localization II. Roland Siegwart, Margarita Chli, Juan Nieto, Nick Lawrance. ASL Autonomous Systems Lab. Autonomous Mobile Robots Spring 2018 Localization II Localization I 16.04.2018 1 knowledge, data base mission commands Localization Map Building environment model local map position global map Cognition Path Planning path Perception

More information

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009 Learning and Inferring Depth from Monocular Images Jiyan Pan April 1, 2009 Traditional ways of inferring depth Binocular disparity Structure from motion Defocus Given a single monocular image, how to infer

More information

HOUGH TRANSFORM. Plan for today. Introduction to HT. An image with linear structures. INF 4300 Digital Image Analysis

HOUGH TRANSFORM. Plan for today. Introduction to HT. An image with linear structures. INF 4300 Digital Image Analysis INF 4300 Digital Image Analysis HOUGH TRANSFORM Fritz Albregtsen 14.09.2011 Plan for today This lecture goes more in detail than G&W 10.2! Introduction to Hough transform Using gradient information to

More information