Detection of Parked Vehicles from a Radar Based Occupancy Grid


2014 IEEE Intelligent Vehicles Symposium (IV), June 8-11, Dearborn, Michigan, USA

Renaud Dubé (1), Markus Hahn (2), Markus Schütz (2), Jürgen Dickmann (2) and Denis Gingras (1)

Abstract: For autonomous parking applications to become possible, knowledge about the parking environment is required. We therefore present a real-time algorithm for detecting parked vehicles from radar data. The data are first accumulated in an occupancy grid, from which objects are detected by applying techniques borrowed from the computer vision field. Two random forest classifiers are trained to recognize two categories of objects: parallel-parked vehicles and cross-parked vehicles. The performance of the classifiers is evaluated, as well as the capacity of the complete system to detect parked vehicles in real-world scenarios.

I. INTRODUCTION

Object detection and scene understanding are key components of Advanced Driver Assistance Systems. In order to enable autonomous driving in semi-structured environments such as parking lots, low-level mapping must be further processed to achieve a high level of awareness [1], [2]. According to Thrun [3], object maps offer several advantages over their metric and topological counterparts. Among others, they better represent situations where static objects can become dynamic, and they are more closely related to the human perception of the environment. Object maps are particularly well suited to situations where many instances of objects of the same type are present in the environment [4], which supports their application to detecting parked vehicles in a parking lot. A parked vehicle detection algorithm could be particularly useful for applications such as autonomous valet parking. As an example, a landmark-based localization algorithm could reduce its dependence on a particular landmark, knowing that it represents a vehicle which may not always be static.
Similarly, this information could be useful for collision mitigation, knowing that a vehicle, unlike a tree, can potentially move. The recognition of parked vehicles can also lead to the detection of free parking spots, as demonstrated by [2].

Previous work has been done on detecting parked vehicles from laser range data. Keat et al. [5] proposed an algorithm to extract vehicles from a SICK laser scan, assuming that vehicles are represented by an L shape. This assumption does not hold in dense cross-parking areas, where only the vehicles' front and rear bumpers are visible in laser data. Also, this strategy cannot distinguish between a parked vehicle and the corner of a wall. Zhou et al. [2] demonstrated that it is possible to extract vehicle bumpers from a laser scan. This solution solves the problem of dense cross-parking but is not applicable in parallel parking situations. Sun et al. [6] give an overview of the state of the art in camera-based vehicle detection. Scharwächter et al. [7] introduce a camera-based bag-of-features approach for scene segmentation of cluttered urban traffic scenarios.

In this paper we present a real-time algorithm capable of detecting both parallel-parked and cross-parked vehicles from radar data. A radar-based occupancy grid is built according to Elfes [8]. Parked vehicle candidates are then extracted from this grid. These candidates are described and classified in order to assert the presence of vehicles. The proposed method addresses the challenge of detecting both parallel-parked and cross-parked vehicles.

(1) Electrical and Computer Engineering Department, Université de Sherbrooke, Sherbrooke, Canada. {renaud.dube, denis.gingras}@usherbrooke.ca
(2) Group Research and Advanced Engineering, Daimler AG, Ulm, Germany. {markus.hahn, markus.m.schuetz, juergen.dickmann}@daimler.com
A distinction is made between cross-parked vehicles, which stand perpendicular to the lane direction, and parallel-parked vehicles, which stand parallel to it. Finally, radar sensors are considered in our work because this technology is already present in the high-end automotive market. Radar technology is also popular due to its reliable performance in different weather conditions and its relatively low price compared to laser scanners.

The paper is structured as follows: Section II introduces the proposed algorithm for detecting parked vehicles from radar data. The experiment described in Section III evaluates the system's performance. Finally, Section IV concludes the discussion and gives a brief outlook on future work.

II. ALGORITHM DESCRIPTION

The proposed parked vehicle detection algorithm consists of four main steps: occupancy grid generation, candidate selection, feature extraction and classification.

A. Occupancy grid generation

The goal of this first step is to compute an occupancy grid, as introduced by Elfes [8], in order to accumulate the radar data over time. Our occupancy grid map M_k = {m_1, m_2, ..., m_N} consists of N grid cells m_i, which represent the environment as a 2D space with equally sized cells. For the current parked vehicle detection application, the grid cell size is fixed to 0.1 m x 0.1 m. Each cell is a probabilistic variable describing the occupancy of this cell. Assuming that M_k is the grid map at time k and that the grid cells are independent of one another, the occupancy grid map can be modeled as a posterior probability:

    P(M_k | Z_{1:k}, X_{1:k}) = \prod_i P(m_i | Z_{1:k}, X_{1:k})    (1)

where P(m_i | Z_{1:k}, X_{1:k}) is the inverse sensor model, which describes the probability of occupancy of the i-th cell, given

the measurements Z_{1:k} and the dynamic object state X_{1:k}. Each measurement consists of n radar detections Z_j = {z_{1,j}, z_{2,j}, ..., z_{n,j}}. The occupancy value of each cell is calculated by a binary Bayesian filter. In practice, the log posterior is used to integrate new measurements efficiently. Instead of performing multiplications, the use of the log odds ratio simplifies the calculation to additions and avoids the instabilities of computing probabilities near zero or one. The log odds occupancy grid map is formalized as:

    L_k(m_i) = \log \frac{P(m_i | Z_{1:k}, X_{1:k})}{1 - P(m_i | Z_{1:k}, X_{1:k})}    (2)

The recursive formulation of the map update in log odds ratio form is given by [9]:

    L_k(m_i) = L_{k-1}(m_i) + \log \frac{P(m_i | Z_k, X_k)}{1 - P(m_i | Z_k, X_k)} - L_0(m_i)    (3)

where L_{k-1}(m_i) and L_0(m_i) are the previous and prior log odds values of grid cell i. Assuming that no prior knowledge is available, the prior probability of unknown cells is set to P_0(m_i) = 0.5, so that the above equation produces the prior log odds ratio L_0 = 0. The log odds formulation of equation (2) can be inverted to obtain the corresponding probability of map M_k. A radar based occupancy map is displayed in Fig. 1. The map is projected into the image of our documentation camera so that one can appreciate how the radars perceive the environment.

B. Object detection and candidate selection

This second step selects parked vehicle candidates which will be described and classified in later steps. Two lists of candidates are built: one for cross-parked vehicles and one for parallel-parked vehicles. This step can be further divided into three substeps: the detection of objects of interest, the estimation of these objects' directions, and the final choice of candidates.

1) Object detection: In order to detect objects of interest, the map M is converted to a binary grid B using a threshold α, as described in equations (4) and (5).
    b_i = 1 if m_i >= α    (4)
    b_i = 0 if m_i <  α    (5)

The binary grid B is clustered by an 8-connected component analysis, providing an initial list of objects of interest. Objects containing too few cells are removed from this list, since they can hardly represent a parked vehicle, leading to the list O = {O_1, O_2, ..., O_N}.

2) Direction estimation: From the analysis of different parking scenarios, it is clear that the direction of parked vehicles is often correlated with the direction of the trajectory or with the main orientation of the lane structure. The problem of determining the vehicle direction is thus simplified by making the hypothesis that vehicles are either perpendicular or parallel to the trajectory direction. Therefore, a direction ψ_i which is perpendicular to the trajectory and pointing toward the object O_i is associated with every object.

Fig. 1. Radar occupancy grid (top) and its projection into the documentation camera image (bottom). The map colors represent the probabilities of occupancy (0% to 100%). The driven path is plotted as a dashed white line.

Fig. 2. Cutout map c_i of a cross-parked vehicle. The zoomed map is obtained by rotating an object O_i according to its direction ψ_i, which is perpendicular to the vehicle trajectory represented by a green line.

3) Candidate selection: The candidate selection procedure differs for cross-parked and parallel-parked vehicles. The description is divided accordingly.

Cross-parked vehicles: In radar based occupancy grids representing dense parking lots, most often only one part of a cross-parked vehicle is observable. Knowing the direction ψ_i of an object O_i, a rotation is made toward a normalized direction, yielding a cutout map c_i as displayed in Fig. 2.
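The grid accumulation and object detection substeps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the inverse sensor model is left abstract, and the threshold `alpha` and minimum object size `min_cells` are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def update_log_odds(L, inv_sensor_p, L0=0.0):
    """One recursive log-odds update, equation (3):
    L_k = L_{k-1} + log(p / (1 - p)) - L_0."""
    p = np.clip(inv_sensor_p, 1e-6, 1.0 - 1e-6)  # avoid log(0) instabilities
    return L + np.log(p / (1.0 - p)) - L0

def log_odds_to_prob(L):
    """Invert the log odds of equation (2) back to probabilities."""
    return 1.0 - 1.0 / (1.0 + np.exp(L))

def detect_objects(prob_map, alpha=0.7, min_cells=20):
    """Threshold the map (equations (4)-(5)), cluster the binary grid
    with an 8-connected component analysis, and drop tiny objects."""
    binary = prob_map >= alpha
    labels, n = ndimage.label(binary, structure=np.ones((3, 3)))  # 8-connectivity
    return [np.argwhere(labels == k) for k in range(1, n + 1)
            if (labels == k).sum() >= min_cells]
```

Working in log odds keeps each measurement integration a simple addition per cell, which is what makes the grid update feasible in real time.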

Note that the cutout map of each candidate is oriented in this way in order to provide the classifiers with consistent inputs. In order to determine whether an object is a candidate for a cross-parked vehicle, binary rules are applied to the dimensions of the cutout map c_i. This decision yields a list C of Q cross-parked vehicle candidates:

    C = {C_1, C_2, ..., C_Q} where C_u = {O_i, c_i}, i ∈ 1...N    (6)

Parallel-parked vehicles: Extracting candidates for parallel-parked vehicles is more challenging, since two different situations need to be considered. In the simplest form, the shape of a parallel-parked vehicle can be completely covered by a single object in the occupancy grid. Applying a similar dimension decision to the cutout map of each object gives an initial list of parallel-parked vehicle candidates:

    P = {P_1, P_2, ..., P_R} where P_u = {O_i, c_i}, i ∈ 1...N    (7)

A parallel-parked vehicle can also be represented by two distinct L-shaped objects in the grid: one for the front of the vehicle and one for the back. This situation occurs at higher driving speeds or when driving very close to the vehicle, which reduces the number of radar detections of the target. In this case, every pair of close objects is considered. The same set of dimension rules determines whether the cutout maps resulting from these object pairs represent parallel-parked vehicle candidates:

    P_u, u = R+1, R+2, ..., S    (8)

    where P_u = {O_i, O_j, c_u}, i ≠ j, i ∈ 1...N, j ∈ 1...N    (9)

Fig. 3. Cutout map samples of true cross-parked vehicles (a), false cross-parked vehicles (b), true parallel-parked vehicles (c) and false parallel-parked vehicles (d). Colors in cutout maps represent occupancy probabilities.

At this point, it is relevant to illustrate in Fig. 3 some cutout map samples extracted from the radar data. Note in Fig. 3 (d) that two cross-parked vehicles make a parallel-parked vehicle candidate.
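The cutout extraction and dimension tests above can be sketched as follows. The paper does not specify its binary dimension rules, so the bounds below are hypothetical values motivated by typical vehicle footprints; the padding and interpolation order are likewise assumptions.

```python
import numpy as np
from scipy import ndimage

CELL = 0.1  # grid cell size in metres (Section II-A)

def cutout_map(prob_map, obj_cells, psi_deg):
    """Crop a padded bounding box around object O_i and rotate it by
    its direction psi so that all candidates share one orientation."""
    (r0, c0), (r1, c1) = obj_cells.min(axis=0), obj_cells.max(axis=0) + 1
    pad = 5
    patch = prob_map[max(r0 - pad, 0):r1 + pad, max(c0 - pad, 0):c1 + pad]
    return ndimage.rotate(patch, psi_deg, reshape=True, order=1)

# Hypothetical dimension rules (the paper's actual thresholds are not given)
def is_cross_candidate(c):
    h, w = c.shape[0] * CELL, c.shape[1] * CELL
    return 0.5 <= h <= 3.0 and 1.0 <= w <= 2.5

def is_parallel_candidate(c):
    h, w = c.shape[0] * CELL, c.shape[1] * CELL
    return 3.0 <= h <= 7.0 and 1.0 <= w <= 3.0
```

For the two-object case of equations (8)-(9), the same tests would be applied to the joint cutout of each pair of nearby objects.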
It is important to understand that the classifier for parallel-parked vehicles will also see instances of cross-parked vehicles and will need to classify those as false parallel-parked vehicles. Similarly, the top left sample of Fig. 3 (b) represents part of a parallel-parked vehicle, and the cross-parked vehicle classifier will have to assign the false cross-parked vehicle label to it.

C. Feature extraction

In order to determine whether a candidate is effectively a parked vehicle, its cutout map first needs to be processed. The purpose of feature extraction is to compress the raw data and build candidate signatures suitable for classification. Given the input cutout map c associated with each candidate, four descriptors are computed, resulting in the feature vector f = [f_1 f_2 f_3 f_4]. Both types of candidates are described using the same set of features.

f_1 Candidate size: The height h and the width w of the cutout map (in meters).

    f_1 = [h w]    (10)

f_2 Candidate response strength: The average value of every cell of the n x m cutout map c.

    f_2 = (1 / mn) Σ_{i=1}^{m} Σ_{j=1}^{n} c_{ij}    (11)

f_3 Candidate shape: This feature, along with f_4, is used to represent the frequent U or L shape of parked vehicles in a radar based occupancy grid. The sum of every cell of the cutout map along its horizontal dimension is computed:

    b = (b_1 b_2 ... b_n) where b_u = Σ_{i=1}^{m} c_{iu}    (12)

The resulting vector b is sub-sampled to a vector f_3 of definite length (1x10) and normalized. The sub-sampling to a definite-length vector ensures homogeneity between features extracted from cutout maps of different sizes. Fig. 4 illustrates this feature for a cross-parked vehicle.

Fig. 4. Representation of a cross-parked vehicle cutout map c (top) and its corresponding border projection (bottom).

f_4 Candidate local orientation: This feature is a measure of the orientation of the object within the cutout map. By considering the cutout map as a textured image, one can compute the local orientation at each point of the map according to Hong et al. [10]. The matrices G_x(i, j) and G_y(i, j) are first computed. They represent the gradients at every cell (i, j) of map c in the x and y axes respectively. Two 5x5 Gaussian gradient filters with σ = 1 are used for this purpose. Differently from [10], the local orientation is calculated at full resolution, i.e. at every cell (i, j):

    θ(i, j) = tan^{-1}( G_y(i, j) / G_x(i, j) )    (13)

The resulting local orientation map θ needs to be filtered due to the uncertainty in the measurements. An averaging filter cannot be directly applied to the local orientation map θ, because of the nonlinearity between 0° and 180°, which both represent a horizontal orientation. The orientation map can instead be filtered by vector averaging [11]:

    θ'(i, j) = (1/2) tan^{-1}( [W(i, j) * sin(2θ(i, j))] / [W(i, j) * cos(2θ(i, j))] )    (14)

where W(i, j) is a two-dimensional low-pass filter; a 7x7 Gaussian filter with σ = 3 is used. Similarly to f_3, the filtered local orientation grid θ' is averaged along its horizontal axis:

    d = (d_1 d_2 ... d_n) where d_u = (1/m) Σ_{i=1}^{m} θ'_{iu}    (15)

Following the principle of f_3, d is sub-sampled to the 1x10 vector f_4. An example of this feature is shown in Fig. 5. Note the nonlinearity between 0 and π observable at bins 5 and 6 of Fig. 5. The classifier presented in the following section will deal with this nonlinearity.

Fig. 5. Local orientation map of a cross-parked vehicle (top) and its corresponding candidate shape (bottom).
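The f_4 feature can be sketched as follows. This is a hedged approximation, not the paper's exact filters: Gaussian-derivative filters stand in for the 5x5 Gaussian gradient filters, `arctan2` replaces the plain tan^{-1} of equation (13), and the 7x7 Gaussian W of equation (14) is approximated by a σ = 3 Gaussian smoothing of the sine and cosine maps.

```python
import numpy as np
from scipy import ndimage

def local_orientation_feature(c, n_bins=10):
    """Feature f4: local orientation of the cutout map c, smoothed by
    vector averaging to handle the 0/180 degree wrap-around, then
    averaged column-wise and sub-sampled to a fixed length."""
    # Gradients G_x, G_y via Gaussian-derivative filters (sigma = 1)
    gx = ndimage.gaussian_filter(c, sigma=1, order=(0, 1))
    gy = ndimage.gaussian_filter(c, sigma=1, order=(1, 0))
    theta = np.arctan2(gy, gx)  # equation (13), four-quadrant form
    # Vector averaging (equation (14)): low-pass filter sin/cos of the
    # doubled angle instead of the angle itself
    s = ndimage.gaussian_filter(np.sin(2 * theta), sigma=3)
    co = ndimage.gaussian_filter(np.cos(2 * theta), sigma=3)
    theta_f = 0.5 * np.arctan2(s, co)
    # Average along the horizontal axis (equation (15)) and sub-sample
    # to a 1x10 vector, as done for f3
    d = theta_f.mean(axis=0)
    idx = np.round(np.linspace(0, len(d) - 1, n_bins)).astype(int)
    return d[idx]
```

Doubling the angle before averaging is what removes the 0°/180° ambiguity: both orientations map to the same point on the unit circle, so neighbouring cells with nearly horizontal orientation no longer cancel each other out.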
D. Classification

The candidates are classified using two trained random forest classifiers, as introduced by Breiman [12]: one for categorizing the candidates from list C and one for categorizing the candidates from list P. The classifiers are first presented, followed by the supervised training process and the means of combining the classification results.

1) Random forest classifier: The idea behind this classifier is to construct a multitude of different decision trees and to have them vote for the winning class. Randomness should be introduced in the training of each tree in order to lessen the correlation between the trees. Aside from this correlation level, the classification performance of the forest is also determined by the individual strength of each tree. According to [12], random forests offer classification performance similar to the AdaBoost algorithm. Also, a random forest is less sensitive to noise in the output label (such as a mislabeled candidate), since it does not concentrate its efforts on misclassified candidates. Finally, random forest classifiers can be used to evaluate the importance of particular features. In this manner, a longer list of considered features was reduced to the four features described in the previous section.

2) Training: In order to balance the correlation level and the individual strength of each tree, Breiman [12] proposes two means of inserting randomness during the supervised training process. First, each tree should be trained using a bootstrapped subset of the training data set. Second, every tree node should be trained using a random subset of the features. The actual implementation of the random forest classifier is based on work from [13]. Some parameters are adjusted, including the number of trees in the forest (50) and the number of random features for training each node (2). Also, during bootstrapping of the training data, a weighted sampling is performed in order to balance the two classes.
That is, candidates from a class with lower a priori probability have a higher chance of being selected during training. Given a set of features f, the random forest classifiers assign to each candidate of lists C and P a classification score w of being a parked vehicle.

3) Classification results combination: In order to combine the results of the two classifiers, candidates with a classification score w above 50% are checked for overlap with each other. In the case where two positively classified candidates overlap, the candidate with the highest classification score is confirmed to be the parked vehicle at this position. In future work, we are interested in exploring the application of Dempster-Shafer theory to resolve this kind of ambiguity in the decision process.
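The training setup above (50 trees, 2 random features per node, class-weighted bootstrap sampling) can be sketched with scikit-learn instead of the Matlab toolbox of [13]; this is an equivalent sketch, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_forest(features, labels):
    """Train one of the two forests: 50 trees, 2 random features tried
    at each node split, and class-weighted bootstrap sampling to
    balance the rarer 'vehicle' class against the 'non-vehicle' class."""
    clf = RandomForestClassifier(
        n_estimators=50,                    # trees in the forest
        max_features=2,                     # random features per node
        class_weight="balanced_subsample",  # weighted bootstrap sampling
        random_state=0,
    )
    clf.fit(features, labels)
    return clf

def classification_score(clf, f):
    """Classification score w: averaged tree votes for the 'vehicle'
    class (label 1)."""
    return clf.predict_proba(np.asarray(f).reshape(1, -1))[0, 1]
```

The same `feature_importances_` attribute of the fitted forest can be used the way the paper describes, i.e. to prune a longer feature list down to the four retained descriptors.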

III. EXPERIMENT

A. Vehicle presentation

The Mercedes-Benz S 500 INTELLIGENT DRIVE research vehicle is used for this experiment. It is equipped with four short range radar sensors at the vehicle corners, providing a 360° environment perception. The sensors operate at 76 GHz and have a coverage of up to 40 m with an accuracy below 0.15 m. Each sensor's field of view is 140° with an accuracy below 1°. The research vehicle and its sensor configuration are illustrated in Fig. 6.

Fig. 6. Mercedes-Benz S 500 INTELLIGENT DRIVE research vehicle (top) and its radar configuration (bottom).

B. Classification performance

Radar data were recorded during six different sequences, ranging from a drive in a parking lot crowded with cross-parked vehicles to a drive on an urban street with dispersed parallel-parked vehicles. From these data, lists of candidates were pre-processed and carefully labeled. As a result, two data sets were created. The cross-parked vehicles set contains 1119 samples (268 true vehicles and 851 false vehicles), while the parallel-parked vehicles set contains 1342 samples (300 true vehicles and 1042 false vehicles). Several samples are illustrated in Fig. 3.

In order to predict the performance of the classifiers, a repeated random sub-sampling validation was performed. Each data set was randomly split into a training set (2/3) and a validation set (1/3).

TABLE I. CROSS-PARKED VEHICLES CLASSIFICATION RESULTS

                      Classified as vehicle    Classified as non-vehicle
    True vehicle      95.9% ± 2.0%             4.1% ± 2.0%
    True non-vehicle   1.5% ± 0.7%             98.5% ± 0.7%

TABLE II. PARALLEL-PARKED VEHICLES CLASSIFICATION RESULTS

                      Classified as vehicle    Classified as non-vehicle
    True vehicle      95.1% ± 2.0%             4.9% ± 2.0%
    True non-vehicle   2.5% ± 0.9%             97.5% ± 0.9%

Fig. 7. Average ROC curves (true positive rate vs. false positive rate) for cross-parked and parallel-parked vehicle classification.
Classifier performance was evaluated using this division, and the process was repeated for a total of 100 rounds. This type of cross-validation offers the possibility to select the training/validation ratio; however, some samples may never be considered in the validation phase. Over these 100 rounds, the classification results were averaged and are summarized in Table I and Table II. The respective averaged ROC curves are shown in Fig. 7. Finally, the average accuracy of the classifiers is 97.9% ± 0.7% for cross-parked vehicles and 96.9% ± 0.7% for parallel-parked vehicles.

C. Experiment scenarios

Fig. 8 depicts a parking scenario where both cross-parked vehicles and parallel-parked vehicles surround the trajectory. The green line represents the moving vehicle's trajectory, while the red and green boxes identify parked vehicles which are detected by the system. It is interesting to note that the classifiers correctly distinguished the two cross-parked SMART vehicles from the larger parallel-parked vehicle at the top right.
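The repeated random sub-sampling validation of Section III-B can be sketched as follows; the forest parameters are placeholders, and `ShuffleSplit` provides the repeated 2/3 / 1/3 splits.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import ShuffleSplit

def repeated_holdout_accuracy(X, y, rounds=100, train_frac=2 / 3):
    """Repeated random sub-sampling validation: each round draws a new
    random 2/3 training / 1/3 validation split, trains a fresh forest
    and records its accuracy; results are averaged over all rounds."""
    splitter = ShuffleSplit(n_splits=rounds, train_size=train_frac,
                            random_state=0)
    accuracies = []
    for train_idx, val_idx in splitter.split(X):
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        accuracies.append((clf.predict(X[val_idx]) == y[val_idx]).mean())
    return float(np.mean(accuracies)), float(np.std(accuracies))
```

As the paper notes, this scheme lets one choose the split ratio freely, at the cost that an unlucky sample may never appear in any validation set.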

Fig. 8. Scenario 1: Parked vehicle detection in a parking lot. The vehicle trajectory is represented by a green line. Red and green boxes respectively represent parallel-parked and cross-parked vehicles which are detected by the system.

Fig. 9. Scenario 2: Parked vehicle detection in an urban area. The vehicle trajectory is represented by a green line. Red and green boxes respectively represent parallel-parked and cross-parked vehicles which are detected by the system.

Fig. 9 illustrates a situation on an urban road with several street-parked vehicles. The higher vehicle speed in this scenario results in a different appearance for parallel-parked vehicles, which have a lower probability of occupancy on the side of the vehicle.

IV. CONCLUSION

This paper presented an algorithm for detecting both parallel-parked and cross-parked vehicles from radar data. From these data an occupancy grid is built, from which candidates are extracted. Two random forest classifiers are used to assert the presence of parked vehicles in two different modes: cross-parked and parallel-parked. During the algorithm development, two hypotheses were made.

First, the parked vehicles' orientation was assumed to be correlated with the direction of the trajectory. Instead of using the trajectory as an input, one could determine the structure of the parking lot as presented by [1] and derive the vehicle orientation from the lane orientation. This would not cover every case, but important a priori knowledge can be obtained this way. Future work will explore L-shape fitting and orientation estimation by classification in order to improve vehicle detection performance.

Second, only the parked vehicles behind the current moving vehicle were detected, in order to make use of the radars installed at the back of the vehicle. The parked vehicles were labeled after a passage of the moving vehicle, so that the classifiers were trained with prolonged integrated data. This issue could be solved in a next step by including time-dependent features as well as information concerning the positions of the objects of interest relative to the moving vehicle's position.

Finally, this vehicle detection approach could be adapted to detect other common objects found in parking lots, such as curbstones, posts and charging stations for HEVs. This could lead to the implementation of a cognitive map as presented by [14]. A cognitive map could enable the vehicle to better interpret and understand the parking lot environment. As introduced, knowledge about the presence of parked vehicles could be particularly useful for several tasks of autonomous parking, such as collision mitigation, localization and parking spot detection.

ACKNOWLEDGMENT

We are thankful to Daimler AG, who supported this work.

REFERENCES

[1] D. Dolgov and S. Thrun, "Autonomous driving in semi-structured environments: Mapping and planning," in Robotics and Automation, ICRA '09. IEEE International Conference on. IEEE, 2009.
[2] J. Zhou, L. E. Navarro-Serment, and M. Hebert, "Detection of parking spots using 2D range data," in Intelligent Transportation Systems (ITSC), International IEEE Conference on. IEEE, 2012.
[3] S. Thrun, "Robotic mapping: A survey," Exploring Artificial Intelligence in the New Millennium, pp. 1-35.
[4] D. Anguelov, R. Biswas, D. Koller, B. Limketkai, and S. Thrun, "Learning hierarchical object maps of non-stationary environments with mobile robots," in Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann Publishers Inc., 2002.
[5] C. Keat, C. Pradalier, and C. Laugier, "Vehicle detection and car park mapping using laser scanner," in Intelligent Robots and Systems, 2005 (IROS 2005). IEEE/RSJ International Conference on. IEEE, 2005.
[6] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection: A review," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 28, no. 5.
[7] T. Scharwächter, M. Enzweiler, U. Franke, and S. Roth, "Efficient multi-cue scene segmentation," in Pattern Recognition. Springer, 2013.
[8] A. Elfes, "Using occupancy grids for mobile robot perception and navigation," Computer, vol. 22, no. 6.
[9] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. MIT Press, Cambridge, 2005, vol. 1.
[10] L. Hong, Y. Wan, and A. Jain, "Fingerprint image enhancement: Algorithm and performance evaluation," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 20, no. 8.
[11] S. Chikkerur, A. N. Cartwright, and V. Govindaraju, "Fingerprint enhancement using STFT analysis," Pattern Recognition, vol. 40, no. 1.
[12] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5-32.
[13] P. Dollár, "Piotr's Image and Video Matlab Toolbox (PMT)," http://vision.ucsd.edu/~pdollar/toolbox/doc/index.html.
[14] S. Vasudevan, S. Gächter, V. Nguyen, and R. Siegwart, "Cognitive maps for mobile robots: An object based approach," Robotics and Autonomous Systems, vol. 55, no. 5.


Transactions on Information and Communications Technologies vol 16, 1996 WIT Press, ISSN

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press,   ISSN ransactions on Information and Communications echnologies vol 6, 996 WI Press, www.witpress.com, ISSN 743-357 Obstacle detection using stereo without correspondence L. X. Zhou & W. K. Gu Institute of Information

More information

Seminar Heidelberg University

Seminar Heidelberg University Seminar Heidelberg University Mobile Human Detection Systems Pedestrian Detection by Stereo Vision on Mobile Robots Philip Mayer Matrikelnummer: 3300646 Motivation Fig.1: Pedestrians Within Bounding Box

More information

Robot Mapping. SLAM Front-Ends. Cyrill Stachniss. Partial image courtesy: Edwin Olson 1

Robot Mapping. SLAM Front-Ends. Cyrill Stachniss. Partial image courtesy: Edwin Olson 1 Robot Mapping SLAM Front-Ends Cyrill Stachniss Partial image courtesy: Edwin Olson 1 Graph-Based SLAM Constraints connect the nodes through odometry and observations Robot pose Constraint 2 Graph-Based

More information

차세대지능형자동차를위한신호처리기술 정호기

차세대지능형자동차를위한신호처리기술 정호기 차세대지능형자동차를위한신호처리기술 008.08. 정호기 E-mail: hgjung@mando.com hgjung@yonsei.ac.kr 0 . 지능형자동차의미래 ) 단위 system functions 운전자상황인식 얼굴방향인식 시선방향인식 졸음운전인식 운전능력상실인식 차선인식, 전방장애물검출및분류 Lane Keeping System + Adaptive Cruise

More information

Spring Localization II. Roland Siegwart, Margarita Chli, Juan Nieto, Nick Lawrance. ASL Autonomous Systems Lab. Autonomous Mobile Robots

Spring Localization II. Roland Siegwart, Margarita Chli, Juan Nieto, Nick Lawrance. ASL Autonomous Systems Lab. Autonomous Mobile Robots Spring 2018 Localization II Localization I 16.04.2018 1 knowledge, data base mission commands Localization Map Building environment model local map position global map Cognition Path Planning path Perception

More information

Localization and Map Building

Localization and Map Building Localization and Map Building Noise and aliasing; odometric position estimation To localize or not to localize Belief representation Map representation Probabilistic map-based localization Other examples

More information

This chapter explains two techniques which are frequently used throughout

This chapter explains two techniques which are frequently used throughout Chapter 2 Basic Techniques This chapter explains two techniques which are frequently used throughout this thesis. First, we will introduce the concept of particle filters. A particle filter is a recursive

More information

3D LIDAR Point Cloud based Intersection Recognition for Autonomous Driving

3D LIDAR Point Cloud based Intersection Recognition for Autonomous Driving 3D LIDAR Point Cloud based Intersection Recognition for Autonomous Driving Quanwen Zhu, Long Chen, Qingquan Li, Ming Li, Andreas Nüchter and Jian Wang Abstract Finding road intersections in advance is

More information

Semantic Mapping and Reasoning Approach for Mobile Robotics

Semantic Mapping and Reasoning Approach for Mobile Robotics Semantic Mapping and Reasoning Approach for Mobile Robotics Caner GUNEY, Serdar Bora SAYIN, Murat KENDİR, Turkey Key words: Semantic mapping, 3D mapping, probabilistic, robotic surveying, mine surveying

More information

Vehicle Detection Method using Haar-like Feature on Real Time System

Vehicle Detection Method using Haar-like Feature on Real Time System Vehicle Detection Method using Haar-like Feature on Real Time System Sungji Han, Youngjoon Han and Hernsoo Hahn Abstract This paper presents a robust vehicle detection approach using Haar-like feature.

More information

Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization

Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization Marcus A. Brubaker (Toyota Technological Institute at Chicago) Andreas Geiger (Karlsruhe Institute of Technology & MPI Tübingen) Raquel

More information

Perception. Autonomous Mobile Robots. Sensors Vision Uncertainties, Line extraction from laser scans. Autonomous Systems Lab. Zürich.

Perception. Autonomous Mobile Robots. Sensors Vision Uncertainties, Line extraction from laser scans. Autonomous Systems Lab. Zürich. Autonomous Mobile Robots Localization "Position" Global Map Cognition Environment Model Local Map Path Perception Real World Environment Motion Control Perception Sensors Vision Uncertainties, Line extraction

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

Local invariant features

Local invariant features Local invariant features Tuesday, Oct 28 Kristen Grauman UT-Austin Today Some more Pset 2 results Pset 2 returned, pick up solutions Pset 3 is posted, due 11/11 Local invariant features Detection of interest

More information

Robot localization method based on visual features and their geometric relationship

Robot localization method based on visual features and their geometric relationship , pp.46-50 http://dx.doi.org/10.14257/astl.2015.85.11 Robot localization method based on visual features and their geometric relationship Sangyun Lee 1, Changkyung Eem 2, and Hyunki Hong 3 1 Department

More information

Large-Scale Traffic Sign Recognition based on Local Features and Color Segmentation

Large-Scale Traffic Sign Recognition based on Local Features and Color Segmentation Large-Scale Traffic Sign Recognition based on Local Features and Color Segmentation M. Blauth, E. Kraft, F. Hirschenberger, M. Böhm Fraunhofer Institute for Industrial Mathematics, Fraunhofer-Platz 1,

More information

Grid-Based Models for Dynamic Environments

Grid-Based Models for Dynamic Environments Grid-Based Models for Dynamic Environments Daniel Meyer-Delius Maximilian Beinhofer Wolfram Burgard Abstract The majority of existing approaches to mobile robot mapping assume that the world is, an assumption

More information

Today MAPS AND MAPPING. Features. process of creating maps. More likely features are things that can be extracted from images:

Today MAPS AND MAPPING. Features. process of creating maps. More likely features are things that can be extracted from images: MAPS AND MAPPING Features In the first class we said that navigation begins with what the robot can see. There are several forms this might take, but it will depend on: What sensors the robot has What

More information

Robotics Project. Final Report. Computer Science University of Minnesota. December 17, 2007

Robotics Project. Final Report. Computer Science University of Minnesota. December 17, 2007 Robotics Project Final Report Computer Science 5551 University of Minnesota December 17, 2007 Peter Bailey, Matt Beckler, Thomas Bishop, and John Saxton Abstract: A solution of the parallel-parking problem

More information

Small-scale objects extraction in digital images

Small-scale objects extraction in digital images 102 Int'l Conf. IP, Comp. Vision, and Pattern Recognition IPCV'15 Small-scale objects extraction in digital images V. Volkov 1,2 S. Bobylev 1 1 Radioengineering Dept., The Bonch-Bruevich State Telecommunications

More information

Mobile Human Detection Systems based on Sliding Windows Approach-A Review

Mobile Human Detection Systems based on Sliding Windows Approach-A Review Mobile Human Detection Systems based on Sliding Windows Approach-A Review Seminar: Mobile Human detection systems Njieutcheu Tassi cedrique Rovile Department of Computer Engineering University of Heidelberg

More information

Probabilistic Robotics

Probabilistic Robotics Probabilistic Robotics Bayes Filter Implementations Discrete filters, Particle filters Piecewise Constant Representation of belief 2 Discrete Bayes Filter Algorithm 1. Algorithm Discrete_Bayes_filter(

More information

High Level Sensor Data Fusion for Automotive Applications using Occupancy Grids

High Level Sensor Data Fusion for Automotive Applications using Occupancy Grids High Level Sensor Data Fusion for Automotive Applications using Occupancy Grids Ruben Garcia, Olivier Aycard, Trung-Dung Vu LIG and INRIA Rhônes Alpes Grenoble, France Email: firstname.lastname@inrialpes.fr

More information

Machine learning based automatic extrinsic calibration of an onboard monocular camera for driving assistance applications on smart mobile devices

Machine learning based automatic extrinsic calibration of an onboard monocular camera for driving assistance applications on smart mobile devices Technical University of Cluj-Napoca Image Processing and Pattern Recognition Research Center www.cv.utcluj.ro Machine learning based automatic extrinsic calibration of an onboard monocular camera for driving

More information

Detection and recognition of moving objects using statistical motion detection and Fourier descriptors

Detection and recognition of moving objects using statistical motion detection and Fourier descriptors Detection and recognition of moving objects using statistical motion detection and Fourier descriptors Daniel Toth and Til Aach Institute for Signal Processing, University of Luebeck, Germany toth@isip.uni-luebeck.de

More information

Self Lane Assignment Using Smart Mobile Camera For Intelligent GPS Navigation and Traffic Interpretation

Self Lane Assignment Using Smart Mobile Camera For Intelligent GPS Navigation and Traffic Interpretation For Intelligent GPS Navigation and Traffic Interpretation Tianshi Gao Stanford University tianshig@stanford.edu 1. Introduction Imagine that you are driving on the highway at 70 mph and trying to figure

More information

Fusion Between Laser and Stereo Vision Data For Moving Objects Tracking In Intersection Like Scenario

Fusion Between Laser and Stereo Vision Data For Moving Objects Tracking In Intersection Like Scenario Fusion Between Laser and Stereo Vision Data For Moving Objects Tracking In Intersection Like Scenario Qadeer Baig, Olivier Aycard, Trung Dung Vu and Thierry Fraichard Abstract Using multiple sensors in

More information

Introduction to Mobile Robotics Techniques for 3D Mapping

Introduction to Mobile Robotics Techniques for 3D Mapping Introduction to Mobile Robotics Techniques for 3D Mapping Wolfram Burgard, Michael Ruhnke, Bastian Steder 1 Why 3D Representations Robots live in the 3D world. 2D maps have been applied successfully for

More information

COMPUTER AND ROBOT VISION

COMPUTER AND ROBOT VISION VOLUME COMPUTER AND ROBOT VISION Robert M. Haralick University of Washington Linda G. Shapiro University of Washington T V ADDISON-WESLEY PUBLISHING COMPANY Reading, Massachusetts Menlo Park, California

More information

Unconstrained License Plate Detection Using the Hausdorff Distance

Unconstrained License Plate Detection Using the Hausdorff Distance SPIE Defense & Security, Visual Information Processing XIX, Proc. SPIE, Vol. 7701, 77010V (2010) Unconstrained License Plate Detection Using the Hausdorff Distance M. Lalonde, S. Foucher, L. Gagnon R&D

More information

Human Motion Detection and Tracking for Video Surveillance

Human Motion Detection and Tracking for Video Surveillance Human Motion Detection and Tracking for Video Surveillance Prithviraj Banerjee and Somnath Sengupta Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur,

More information

Where s the Boss? : Monte Carlo Localization for an Autonomous Ground Vehicle using an Aerial Lidar Map

Where s the Boss? : Monte Carlo Localization for an Autonomous Ground Vehicle using an Aerial Lidar Map Where s the Boss? : Monte Carlo Localization for an Autonomous Ground Vehicle using an Aerial Lidar Map Sebastian Scherer, Young-Woo Seo, and Prasanna Velagapudi October 16, 2007 Robotics Institute Carnegie

More information

Perception IV: Place Recognition, Line Extraction

Perception IV: Place Recognition, Line Extraction Perception IV: Place Recognition, Line Extraction Davide Scaramuzza University of Zurich Margarita Chli, Paul Furgale, Marco Hutter, Roland Siegwart 1 Outline of Today s lecture Place recognition using

More information

Human Detection. A state-of-the-art survey. Mohammad Dorgham. University of Hamburg

Human Detection. A state-of-the-art survey. Mohammad Dorgham. University of Hamburg Human Detection A state-of-the-art survey Mohammad Dorgham University of Hamburg Presentation outline Motivation Applications Overview of approaches (categorized) Approaches details References Motivation

More information

Real-time Stereo Vision for Urban Traffic Scene Understanding

Real-time Stereo Vision for Urban Traffic Scene Understanding Proceedings of the IEEE Intelligent Vehicles Symposium 2000 Dearborn (MI), USA October 3-5, 2000 Real-time Stereo Vision for Urban Traffic Scene Understanding U. Franke, A. Joos DaimlerChrylser AG D-70546

More information

A Neural Network for Real-Time Signal Processing

A Neural Network for Real-Time Signal Processing 248 MalkofT A Neural Network for Real-Time Signal Processing Donald B. Malkoff General Electric / Advanced Technology Laboratories Moorestown Corporate Center Building 145-2, Route 38 Moorestown, NJ 08057

More information

Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History

Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History Simon Thompson and Satoshi Kagami Digital Human Research Center National Institute of Advanced

More information

IROS 05 Tutorial. MCL: Global Localization (Sonar) Monte-Carlo Localization. Particle Filters. Rao-Blackwellized Particle Filters and Loop Closing

IROS 05 Tutorial. MCL: Global Localization (Sonar) Monte-Carlo Localization. Particle Filters. Rao-Blackwellized Particle Filters and Loop Closing IROS 05 Tutorial SLAM - Getting it Working in Real World Applications Rao-Blackwellized Particle Filters and Loop Closing Cyrill Stachniss and Wolfram Burgard University of Freiburg, Dept. of Computer

More information

Overview of Active Vision Techniques

Overview of Active Vision Techniques SIGGRAPH 99 Course on 3D Photography Overview of Active Vision Techniques Brian Curless University of Washington Overview Introduction Active vision techniques Imaging radar Triangulation Moire Active

More information

A System for Real-time Detection and Tracking of Vehicles from a Single Car-mounted Camera

A System for Real-time Detection and Tracking of Vehicles from a Single Car-mounted Camera A System for Real-time Detection and Tracking of Vehicles from a Single Car-mounted Camera Claudio Caraffi, Tomas Vojir, Jiri Trefny, Jan Sochman, Jiri Matas Toyota Motor Europe Center for Machine Perception,

More information

Mobile Robot Mapping and Localization in Non-Static Environments

Mobile Robot Mapping and Localization in Non-Static Environments Mobile Robot Mapping and Localization in Non-Static Environments Cyrill Stachniss Wolfram Burgard University of Freiburg, Department of Computer Science, D-790 Freiburg, Germany {stachnis burgard @informatik.uni-freiburg.de}

More information

Pedestrian Detection Using Multi-layer LIDAR

Pedestrian Detection Using Multi-layer LIDAR 1 st International Conference on Transportation Infrastructure and Materials (ICTIM 2016) ISBN: 978-1-60595-367-0 Pedestrian Detection Using Multi-layer LIDAR Mingfang Zhang 1, Yuping Lu 2 and Tong Liu

More information

ECE 172A: Introduction to Intelligent Systems: Machine Vision, Fall Midterm Examination

ECE 172A: Introduction to Intelligent Systems: Machine Vision, Fall Midterm Examination ECE 172A: Introduction to Intelligent Systems: Machine Vision, Fall 2008 October 29, 2008 Notes: Midterm Examination This is a closed book and closed notes examination. Please be precise and to the point.

More information

Evaluation of Moving Object Tracking Techniques for Video Surveillance Applications

Evaluation of Moving Object Tracking Techniques for Video Surveillance Applications International Journal of Current Engineering and Technology E-ISSN 2277 4106, P-ISSN 2347 5161 2015INPRESSCO, All Rights Reserved Available at http://inpressco.com/category/ijcet Research Article Evaluation

More information

Basics of Localization, Mapping and SLAM. Jari Saarinen Aalto University Department of Automation and systems Technology

Basics of Localization, Mapping and SLAM. Jari Saarinen Aalto University Department of Automation and systems Technology Basics of Localization, Mapping and SLAM Jari Saarinen Aalto University Department of Automation and systems Technology Content Introduction to Problem (s) Localization A few basic equations Dead Reckoning

More information

Contextual priming for artificial visual perception

Contextual priming for artificial visual perception Contextual priming for artificial visual perception Hervé Guillaume 1, Nathalie Denquive 1, Philippe Tarroux 1,2 1 LIMSI-CNRS BP 133 F-91403 Orsay cedex France 2 ENS 45 rue d Ulm F-75230 Paris cedex 05

More information

Kaijen Hsiao. Part A: Topics of Fascination

Kaijen Hsiao. Part A: Topics of Fascination Kaijen Hsiao Part A: Topics of Fascination 1) I am primarily interested in SLAM. I plan to do my project on an application of SLAM, and thus I am interested not only in the method we learned about in class,

More information

Geometrical Feature Extraction Using 2D Range Scanner

Geometrical Feature Extraction Using 2D Range Scanner Geometrical Feature Extraction Using 2D Range Scanner Sen Zhang Lihua Xie Martin Adams Fan Tang BLK S2, School of Electrical and Electronic Engineering Nanyang Technological University, Singapore 639798

More information

Computing Occupancy Grids from Multiple Sensors using Linear Opinion Pools

Computing Occupancy Grids from Multiple Sensors using Linear Opinion Pools Computing Occupancy Grids from Multiple Sensors using Linear Opinion Pools Juan David Adarve, Mathias Perrollaz, Alexandros Makris, Christian Laugier To cite this version: Juan David Adarve, Mathias Perrollaz,

More information

Measuring the World: Designing Robust Vehicle Localization for Autonomous Driving. Frank Schuster, Dr. Martin Haueis

Measuring the World: Designing Robust Vehicle Localization for Autonomous Driving. Frank Schuster, Dr. Martin Haueis Measuring the World: Designing Robust Vehicle Localization for Autonomous Driving Frank Schuster, Dr. Martin Haueis Agenda Motivation: Why measure the world for autonomous driving? Map Content: What do

More information

Advanced Driver Assistance Systems: A Cost-Effective Implementation of the Forward Collision Warning Module

Advanced Driver Assistance Systems: A Cost-Effective Implementation of the Forward Collision Warning Module Advanced Driver Assistance Systems: A Cost-Effective Implementation of the Forward Collision Warning Module www.lnttechservices.com Table of Contents Abstract 03 Introduction 03 Solution Overview 03 Output

More information

Online Signature Verification Technique

Online Signature Verification Technique Volume 3, Issue 1 ISSN: 2320-5288 International Journal of Engineering Technology & Management Research Journal homepage: www.ijetmr.org Online Signature Verification Technique Ankit Soni M Tech Student,

More information

Pedestrian Recognition Using High-definition LIDAR

Pedestrian Recognition Using High-definition LIDAR 20 IEEE Intelligent Vehicles Symposium (IV) Baden-Baden, Germany, June 5-9, 20 Pedestrian Recognition Using High-definition LIDAR Kiyosumi Kidono, Takeo Miyasaka, Akihiro Watanabe, Takashi Naito, and Jun

More information

Particle-Filter-Based Self-Localization Using Landmarks and Directed Lines

Particle-Filter-Based Self-Localization Using Landmarks and Directed Lines Particle-Filter-Based Self-Localization Using Landmarks and Directed Lines Thomas Röfer 1, Tim Laue 1, and Dirk Thomas 2 1 Center for Computing Technology (TZI), FB 3, Universität Bremen roefer@tzi.de,

More information

Data: a collection of numbers or facts that require further processing before they are meaningful

Data: a collection of numbers or facts that require further processing before they are meaningful Digital Image Classification Data vs. Information Data: a collection of numbers or facts that require further processing before they are meaningful Information: Derived knowledge from raw data. Something

More information

A Reactive Bearing Angle Only Obstacle Avoidance Technique for Unmanned Ground Vehicles

A Reactive Bearing Angle Only Obstacle Avoidance Technique for Unmanned Ground Vehicles Proceedings of the International Conference of Control, Dynamic Systems, and Robotics Ottawa, Ontario, Canada, May 15-16 2014 Paper No. 54 A Reactive Bearing Angle Only Obstacle Avoidance Technique for

More information

A Review on Cluster Based Approach in Data Mining

A Review on Cluster Based Approach in Data Mining A Review on Cluster Based Approach in Data Mining M. Vijaya Maheswari PhD Research Scholar, Department of Computer Science Karpagam University Coimbatore, Tamilnadu,India Dr T. Christopher Assistant professor,

More information

Scan-point Planning and 3-D Map Building for a 3-D Laser Range Scanner in an Outdoor Environment

Scan-point Planning and 3-D Map Building for a 3-D Laser Range Scanner in an Outdoor Environment Scan-point Planning and 3-D Map Building for a 3-D Laser Range Scanner in an Outdoor Environment Keiji NAGATANI 1, Takayuki Matsuzawa 1, and Kazuya Yoshida 1 Tohoku University Summary. During search missions

More information

Online Localization and Mapping with Moving Object Tracking in Dynamic Outdoor Environments

Online Localization and Mapping with Moving Object Tracking in Dynamic Outdoor Environments Online Localization and Mapping with Moving Object Tracking in Dynamic Outdoor Environments Trung-Dung Vu, Olivier Aycard, Nils Appenrodt To cite this version: Trung-Dung Vu, Olivier Aycard, Nils Appenrodt.

More information

A visual bag of words method for interactive qualitative localization and mapping

A visual bag of words method for interactive qualitative localization and mapping A visual bag of words method for interactive qualitative localization and mapping David FILLIAT ENSTA - 32 boulevard Victor - 7515 PARIS - France Email: david.filliat@ensta.fr Abstract Localization for

More information

Visual Recognition and Search April 18, 2008 Joo Hyun Kim

Visual Recognition and Search April 18, 2008 Joo Hyun Kim Visual Recognition and Search April 18, 2008 Joo Hyun Kim Introduction Suppose a stranger in downtown with a tour guide book?? Austin, TX 2 Introduction Look at guide What s this? Found Name of place Where

More information

3D Reconstruction of a Hopkins Landmark

3D Reconstruction of a Hopkins Landmark 3D Reconstruction of a Hopkins Landmark Ayushi Sinha (461), Hau Sze (461), Diane Duros (361) Abstract - This paper outlines a method for 3D reconstruction from two images. Our procedure is based on known

More information

Visually Augmented POMDP for Indoor Robot Navigation

Visually Augmented POMDP for Indoor Robot Navigation Visually Augmented POMDP for Indoor obot Navigation LÓPEZ M.E., BAEA., BEGASA L.M., ESCUDEO M.S. Electronics Department University of Alcalá Campus Universitario. 28871 Alcalá de Henares (Madrid) SPAIN

More information

Statistical Techniques in Robotics (16-831, F12) Lecture#05 (Wednesday, September 12) Mapping

Statistical Techniques in Robotics (16-831, F12) Lecture#05 (Wednesday, September 12) Mapping Statistical Techniques in Robotics (16-831, F12) Lecture#05 (Wednesday, September 12) Mapping Lecturer: Alex Styler (in for Drew Bagnell) Scribe: Victor Hwang 1 1 Occupancy Mapping When solving the localization

More information

CAMERA POSE ESTIMATION OF RGB-D SENSORS USING PARTICLE FILTERING

CAMERA POSE ESTIMATION OF RGB-D SENSORS USING PARTICLE FILTERING CAMERA POSE ESTIMATION OF RGB-D SENSORS USING PARTICLE FILTERING By Michael Lowney Senior Thesis in Electrical Engineering University of Illinois at Urbana-Champaign Advisor: Professor Minh Do May 2015

More information

Appearance-based Concurrent Map Building and Localization

Appearance-based Concurrent Map Building and Localization Appearance-based Concurrent Map Building and Localization Josep M. Porta and Ben J.A. Kröse IAS Group, Informatics Institute, University of Amsterdam Kruislaan 403, 1098SJ, Amsterdam, The Netherlands {porta,krose}@science.uva.nl

More information

Mapping of environment, Detection and Tracking of Moving Objects using Occupancy Grids

Mapping of environment, Detection and Tracking of Moving Objects using Occupancy Grids Mapping of environment, Detection and Tracking of Moving Objects using Occupancy Grids Trung-Dung Vu, Julien Burlet and Olivier Aycard Laboratoire d Informatique de Grenoble, France firstname.lastname@inrialpes.fr

More information

Mini Survey Paper (Robotic Mapping) Ryan Hamor CPRE 583 September 2011

Mini Survey Paper (Robotic Mapping) Ryan Hamor CPRE 583 September 2011 Mini Survey Paper (Robotic Mapping) Ryan Hamor CPRE 583 September 2011 Introduction The goal of this survey paper is to examine the field of robotic mapping and the use of FPGAs in various implementations.

More information

Multi-label classification using rule-based classifier systems

Multi-label classification using rule-based classifier systems Multi-label classification using rule-based classifier systems Shabnam Nazmi (PhD candidate) Department of electrical and computer engineering North Carolina A&T state university Advisor: Dr. A. Homaifar

More information

Cs : Computer Vision Final Project Report

Cs : Computer Vision Final Project Report Cs 600.461: Computer Vision Final Project Report Giancarlo Troni gtroni@jhu.edu Raphael Sznitman sznitman@jhu.edu Abstract Given a Youtube video of a busy street intersection, our task is to detect, track,

More information

IN computer vision develop mathematical techniques in

IN computer vision develop mathematical techniques in International Journal of Scientific & Engineering Research Volume 4, Issue3, March-2013 1 Object Tracking Based On Tracking-Learning-Detection Rupali S. Chavan, Mr. S.M.Patil Abstract -In this paper; we

More information

Robot Mapping. Grid Maps. Gian Diego Tipaldi, Wolfram Burgard

Robot Mapping. Grid Maps. Gian Diego Tipaldi, Wolfram Burgard Robot Mapping Grid Maps Gian Diego Tipaldi, Wolfram Burgard 1 Features vs. Volumetric Maps Courtesy: D. Hähnel Courtesy: E. Nebot 2 Features So far, we only used feature maps Natural choice for Kalman

More information

Motion Planning for an Autonomous Helicopter in a GPS-denied Environment

Motion Planning for an Autonomous Helicopter in a GPS-denied Environment Motion Planning for an Autonomous Helicopter in a GPS-denied Environment Svetlana Potyagaylo Faculty of Aerospace Engineering svetapot@tx.technion.ac.il Omri Rand Faculty of Aerospace Engineering omri@aerodyne.technion.ac.il

More information

Probabilistic Robotics

Probabilistic Robotics Probabilistic Robotics FastSLAM Sebastian Thrun (abridged and adapted by Rodrigo Ventura in Oct-2008) The SLAM Problem SLAM stands for simultaneous localization and mapping The task of building a map while

More information

6 y [m] y [m] x [m] x [m]

6 y [m] y [m] x [m] x [m] An Error Detection Model for Ultrasonic Sensor Evaluation on Autonomous Mobile Systems D. Bank Research Institute for Applied Knowledge Processing (FAW) Helmholtzstr. D-898 Ulm, Germany Email: bank@faw.uni-ulm.de

More information

Articulated Pose Estimation with Flexible Mixtures-of-Parts

Articulated Pose Estimation with Flexible Mixtures-of-Parts Articulated Pose Estimation with Flexible Mixtures-of-Parts PRESENTATION: JESSE DAVIS CS 3710 VISUAL RECOGNITION Outline Modeling Special Cases Inferences Learning Experiments Problem and Relevance Problem:

More information