Loop detection and extended target tracking using laser data

1 Licentiate seminar 1(39) Loop detection and extended target tracking using laser data. Karl Granström, Division of Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden.

2 Errata 2(39) Paper A "I accept the paper for publication subject to your addressing all the points raised in the review."

4 Automatic control 3(39) Automatic control can be defined as making a system behave the way that you want it to. Example: Automatic Cruise Control (ACC) Must know the state of the system, e.g. what is the current speed?

7 Autonomous vehicles / robots 4(39) Here the focus is on autonomous vehicles / robots. In this context, knowing the state of the system includes (1) knowing where the robot is: the robot must be able to recognise previously visited places, i.e. loop detection, and (2) knowing where other objects are: it must be able to follow objects that are moving, i.e. (extended) target tracking. A fundamental requirement is that the robot must be able to sense the world; here we have used data from laser range sensors.

9 Problem formulation and contributions 5(39) In this thesis the following two problems are addressed: 1. Design of a method that can compare pairs of laser data, such that the method may be used to detect loop closures in SLAM (feature description of laser data; AdaBoost used to learn a classifier; evaluated using several experiments). 2. Design of a method for target tracking in scenarios where each target possibly gives rise to more than one measurement per time step (implementation of a Gaussian mixture PHD filter for extended targets; a simple method to partition measurements; evaluation using simulations and experiments).

10 Presentation overview 6(39) 1. Automatic control 2. Autonomous vehicles / robots 3. Problem formulation and contributions 4. Laser data 5. Loop detection (L.D.) 6. Extended target tracking (E.T.T.) 7. Future work

11 Laser data motivation 7(39) Laser sensors are common. They have been used frequently in robotics for at least a decade, and are receiving increasing attention from e.g. the automotive industry.

13 Laser data basic 2D functionality 8(39) Measures the distance to the nearest object at certain angles. The data are called point clouds. [Figure: an example 2D laser scan plotted as points, axes x [m] and y [m].]
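
A point cloud is obtained from the raw scan by a polar-to-Cartesian conversion. A minimal Python/NumPy sketch; the 180-degree field of view, 1-degree resolution and 30 m maximum range are illustrative assumptions, not the parameters of the sensors used in the thesis:

```python
import numpy as np

def scan_to_point_cloud(ranges, angles, r_max=30.0):
    """Convert a 2D laser scan (ranges at known bearings) to a point cloud.

    ranges : (N,) measured distances [m]
    angles : (N,) bearing angles [rad]
    r_max  : readings at or beyond this value are treated as 'no return'
    """
    ranges = np.asarray(ranges, dtype=float)
    angles = np.asarray(angles, dtype=float)
    valid = ranges < r_max                      # drop max-range (no-echo) readings
    x = ranges[valid] * np.cos(angles[valid])   # Cartesian x [m]
    y = ranges[valid] * np.sin(angles[valid])   # Cartesian y [m]
    return np.column_stack((x, y))              # (M, 2) point cloud

# Example: a 180-degree scan with 1-degree resolution (illustrative values)
angles = np.deg2rad(np.arange(-90.0, 90.5, 1.0))
ranges = np.full_like(angles, 30.0)
ranges[80:100] = 5.0                            # an object 5 m in front of the sensor
cloud = scan_to_point_cloud(ranges, angles)
print(cloud.shape)                              # (20, 2)
```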

14 Laser data data examples 9(39) 2D laservpone.avi 3D laserhannover.avi

17 L.D. Simultaneous localisation and mapping? 10(39) SLAM: map the environment, localise the robot in the map. The state vector is the history of poses. Each pose is associated with a point cloud describing the location. [Figure: estimated robot trajectory, axes X [m] and Y [m].]

21 L.D. problem definition 11(39) Loop closure detection, i.e. place recognition: pairwise comparison of data, here point clouds $\mathbf{p}^k = \{p_i^k\}_{i=1}^{N}$, $p_i^k \in \mathbb{R}^D$ (D = 2 or 3). Are $\mathbf{p}^k$ and $\mathbf{p}^l$ from the same location? Yes. [Figure: two point clouds recorded at the same location, axes X, Y, Z [m].]

22 L.D. motivation 12(39) Loop closure/place recognition is an important and difficult problem, especially in SLAM. We need a method that is robust against misclassification, invariant to rotation, and computationally inexpensive.

23 L.D. our approach 13(39) Learning phase: point cloud data sets $\mathbf{p}^k = \{p_i^k\}_{i=1}^{N}$, $p_i^k \in \mathbb{R}^D$; find training pairs $(\mathbf{p}^{i_1}, \mathbf{p}^{i_2}, y^i)$; extract features $(F^{i_1,i_2}, y^i)$; use AdaBoost to learn a strong classifier; output the strong classifier $c(F^{k,l})$. Classification phase (part of SLAM): for each point cloud pair $\mathbf{p}^k, \mathbf{p}^l$, extract features $F^{k,l}$ and evaluate the strong classifier $c(F^{k,l})$. If no match: move on to the next point cloud pair. If match: point cloud registration, followed by an ESDF measurement update.

24 L.D. feature motivation 14(39) Point clouds are described with features: meaningful statistics describing shape etc.; a compact description of the point cloud, $n_f \ll N$; easy comparison of $\mathbf{p}^k$ and $\mathbf{p}^l$. Two types of features are used, all invariant to rotation.

26 L.D. feature examples 15(39) Type 1: geometric and statistical properties, e.g. $f_1$ area in 2D / volume in 3D, $f_3$ average range, etc. Comparison: the difference $|f_i^k - f_i^l|$; the same place gives similar values and thus a small $|f_i^k - f_i^l|$. Type 2: range histograms $f_j$ with bin sizes $b_j \in [0.1\,\text{m}, 3\,\text{m}]$. Comparison: cross correlation of the $f_j$:s; the same place gives similar histograms and thus a high cross correlation.
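
To make the two feature types concrete, the sketch below computes a few simple rotation-invariant range statistics and compares range histograms with a normalised cross correlation. The particular statistics, the 30 m maximum range and the bin sizes are illustrative placeholders, not the exact feature set used in the papers:

```python
import numpy as np

def type1_features(cloud):
    """A few rotation-invariant statistics of a 2D point cloud (sensor at the origin)."""
    ranges = np.linalg.norm(cloud, axis=1)
    return np.array([ranges.mean(),      # average range
                     ranges.std(),       # spread of the ranges
                     ranges.max()])      # maximum range

def range_histogram(cloud, bin_size, r_max=30.0):
    """Type-2 feature: histogram of the ranges with a given bin size [m]."""
    ranges = np.linalg.norm(cloud, axis=1)
    edges = np.arange(0.0, r_max + bin_size, bin_size)
    hist, _ = np.histogram(ranges, bins=edges)
    return hist.astype(float)

def histogram_correlation(h1, h2):
    """Normalised correlation of two range histograms; close to 1 for similar places."""
    h1 = h1 - h1.mean()
    h2 = h2 - h2.mean()
    denom = np.linalg.norm(h1) * np.linalg.norm(h2)
    return float(h1 @ h2 / denom) if denom > 0 else 0.0

# Small |f_i^k - f_i^l| and high histogram correlation both suggest the same place.
cloud_a = np.random.rand(100, 2) * 10.0
cloud_b = cloud_a + 0.05 * np.random.randn(100, 2)
print(np.abs(type1_features(cloud_a) - type1_features(cloud_b)))
print(histogram_correlation(range_histogram(cloud_a, 1.0), range_histogram(cloud_b, 1.0)))
```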

35 L.D. AdaBoost 16(39) AdaBoost is used to learn a 2-class classifier through iterative learning: initialise the weights, then in each round pick the weak classifier that minimises the weighted error and increase the weights of the misclassified training pairs. The strong classifier is a combination of simple, weak classifiers, here thresholds on single features such as $c_1 = (f_2 < 0.33)$, $c_2 = (f_1 > 0.53)$ and $c_3 = (f_1 > 0.84)$. Likelihood: $c = \sum_t \alpha_t c_t / \sum_t \alpha_t$. Decision regions: classify as a loop closure when $c \geq \tau = 0.5$. [Figure: training data in the $(f_1, f_2)$ plane with the weak classifiers and the resulting decision regions.]
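
For reference, here is a generic sketch of discrete AdaBoost with threshold ("decision stump") weak classifiers of the kind shown on the slide. It assumes a feature matrix F with one row per point cloud pair and labels y in {-1, +1}; it is standard boosting, not the exact training code behind the thesis results:

```python
import numpy as np

def train_stump(F, y, w):
    """Find the weighted-error-minimising stump: predict +1 if s * (F[:, j] - thr) > 0."""
    n, d = F.shape
    best = (np.inf, 0, 0.0, 1)                     # (error, feature, threshold, sign)
    for j in range(d):
        for thr in np.unique(F[:, j]):
            for s in (+1, -1):
                pred = np.where(s * (F[:, j] - thr) > 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[0]:
                    best = (err, j, thr, s)
    return best

def adaboost(F, y, n_rounds=10):
    """Discrete AdaBoost: returns the strong classifier as (alpha, feature, threshold, sign) tuples."""
    n = len(y)
    w = np.full(n, 1.0 / n)                        # initialise weights
    strong = []
    for _ in range(n_rounds):
        err, j, thr, s = train_stump(F, y, w)      # minimise the weighted error
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(s * (F[:, j] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)             # increase weights of mistakes
        w /= w.sum()
        strong.append((alpha, j, thr, s))
    return strong

def classify(strong, F, tau=0.5):
    """Likelihood c(F) = sum_t alpha_t c_t / sum_t alpha_t with c_t in {0, 1}."""
    alphas = np.array([a for a, *_ in strong])
    votes = sum(a * (s * (F[:, j] - thr) > 0) for a, j, thr, s in strong)
    c = votes / alphas.sum()
    return c, c >= tau                             # decision regions: c >= tau
```

Since each weak classifier contributes 0 or 1 to the weighted average, the likelihood c lies in [0, 1] and thresholding at τ = 0.5 reproduces the decision regions on the slide.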

37 L.D. experiments 17(39) 1. Execution time 2. Number of weak classifiers needed 3. Most informative features 4. Receiver Operating Characteristic (ROC) 5. Comparison of 2D and 3D performance 6. Dependence on translation 7. Dynamic objects 8. Repetitive structures in the environment 9. Loop closure detection in SLAM

38 L.D. ROC / translation dependence 18(39) ROC: the detection / false alarm trade-off. Detection at 0% false alarm: Outdoor 2D at 1 m: 66%, at 2 m: 34%, at 3 m: 18%. Outdoor 3D at 3 m: 63%. Indoor 3D at 1 m: 53%. [Figure: ROC curves (detection rate vs. false alarm rate) for the hann2 and AASS data sets at 1 m, 2 m and 3 m translation.]
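
The numbers above are read off ROC curves; given classifier likelihoods and ground truth labels, the detection rate at a fixed false alarm rate can be computed as in this small sketch (variable names are assumptions):

```python
import numpy as np

def detection_at_false_alarm(c, y, max_far=0.0):
    """Highest detection rate achievable while the false alarm rate stays <= max_far.

    c : (N,) classifier likelihoods, y : (N,) ground truth labels in {0, 1}.
    """
    c, y = np.asarray(c, float), np.asarray(y, int)
    best_det = 0.0
    for thr in np.unique(c):
        decide = c >= thr                              # declare a loop closure
        far = np.mean(decide[y == 0]) if np.any(y == 0) else 0.0
        det = np.mean(decide[y == 1]) if np.any(y == 1) else 0.0
        if far <= max_far:
            best_det = max(best_det, det)
    return best_det

# Sweeping thr over all unique likelihood values also traces out the full ROC curve.
```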

39 L.D. 3D SLAM experiment, data 19(39) c (F k,l ) trained on outdoor data. r max = 3m. N 17. SLAM experiment on indoor data. r max = 15m. N 11. Does c (F k,l ) work in SLAM experiments? Does c (F k,l ) generalise well between environments?

40 L.D. 3D SLAM experiment, results 20(39) ~50% detection, no false alarms, good generalisation between environments. [Figure: estimated trajectories from the ESDF SLAM solution and from dead reckoning (DR), axes X, Y, Z [m].]

45 L.D. comparison 21(39) Comparison with other loop closure detection methods for point clouds. 2D: higher detection at low false alarm rates. 3D outdoor: second highest detection; faster than related work; rotation invariant without assumptions. 3D indoor: lower detection; not enough training data from the same location. 2D and 3D: more sensitive to translation than related work. Much related work uses images, which makes it difficult to compare across different sensors.

46 L.D. conclusions 22(39) A machine learning approach to the loop closure detection problem using point clouds. More than 40 rotation invariant features, so loop closures are detected from arbitrary directions. Competitive detection at low (0%) false alarm rates. Fast to compute. The method generalises well between environments. SLAM experiments show that the method works in real problems.

49 E.T.T. Target tracking? 23(39) Target tracking: keep track of the locations of targets; unknown number of targets, not always detected; noise and clutter; difficult data association. Early radar tracking of airplanes: targets behave as points. With modern sensors targets are no longer points: multiple measurements per target. Definition: extended targets are targets that potentially give rise to more than one measurement per time step.

50 E.T.T. motivation 24(39) The point target assumption is often not valid, e.g. for laser sensors, automotive radar and camera images. Need a method that handles multiple measurements per target.

51 E.T.T. RFS 25(39) Approach: use random finite sets (RFS). An RFS is $Y = \{y^{(i)}\}_{i=1}^{N_y}$, where the $y^{(i)}$ are random vectors (common assumption: $y^{(i)} \in \mathbb{R}^n$) and $N_y < \infty$ is random (common assumption: $N_y \sim \mathrm{POIS}(\beta)$). The set has no order, e.g. $\{y^{(1)}, y^{(2)}\} = \{y^{(2)}, y^{(1)}\}$.

52 E.T.T. setup 26(39) RFS of targets $X_k = \{x_k^{(i)}\}_{i=1}^{N_{x,k}}$ with dynamics $x_{k+1}^{(i)} = F_k x_k^{(i)} + G_k w_k^{(i)}$, $w_k^{(i)} \sim \mathcal{N}(0, Q_k)$. RFS of measurements $Z_k = \{z_k^{(j)}\}_{j=1}^{N_{z,k}}$ with $z_k^{(j)} = H_k x_k^{(i)} + e_k^{(j)}$, $e_k^{(j)} \sim \mathcal{N}(0, R_k)$. Goal: compute a target set estimate $\hat{X}_k$ using the measurement sets $Z_k$.
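
As a concrete instance of this model, the sketch below uses a constant-velocity motion model with position measurements and simulates one measurement RFS: each detected target contributes a Poisson number of returns, and Poisson clutter is added. The sampling time, noise levels, expected number of returns γ and clutter rate are illustrative assumptions, not the values used in the thesis:

```python
import numpy as np

T = 1.0                                       # sampling time [s] (assumed)
F = np.block([[np.eye(2), T * np.eye(2)],     # constant-velocity motion model
              [np.zeros((2, 2)), np.eye(2)]])
G = np.vstack([0.5 * T**2 * np.eye(2), T * np.eye(2)])
Q = (0.5**2) * np.eye(2)                      # process noise covariance (acceleration)
H = np.hstack([np.eye(2), np.zeros((2, 2))])  # position-only measurements
R = (0.1**2) * np.eye(2)                      # measurement noise covariance

rng = np.random.default_rng(0)

def predict_state(x):
    """x_{k+1} = F x_k + G w_k, with w_k ~ N(0, Q)."""
    return F @ x + G @ rng.multivariate_normal(np.zeros(2), Q)

def measurement_set(targets, gamma=5.0, p_d=0.98, clutter_rate=10.0, area=100.0):
    """Simulate the measurement RFS Z_k: multiple returns per detected target plus clutter."""
    Z = []
    for x in targets:
        if rng.random() < p_d:
            for _ in range(rng.poisson(gamma)):            # extended target: several returns
                Z.append(H @ x + rng.multivariate_normal(np.zeros(2), R))
    for _ in range(rng.poisson(clutter_rate)):
        Z.append(rng.uniform(-area / 2, area / 2, size=2))  # uniform clutter
    return Z

targets = [np.array([0.0, 0.0, 1.0, 0.5]), np.array([10.0, -5.0, -1.0, 0.0])]
Z_k = measurement_set(targets)
print(len(Z_k), "measurements in Z_k")
targets = [predict_state(x) for x in targets]               # propagate to the next time step
```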

55 E.T.T. PHD filter 27(39) The full Bayes recursion over random finite sets is computationally intractable [Mahler (2007)]. Approximation: propagate the first order moment $D_{k|k}(x \mid Z^{(k)})$, called the PHD intensity: $\dots \rightarrow D_{k|k}(x \mid Z^{(k)}) \xrightarrow{\text{prediction}} D_{k+1|k}(x \mid Z^{(k)}) \xrightarrow{\text{correction}} D_{k+1|k+1}(x \mid Z^{(k+1)}) \rightarrow \dots$ This is analogous to the α-β filter, which propagates the first order moment of a random variable (i.e. the mean vector).

59 E.T.T. PHD example 28(39) The PHD intensity is a sum of Gaussians, $D_{k|k}(x \mid Z^{(k)}) = \sum_{j=1}^{J_{k|k}} w_{k|k}^{(j)} \, \mathcal{N}\big(x; m_{k|k}^{(j)}, P_{k|k}^{(j)}\big)$. [Figures: the intensity $D_{k|k}(x)$, the predicted intensity $D_{k+1|k}(x)$, a cluttered set of measurements, and the corrected intensity $D_{k+1|k+1}(x)$, all over the $(x, y)$ plane.]
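
In a Gaussian-mixture implementation the intensity is simply a list of weighted Gaussian components. A minimal sketch of the representation and of the prediction step for surviving targets, as in the standard GM-PHD filter; birth and spawn components are omitted, and p_S is an assumed survival probability:

```python
import numpy as np

class GaussianComponent:
    def __init__(self, w, m, P):
        self.w, self.m, self.P = w, m, P        # weight, mean, covariance

def phd_intensity(components, x):
    """Evaluate D(x) = sum_j w_j N(x; m_j, P_j) at a single state x."""
    d = len(x)
    total = 0.0
    for c in components:
        diff = x - c.m
        inv = np.linalg.inv(c.P)
        norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(c.P))
        total += c.w * norm * np.exp(-0.5 * diff @ inv @ diff)
    return total

def gm_phd_predict(components, F, G, Q, p_S=0.99):
    """Prediction: scale weights by p_S, propagate means and covariances through the motion model."""
    Qx = G @ Q @ G.T                            # process noise mapped to the state space
    return [GaussianComponent(p_S * c.w, F @ c.m, F @ c.P @ F.T + Qx)
            for c in components]

# Example: two components in a 2D position-only state
comps = [GaussianComponent(1.0, np.array([0.0, 0.0]), np.eye(2)),
         GaussianComponent(0.5, np.array([5.0, 5.0]), 2 * np.eye(2))]
print(phd_intensity(comps, np.array([0.0, 0.0])))
```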

60 E.T.T. measurement pseudo likelihood 29(39) Implementation of the prediction step is shown by [Vo and Ma, 2006]. $D_{k|k-1}(x \mid Z)$ is the predicted PHD intensity. The corrected PHD intensity is $D_{k|k}(x \mid Z) = L_{Z_k}(x)\, D_{k|k-1}(x \mid Z)$, where the measurement pseudo-likelihood is given by [Mahler, 2009] $L_{Z_k}(x) = 1 - \big(1 - e^{-\gamma(x)}\big) p_D(x) + e^{-\gamma(x)} p_D(x) \sum_{p \angle Z_k} \sum_{W \in p} \omega_p \, \frac{\gamma(x)^{|W|}}{d_W} \prod_{z \in W} \frac{\phi_z(x)}{\lambda_k c_k(z)}$.

61 E.T.T. measurement partitioning 30(39) In each time step $Z_k$ must be partitioned. A partition p is a division of $Z_k$ into cells W. This is important since more than one measurement can stem from the same target.

62 E.T.T. measurement partitioning example 31(39) Partition the measurement set $Z_k = \{z_k^{(1)}, z_k^{(2)}, z_k^{(3)}\}$. [Figure: the three measurements in the $(x, y)$ plane and the five possible partitions $p_1, \dots, p_5$ of $Z_k$ into cells $W$.]

65 E.T.T. measurement partitioning method 32(39) Measurements belong to the same cell W if the distance between them is small. Partitions $p_i$ are formed such that each cell contains measurements less than $d_i$ m apart. Let $\{d_i^m\}_{i=1}^{N_{z,k}}$ be the set of measurement-to-measurement distances. Good partitions are obtained for the $d_i$ corresponding to $d_{\min} \leq d_i^m < d_{\max}$. Use knowledge about the scenario to determine $d_{\min}$ and $d_{\max}$.
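
The partitioning itself amounts to grouping measurements into connected components, joining two measurements whenever they are closer than the threshold d, and repeating this for each candidate threshold between d_min and d_max. A small sketch, assuming 2D measurements stored as NumPy arrays:

```python
import numpy as np

def distance_partition(Z, d):
    """Partition Z so that each cell W is a connected group of measurements
    that are pairwise reachable through gaps smaller than d."""
    n = len(Z)
    cell_of = [-1] * n
    n_cells = 0
    for i in range(n):
        if cell_of[i] != -1:
            continue
        stack, cell_of[i] = [i], n_cells           # start a new cell W
        while stack:
            a = stack.pop()
            for b in range(n):
                if cell_of[b] == -1 and np.linalg.norm(Z[a] - Z[b]) < d:
                    cell_of[b] = n_cells
                    stack.append(b)
        n_cells += 1
    return [[Z[i] for i in range(n) if cell_of[i] == c] for c in range(n_cells)]

def candidate_thresholds(Z, d_min, d_max):
    """Unique pairwise distances in [d_min, d_max); one partition is formed per value."""
    dists = sorted({float(np.linalg.norm(Z[i] - Z[j]))
                    for i in range(len(Z)) for j in range(i + 1, len(Z))})
    return [d for d in dists if d_min <= d < d_max]

Z = [np.array([0.0, 0.0]), np.array([0.3, 0.1]), np.array([5.0, 5.0])]
print(candidate_thresholds(Z, d_min=0.2, d_max=3.0))      # one candidate near 0.32
print([len(W) for W in distance_partition(Z, d=1.0)])     # -> [2, 1]
```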

66 E.T.T. measurement partitioning method example 33(39) [Figure: a set of measurements in the $(X, Y)$ plane and the partitions obtained for increasing distance thresholds d.]

67 E.T.T. simulation measurements 34(39) [Figure: true vs. measured X and Y positions over time.] True target tracks cross at time k = 56. New target birth and a target spawned at time k = 66. Evaluation against the standard GM-PHD filter using the OSPA metric.

68 E.T.T. simulation results 35(39) [Figure: estimated cardinality over time for the true targets, the extended-target PHD filter and the standard PHD filter, together with the OSPA distance (cut-off c = 6) over time.]
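
The OSPA metric used in this evaluation combines a localisation term (an optimal assignment between the two sets, with distances cut off at c) and a cardinality penalty. A sketch using SciPy's assignment solver; the values c = 6 and p = 2 below are illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=6.0, p=2):
    """OSPA distance between two finite sets of vectors X and Y (lists of arrays)."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m > n:                                   # ensure |X| <= |Y|
        X, Y, m, n = Y, X, n, m
    if m == 0:
        return c                                # only a cardinality error
    D = np.array([[min(np.linalg.norm(x - y), c) for y in Y] for x in X])
    row, col = linear_sum_assignment(D ** p)    # optimal sub-assignment
    loc_err = (D[row, col] ** p).sum()
    card_err = (c ** p) * (n - m)
    return ((loc_err + card_err) / n) ** (1.0 / p)

# Example: one missed target gives a mix of localisation and cardinality error
X = [np.array([0.0, 0.0]), np.array([10.0, 0.0])]
Y = [np.array([0.2, 0.1])]
print(ospa(X, Y))
```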

69 E.T.T. experiment 36(39) 2D laser range scanner. Two human targets, one target occluded by the other. Variable probability of detection to handle occlusion. Video: targettracking.avi

70 E.T.T. conclusions 37(39) Implementation of a GM-PHD filter for extended targets. A simple method for measurement set partitioning. Variable $p_D$ to handle occlusion. The suggested implementation handles an unknown number of targets, and noisy and cluttered measurement sets.

73 Future work 38(39) Loop detection: decrease sensitivity to translation. Extended target tracking: estimate target shape and size; cardinalised PHD filter for a robust estimate of the number of targets; compare to other extended target tracking solutions. Further extensions: environment labelling of data; separation of the background environment and dynamic targets; semi-supervised learning instead of supervised learning.

74 The End 39(39) Thank you for listening!
