Perception: Sensors, Vision, Uncertainties, Line Extraction from Laser Scans. Autonomous Mobile Robots, Autonomous Systems Lab, Zürich.
1 Autonomous Mobile Robots. Perception: Sensors, Vision, Uncertainties, Line extraction from laser scans. Autonomous Systems Lab, Zürich. [Control-scheme diagram: Perception builds an environment model and local map from the real-world environment; Localization estimates the "Position" against a global map; Cognition plans a path; Motion Control executes it.]
2 Today's lecture: from object recognition to scene/place recognition (Section 4.6); omnidirectional cameras (today's exercise); uncertainties, representation and propagation (Section 4.7); line extraction from laser scans.
3 Object Recognition. Q: Is this Book present in the Scene?
4 Object Recognition. Extract keypoints in both images (Book and Scene).
5 Object Recognition. Look for corresponding matches: most of the Book's keypoints are present in the Scene. A: The Book is present in the Scene.
6 Taking this a step further: find an object in an image? Find an object in multiple images? Find multiple objects in multiple images?
7 Bag of Words. Extension to scene/place recognition: Is this image in my database? Robot: Have I been to this place before? (loop-closure problem, kidnapped-robot problem). Use analogies from text retrieval: Visual Words, a Vocabulary of Visual Words, the Bag of Words (BoW) approach.
8 Building the Visual Vocabulary: Image Collection, Extract Features, Cluster Descriptors in descriptor space. Examples of Visual Words. (Images for this slide courtesy of Mark Cummins.) R. Siegwart, D. Scaramuzza and M. Chli, ETH Zurich - ASL
9 Video Google: one image, a thousand words? These features map to the same visual word. [J. Sivic and A. Zisserman, Video Google, ICCV 2003] (Image courtesy of Andrew Zisserman.) Demo:
10 Efficient Place/Object Recognition. Same principle: describe a scene as a collection of words and look up images with a similar collection of words in the database. What if we need to find an object/scene in a database of millions of images? Build a Vocabulary Tree via hierarchical clustering, and use an Inverted File system: each node in the tree is associated with a list of the images containing an instance of this node. [D. Nister and H. Stewénius, CVPR 2006]
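The word-lookup and inverted-file idea can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: it uses a flat vocabulary of fixed 2-D centroids instead of a hierarchically clustered tree over real descriptors, and the names `assign_word`, `build_inverted_file`, `query` and the toy data are invented for the example.

```python
# Toy bag-of-words lookup: quantize descriptors against a fixed "vocabulary"
# of centroids, build an inverted file mapping each visual word to the images
# that contain it, and score a query image by word overlap.
import math

def assign_word(desc, vocab):
    """Index of the nearest centroid (visual word) for one descriptor."""
    return min(range(len(vocab)), key=lambda i: math.dist(desc, vocab[i]))

def build_inverted_file(image_descs, vocab):
    """Map visual word -> set of image ids containing that word."""
    inv = {w: set() for w in range(len(vocab))}
    for img_id, descs in image_descs.items():
        for d in descs:
            inv[assign_word(d, vocab)].add(img_id)
    return inv

def query(descs, vocab, inv):
    """Return the database image sharing the most visual words with the query."""
    scores = {}
    for d in descs:
        for img_id in inv[assign_word(d, vocab)]:
            scores[img_id] = scores.get(img_id, 0) + 1
    return max(scores, key=scores.get) if scores else None

# Toy 2-D "descriptors"; a real system clusters SIFT-like vectors hierarchically.
vocab = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
db = {"img_a": [(0.1, 0.1), (0.9, 0.1)], "img_b": [(0.1, 0.9), (0.0, 1.1)]}
inv = build_inverted_file(db, vocab)
print(query([(0.05, 0.95)], vocab, inv))  # -> img_b (nearest word matches)
```

A vocabulary tree replaces the linear `assign_word` scan with a descent through hierarchical cluster levels, which is what makes million-image databases searchable.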
11-15 Extract Features, then place them in descriptor space (animation; slides based on D. Nister's slides).
16-18 Vocabulary Tree: Hierarchical Partitioning (animation).
19-28 Populate the Vocabulary Tree with the model images (animation).
29-31 Look up a test image against the model images (animation).
32 Robust object/scene recognition. The Visual Vocabulary holds appearance information but discards the spatial relationships between features: two images with the same features shuffled around would be a 100% match under this technique. Usually appearance alone is enough; however, if different arrangements of the same features are expected, use geometric verification: test the k most similar images to the query image for geometric consistency (e.g. using RANSAC).
33 Earlier approach to place recognition: Image Histograms. Aim: a compact representation of each scene. Smooth the input image (using a Gaussian operator). Image representation: a set of 1D histograms describing the {R, G, B} and {Hue, Saturation, Luminance} components of the current image. Image similarity criterion: divergence between the histograms of the query image and of each database image. This makes use of the entire image (instead of salient keypoints as in BoW approaches), so maximize the camera's field of view: use omnidirectional cameras.
34 Short intro to Omni-directional Vision
35 Omni-directional Cameras. Wide-FOV dioptric camera (e.g. fisheye): ~180° FOV. Catadioptric camera (camera + shaped mirror): >180° FOV. Polydioptric camera (multiple cameras with overlapping FOV): ~360° FOV.
36 Omni-cam: example image
37 Omni-directional Vision: Why? Very popular in the robotics community. Omni-vision offers tracking of landmarks in all directions and for longer periods of time, giving better accuracy in motion estimation and map building. Google Street View uses omni-directional snapshots to build photorealistic 3D maps.
38 Central Omnidirectional Cameras. A vision system is central when all optical rays intersect in a single point in 3D: the projection center, or single effective viewpoint. A perspective camera is a central projection system; its single effective viewpoint is the camera's optical center. [Diagram: non-central vs. central catadioptric camera, each with mirror, camera, and image plane (CCD).]
39 Central Omnidirectional Cameras. Central catadioptric cameras can be built by an appropriate choice of the mirror shape and of the distance between camera and mirror. Why is a single effective viewpoint so desirable? Every pixel in the captured image measures the irradiance of light in one particular direction, which allows the generation of geometrically correct perspective images.
40 Today's Exercise 3: How to Build a Range Finder using an Omnidirectional Camera. Guidance notes on:
41 Uncertainties: Representation and Propagation & Line Extraction from Range Data
42 Uncertainty Representation (Section 4.1.3 of the book). Sensing in the real world is always uncertain. How can uncertainty be represented or quantified? How does uncertainty propagate: when fusing uncertain inputs into a system, what is the resulting uncertainty? What is the merit of all this for mobile robotics?
43 Uncertainty Representation (2). Use a probability density function (PDF) f(x) to characterize the statistical properties of a variable X. Expected value of X: E[X] = μ = ∫ x f(x) dx. Variance of X: Var[X] = σ² = ∫ (x − μ)² f(x) dx.
44 Gaussian Distribution. The most common PDF for characterizing uncertainties is the Gaussian: 68.26% of the probability mass lies within ±1σ of the mean, 95.44% within ±2σ, and 99.72% within ±3σ.
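These percentages can be checked directly from the Gaussian CDF, since P(|X − μ| ≤ kσ) = erf(k/√2):

```python
# Verify the 1-sigma / 2-sigma / 3-sigma probabilities quoted above.
import math

for k in (1, 2, 3):
    p = math.erf(k / math.sqrt(2))  # P(|X - mu| <= k*sigma) for a Gaussian
    print(f"P(|X - mu| <= {k} sigma) = {100 * p:.2f}%")
# -> 68.27%, 95.45%, 99.73% (the slide truncates to 68.26 / 95.44 / 99.72)
```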
45 The Error Propagation Law. Imagine extracting a line based on point measurements with uncertainties. Model the line parameters in polar coordinates: the pair (r, α) uniquely identifies a line. The question: what is the uncertainty of the extracted line, knowing the uncertainties of the measurement points that contribute to it?
46 The Error Propagation Law. Error propagation in a multiple-input, multiple-output system with n inputs X₁, …, Xₙ and m outputs Y₁, …, Yₘ, where Yⱼ = fⱼ(X₁, …, Xₙ).
47 The Error Propagation Law. For the 1D case of a nonlinear error propagation problem, it can be shown that the output covariance matrix C_Y is given by the error propagation law C_Y = F_X C_X F_Xᵀ, where C_X is the covariance matrix representing the input uncertainties, C_Y is the covariance matrix representing the propagated uncertainties of the outputs, and F_X = [∂f_i/∂X_j] is the Jacobian matrix, i.e. the transpose of the gradient of f(X).
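A small numeric illustration of C_Y = F_X C_X F_Xᵀ for a single-output system. The function f(x₁, x₂) = x₁·x₂ and its operating point are invented for the example; for one output the law reduces to the scalar F C_X Fᵀ:

```python
# First-order error propagation for one output: var(Y) = F * C_X * F^T,
# where F is the gradient of f evaluated at the operating point.
def propagate(F, C_X):
    n = len(F)
    return sum(F[i] * C_X[i][j] * F[j] for i in range(n) for j in range(n))

x1, x2 = 2.0, 3.0                      # operating point (hypothetical)
C_X = [[0.1 ** 2, 0.0], [0.0, 0.2 ** 2]]  # independent input variances
F = [x2, x1]                           # gradient of f(x1, x2) = x1 * x2
var_y = propagate(F, C_X)
print(var_y)  # x2^2 * 0.01 + x1^2 * 0.04 = 0.09 + 0.16 = 0.25
```

With independent inputs the cross terms vanish and the propagated variance is just the squared-gradient-weighted sum of the input variances.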
48 Example: line extraction from laser scans
49 Line Extraction (1). Each measured point xᵢ = (rᵢ, θᵢ) has point-to-line distance dᵢ = rᵢ cos(θᵢ − α) − r. If each measurement is equally uncertain, the sum of squared errors is S = Σᵢ dᵢ². Goal: minimize S when selecting (r, α), i.e. solve the system ∂S/∂r = 0, ∂S/∂α = 0 (Unweighted Least Squares).
50 Line Extraction (2). Each sensor measurement xᵢ = (rᵢ, θᵢ) may have its own, unique uncertainty, giving the weighted sum of squared point-to-line distances S = Σᵢ wᵢ dᵢ² (Weighted Least Squares).
51 Line Extraction (2). Minimizing the weighted sum S = Σᵢ wᵢ dᵢ² and solving the system gives the line parameters (r, α). The uncertainty σᵢ of each measurement is proportional to the measured distance rᵢ. If rᵢ ~ N(r̂ᵢ, σ²_{rᵢ}) and θᵢ ~ N(θ̂ᵢ, σ²_{θᵢ}), what is the uncertainty in the line (r, α)?
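The unweighted version of this fit has a well-known closed form, which can be sketched as follows. This is a minimal sketch assuming the slide's polar-line model x·cos(α) + y·sin(α) = r; the function name and toy data are invented, and the returned (r, α) is unique only up to the usual (−r, α + π) ambiguity:

```python
# Total-least-squares fit of line parameters (r, alpha) to range
# measurements (rho_i, theta_i), via their Cartesian coordinates.
import math

def fit_line_polar(points):
    xs = [rho * math.cos(th) for rho, th in points]
    ys = [rho * math.sin(th) for rho, th in points]
    n = len(points)
    xm, ym = sum(xs) / n, sum(ys) / n
    num = -2 * sum((xm - x) * (ym - y) for x, y in zip(xs, ys))
    den = sum((ym - y) ** 2 - (xm - x) ** 2 for x, y in zip(xs, ys))
    alpha = 0.5 * math.atan2(num, den)
    r = xm * math.cos(alpha) + ym * math.sin(alpha)
    return r, alpha

# Points sampled from the horizontal line y = 1, expressed in polar form:
pts = [(math.hypot(x, 1.0), math.atan2(1.0, x)) for x in (0.0, 1.0, 2.0)]
r, alpha = fit_line_polar(pts)
print(r, alpha)  # |r| = 1 and |alpha| = pi/2, i.e. the line y = 1
```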
52 Error Propagation: Line extraction. The uncertainty of each measurement xᵢ = (rᵢ, θᵢ) is described by the covariance matrix C_{xᵢ} = diag(σ²_{rᵢ}, σ²_{θᵢ}). Assuming the rᵢ, θᵢ are independent, the stacked input covariance is C_X = diag(σ²_{r₁}, …, σ²_{rₙ}, σ²_{θ₁}, …, σ²_{θₙ}). The uncertainty in the line (r, α) is described by the covariance matrix C_{rα} = [σ²_r, σ_{rα}; σ_{αr}, σ²_α]. Define the Jacobian F_{rα} = [∂r/∂r₁ … ∂r/∂rₙ, ∂r/∂θ₁ … ∂r/∂θₙ; ∂α/∂r₁ … ∂α/∂rₙ, ∂α/∂θ₁ … ∂α/∂θₙ]. From the Error Propagation Law: C_{rα} = F_{rα} C_X F_{rα}ᵀ.
53 Autonomous Mobile Robots. Feature Extraction from Range Data: Line extraction (Split-and-merge, Linear regression, RANSAC, Hough-Transform). Autonomous Systems Lab, Zürich.
54 Extracting Features from Range Data: photograph of a corridor at ASL; raw 3D scan; plane-segmentation result; extracted planes for every cube.
55 Extracting Features from Range Data. Goal: extract planar features from a dense point cloud. Example scene showing part of a corridor of the lab floor (left wall, ceiling, back wall, right wall), and the same scene represented as a dense point cloud generated by a rotating laser scanner.
56 Extracting Features from Range Data: map of the ASL hallway built using line segments.
57 Extracting Features from Range Data: map of the ASL hallway built using orthogonal planes constructed from line segments.
58 Features from Range Data: Motivation. Point cloud: extract lines/planes. Why features? Raw data is a huge amount of data to store; compact features require less storage, provide rich and accurate information, and are the basis for higher-level features (more abstract features, objects). Here we study line segments: the simplest geometric structure, suitable for most office-like environments.
59 Line Extraction: The Problem. Extract lines from a range scan (i.e. a point cloud). Three main problems: How many lines are there? Segmentation: which points belong to which line? Line fitting/extraction: given the points that belong to a line, how to estimate the line parameters? Algorithms we will see: 1. Split-and-merge, 2. Linear regression, 3. RANSAC, 4. Hough-Transform.
60 Algorithm 1: Split-and-Merge (standard). The most popular line-extraction algorithm, originating from computer vision: a recursive procedure of fitting and splitting. A slightly different version, called Iterative-End-Point-Fit, simply connects the end points for line fitting.
- Initialise set S to contain all points.
- Split: fit a line to the points in the current set S; find the point most distant from the line; if its distance > threshold, split and repeat with the left and right point sets.
- Merge: if two consecutive segments are close/collinear enough, obtain the common line and find the most distant point; if that distance <= threshold, merge both segments.
61-63 Algorithm 1: Split-and-Merge (animation of the split and merge steps).
64-65 Algorithm 1: Split-and-Merge (Iterative-End-Point-Fit): example results.
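The split step of Iterative-End-Point-Fit can be sketched compactly: recursively split an ordered point sequence at the point farthest from the chord joining its endpoints. This is a minimal sketch (the merge step is omitted, and the L-shaped toy scan is invented):

```python
# Iterative-End-Point-Fit split step: segments an ordered 2D point sequence.
import math

def point_line_dist(p, a, b):
    """Distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)
    return num / math.hypot(x2 - x1, y2 - y1)

def split(points, threshold):
    """Return list of segments, each as (first_point, last_point)."""
    if len(points) <= 2:
        return [(points[0], points[-1])]
    dists = [point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1  # farthest point
    if dists[i - 1] <= threshold:
        return [(points[0], points[-1])]
    # recurse on the left and right halves, sharing the split point
    return split(points[:i + 1], threshold) + split(points[i:], threshold)

# An L-shaped scan: two walls meeting at the corner (2, 0).
pts = [(0, 2), (0.5, 1.5), (1, 1), (1.5, 0.5), (2, 0), (3, 0), (4, 0)]
print(split(pts, 0.1))  # -> [((0, 2), (2, 0)), ((2, 0), (4, 0))]
```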
66 Algorithm 2: Line-Regression. Uses a sliding window of size N_f; the points within each sliding window are fitted by a segment; adjacent segments are then merged if their line parameters are close (example: N_f = 3).
- Initialize sliding window size N_f.
- Fit a line to every N_f consecutive points (i.e. in each window).
- Compute a line fidelity array: each element contains the sum of Mahalanobis distances between 3 consecutive windows (Mahalanobis distance as a measure of similarity).
- Scan the fidelity array for consecutive elements < threshold (using a clustering algorithm); for every such element, construct a new line segment.
- Merge overlapping line segments and recompute the line parameters for each segment.
67-71 Algorithm 2: Line-Regression (animation of the sliding-window fitting and merging steps).
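The sliding-window idea can be sketched in simplified form. This is not the slide's exact algorithm: a plain angle difference between adjacent window fits stands in for the Mahalanobis-distance fidelity array, and all names and data are invented for the example:

```python
# Simplified line-regression sketch: fit a direction to every window of Nf
# consecutive points, then break the scan where adjacent window directions
# disagree. Returned segments are (start_index, end_index) ranges.
import math

def window_angle(pts):
    """Principal direction of a small point set (total least squares)."""
    n = len(pts)
    xm = sum(x for x, _ in pts) / n
    ym = sum(y for _, y in pts) / n
    sxy = sum((x - xm) * (y - ym) for x, y in pts)
    sxx_yy = sum((x - xm) ** 2 - (y - ym) ** 2 for x, y in pts)
    return 0.5 * math.atan2(2 * sxy, sxx_yy)

def segment_scan(points, nf=3, angle_tol=0.1):
    angles = [window_angle(points[i:i + nf]) for i in range(len(points) - nf + 1)]
    segments, start = [], 0
    for i in range(1, len(angles)):
        if abs(angles[i] - angles[i - 1]) > angle_tol:
            segments.append((start, i + nf - 1))
            start = i
    segments.append((start, len(points)))
    return segments

# A right-angle scan: a horizontal run, then a vertical run.
pts = [(x, 0.0) for x in range(5)] + [(4.0, y) for y in range(1, 5)]
print(segment_scan(pts))  # -> [(0, 5), (3, 6), (4, 9)]
```

The overlapping middle range comes from the window straddling the corner; the slide's merge step would recompute and clean up exactly such transitional segments.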
72 Algorithm 3: RANSAC (RANdom SAmple Consensus). A generic and robust algorithm for fitting models in the presence of outliers (points that do not satisfy the model), applicable to any problem whose goal is to identify the inliers satisfying a predefined model. Typical applications in robotics: line extraction from 2D range data, plane extraction from 3D range data, feature matching, structure from motion. RANSAC is an iterative, non-deterministic method: the probability of finding a set free of outliers increases as more iterations are used. Drawback: being non-deterministic, its results differ between runs.
73-80 Algorithm 3: RANSAC, procedure (animation):
- Select a sample of 2 points at random.
- Calculate the model parameters that fit the data in the sample.
- Calculate the error function for each data point.
- Select the data that support the current hypothesis (the inliers).
- Repeat the sampling; after k iterations, keep the set with the maximum number of inliers.
81 Algorithm 3: RANSAC (for line extraction from 2D range data)
82 Algorithm 3: RANSAC. How many iterations does RANSAC need? We cannot know in advance which sample contains the maximum number of inliers; ideally we would check all possible combinations of 2 points in a dataset of N points. The number of pairwise combinations is N(N−1)/2, which is computationally infeasible for large N: for a laser scan of 360 points we would need to check 360·359/2 = 64,620 possibilities! Do we really need to check them all, or can we stop RANSAC after k iterations? Checking a subset of combinations is enough if we have a rough estimate of the percentage of inliers in the dataset; this can be done in a probabilistic way.
83 Algorithm 3: RANSAC. How many iterations does RANSAC need? Let w = (number of inliers)/N, where N is the total number of data points; w is the fraction of inliers in the dataset, i.e. the probability of selecting an inlier. Let p be the desired probability of finding a set of points free of outliers. Assuming the 2 points needed to estimate a line are selected independently: w² is the probability that both points are inliers, and 1 − w² the probability that at least one of them is an outlier. After k RANSAC iterations, (1 − w²)^k is the probability that RANSAC never selected two points that are both inliers. Setting 1 − p = (1 − w²)^k gives k = log(1 − p) / log(1 − w²).
84 Algorithm 3: RANSAC. The number of iterations is k = log(1 − p) / log(1 − w²): knowing the fraction of inliers w, after k iterations we have probability p of having found a set of points free of outliers. Example: for a success probability p = 99% and w = 50%, k = 16 iterations, which is much less than the number of all possible combinations! In practice we need only a rough estimate of w; more advanced implementations of RANSAC estimate the fraction of inliers and adaptively update it on every iteration.
85 Algorithm 4: Hough-Transform. The Hough Transform uses a voting scheme.
86 Algorithm 4: Hough-Transform. A line in the image corresponds to a point (m₀, b₀) in Hough space.
87-88 Algorithm 4: Hough-Transform. What does a point (x₀, y₀) in image space map to in Hough space?
89 Algorithm 4: Hough-Transform. Where is the line that contains both (x₀, y₀) and (x₁, y₁)? It is the intersection of the lines b = −x₀m + y₀ and b = −x₁m + y₁.
90 Algorithm 4: Hough-Transform. Each point in image space votes for line parameters in the Hough parameter space.
91 Algorithm 4: Hough-Transform. Problems with the (m, b) space: unbounded parameter domain, and vertical lines require infinite m. Alternative: the polar representation ρ = x cos θ + y sin θ, in which each point in image space maps to a sinusoid in the (θ, ρ) parameter space.
92 Algorithm 4: Hough-Transform.
1. Initialize the accumulator H to all zeros.
2. For each edge point (x, y) in the image, for all θ in [0°, 180°]: compute ρ = x cos θ + y sin θ and set H(θ, ρ) = H(θ, ρ) + 1.
3. Find the values of (θ, ρ) where H(θ, ρ) is a local maximum.
4. The detected line in the image is given by ρ = x cos θ + y sin θ.
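The accumulator pseudocode above translates almost line-for-line to code. A minimal sketch on a toy point set (the discretization into 1° theta bins and integer rho cells, and the function name, are choices made for the example):

```python
# Hough-transform line detection: every point votes for all (theta, rho)
# cells consistent with rho = x*cos(theta) + y*sin(theta).
import math

def hough_lines(points, rho_max=10, n_theta=180):
    # Accumulator H[theta_deg][rho_cell], theta in [0, 180), rho in [-rho_max, rho_max].
    H = [[0] * (2 * rho_max + 1) for _ in range(n_theta)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.radians(t)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            if -rho_max <= rho <= rho_max:
                H[t][rho + rho_max] += 1
    # Return the (theta_deg, rho) cell with the most votes.
    t, r = max(((t, r) for t in range(n_theta) for r in range(2 * rho_max + 1)),
               key=lambda tr: H[tr[0]][tr[1]])
    return t, r - rho_max, H[t][r]

points = [(2, y) for y in range(5)]  # edge points on the vertical line x = 2
print(hough_lines(points))  # -> (0, 2, 5): theta = 0 deg, rho = 2, 5 votes
```

With such coarse cells, neighbouring theta bins also collect full vote counts, which is why step 3 of the pseudocode looks for local maxima rather than a single cell.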
93-95 Algorithm 4: Hough-Transform (example images).
96 Algorithm 4: Hough-Transform. Effect of noise.
97 Algorithm 4: Hough-Transform. Application: lane detection.
98 Example: door detection using the Hough Transform in the (θ, ρ) space.
99 Comparison of Line Extraction Algorithms (cells lost in transcription are marked "–"):

  Algorithm                 | Complexity             | Speed (Hz) | False positives | Precision
  Split-and-Merge           | N·log N                | –          | –               | +++
  Incremental               | S·N                    | 600        | 6%              | +++
  Line-Regression           | N·N_f                  | –          | –               | +++
  RANSAC                    | S·N·k                  | 30         | 30%             | ++++
  Hough-Transform           | S·N·N_C + S·N_R·N_C    | 10         | 30%             | ++++
  Expectation Maximization  | S·N_1·N_2·N            | 1          | 50%             | ++++

Split-and-merge, Incremental and Line-Regression are the fastest; they are deterministic and make use of the sequential ordering of the raw scan points (points captured according to the rotation direction of the laser beam). If applied to randomly captured points, only the last three algorithms would segment all lines. RANSAC, Hough-Transform and EM produce greater precision and are more robust to outliers.
Indexing local features and instance recognition May 14 th, 2015 Yong Jae Lee UC Davis Announcements PS2 due Saturday 11:59 am 2 We can approximate the Laplacian with a difference of Gaussians; more efficient
More informationFeature Matching + Indexing and Retrieval
CS 1699: Intro to Computer Vision Feature Matching + Indexing and Retrieval Prof. Adriana Kovashka University of Pittsburgh October 1, 2015 Today Review (fitting) Hough transform RANSAC Matching points
More informationImage correspondences and structure from motion
Image correspondences and structure from motion http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 20 Course announcements Homework 5 posted.
More informationPerception. Autonomous Mobile Robots. Sensors Vision Uncertainties, Line extraction from laser scans. Autonomous Systems Lab. Zürich.
Autonomous Mobile Robots Localization "Position" Global Map Cognition Environment Model Local Map Path Perception Real World Environment Motion Control Perception Sensors Vision Uncertainties, Line extraction
More informationModel Fitting: The Hough transform I
Model Fitting: The Hough transform I Guido Gerig, CS6640 Image Processing, Utah Credit: Svetlana Lazebnik (Computer Vision UNC Chapel Hill, 2008) Fitting Parametric Models: Beyond Lines Choose a parametric
More informationSensor Modalities. Sensor modality: Different modalities:
Sensor Modalities Sensor modality: Sensors which measure same form of energy and process it in similar ways Modality refers to the raw input used by the sensors Different modalities: Sound Pressure Temperature
More informationIndexing local features and instance recognition May 16 th, 2017
Indexing local features and instance recognition May 16 th, 2017 Yong Jae Lee UC Davis Announcements PS2 due next Monday 11:59 am 2 Recap: Features and filters Transforming and describing images; textures,
More informationIntelligent Robotics
64-424 Intelligent Robotics 64-424 Intelligent Robotics http://tams.informatik.uni-hamburg.de/ lectures/2013ws/vorlesung/ir Jianwei Zhang / Eugen Richter Faculty of Mathematics, Informatics and Natural
More informationEECS 442 Computer vision. Fitting methods
EECS 442 Computer vision Fitting methods - Problem formulation - Least square methods - RANSAC - Hough transforms - Multi-model fitting - Fitting helps matching! Reading: [HZ] Chapters: 4, 11 [FP] Chapters:
More informationLecture 9: Hough Transform and Thresholding base Segmentation
#1 Lecture 9: Hough Transform and Thresholding base Segmentation Saad Bedros sbedros@umn.edu Hough Transform Robust method to find a shape in an image Shape can be described in parametric form A voting
More informationCS 664 Image Matching and Robust Fitting. Daniel Huttenlocher
CS 664 Image Matching and Robust Fitting Daniel Huttenlocher Matching and Fitting Recognition and matching are closely related to fitting problems Parametric fitting can serve as more restricted domain
More informationMiniature faking. In close-up photo, the depth of field is limited.
Miniature faking In close-up photo, the depth of field is limited. http://en.wikipedia.org/wiki/file:jodhpur_tilt_shift.jpg Miniature faking Miniature faking http://en.wikipedia.org/wiki/file:oregon_state_beavers_tilt-shift_miniature_greg_keene.jpg
More informationRANSAC and some HOUGH transform
RANSAC and some HOUGH transform Thank you for the slides. They come mostly from the following source Dan Huttenlocher Cornell U Matching and Fitting Recognition and matching are closely related to fitting
More informationFitting: The Hough transform
Fitting: The Hough transform Voting schemes Let each feature vote for all the models that are compatible with it Hopefully the noise features will not vote consistently for any single model Missing data
More informationVisuelle Perzeption für Mensch- Maschine Schnittstellen
Visuelle Perzeption für Mensch- Maschine Schnittstellen Vorlesung, WS 2009 Prof. Dr. Rainer Stiefelhagen Dr. Edgar Seemann Institut für Anthropomatik Universität Karlsruhe (TH) http://cvhci.ira.uka.de
More informationRobot Mapping. SLAM Front-Ends. Cyrill Stachniss. Partial image courtesy: Edwin Olson 1
Robot Mapping SLAM Front-Ends Cyrill Stachniss Partial image courtesy: Edwin Olson 1 Graph-Based SLAM Constraints connect the nodes through odometry and observations Robot pose Constraint 2 Graph-Based
More informationCOS Lecture 10 Autonomous Robot Navigation
COS 495 - Lecture 10 Autonomous Robot Navigation Instructor: Chris Clark Semester: Fall 2011 1 Figures courtesy of Siegwart & Nourbakhsh Control Structure Prior Knowledge Operator Commands Localization
More informationRobotics Programming Laboratory
Chair of Software Engineering Robotics Programming Laboratory Bertrand Meyer Jiwon Shin Lecture 8: Robot Perception Perception http://pascallin.ecs.soton.ac.uk/challenges/voc/databases.html#caltech car
More informationStereo SLAM. Davide Migliore, PhD Department of Electronics and Information, Politecnico di Milano, Italy
Stereo SLAM, PhD migliore@elet.polimi.it Department of Electronics and Information, Politecnico di Milano, Italy What is a Stereo Camera? Slide n 2 Do you remember the pin-hole camera? What is a Stereo
More informationINFO0948 Fitting and Shape Matching
INFO0948 Fitting and Shape Matching Renaud Detry University of Liège, Belgium Updated March 31, 2015 1 / 33 These slides are based on the following book: D. Forsyth and J. Ponce. Computer vision: a modern
More informationVisual localization using global visual features and vanishing points
Visual localization using global visual features and vanishing points Olivier Saurer, Friedrich Fraundorfer, and Marc Pollefeys Computer Vision and Geometry Group, ETH Zürich, Switzerland {saurero,fraundorfer,marc.pollefeys}@inf.ethz.ch
More informationColorado School of Mines. Computer Vision. Professor William Hoff Dept of Electrical Engineering &Computer Science.
Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ 1 Object Recognition in Large Databases Some material for these slides comes from www.cs.utexas.edu/~grauman/courses/spring2011/slides/lecture18_index.pptx
More informationClassifying Images with Visual/Textual Cues. By Steven Kappes and Yan Cao
Classifying Images with Visual/Textual Cues By Steven Kappes and Yan Cao Motivation Image search Building large sets of classified images Robotics Background Object recognition is unsolved Deformable shaped
More informationIMAGE MATCHING - ALOK TALEKAR - SAIRAM SUNDARESAN 11/23/2010 1
IMAGE MATCHING - ALOK TALEKAR - SAIRAM SUNDARESAN 11/23/2010 1 : Presentation structure : 1. Brief overview of talk 2. What does Object Recognition involve? 3. The Recognition Problem 4. Mathematical background:
More informationChapter 3 Image Registration. Chapter 3 Image Registration
Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation
More informationFitting. Instructor: Jason Corso (jjcorso)! web.eecs.umich.edu/~jjcorso/t/598f14!! EECS Fall 2014! Foundations of Computer Vision!
Fitting EECS 598-08 Fall 2014! Foundations of Computer Vision!! Instructor: Jason Corso (jjcorso)! web.eecs.umich.edu/~jjcorso/t/598f14!! Readings: FP 10; SZ 4.3, 5.1! Date: 10/8/14!! Materials on these
More informationAutomatic Photo Popup
Automatic Photo Popup Derek Hoiem Alexei A. Efros Martial Hebert Carnegie Mellon University What Is Automatic Photo Popup Introduction Creating 3D models from images is a complex process Time-consuming
More informationDavide Scaramuzza. University of Zurich
Davide Scaramuzza University of Zurich Robotics and Perception Group http://rpg.ifi.uzh.ch/ Scaramuzza, D., Fraundorfer, F., Visual Odometry: Part I - The First 30 Years and Fundamentals, IEEE Robotics
More informationRecognition. Topics that we will try to cover:
Recognition Topics that we will try to cover: Indexing for fast retrieval (we still owe this one) Object classification (we did this one already) Neural Networks Object class detection Hough-voting techniques
More informationModel Fitting. Introduction to Computer Vision CSE 152 Lecture 11
Model Fitting CSE 152 Lecture 11 Announcements Homework 3 is due May 9, 11:59 PM Reading: Chapter 10: Grouping and Model Fitting What to do with edges? Segment linked edge chains into curve features (e.g.,
More informationCS4733 Class Notes, Computer Vision
CS4733 Class Notes, Computer Vision Sources for online computer vision tutorials and demos - http://www.dai.ed.ac.uk/hipr and Computer Vision resources online - http://www.dai.ed.ac.uk/cvonline Vision
More informationModel Fitting, RANSAC. Jana Kosecka
Model Fitting, RANSAC Jana Kosecka Fitting: Overview If we know which points belong to the line, how do we find the optimal line parameters? Least squares What if there are outliers? Robust fitting, RANSAC
More informationSpring Localization II. Roland Siegwart, Margarita Chli, Martin Rufli. ASL Autonomous Systems Lab. Autonomous Mobile Robots
Spring 2016 Localization II Localization I 25.04.2016 1 knowledge, data base mission commands Localization Map Building environment model local map position global map Cognition Path Planning path Perception
More informationMultiDimensional Signal Processing Master Degree in Ingegneria delle Telecomunicazioni A.A
MultiDimensional Signal Processing Master Degree in Ingegneria delle Telecomunicazioni A.A. 205-206 Pietro Guccione, PhD DEI - DIPARTIMENTO DI INGEGNERIA ELETTRICA E DELL INFORMAZIONE POLITECNICO DI BARI
More information3D Environment Reconstruction
3D Environment Reconstruction Using Modified Color ICP Algorithm by Fusion of a Camera and a 3D Laser Range Finder The 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems October 11-15,
More informationVisual Recognition and Search April 18, 2008 Joo Hyun Kim
Visual Recognition and Search April 18, 2008 Joo Hyun Kim Introduction Suppose a stranger in downtown with a tour guide book?? Austin, TX 2 Introduction Look at guide What s this? Found Name of place Where
More informationRANSAC: RANdom Sampling And Consensus
CS231-M RANSAC: RANdom Sampling And Consensus Roland Angst rangst@stanford.edu www.stanford.edu/~rangst CS231-M 2014-04-30 1 The Need for RANSAC Why do I need RANSAC? I know robust statistics! Robust Statistics
More informationIntroduction to Computer Vision
Introduction to Computer Vision Michael J. Black Nov 2009 Perspective projection and affine motion Goals Today Perspective projection 3D motion Wed Projects Friday Regularization and robust statistics
More informationLecture 8: Fitting. Tuesday, Sept 25
Lecture 8: Fitting Tuesday, Sept 25 Announcements, schedule Grad student extensions Due end of term Data sets, suggestions Reminder: Midterm Tuesday 10/9 Problem set 2 out Thursday, due 10/11 Outline Review
More informationFitting. Fitting. Slides S. Lazebnik Harris Corners Pkwy, Charlotte, NC
Fitting We ve learned how to detect edges, corners, blobs. Now what? We would like to form a higher-level, more compact representation of the features in the image by grouping multiple features according
More informationVideo Google: A Text Retrieval Approach to Object Matching in Videos
Video Google: A Text Retrieval Approach to Object Matching in Videos Josef Sivic, Frederik Schaffalitzky, Andrew Zisserman Visual Geometry Group University of Oxford The vision Enable video, e.g. a feature
More information10/03/11. Model Fitting. Computer Vision CS 143, Brown. James Hays. Slides from Silvio Savarese, Svetlana Lazebnik, and Derek Hoiem
10/03/11 Model Fitting Computer Vision CS 143, Brown James Hays Slides from Silvio Savarese, Svetlana Lazebnik, and Derek Hoiem Fitting: find the parameters of a model that best fit the data Alignment:
More informationLarge Scale 3D Reconstruction by Structure from Motion
Large Scale 3D Reconstruction by Structure from Motion Devin Guillory Ziang Xie CS 331B 7 October 2013 Overview Rome wasn t built in a day Overview of SfM Building Rome in a Day Building Rome on a Cloudless
More informationMulti-stable Perception. Necker Cube
Multi-stable Perception Necker Cube Spinning dancer illusion, Nobuuki Kaahara Fitting and Alignment Computer Vision Szeliski 6.1 James Has Acknowledgment: Man slides from Derek Hoiem, Lana Lazebnik, and
More informationComputer Vision 6 Segmentation by Fitting
Computer Vision 6 Segmentation by Fitting MAP-I Doctoral Programme Miguel Tavares Coimbra Outline The Hough Transform Fitting Lines Fitting Curves Fitting as a Probabilistic Inference Problem Acknowledgements:
More informationObject Recognition and Augmented Reality
11/02/17 Object Recognition and Augmented Reality Dali, Swans Reflecting Elephants Computational Photography Derek Hoiem, University of Illinois Last class: Image Stitching 1. Detect keypoints 2. Match
More informationREFINEMENT OF COLORED MOBILE MAPPING DATA USING INTENSITY IMAGES
REFINEMENT OF COLORED MOBILE MAPPING DATA USING INTENSITY IMAGES T. Yamakawa a, K. Fukano a,r. Onodera a, H. Masuda a, * a Dept. of Mechanical Engineering and Intelligent Systems, The University of Electro-Communications,
More informationFeature Matching and RANSAC
Feature Matching and RANSAC Recognising Panoramas. [M. Brown and D. Lowe,ICCV 2003] [Brown, Szeliski, Winder, CVPR 2005] with a lot of slides stolen from Steve Seitz, Rick Szeliski, A. Efros Introduction
More informationFeature-based methods for image matching
Feature-based methods for image matching Bag of Visual Words approach Feature descriptors SIFT descriptor SURF descriptor Geometric consistency check Vocabulary tree Digital Image Processing: Bernd Girod,
More informationLocalization and Map Building
Localization and Map Building Noise and aliasing; odometric position estimation To localize or not to localize Belief representation Map representation Probabilistic map-based localization Other examples
More informationEXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,
School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45
More informationEnsemble of Bayesian Filters for Loop Closure Detection
Ensemble of Bayesian Filters for Loop Closure Detection Mohammad Omar Salameh, Azizi Abdullah, Shahnorbanun Sahran Pattern Recognition Research Group Center for Artificial Intelligence Faculty of Information
More informationFeature Matching and Robust Fitting
Feature Matching and Robust Fitting Computer Vision CS 143, Brown Read Szeliski 4.1 James Hays Acknowledgment: Many slides from Derek Hoiem and Grauman&Leibe 2008 AAAI Tutorial Project 2 questions? This
More informationVision-based Mobile Robot Localization and Mapping using Scale-Invariant Features
Vision-based Mobile Robot Localization and Mapping using Scale-Invariant Features Stephen Se, David Lowe, Jim Little Department of Computer Science University of British Columbia Presented by Adam Bickett
More informationCSE 252B: Computer Vision II
CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribes: Jeremy Pollock and Neil Alldrin LECTURE 14 Robust Feature Matching 14.1. Introduction Last lecture we learned how to find interest points
More informationFinal Exam Assigned: 11/21/02 Due: 12/05/02 at 2:30pm
6.801/6.866 Machine Vision Final Exam Assigned: 11/21/02 Due: 12/05/02 at 2:30pm Problem 1 Line Fitting through Segmentation (Matlab) a) Write a Matlab function to generate noisy line segment data with
More informationAccurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion
007 IEEE International Conference on Robotics and Automation Roma, Italy, 0-4 April 007 FrE5. Accurate Motion Estimation and High-Precision D Reconstruction by Sensor Fusion Yunsu Bok, Youngbae Hwang,
More informationPart-based and local feature models for generic object recognition
Part-based and local feature models for generic object recognition May 28 th, 2015 Yong Jae Lee UC Davis Announcements PS2 grades up on SmartSite PS2 stats: Mean: 80.15 Standard Dev: 22.77 Vote on piazza
More informationFrom Structure-from-Motion Point Clouds to Fast Location Recognition
From Structure-from-Motion Point Clouds to Fast Location Recognition Arnold Irschara1;2, Christopher Zach2, Jan-Michael Frahm2, Horst Bischof1 1Graz University of Technology firschara, bischofg@icg.tugraz.at
More informationLocal Features and Bag of Words Models
10/14/11 Local Features and Bag of Words Models Computer Vision CS 143, Brown James Hays Slides from Svetlana Lazebnik, Derek Hoiem, Antonio Torralba, David Lowe, Fei Fei Li and others Computer Engineering
More informationSrikumar Ramalingam. Review. 3D Reconstruction. Pose Estimation Revisited. School of Computing University of Utah
School of Computing University of Utah Presentation Outline 1 2 3 Forward Projection (Reminder) u v 1 KR ( I t ) X m Y m Z m 1 Backward Projection (Reminder) Q K 1 q Presentation Outline 1 2 3 Sample Problem
More informationIntroduction to Autonomous Mobile Robots
Introduction to Autonomous Mobile Robots second edition Roland Siegwart, Illah R. Nourbakhsh, and Davide Scaramuzza The MIT Press Cambridge, Massachusetts London, England Contents Acknowledgments xiii
More informationFitting (LMedS, RANSAC)
Fitting (LMedS, RANSAC) Thursday, 23/03/2017 Antonis Argyros e-mail: argyros@csd.uoc.gr LMedS and RANSAC What if we have very many outliers? 2 1 Least Median of Squares ri : Residuals Least Squares n 2
More information7630 Autonomous Robotics Probabilities for Robotics
7630 Autonomous Robotics Probabilities for Robotics Basics of probability theory The Bayes-Filter Introduction to localization problems Monte-Carlo-Localization Based on material from R. Triebel, R. Kästner
More information3D object recognition used by team robotto
3D object recognition used by team robotto Workshop Juliane Hoebel February 1, 2016 Faculty of Computer Science, Otto-von-Guericke University Magdeburg Content 1. Introduction 2. Depth sensor 3. 3D object
More informationLarge Scale Image Retrieval
Large Scale Image Retrieval Ondřej Chum and Jiří Matas Center for Machine Perception Czech Technical University in Prague Features Affine invariant features Efficient descriptors Corresponding regions
More informationCS 4495 Computer Vision Classification 3: Bag of Words. Aaron Bobick School of Interactive Computing
CS 4495 Computer Vision Classification 3: Bag of Words Aaron Bobick School of Interactive Computing Administrivia PS 6 is out. Due Tues Nov 25th, 11:55pm. One more assignment after that Mea culpa This
More informationApplication questions. Theoretical questions
The oral exam will last 30 minutes and will consist of one application question followed by two theoretical questions. Please find below a non exhaustive list of possible application questions. The list
More informationHISTOGRAMS OF ORIENTATIO N GRADIENTS
HISTOGRAMS OF ORIENTATIO N GRADIENTS Histograms of Orientation Gradients Objective: object recognition Basic idea Local shape information often well described by the distribution of intensity gradients
More informationTowards a visual perception system for LNG pipe inspection
Towards a visual perception system for LNG pipe inspection LPV Project Team: Brett Browning (PI), Peter Rander (co PI), Peter Hansen Hatem Alismail, Mohamed Mustafa, Joey Gannon Qri8 Lab A Brief Overview
More informationImage stitching. Announcements. Outline. Image stitching
Announcements Image stitching Project #1 was due yesterday. Project #2 handout will be available on the web later tomorrow. I will set up a webpage for artifact voting soon. Digital Visual Effects, Spring
More information