Image Processing and Computer Vision
1 Image Processing and Computer Vision
Detecting primitives and rigid objects in images
Martin de La Gorce, martin.de-la-gorce@enpc.fr
February 2015
2 Detecting lines
How can we detect simple primitives such as lines or circles in an image?
- We can use the Canny edge detector to get a list of edge points.
- We then try to approximate this set of points by straight lines or circles.
We will see two different methods to detect lines that generalize to other kinds of primitives:
- Hough transform
- RANSAC
3 Line parameterization
A line can be parameterized by two parameters:
- its angle θ ∈ [0, π]
- its signed distance ρ ∈ R to the center of the image
4 Line parameterization
We denote by l(θ, ρ) ⊂ R² the line corresponding to the parameters (θ, ρ). We have
    l(θ, ρ) = {p ∈ R² : ⟨p, n̂(θ)⟩ = ρ}
with n̂(θ) = [cos(θ), sin(θ)] the unit vector pointing in the direction θ. This rewrites as
    l(θ, ρ) = {p ∈ R² : p_x cos(θ) + p_y sin(θ) = ρ}
5 Distance of a point to the line
Given a line l(θ, ρ) corresponding to the parameters (θ, ρ) and a point p ∈ R², we denote by d(p, l(θ, ρ)) the Euclidean distance of the point p to the line l(θ, ρ):
    d(p, l(θ, ρ)) = |⟨p, n̂(θ)⟩ − ρ|     (1)
                  = |ρ̂(θ, p) − ρ|        (2)
with ρ̂(θ, p) = p_x cos(θ) + p_y sin(θ)
6 Line score
Given a set of points p_1, ..., p_N and the parameters (θ, ρ), we can calculate a score for the line:
    S(θ, ρ) = Σ_{k=1}^{N} [d(p_k, l(θ, ρ)) < τ]
where [true] = 1 and [false] = 0, and τ is a limit distance beyond which we consider that a point is too far from the line to belong to it.
The green points are referred to as inliers and the black points as outliers. S(θ, ρ) is therefore the number of inliers.
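The score above can be sketched in plain Python (a minimal illustration; the helper names `hat_rho`, `dist_to_line` and `line_score` follow the slide notation but are otherwise hypothetical):

```python
import math

def hat_rho(p, theta):
    # rho_hat(theta, p) = p_x * cos(theta) + p_y * sin(theta)
    return p[0] * math.cos(theta) + p[1] * math.sin(theta)

def dist_to_line(p, rho, theta):
    # Euclidean distance of p to the line l(theta, rho), eq. (2)
    return abs(hat_rho(p, theta) - rho)

def line_score(points, rho, theta, tau):
    # number of inliers: points closer than tau to the line
    return sum(1 for p in points if dist_to_line(p, rho, theta) < tau)

# the vertical line x = 1 corresponds to theta = 0, rho = 1;
# the first three points lie near it, the last one is an outlier
points = [(1.0, 0.0), (1.05, 2.0), (0.95, -1.0), (5.0, 5.0)]
```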
7 Brute force
Our goal is to find the parameters that give the best score, i.e. the line that contains the maximum number of points in its vicinity.
We can discretize the space of parameters on a 2D grid using discretization steps Δθ and Δρ.
A brute-force approach consists in evaluating S(θ, ρ) for all combinations of parameters in the discretized set.
8 Brute force
The brute-force algorithm would be written:

for j in range(len(thetas)):
    for i in range(len(rhos)):
        score[i, j] = 0
        for k in range(len(points)):
            if dist_to_line(points[k], rhos[i], thetas[j]) < tau:
                score[i, j] += 1

The complexity of this naive implementation is of order N_θ N_ρ N.
9 Acceleration
Given a point p and an angle θ, there is a unique line with angle θ that contains p:
    l(θ, ρ) = {p ∈ R² : p_x cos(θ) + p_y sin(θ) = ρ}
So p ∈ l(θ, ρ) ⟺ ρ = p_x cos(θ) + p_y sin(θ)
10 Acceleration
Given a point p and an angle θ, the set of parallel lines with angle θ that pass near p at a distance smaller than τ is characterized by
    d(p, l(θ, ρ)) < τ ⟺ ρ ∈ [ρ̂(θ, p) − τ, ρ̂(θ, p) + τ]
with ρ̂(θ, p) = p_x cos(θ) + p_y sin(θ)
11 Acceleration
A better approach than brute force consists in looping over the points p_k and the angles θ_0, ..., θ_{N_θ−1}, and avoiding the loop over all the ρ_0, ..., ρ_{N_ρ−1} using the equivalence
    d(p, l(θ, ρ)) < τ ⟺ ρ ∈ [ρ̂(θ, p) − τ, ρ̂(θ, p) + τ]
Assuming that the ρ_0, ..., ρ_{N_ρ−1} are evenly spaced with step Δρ, we have ρ_i = ρ_0 + iΔρ and
    d(p, l(θ, ρ_i)) < τ ⟺ ρ_i ∈ [ρ̂(θ, p) − τ, ρ̂(θ, p) + τ]
                        ⟺ ρ_0 + iΔρ ∈ [ρ̂(θ, p) − τ, ρ̂(θ, p) + τ]
                        ⟺ i ∈ [f(ρ̂(θ, p) − τ), f(ρ̂(θ, p) + τ)]
with f(ρ) = (ρ − ρ_0)/Δρ the function that retrieves the index i of some ρ_i in the set ρ_0, ..., ρ_{N_ρ−1}, i.e. f(ρ_i) = i.
12 Acceleration
Given the angle θ we can now avoid looping over the entire range of discrete ρ and loop only over the range given by
    d(p, l(θ, ρ_i)) < τ ⟺ i ∈ [f(ρ̂(θ, p) − τ), f(ρ̂(θ, p) + τ)]
This leads to the pseudo-code:

for j in range(len(thetas)):
    for i in range(len(rhos)):
        score[i, j] = 0
for k in range(len(points)):
    for j in range(len(thetas)):
        rho_min = hat_rho(points[k], thetas[j]) - tau
        rho_max = rho_min + 2 * tau
        i_rho_min = ceil((rho_min - rhos[0]) / delta_rho)
        i_rho_max = floor((rho_max - rhos[0]) / delta_rho)
        for i in range(i_rho_min, i_rho_max + 1):
            if 0 <= i < len(rhos):
                score[i, j] += 1

The complexity is now of order N N_θ τ/Δρ.
13 Hough transform
Finally, assuming that we have τ = Δρ/2, the interval [ρ̂(θ, p) − τ, ρ̂(θ, p) + τ] is of length Δρ and thus contains exactly one element ρ_i of the set {ρ_0, ..., ρ_{N_ρ−1}}, with
    i = round(f(ρ̂(θ, p))) = round((ρ̂(θ, p) − ρ_0)/Δρ)
We get what is called the Hough transform:

for j in range(len(thetas)):
    for i in range(len(rhos)):
        score[i, j] = 0
for k in range(len(points)):
    for j in range(len(thetas)):
        i = round((hat_rho(points[k], thetas[j]) - rhos[0]) / delta_rho)
        score[i, j] += 1
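A minimal runnable sketch of this accumulator loop (NumPy assumed; the grid sizes and the function name `hough_lines` are illustrative choices, not from the slides):

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_max=100.0, n_rho=201):
    # discretized parameter grids; tau = delta_rho / 2 is implicit in the rounding
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    delta_rho = rhos[1] - rhos[0]
    score = np.zeros((n_rho, n_theta), dtype=int)
    for px, py in points:
        for j, theta in enumerate(thetas):
            # unique rho such that l(theta, rho) passes through the point
            rho = px * np.cos(theta) + py * np.sin(theta)
            i = int(round((rho - rhos[0]) / delta_rho))
            if 0 <= i < n_rho:
                score[i, j] += 1
    return score, thetas, rhos

# three collinear points on the line y = x, i.e. theta = 3*pi/4, rho = 0
score, thetas, rhos = hough_lines([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)])
```

The peak of the accumulator sits in the cell (i, j) whose (ρ_i, θ_j) matches the common line of the three points.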
14 Hough transform
We can visualize the accumulation in the parameter space for two points as follows:
15 Hough transform
By detecting edges in an image and accumulating over all the edge points in this image, we get a score image referred to as the accumulator. A line in the original image corresponds to a peak in this transformed image. We select the local maxima of this image above some inlier-count threshold to get a set of lines.
16 Hough transform
17 Line refinement
Once we have found primitive candidates we can refine their location using a smoother version of the line score:
    S_smooth(θ, ρ) = Σ_{i=1}^{N} h(d(p_i, l(θ, ρ)))
with h the function
    h(x) = (1 − (x/τ)²)³  if |x| ≤ τ
    h(x) = 0              if |x| > τ
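The smooth score can be sketched as follows (a minimal illustration in plain Python; `smooth_score` is a hypothetical name):

```python
import math

def h(x, tau):
    # smooth window from the slide: (1 - (x/tau)^2)^3 for |x| <= tau, else 0
    return (1.0 - (x / tau) ** 2) ** 3 if abs(x) <= tau else 0.0

def smooth_score(points, rho, theta, tau):
    # differentiable analogue of the inlier count: each point contributes
    # a weight in [0, 1] that decays smoothly with its distance to the line
    s = 0.0
    for px, py in points:
        d = abs(px * math.cos(theta) + py * math.sin(theta) - rho)
        s += h(d, tau)
    return s
```

Because h is smooth and vanishes at the boundary, S_smooth can be maximized by gradient ascent, unlike the hard inlier count.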
18 Line refinement
We can maximize S_smooth(θ, ρ) using gradient ascent or a robust non-linear least-squares minimization method (see links). Note that we could also use this smoother score within the Hough transform.
19 Hough transform
The Hough transform generalizes to other primitive types that can be described by an analytic equation involving a few parameters. We use an N-dimensional accumulator for primitives that are parameterized by N parameters. The size of the accumulator increases exponentially with the number of parameters.
20 Circle Hough transform
For example, a circle detector requires a 3D accumulator for the radius and the two coordinates of the center. If we fix the radius, we have two parameters left for the position of the center.
(figure: edges, Hough transform, detected circle)
21 Circle Hough transform
We want to find the local maxima of
    S(x, y, r) = Σ_{k=1}^{N} [ ‖p_k − (x, y)‖ ∈ [r − τ, r + τ] ]
Given a fixed radius r, we first construct the set of points whose distance to the origin is within the range [r − τ, r + τ]:
    C_r = {(x, y) ∈ Z² : r − τ ≤ √(x² + y²) ≤ r + τ}
Given r, this can be done once and for all using some circle rasterization technique. The function S rewrites as
    S(x, y, r) = Σ_{k=1}^{N} [ (p_k − (x, y)) ∈ C_r ]
22 Circle Hough transform
We have
    S(x, y, r) = Σ_{k=1}^{N} [ (p_k − (x, y)) ∈ C_r ]
By posing Δ = p_k − (x, y) we have
    (p_k − (x, y)) ∈ C_r ⟺ (x, y) = p_k − Δ with Δ ∈ C_r
Once C_r is computed, we can loop over the points and over C_r to increment the accumulator:

for x in range(width):
    for y in range(height):
        score[x, y] = 0
for k in range(len(points)):
    for delta in Cr:
        score[p[k, 0] - delta[0], p[k, 1] - delta[1]] += 1
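The fixed-radius accumulation can be sketched as runnable code (a minimal illustration; `circle_offsets` uses brute-force enumeration where the slides suggest circle rasterization, and all names are hypothetical):

```python
import math
import numpy as np

def circle_offsets(r, tau):
    # the set C_r: integer offsets whose distance to the origin is in [r - tau, r + tau]
    rad = int(math.ceil(r + tau))
    return [(dx, dy)
            for dx in range(-rad, rad + 1)
            for dy in range(-rad, rad + 1)
            if r - tau <= math.hypot(dx, dy) <= r + tau]

def circle_hough(points, r, tau, height, width):
    # each edge point votes for all candidate centers at distance ~r from it
    offsets = circle_offsets(r, tau)  # computed once for all points
    score = np.zeros((height, width), dtype=int)
    for (px, py) in points:
        for (dx, dy) in offsets:
            cx, cy = px - dx, py - dy
            if 0 <= cx < height and 0 <= cy < width:
                score[cx, cy] += 1
    return score

# 16 points sampled on a circle of center (20, 20) and radius 5
pts = [(20 + int(round(5 * math.cos(a))), 20 + int(round(5 * math.sin(a))))
       for a in [k * math.pi / 8 for k in range(16)]]
score = circle_hough(pts, r=5, tau=1.0, height=40, width=40)
```

Each point can vote at most once per center, so the true center collects one vote per sample and dominates the accumulator.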
23 Generalized Hough transform
Suppose we have a non-parametric rigid shape described by a set of points s_1, ..., s_m. This shape can be rigidly transformed:
    T(θ, t, s_i) = R_θ s_i + t
with R_θ the rotation matrix
    R_θ = [ cos(θ)  −sin(θ) ]
          [ sin(θ)   cos(θ) ]
We want to find the transformation that aligns the shape points s_1, ..., s_m with the edge points p_1, ..., p_n in the image. We can formulate a score function:
    S(θ, t) = Σ_{i=1}^{n} [ min_j ‖p_i − T(θ, t, s_j)‖² < τ ]     (3)
This counts the number of observed points that fall near the transformed shape and thus is similar to the score used previously.
24 Generalized Hough transform
    S(θ, t) = Σ_{i=1}^{n} [ min_j ‖p_i − T(θ, t, s_j)‖² < τ ]
We will consider for now that the rotation angle θ is fixed and thus use a 2D accumulator for the translation. We can discretize t = t_1, ..., t_K and compute all the S(θ, t_k). We can define the set
    C_θ = {p ∈ Z² : min_j ‖p − R_θ s_j‖² < τ}
We have
    S(θ, t) = Σ_{i=1}^{n} [ (p_i − t) ∈ C_θ ]
and we can use the same method as for circles.
25 Generalized Hough transform
C_θ might be large; instead we use a new criterion¹:
    S(θ, t) = Σ_{i=1}^{n} Σ_{j=1}^{m} [ ‖p_i − T(θ, t, s_j)‖ < τ ]
This criterion counts the number of image-point/model-point pairs that are near each other after translation of the model. We can discretize t = t_1, ..., t_K and compute all the S(θ, t_k) using three nested loops over t, i and j, but this is not very efficient...
¹ The fact that we removed the min operator and now use only sums gives us freedom on the order of the loops when we perform the summation.
26 Generalized Hough transform
We notice that a given pair (p_i, s_j) contributes to the score of only a very small set of translations t_k:
    ‖p_i − (R_θ s_j + t_k)‖ < τ ⟺ ‖t_k − (p_i − R_θ s_j)‖ < τ
We can use nested loops over i and j, and then enumerate only the few t values in the disk of center p_i − R_θ s_j and radius τ. We can further speed up by accumulating only for t = p_i − R_θ s_j and then convolving the accumulator with a disk of radius τ.
If we do not want to keep θ fixed, we use discretized angles θ_1, ..., θ_l, use a 3D accumulator and add a loop over θ.²
² This approach has been proposed by Merlin and Farber.
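The single-vote variant (accumulate only at t = p_i − R_θ s_j, with the disk convolution left as a post-processing step) can be sketched as follows; the function name and grid sizes are illustrative assumptions:

```python
import numpy as np

def generalized_hough_translation(edge_points, shape_points, theta, height, width):
    # each pair (p_i, s_j) votes for the single translation t = p_i - R_theta s_j;
    # convolving the accumulator with a disk of radius tau would then spread the votes
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    acc = np.zeros((height, width), dtype=int)
    for p in edge_points:
        for q in shape_points:
            tx, ty = np.round(np.asarray(p, float) - R @ np.asarray(q, float)).astype(int)
            if 0 <= tx < height and 0 <= ty < width:
                acc[tx, ty] += 1
    return acc

# a 3-point shape and the same shape translated by t = (10, 12), with theta = 0
shape = [(0, 0), (1, 0), (0, 1)]
edges = [(10, 12), (11, 12), (10, 13)]
acc = generalized_hough_translation(edges, shape, 0.0, 30, 30)
```

The correct translation (10, 12) collects one vote per matching pair, while mismatched pairs scatter their votes over distinct cells.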
27 Generalized Hough transform: using normals
By computing edge normals for both the model (n̂_{s_1}, ..., n̂_{s_m}) and the edge points in the image (n̂_{p_1}, ..., n̂_{p_n}), we can change the criterion to
    S(θ, t) = Σ_{i=1}^{n} Σ_{j=1}^{m} [ ‖p_i − T(θ, t, s_j)‖ < τ  and  ∠(n̂_{p_i}, R_θ n̂_{s_j}) < τ_n ]
This now counts the number of image-point/model-point pairs that are near each other and that have similar normals after rotation of the model. Thanks to the additional normal-agreement constraint, from a given pair (p_i, s_j) we can retrieve an interval for the rotation angle θ, and the pair contributes to the score of only a very small set of translation/rotation pairs (t, θ), which accelerates the voting.
28 Generalized Hough transform
An approximation that leads to an acceleration: instead of having one high-dimensional array, we store a few two-dimensional projections with common coordinates, e.g. (t_x, t_y), (t_x, θ), (t_y, θ):
    S_xy(t_x, t_y) = Σ_θ S(θ, t_x, t_y)       (4)
    S_xθ(t_x, θ) = Σ_{t_y} S(θ, t_x, t_y)     (5)
    S_yθ(t_y, θ) = Σ_{t_x} S(θ, t_x, t_y)     (6)
We then find consistent peaks in these lower-dimensional arrays.
29 RANSAC
Another generic approach to find primitives in a set of points is called the RANSAC method, for "RANdom SAmple Consensus". The idea is to:
- sample at random just enough points to estimate the parameters of the unique primitive that contains these points (2 points for a line, 3 points for a circle)
- count how many points in the whole set agree with the generated primitive
- repeat these steps many times and keep the primitive that contains the largest number of inlier points.
30 RANSAC
Set of points:
31 RANSAC
Select two points at random:
32 RANSAC
Find the parameters of the line that fits the two points:
33 RANSAC
Calculate the distance of all the points to the line:
34 RANSAC
Select the data that support the current hypothesis:
35 RANSAC
Repeat sampling:
36 RANSAC
Repeat sampling:
37 RANSAC
Repeat sampling:
38 RANSAC

best_score = 0
for t in range(nb_iterations):
    i = random(len(points))
    j = random(len(points))
    rho, theta = get_line_parameters(points[i], points[j])
    score = 0
    for k in range(len(points)):
        if dist_to_line(points[k], rho, theta) < tau:
            score += 1
    if score > best_score:
        best_score = score
        best_rho = rho
        best_theta = theta
39 RANSAC
It is a non-deterministic algorithm in the sense that it produces a reasonable result only with a certain probability, this probability increasing as more iterations are allowed. Using some statistics we can estimate how many trials should be done to get a good chance of finding the best line. The time needed to get a good solution increases with the number of outliers, as we are more likely to pick an outlier at each random selection. RANSAC usually performs badly when the proportion of inliers is less than 50%.
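The RANSAC loop of slide 38 can be made runnable as below (a minimal sketch; the pair-to-(θ, ρ) conversion and all helper names are illustrative, and θ is not normalized to [0, π]):

```python
import math
import random

def ransac_line(points, nb_iterations=200, tau=0.5, rng=None):
    # sample pairs of points, fit the line through them, keep the most supported one
    rng = rng or random.Random(0)
    best_score, best_params = -1, None
    for _ in range(nb_iterations):
        p, q = rng.sample(points, 2)
        # normal direction of the line through p and q (perpendicular to q - p)
        theta = math.atan2(q[0] - p[0], -(q[1] - p[1]))
        rho = p[0] * math.cos(theta) + p[1] * math.sin(theta)
        score = sum(1 for m in points
                    if abs(m[0] * math.cos(theta) + m[1] * math.sin(theta) - rho) < tau)
        if score > best_score:
            best_score, best_params = score, (theta, rho)
    return best_params, best_score

# 10 inliers on the horizontal line y = 2 plus 3 outliers
pts = [(float(x), 2.0) for x in range(10)] + [(0.0, 7.0), (3.0, 9.0), (8.0, 5.0)]
(theta, rho), n_inliers = ransac_line(pts)
```

The usual statistical estimate for the number of iterations needed to draw at least one all-inlier sample with probability p, given an inlier ratio w and sample size s, is log(1 − p) / log(1 − w^s).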
40 Refinement: ICP
Once we have found candidate poses using the generalized Hough transform, we can refine them using an iterative approach. A classical method to align two sets of points given an initial estimate of the alignment, the Iterative Closest Point (ICP) algorithm, consists in iteratively minimizing the energy
    E(θ, t) = Σ_{j=1}^{m} min_i ‖p_i − T(θ, t, s_j)‖²
Note that the summation is done over the points s_j instead of the points p_i as in eq. 3: for each point of the shape we look for the closest point in the image.
41 Refinement: ICP
    E(θ, t) = Σ_{j=1}^{m} min_i ‖p_i − T(θ, t, s_j)‖²
We reformulate this problem by adding a vector of auxiliary variables L = (l_1, ..., l_m) ∈ {1, ..., n}^m that encodes, for each point of the shape, the index of the corresponding point in the image:
    E′(θ, t, L) = Σ_{j=1}^{m} ‖p_{l_j} − T(θ, t, s_j)‖²
42 ICP
    E′(θ, t, L) = Σ_{j=1}^{m} ‖p_{l_j} − T(θ, t, s_j)‖²
Similarly to the k-means algorithm, we minimize E′ alternately with respect to (θ, t) and with respect to L.
- The minimization with respect to L is done by solving m independent problems: l_j = argmin_i ‖p_i − T(θ, t, s_j)‖². For each point of the shape we look for the closest point in the image.
- The minimization with respect to t is done by displacing the shape by the mean of the point differences:
    t_{k+1} = t_k + (1/m) Σ_{j=1}^{m} (p_{l_j} − T(θ, t_k, s_j))
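The two alternating steps can be sketched for the translation-only case, i.e. with θ fixed to 0 (a minimal illustration with NumPy; `icp_translation` is a hypothetical name):

```python
import numpy as np

def icp_translation(image_pts, shape_pts, t0, n_iters=20):
    # alternating minimization of E'(t, L) with theta fixed to 0:
    # (1) match each shape point to its closest image point,
    # (2) move t by the mean of the matched differences (slide 42 update)
    P = np.asarray(image_pts, dtype=float)
    S = np.asarray(shape_pts, dtype=float)
    t = np.asarray(t0, dtype=float)
    for _ in range(n_iters):
        moved = S + t
        # l_j = argmin_i ||p_i - (s_j + t)||^2, computed for all j at once
        idx = np.argmin(((P[None, :, :] - moved[:, None, :]) ** 2).sum(axis=2), axis=1)
        t = t + (P[idx] - moved).mean(axis=0)
    return t

# a unit square and the same square translated by (3, 2)
shape = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
image = [(3.0, 2.0), (3.0, 3.0), (4.0, 2.0), (4.0, 3.0)]
t = icp_translation(image, shape, t0=(2.8, 1.9))
```

With a reasonable initial estimate the correspondences become correct after the first matching step, and t converges to the true translation.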
43 ICP
    E′(θ, t, L) = Σ_{j=1}^{m} ‖p_{l_j} − T(θ, t, s_j)‖²
The minimization with respect to θ is more complex and requires a singular value decomposition [1, 2]. Alternatively, we can use a Gauss-Newton iterative minimization (see last slides).
44 ICP
(figure: iterations 1 to 6)
45 Non-linear least squares
Given n functions f_i(Θ) from R^m into R, with Θ = (θ_1, ..., θ_m), we want to minimize the function
    S(Θ) = Σ_{i=1}^{n} f_i(Θ)²
This is referred to in the literature as a non-linear least-squares problem.
46 Gauss-Newton
    S(Θ) = Σ_{i=1}^{n} f_i(Θ)²
This can be solved using an iterative minimization. At each iteration, we linearize f_i(Θ) around Θ_t:
    f_i(Θ) ≈ f_i(Θ_t) + ∇f_i(Θ_t)(Θ − Θ_t)
We get
    S(Θ) ≈ Σ_{i=1}^{n} (f_i(Θ_t) + ∇f_i(Θ_t)(Θ − Θ_t))² = ‖F + J_f (Θ − Θ_t)‖²
with F the vector [f_1(Θ_t), ..., f_n(Θ_t)] and J_f the Jacobian matrix evaluated at Θ_t:
    J_f = [ ∂f_1/∂θ_1 ... ∂f_1/∂θ_m ]
          [    ...    ...    ...    ]
          [ ∂f_n/∂θ_1 ... ∂f_n/∂θ_m ]
(all partial derivatives evaluated at Θ_t)
47 Gauss-Newton
From the previous slide we have
    S(Θ) ≈ ‖F + J_f (Θ − Θ_t)‖²
We get the next estimate Θ_{t+1} as the minimum of this approximation. The gradient writes
    2 J_f^T (F + J_f (Θ − Θ_t))
and we get the minimum for
    Θ_{t+1} = Θ_t − (J_f^T J_f)^{-1} J_f^T F
This iterative method is called the Gauss-Newton algorithm.
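The update rule above translates directly into code. A minimal sketch on a problem not from the slides (fitting an exponential model; the data, starting point, and function names are illustrative assumptions):

```python
import numpy as np

def gauss_newton(f, jac, theta0, n_iters=30):
    # Theta_{t+1} = Theta_t - (J^T J)^{-1} J^T F
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iters):
        F = f(theta)
        J = jac(theta)
        theta = theta - np.linalg.solve(J.T @ J, J.T @ F)
    return theta

# example: fit a, b in the model y = a * exp(b * x) to noiseless data (a=2, b=0.5);
# the residuals are f_i(a, b) = a * exp(b * x_i) - y_i
x = np.linspace(0.0, 1.0, 8)
y = 2.0 * np.exp(0.5 * x)
f = lambda th: th[0] * np.exp(th[1] * x) - y
jac = lambda th: np.stack([np.exp(th[1] * x),
                           th[0] * x * np.exp(th[1] * x)], axis=1)
theta = gauss_newton(f, jac, [1.5, 0.3])
```

Because the residuals vanish at the solution, the iteration converges quickly from a nearby starting point; in practice a damped variant (Levenberg-Marquardt) is more robust far from the solution.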
48 Links
Tutorial on robust non-linear least squares and applications to computer vision
49 References
[1] Arun, K., Huang, T., and Blostein, S. Least-Squares Fitting of Two 3-D Point Sets. IEEE Trans. PAMI, Vol. 9, No. 5, 1987.
[2] Horn, B. Closed-Form Solution of Absolute Orientation Using Unit Quaternions. JOSA A, Vol. 4, No. 4, 1987.
Image Stitching Slides from Rick Szeliski, Steve Seitz, Derek Hoiem, Ira Kemelmacher, Ali Farhadi Combine two or more overlapping images to make one larger image Add example Slide credit: Vaibhav Vaish
More informationStereo and Epipolar geometry
Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka
More informationPattern Matching & Image Registration
Pattern Matching & Image Registration Philippe Latour 26/11/2014 1 / 53 Table of content Introduction Applications Some solutions and their corresponding approach Pattern matching components Implementation
More informationA Summary of Projective Geometry
A Summary of Projective Geometry Copyright 22 Acuity Technologies Inc. In the last years a unified approach to creating D models from multiple images has been developed by Beardsley[],Hartley[4,5,9],Torr[,6]
More informationLesson 6: Contours. 1. Introduction. 2. Image filtering: Convolution. 3. Edge Detection. 4. Contour segmentation
. Introduction Lesson 6: Contours 2. Image filtering: Convolution 3. Edge Detection Gradient detectors: Sobel Canny... Zero crossings: Marr-Hildreth 4. Contour segmentation Local tracking Hough transform
More informationGeneralized Hough Transform, line fitting
Generalized Hough Transform, line fitting Introduction to Computer Vision CSE 152 Lecture 11-a Announcements Assignment 2: Due today Midterm: Thursday, May 10 in class Non-maximum suppression For every
More informationSurface Registration. Gianpaolo Palma
Surface Registration Gianpaolo Palma The problem 3D scanning generates multiple range images Each contain 3D points for different parts of the model in the local coordinates of the scanner Find a rigid
More informationMatching and Recognition in 3D. Based on slides by Tom Funkhouser and Misha Kazhdan
Matching and Recognition in 3D Based on slides by Tom Funkhouser and Misha Kazhdan From 2D to 3D: Some Things Easier No occlusion (but sometimes missing data instead) Segmenting objects often simpler From
More informationImage Features: Local Descriptors. Sanja Fidler CSC420: Intro to Image Understanding 1/ 58
Image Features: Local Descriptors Sanja Fidler CSC420: Intro to Image Understanding 1/ 58 [Source: K. Grauman] Sanja Fidler CSC420: Intro to Image Understanding 2/ 58 Local Features Detection: Identify
More informationCS 231A Computer Vision (Winter 2018) Problem Set 3
CS 231A Computer Vision (Winter 2018) Problem Set 3 Due: Feb 28, 2018 (11:59pm) 1 Space Carving (25 points) Dense 3D reconstruction is a difficult problem, as tackling it from the Structure from Motion
More informationSrikumar Ramalingam. Review. 3D Reconstruction. Pose Estimation Revisited. School of Computing University of Utah
School of Computing University of Utah Presentation Outline 1 2 3 Forward Projection (Reminder) u v 1 KR ( I t ) X m Y m Z m 1 Backward Projection (Reminder) Q K 1 q Q K 1 u v 1 What is pose estimation?
More informationInstance-level recognition part 2
Visual Recognition and Machine Learning Summer School Paris 2011 Instance-level recognition part 2 Josef Sivic http://www.di.ens.fr/~josef INRIA, WILLOW, ENS/INRIA/CNRS UMR 8548 Laboratoire d Informatique,
More informationCircular Hough Transform
Circular Hough Transform Simon Just Kjeldgaard Pedersen Aalborg University, Vision, Graphics, and Interactive Systems, November 2007 1 Introduction A commonly faced problem in computer vision is to determine
More informationStructured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov
Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter
More informationBiomedical Image Analysis. Point, Edge and Line Detection
Biomedical Image Analysis Point, Edge and Line Detection Contents: Point and line detection Advanced edge detection: Canny Local/regional edge processing Global processing: Hough transform BMIA 15 V. Roth
More informationFitting. Choose a parametric object/some objects to represent a set of tokens Most interesting case is when criterion is not local
Fitting Choose a parametric object/some objects to represent a set of tokens Most interesting case is when criterion is not local can t tell whether a set of points lies on a line by looking only at each
More informationCAP 5415 Computer Vision Fall 2012
CAP 5415 Computer Vision Fall 2012 Hough Transform Lecture-18 Sections 4.2, 4.3 Fundamentals of Computer Vision Image Feature Extraction Edges (edge pixels) Sobel, Roberts, Prewit Laplacian of Gaussian
More informationDETECTION AND ROBUST ESTIMATION OF CYLINDER FEATURES IN POINT CLOUDS INTRODUCTION
DETECTION AND ROBUST ESTIMATION OF CYLINDER FEATURES IN POINT CLOUDS Yun-Ting Su James Bethel Geomatics Engineering School of Civil Engineering Purdue University 550 Stadium Mall Drive, West Lafayette,
More informationImage Warping. Many slides from Alyosha Efros + Steve Seitz. Photo by Sean Carroll
Image Warping Man slides from Alosha Efros + Steve Seitz Photo b Sean Carroll Morphing Blend from one object to other with a series of local transformations Image Transformations image filtering: change
More informationCapturing, Modeling, Rendering 3D Structures
Computer Vision Approach Capturing, Modeling, Rendering 3D Structures Calculate pixel correspondences and extract geometry Not robust Difficult to acquire illumination effects, e.g. specular highlights
More informationSegmentation and Grouping
Segmentation and Grouping How and what do we see? Fundamental Problems ' Focus of attention, or grouping ' What subsets of pixels do we consider as possible objects? ' All connected subsets? ' Representation
More informationEdge Detection. Announcements. Edge detection. Origin of Edges. Mailing list: you should have received messages
Announcements Mailing list: csep576@cs.washington.edu you should have received messages Project 1 out today (due in two weeks) Carpools Edge Detection From Sandlot Science Today s reading Forsyth, chapters
More informationPartial Shape Matching using Transformation Parameter Similarity - Additional Material
Volume xx (y), Number z, pp. 7 Partial Shape Matching using Transformation Parameter Similarity - Additional Material paper66. An Example for Local Contour Descriptors As mentioned in Section 4 of the
More informationSrikumar Ramalingam. Review. 3D Reconstruction. Pose Estimation Revisited. School of Computing University of Utah
School of Computing University of Utah Presentation Outline 1 2 3 Forward Projection (Reminder) u v 1 KR ( I t ) X m Y m Z m 1 Backward Projection (Reminder) Q K 1 q Presentation Outline 1 2 3 Sample Problem
More information3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.
3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction
More information3D Environment Reconstruction
3D Environment Reconstruction Using Modified Color ICP Algorithm by Fusion of a Camera and a 3D Laser Range Finder The 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems October 11-15,
More informationEdges and Lines Readings: Chapter 10: better edge detectors line finding circle finding
Edges and Lines Readings: Chapter 10: 10.2.3-10.3 better edge detectors line finding circle finding 1 Lines and Arcs Segmentation In some image sets, lines, curves, and circular arcs are more useful than
More informationOn Resolving Ambiguities in Arbitrary-Shape extraction by the Hough Transform
On Resolving Ambiguities in Arbitrary-Shape extraction by the Hough Transform Eugenia Montiel 1, Alberto S. Aguado 2 and Mark S. Nixon 3 1 imagis, INRIA Rhône-Alpes, France 2 University of Surrey, UK 3
More informationAnnouncements, schedule. Lecture 8: Fitting. Weighted graph representation. Outline. Segmentation by Graph Cuts. Images as graphs
Announcements, schedule Lecture 8: Fitting Tuesday, Sept 25 Grad student etensions Due of term Data sets, suggestions Reminder: Midterm Tuesday 10/9 Problem set 2 out Thursday, due 10/11 Outline Review
More informationClustering Color/Intensity. Group together pixels of similar color/intensity.
Clustering Color/Intensity Group together pixels of similar color/intensity. Agglomerative Clustering Cluster = connected pixels with similar color. Optimal decomposition may be hard. For example, find
More information3D Reconstruction of a Hopkins Landmark
3D Reconstruction of a Hopkins Landmark Ayushi Sinha (461), Hau Sze (461), Diane Duros (361) Abstract - This paper outlines a method for 3D reconstruction from two images. Our procedure is based on known
More informationEdge linking. Two types of approaches. This process needs to be able to bridge gaps in detected edges due to the reason mentioned above
Edge linking Edge detection rarely finds the entire set of edges in an image. Normally there are breaks due to noise, non-uniform illumination, etc. If we want to obtain region boundaries (for segmentation)
More information