Fitting: The Hough transform


Voting schemes: Let each feature vote for all the models that are compatible with it. Hopefully the noise features will not vote consistently for any single model. Missing data doesn't matter as long as there are enough features remaining to agree on a good model.

Hough transform: An early type of voting scheme. General outline: discretize the parameter space into bins; for each feature point in the image, put a vote in every bin in the parameter space that could have generated this point; find the bins that have the most votes. (Figure: image space and Hough parameter space.) P.V.C. Hough, Machine Analysis of Bubble Chamber Pictures, Proc. Int. Conf. High Energy Accelerators and Instrumentation, 1959.

Parameter space representation: A line in the image corresponds to a point in Hough space. (Figure: image space and Hough parameter space.) Source: S. Seitz

Parameter space representation: What does a point (x0, y0) in the image space map to in the Hough space? (Figure: image space and Hough parameter space.)

Parameter space representation: What does a point (x0, y0) in the image space map to in the Hough space? Answer: the solutions of b = −x0·m + y0. This is a line in Hough space. (Figure: image space and Hough parameter space.)

Parameter space representation: Where is the line that contains both (x0, y0) and (x1, y1)? (Figure: image space with points (x0, y0) and (x1, y1); Hough parameter space with the line b = −x1·m + y1.)

Parameter space representation: Where is the line that contains both (x0, y0) and (x1, y1)? It is the intersection of the lines b = −x0·m + y0 and b = −x1·m + y1.
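To make the intersection concrete, here is a small numeric sketch (the example point values are mine, not from the slides); intersecting the two Hough-space lines recovers the slope and intercept of the image line through both points:

```python
import numpy as np

# Two hypothetical image points, named (x0, y0) and (x1, y1) as on the slide.
x0, y0 = 1.0, 3.0
x1, y1 = 4.0, 9.0

# Each point constrains (m, b) via b = -x*m + y, i.e. x*m + b = y.
# Intersect the two Hough-space lines by solving the 2x2 linear system.
A = np.array([[x0, 1.0], [x1, 1.0]])
rhs = np.array([y0, y1])
m, b = np.linalg.solve(A, rhs)
print(m, b)  # m = 2, b = 1: the image line y = 2x + 1 passes through both points
```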

Parameter space representation: Problems with the (m, b) space: unbounded parameter domain; vertical lines require infinite m.

Parameter space representation: Problems with the (m, b) space: unbounded parameter domain; vertical lines require infinite m. Alternative: polar representation x cosθ + y sinθ = ρ. Each point will add a sinusoid in the (θ, ρ) parameter space.

Algorithm outline:
Initialize accumulator H to all zeros.
For each edge point (x, y) in the image:
    For θ = 0 to 180:
        ρ = x cos θ + y sin θ
        H(θ, ρ) = H(θ, ρ) + 1
    end
end
Find the value(s) of (θ, ρ) where H(θ, ρ) is a local maximum.
The detected line in the image is given by ρ = x cos θ + y sin θ.
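A minimal NumPy sketch of this accumulator loop (the function name, bin counts, and example points are illustrative choices, not from the slides):

```python
import numpy as np

def hough_lines(edge_points, img_shape, n_theta=180, n_rho=400):
    """Accumulate votes for lines x*cos(theta) + y*sin(theta) = rho."""
    h, w = img_shape
    rho_max = np.hypot(h, w)                      # largest possible |rho| in the image
    thetas = np.deg2rad(np.arange(n_theta))       # theta sampled at 1-degree steps in [0, 180)
    rhos = np.linspace(-rho_max, rho_max, n_rho)  # signed-distance bins
    H = np.zeros((n_theta, n_rho), dtype=np.int64)

    for x, y in edge_points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)            # one rho per theta
        rho_idx = np.clip(np.searchsorted(rhos, rho), 0, n_rho - 1)
        np.add.at(H, (np.arange(n_theta), rho_idx), 1)           # vote in every (theta, rho) bin

    return H, thetas, rhos

# Points on the line y = 2x + 1 should produce one dominant peak in H.
pts = [(x, 2 * x + 1) for x in range(50)]
H, thetas, rhos = hough_lines(pts, img_shape=(120, 120))
t, r = np.unravel_index(H.argmax(), H.shape)
print(np.rad2deg(thetas[t]), rhos[r])   # (theta, rho) of the strongest detected line
```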

Basic illustration (figure: features and votes).

Other shapes (figure: square, circle).

Several lines

Effect of noise (figure: features and votes).

Effect of noise: the peak gets fuzzy and hard to locate (figure: features and votes).

Effect of noise: number of votes for a line of 20 points with increasing noise (plot).

Random points (figure: features and votes): uniform noise can lead to spurious peaks in the accumulator array.

Random points: as the level of uniform noise increases, the maximum number of votes increases too (plot).

Practical details: Try to get rid of irrelevant features: take only edge points with significant gradient magnitude. Choose a good grid / discretization: too coarse, and large votes are obtained when too many different lines correspond to a single bucket; too fine, and lines are missed because points that are not exactly collinear cast votes for different buckets. Increment neighboring bins (smoothing in the accumulator array; see the sketch below). Who belongs to which line? Tag the votes.
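The "increment neighboring bins" advice can equivalently be applied after voting by smoothing the accumulator before searching for local maxima. A hedged sketch using SciPy (function name, window size, and thresholds are my own choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def find_line_peaks(H, min_votes=20, sigma=1.0):
    """Smooth the accumulator, then keep bins that are local maxima with enough votes."""
    H_smooth = gaussian_filter(H.astype(float), sigma=sigma)      # spreads each vote to neighbors
    local_max = (H_smooth == maximum_filter(H_smooth, size=5))    # 5x5 non-maximum suppression
    peaks = np.argwhere(local_max & (H_smooth >= min_votes))
    return peaks   # rows of (theta_index, rho_index)
```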

Hough transform: Pros. Can deal with non-locality and occlusion. Can detect multiple instances of a model in a single pass. Some robustness to noise: noise points are unlikely to contribute consistently to any single bin.

Hough transform: Cons. Search complexity increases exponentially with the number of model parameters. Non-target shapes can produce spurious peaks in parameter space. It's hard to pick a good grid size.

Extension: Incorporating image gradients. Recall: when we detect an edge point, we also know its gradient direction. But this means that the line is uniquely determined! Modified Hough transform:
For each edge point (x, y):
    θ = gradient orientation at (x, y)
    ρ = x cos θ + y sin θ
    H(θ, ρ) = H(θ, ρ) + 1
end
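A sketch of this one-vote-per-point variant, assuming a precomputed gradient-orientation array (e.g. from Sobel filtering); the indexing convention and bin layout are illustrative, not from the slides:

```python
import numpy as np

def hough_lines_with_gradient(edge_points, grad_theta, img_shape, n_theta=180, n_rho=400):
    """Each edge point votes for a single (theta, rho) given by its gradient orientation."""
    h, w = img_shape
    rho_max = np.hypot(h, w)
    H = np.zeros((n_theta, n_rho), dtype=np.int64)
    for x, y in edge_points:
        theta = grad_theta[y, x] % np.pi                  # assumed row-major [y, x] indexing
        rho = x * np.cos(theta) + y * np.sin(theta)
        ti = int(np.degrees(theta)) % n_theta             # 1-degree orientation bins
        ri = int((rho + rho_max) / (2 * rho_max) * (n_rho - 1))
        H[ti, ri] += 1                                    # one vote instead of a whole sinusoid
    return H
```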

Extension: Cascaded Hough transform. Let's go back to the original (m, b) parametrization. A line in the image corresponds to a point in the Hough space. What do we get with parallel lines or a pencil of lines through a common point? Collinear peaks in the Hough space! So we can apply a Hough transform to the output of the first Hough transform to find vanishing points. Issue: dealing with the unbounded parameter space. T. Tuytelaars, M. Proesmans, L. Van Gool, "The cascaded Hough transform," ICIP, vol. II, pp. 736-739, 1997.

Cascaded Hough transform T. Tuytelaars, M. Proesmans, L. Van Gool "The cascaded Hough transform," ICIP, vol. II, pp. 736-739, 1997.

Hough transform for circles How many dimensions will the parameter space have? Given an oriented edge point, what are all possible bins that it can vote for?

Hough transform for circles (figure): image space shows an edge point (x, y) with gradient ∇f(x, y); the Hough parameter space (axes x, y, r) shows votes at (x, y) + r·∇f(x, y) and (x, y) − r·∇f(x, y).

Hough transform for circles: Conceptually equivalent procedure: for each (x, y, r), draw the corresponding circle in the image and compute its support. What is more efficient: going from the image space to the parameter space or vice versa?
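A hedged sketch of the three-parameter (center, radius) accumulator; as the slides suggest, each oriented edge point votes only along its gradient direction (in both senses) for a small set of candidate radii. The array layout and radius list are my own choices:

```python
import numpy as np

def hough_circles(edge_points, grad_theta, img_shape, radii):
    """Vote for circle centers at (x, y) +/- r*(cos g, sin g), g = gradient direction."""
    h, w = img_shape
    H = np.zeros((len(radii), h, w), dtype=np.int64)   # one center map per candidate radius
    for x, y in edge_points:
        g = grad_theta[y, x]
        for ri, r in enumerate(radii):
            for sign in (+1, -1):                      # gradient may point toward or away from center
                cx = int(round(x + sign * r * np.cos(g)))
                cy = int(round(y + sign * r * np.sin(g)))
                if 0 <= cx < w and 0 <= cy < h:
                    H[ri, cy, cx] += 1
    return H   # peaks give (radius index, center y, center x)
```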

Application in recognition F. Jurie and C. Schmid, Scale-invariant shape features for recognition of object categories, CVPR 2004

Hough circles vs. Laplacian blobs (figure: original images, Laplacian circles, Hough-like circles; robustness to scale and clutter). F. Jurie and C. Schmid, Scale-invariant shape features for recognition of object categories, CVPR 2004

Generalized Hough transform: We want to find a shape defined by its boundary points and a reference point a. D. Ballard, Generalizing the Hough Transform to Detect Arbitrary Shapes, Pattern Recognition 13(2), 1981, pp. 111-122.

Generalized Hough transform: We want to find a shape defined by its boundary points and a reference point a. For every boundary point p, we can compute the displacement vector r = a − p as a function of the gradient orientation θ. (Figure: reference point a, boundary point p, orientation θ, displacement r(θ).) D. Ballard, Generalizing the Hough Transform to Detect Arbitrary Shapes, Pattern Recognition 13(2), 1981, pp. 111-122.

Generalized Hough transform. For the model shape: construct a table indexed by θ, storing the displacement vectors r as a function of gradient direction. Detection: for each edge point p with gradient orientation θ, retrieve all r indexed with θ; for each r(θ), put a vote in the Hough space at p + r(θ). The peak in this Hough space is the reference point with the most supporting edges. Assumption: translation is the only transformation here, i.e., orientation and scale are fixed. Source: K. Grauman
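A sketch of the table construction and voting described above, for the translation-only case the slide assumes; the orientation binning and function names are illustrative, not Ballard's notation:

```python
import numpy as np
from collections import defaultdict

N_BINS = 36   # gradient-orientation bins of 10 degrees each (illustrative choice)

def build_r_table(boundary_points, boundary_theta, reference):
    """Index displacement vectors r = a - p by quantized gradient orientation."""
    table = defaultdict(list)
    ax, ay = reference
    for (px, py), theta in zip(boundary_points, boundary_theta):
        bin_id = int(theta / (2 * np.pi) * N_BINS) % N_BINS
        table[bin_id].append((ax - px, ay - py))
    return table

def ght_vote(edge_points, edge_theta, table, img_shape):
    """Each edge point votes at p + r for every stored displacement with matching orientation."""
    h, w = img_shape
    H = np.zeros((h, w), dtype=np.int64)
    for (px, py), theta in zip(edge_points, edge_theta):
        bin_id = int(theta / (2 * np.pi) * N_BINS) % N_BINS
        for rx, ry in table[bin_id]:
            vx, vy = int(round(px + rx)), int(round(py + ry))
            if 0 <= vx < w and 0 <= vy < h:
                H[vy, vx] += 1
    return H   # the peak is the most supported reference-point location
```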

Example model shape

Example displacement vectors for model points

Example range of voting locations for test point

Example range of voting locations for test point

Example votes for points with θ =

Example displacement vectors for model points

Example range of voting locations for test point

Example votes for points with θ =

Application in recognition: Instead of indexing displacements by gradient orientation, index by visual codeword. (Figure: training image; visual codeword with displacement vectors.) B. Leibe, A. Leonardis, and B. Schiele, Combined Object Categorization and Segmentation with an Implicit Shape Model, ECCV Workshop on Statistical Learning in Computer Vision 2004

Application in recognition: Instead of indexing displacements by gradient orientation, index by visual codeword. (Figure: test image.) B. Leibe, A. Leonardis, and B. Schiele, Combined Object Categorization and Segmentation with an Implicit Shape Model, ECCV Workshop on Statistical Learning in Computer Vision 2004

Implicit shape models: Training 1. Build codebook of patches around extracted interest points using clustering (more on this later in the course)

Implicit shape models: Training 1. Build codebook of patches around extracted interest points using clustering 2. Map the patch around each interest point to closest codebook entry

Implicit shape models: Training 1. Build codebook of patches around extracted interest points using clustering 2. Map the patch around each interest point to closest codebook entry 3. For each codebook entry, store all positions it was found, relative to object center

Implicit shape models: Testing (a simplified sketch of steps 2-3 follows below).
1. Given a test image, extract patches and match each to a codebook entry.
2. Cast votes for possible positions of the object center.
3. Search for maxima in the voting space.
4. Extract a weighted segmentation mask based on the stored masks for the codebook occurrences.
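A heavily simplified sketch of steps 2-3 (codeword matching and center voting). The descriptor matching metric, vote weighting, and mean-shift mode finding of the actual Implicit Shape Model are omitted here, and all names and shapes are illustrative assumptions:

```python
import numpy as np

def ism_vote(patch_descriptors, patch_positions, codebook, offsets, img_shape):
    """codebook: (K, D) cluster centers; offsets[k]: list of (dx, dy) object-center
    displacements observed for codeword k during training."""
    h, w = img_shape
    votes = np.zeros((h, w))
    for desc, (x, y) in zip(patch_descriptors, patch_positions):
        k = int(np.argmin(np.linalg.norm(codebook - desc, axis=1)))   # nearest codeword
        for dx, dy in offsets[k]:
            cx, cy = int(round(x + dx)), int(round(y + dy))
            if 0 <= cx < w and 0 <= cy < h:
                votes[cy, cx] += 1.0 / max(len(offsets[k]), 1)   # split each patch's vote
    return votes   # smooth / cluster this map to find object-center hypotheses
```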

Implicit shape models: Details. Supervised training: a reference location and segmentation mask are needed for each training car. The voting space is continuous, not discrete: a clustering algorithm is needed to find the maxima. How about dealing with scale changes? Option 1: search a range of scales, as in the Hough transform for circles. Option 2: use scale-covariant interest points. The verification stage is very important: once we have a location hypothesis, we can overlay a more detailed template over the image and compare pixel-by-pixel, transfer segmentation masks, etc.