Fitting: The Hough transform


Voting schemes: Let each feature vote for all the models that are compatible with it. Hopefully the noise features will not vote consistently for any single model. Missing data doesn't matter as long as there are enough features remaining to agree on a good model.

Hough transform: An early type of voting scheme. General outline: discretize the parameter space into bins; for each feature point in the image, put a vote in every bin in the parameter space that could have generated this point; find the bins that have the most votes. [Figure: image space and Hough parameter space] P.V.C. Hough, Machine Analysis of Bubble Chamber Pictures, Proc. Int. Conf. High Energy Accelerators and Instrumentation, 1959

Parameter space representation: A line in the image corresponds to a point in Hough space. [Figure: image space and Hough parameter space] Source: S. Seitz

Parameter space representation: What does a point (x0, y0) in image space map to in Hough space? [Figure: image space and Hough parameter space]

Parameter space representation: What does a point (x0, y0) in image space map to in Hough space? Answer: the solutions of b = -x0·m + y0. This is a line in Hough space. [Figure: image space and Hough parameter space]

Parameter space representation: Where is the line that contains both (x0, y0) and (x1, y1)? [Figure: the points (x0, y0) and (x1, y1) in image space, and the line b = -x1·m + y1 in Hough space]

Parameter space representation: Where is the line that contains both (x0, y0) and (x1, y1)? It is the intersection of the lines b = -x0·m + y0 and b = -x1·m + y1. [Figure: image space and Hough parameter space]
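For example, the point (x0, y0) = (1, 1) maps to the Hough-space line b = -m + 1, and the point (x1, y1) = (3, 5) maps to b = -3m + 5. The two lines intersect at (m, b) = (2, -1), which is exactly the image line y = 2x - 1 passing through both points.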

Parameter space representation: Problems with the (m, b) space: unbounded parameter domain; vertical lines require infinite m.

Parameter space representation: Problems with the (m, b) space: unbounded parameter domain; vertical lines require infinite m. Alternative: the polar representation x·cosθ + y·sinθ = ρ. Each point will add a sinusoid in the (θ, ρ) parameter space.

Algorithm outline:
Initialize accumulator H to all zeros
For each edge point (x, y) in the image
    For θ = 0 to 180
        ρ = x cos θ + y sin θ
        H(θ, ρ) = H(θ, ρ) + 1
    end
end
Find the value(s) of (θ, ρ) where H(θ, ρ) is a local maximum
The detected line in the image is given by ρ = x cos θ + y sin θ
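A minimal NumPy sketch of this accumulator loop (illustrative, not from the slides; it assumes a binary edge map "edges", with 1-degree and 1-pixel bins as arbitrary choices):

    import numpy as np

    def hough_lines(edges, theta_res_deg=1.0, rho_res=1.0):
        # edges: 2D array, nonzero at edge points
        h, w = edges.shape
        thetas = np.deg2rad(np.arange(0.0, 180.0, theta_res_deg))
        rho_max = np.hypot(h, w)                      # largest possible |rho|
        rhos = np.arange(-rho_max, rho_max, rho_res)
        H = np.zeros((len(thetas), len(rhos)), dtype=np.int64)

        ys, xs = np.nonzero(edges)                    # coordinates of the edge points
        for x, y in zip(xs, ys):
            # each edge point votes along its sinusoid rho = x cos(theta) + y sin(theta)
            rho = x * np.cos(thetas) + y * np.sin(thetas)
            bins = np.digitize(rho, rhos) - 1
            H[np.arange(len(thetas)), bins] += 1
        return H, thetas, rhos

Peaks in H (for example, bins above a vote threshold that are also local maxima) give the (θ, ρ) parameters of the detected lines.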

Basic illustration. [Figure: features and votes]

Other shapes. [Figure: square and circle]

Several lines

A more complicated image http://ostatic.com/files/images/ss_hough.jpg

Effect of noise. [Figure: features and votes]

Effect of noise: the peak gets fuzzy and hard to locate. [Figure: features and votes]

Effect of noise Number of votes for a line of 20 points with increasing noise:

Random points: uniform noise can lead to spurious peaks in the array. [Figure: features and votes]

Random points As the level of uniform noise increases, the maximum number of votes increases too:

Dealing with noise: Choose a good grid / discretization. Too coarse: large votes are obtained when too many different lines correspond to a single bucket. Too fine: lines are missed because points that are not exactly collinear cast votes for different buckets. Increment neighboring bins (smoothing in the accumulator array). Try to get rid of irrelevant features: take only edge points with significant gradient magnitude.
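One simple way to realize "increment neighboring bins" is to smooth the accumulator before peak-finding. A sketch using SciPy (assuming an accumulator H like the one returned by the sketch above; the sigma, window size, and threshold are arbitrary choices):

    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    def find_peaks(H, sigma=1.0, win=5, rel_thresh=0.5):
        H_smooth = gaussian_filter(H.astype(float), sigma=sigma)   # spread each vote over nearby bins
        # keep bins that are local maxima of the smoothed array and sufficiently strong
        is_peak = (maximum_filter(H_smooth, size=win) == H_smooth)
        is_peak &= H_smooth > rel_thresh * H_smooth.max()
        return np.nonzero(is_peak)                                  # indices of (theta, rho) peaks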

Incorporating image gradients: Recall that when we detect an edge point, we also know its gradient direction. But this means that the line is uniquely determined! Modified Hough transform:
For each edge point (x, y)
    θ = gradient orientation at (x, y)
    ρ = x cos θ + y sin θ
    H(θ, ρ) = H(θ, ρ) + 1
end
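A sketch of this modified transform (illustrative, not from the slides; gx and gy are assumed gradient images, e.g. from a Sobel filter, so the orientation at each edge point is known and each point casts a single vote):

    import numpy as np

    def hough_lines_oriented(edges, gx, gy, theta_res_deg=1.0, rho_res=1.0):
        h, w = edges.shape
        n_theta = int(180 / theta_res_deg)
        rho_max = np.hypot(h, w)
        n_rho = int(2 * rho_max / rho_res)
        H = np.zeros((n_theta, n_rho), dtype=np.int64)

        ys, xs = np.nonzero(edges)
        for x, y in zip(xs, ys):
            theta = np.arctan2(gy[y, x], gx[y, x]) % np.pi   # gradient orientation, folded into [0, pi)
            rho = x * np.cos(theta) + y * np.sin(theta)
            t = int(theta / np.pi * n_theta) % n_theta
            r = int((rho + rho_max) / (2 * rho_max) * n_rho)
            H[t, r] += 1                                      # one vote per edge point
        return H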

Hough transform for circles: How many dimensions will the parameter space have? (Three: the center coordinates and the radius.) Given an oriented edge point, what are all the possible bins that it can vote for?

Hough transform for circles: for a known radius r, an oriented edge point (x, y) votes for the two candidate centers (x, y) + r·∇I(x, y) and (x, y) − r·∇I(x, y), i.e. the points at distance r along the (unit) gradient direction. [Figure: image space and Hough parameter space]

Hough transform for circles: Conceptually equivalent procedure: for each (x, y, r), draw the corresponding circle in the image and compute its support. Is this more or less efficient than voting with features?
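A sketch of the feature-voting variant for a fixed radius r (illustrative; gx, gy are assumed gradient images as above):

    import numpy as np

    def hough_circles_fixed_r(edges, gx, gy, r):
        h, w = edges.shape
        H = np.zeros((h, w), dtype=np.int64)              # accumulator over center positions
        ys, xs = np.nonzero(edges)
        for x, y in zip(xs, ys):
            theta = np.arctan2(gy[y, x], gx[y, x])        # gradient direction at the edge point
            for s in (+1, -1):                            # the center may lie on either side of the edge
                a = int(round(x + s * r * np.cos(theta)))
                b = int(round(y + s * r * np.sin(theta)))
                if 0 <= a < w and 0 <= b < h:
                    H[b, a] += 1
        return H

For an unknown radius, the accumulator gains a third dimension and each edge point votes over a range of radii.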

Application in recognition F. Jurie and C. Schmid, Scale-invariant shape features for recognition of object categories, CVPR 2004

Hough circles vs. Laplacian blobs: robustness to scale and clutter. [Figure: original images, Laplacian circles, Hough-like circles] F. Jurie and C. Schmid, Scale-invariant shape features for recognition of object categories, CVPR 2004

Generalized Hough transform: We want to find a template defined by its reference point (center) and several distinct types of landmark points in a stable spatial configuration. [Figure: template with center c]

Generalized Hough transform: Template representation: for each type of landmark point, store all possible displacement vectors towards the center. [Figure: template and model]

Generalized Hough transform: Detecting the template: for each feature in a new image, look up that feature type in the model and vote for the possible center locations associated with that type in the model. [Figure: test image and model]
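A minimal sketch of this template representation and voting (illustrative; the detector that produces typed landmark points is assumed, and all names are hypothetical):

    import numpy as np
    from collections import defaultdict

    def build_r_table(template_features, center):
        # template_features: list of (feature_type, (x, y)) on the template
        r_table = defaultdict(list)
        cx, cy = center
        for ftype, (x, y) in template_features:
            r_table[ftype].append((cx - x, cy - y))       # displacement from feature to reference point
        return r_table

    def ght_vote(image_features, r_table, image_shape):
        H = np.zeros(image_shape, dtype=np.int64)         # votes for possible center locations
        h, w = image_shape
        for ftype, (x, y) in image_features:
            for dx, dy in r_table.get(ftype, []):         # every stored displacement for this feature type
                cx, cy = x + dx, y + dy
                if 0 <= cx < w and 0 <= cy < h:
                    H[int(cy), int(cx)] += 1
        return H

Peaks in H are hypothesized template (object) centers.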

Application in recognition: Index displacements by visual codeword. [Figure: training image; visual codeword with displacement vectors] B. Leibe, A. Leonardis, and B. Schiele, Combined Object Categorization and Segmentation with an Implicit Shape Model, ECCV Workshop on Statistical Learning in Computer Vision, 2004

Application in recognition: Index displacements by visual codeword. [Figure: test image] B. Leibe, A. Leonardis, and B. Schiele, Combined Object Categorization and Segmentation with an Implicit Shape Model, ECCV Workshop on Statistical Learning in Computer Vision, 2004

Implicit shape models: Training 1. Build codebook of patches around extracted interest points using clustering (more on this later in the course)

Implicit shape models: Training 1. Build codebook of patches around extracted interest points using clustering 2. Map the patch around each interest point to closest codebook entry

Implicit shape models: Training 1. Build codebook of patches around extracted interest points using clustering 2. Map the patch around each interest point to closest codebook entry 3. For each codebook entry, store all positions it was found, relative to object center

Implicit shape models: Testing 1. Given test image, extract patches, match to codebook entry 2. Cast votes for possible positions of object center 3. Search for maxima in voting space 4. Extract weighted segmentation mask based on stored masks for the codebook occurrences
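The voting step is essentially the generalized Hough transform above, with codebook entries playing the role of feature types. A minimal sketch (illustrative; the codebook array and stored displacement lists are hypothetical, and the real system votes in a continuous space and finds modes with mean-shift rather than using a discrete array):

    import numpy as np

    def ism_vote(descriptors, keypoints, codebook, displacements, image_shape):
        # descriptors: (N, D) patch descriptors at keypoint locations (x, y)
        # codebook: (K, D) cluster centers; displacements[k]: list of (dx, dy) offsets
        #           towards the object center stored for entry k during training
        H = np.zeros(image_shape, dtype=float)
        h, w = image_shape
        for desc, (x, y) in zip(descriptors, keypoints):
            k = int(np.argmin(np.linalg.norm(codebook - desc, axis=1)))  # nearest codebook entry
            offsets = displacements[k]
            if not offsets:
                continue
            for dx, dy in offsets:
                cx, cy = int(x + dx), int(y + dy)
                if 0 <= cx < w and 0 <= cy < h:
                    H[cy, cx] += 1.0 / len(offsets)      # weight votes so each patch contributes equally
        return H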

Additional examples B. Leibe, A. Leonardis, and B. Schiele, Robust Object Detection with Interleaved Categorization and Segmentation, IJCV 77 (1-3), pp. 259-289, 2008.

Implicit shape models: Details. Supervised training: need a reference location and segmentation mask for each training car. The voting space is continuous, not discrete: a clustering algorithm is needed to find maxima. How about dealing with scale changes? Option 1: search a range of scales, as in the Hough transform for circles. Option 2: use scale-covariant interest points. The verification stage is very important: once we have a location hypothesis, we can overlay a more detailed template over the image and compare pixel-by-pixel, transfer segmentation masks, etc.

Hough transform: Discussion. Pros: can deal with non-locality and occlusion; can detect multiple instances of a model; some robustness to noise (noise points are unlikely to contribute consistently to any single bin). Cons: search time increases exponentially with the number of model parameters; non-target shapes can produce spurious peaks in parameter space; it's hard to pick a good grid size. Hough transform vs. RANSAC