Pattern Matching & Image Registration
Philippe Latour
26/11/2014

Table of contents
- Introduction
- Applications
- Some solutions and their corresponding approach
- Pattern matching components
- Implementation speed-up

Table of contents
- Introduction
  - Definition & examples
  - Objectives & components
- Applications
- Some solutions and their corresponding approach
- Pattern matching components
- Implementation speed-up

Definition: pattern or template matching
Pattern or template matching is the process of either
- finding any instance of an image T, called the pattern, the template, or the model, within another image I, or
- finding which ones of the templates T1, T2, ..., TN correspond in some way to another image I.

Example: pattern matching

Definition: image registration or alignment
Image registration is the process of spatially aligning two images of a scene so that corresponding points assume the same coordinates.

Objectives
- Finding the image transform or warping needed to fit or align a source image onto another one.
- Counting the number of instances that match the pattern.
- Measuring and assessing the matching quality.

Components
- Find, for some points in the first image, the corresponding point in the second image:
  - either find the correspondence of all pixels of an image,
  - or only find the correspondence of some interesting points of an image.
- Consider the image variation/transformation/warping between the two images:
  - estimate those which are of interest to our application,
  - specify those to which the system should be insensitive or invariant.
- Measure the similarity between the template and its matched instances.

Table of contents
- Introduction
- Applications
  - List
  - Electronic component manufacturing
- Some solutions and their corresponding approach
- Pattern matching components
- Implementation speed-up

Applications
- Stereo and multi-view correspondence: 3D reconstruction, pose estimation
- Panoramic images: image alignment for stitching
- Machine vision:
  - template detection and counting
  - object alignment -> robot arm control, gauging
  - model conformance assessment -> NDT, defect detection
- Multi-modality correspondences:
  - biomedical image alignment
  - satellite image fusion
- Robot navigation
- Content-Based Image Retrieval (CBIR): signature

Electronic components manufacturing I

Electronic components manufacturing II: wafer dicing

Electronic components manufacturing III: die bonding and wire bonding

Electronic components manufacturing IV

Printed board assembly (pick & place) I
- Position of picked components
- Position of placement area
- Control of soldering after the process

Printed board assembly (pick & place) II

Pattern matching inspection I
- Control of presence/absence
- Control of position and orientation
- Control of the component type

Pattern matching inspection II

Table of contents
- Introduction
- Applications
- Some solutions and their corresponding approach
  - Pattern pixels distance
  - Pixel-based approach
  - Feature point correspondences
  - Feature-based approach
  - Comparisons
- Pattern matching components
- Implementation speed-up

Naive solution - pattern pixels distance I

Define, inside the observed image I, all the windows W_i of the same size (width w_T x height h_T) as the template T. If (x, y) = (k_i, l_i) is the center of W_i, then

$$ W_i(x, y) = I\!\left(x + k_i - \tfrac{w_T}{2},\; y + l_i - \tfrac{h_T}{2}\right) $$

For each window W_i, compute the Euclidean distance between T and W_i:

$$ \mathrm{dist}(T, W_i) = \sqrt{\sum_{u=0}^{w_T-1} \sum_{v=0}^{h_T-1} \left[T(u, v) - W_i(u, v)\right]^2} \qquad (1) $$

Naive solution - pattern pixels distance II

Create a distance map D that contains, for each position (k, l), the computed distance to T:

$$ D(k, l) = \begin{cases} \mathrm{dist}\!\left(T, W_{i(k,l)}\right) & \text{when } \tfrac{w_T}{2} \le k < w_I - \tfrac{w_T}{2} \text{ and } \tfrac{h_T}{2} \le l < h_I - \tfrac{h_T}{2} \\ 0 & \text{otherwise} \end{cases} \qquad (2) $$

Then find the position of the minimum in this map (see the sketch below).
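
A minimal NumPy sketch of Eqs. (1)-(2) (the function name and the top-left-corner indexing are choices of mine, not from the slides): it computes the distance map by brute force and reads the best match off its minimum.

```python
import numpy as np

def distance_map(image, template):
    """Brute-force Euclidean-distance map of Eqs. (1)-(2).

    dist[l, k] is the distance between the template and the image window
    whose top-left corner is (row l, column k); indexing windows by their
    corner avoids the w_T/2 border bookkeeping of Eq. (2)."""
    h_i, w_i = image.shape
    h_t, w_t = template.shape
    dist = np.empty((h_i - h_t + 1, w_i - w_t + 1))
    for l in range(dist.shape[0]):
        for k in range(dist.shape[1]):
            window = image[l:l + h_t, k:k + w_t]
            dist[l, k] = np.sqrt(np.sum((window - template) ** 2))
    return dist

# The minimum of the map is the best-matching position.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
tpl = img[10:18, 20:28].copy()
d = distance_map(img, tpl)
print(np.unravel_index(np.argmin(d), d.shape))  # -> (10, 20)
```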

Pixel-based approach I
- The approach is to shift or warp the images relative to each other and to look at how much the pixels agree.
- A suitable similarity or dissimilarity measure must first be chosen to compare the images.
- The similarity or dissimilarity measure depends on the image characteristics to which it is necessary to be invariant:
  - lighting conditions (linear gain and offset)
  - noise
  - small rotations or scalings
  - thinning
-> Define the similarity/dissimilarity measure.

Pixel-based approach II
- Then, we need to decide which kind of warping is eligible (translation, rotation, scaling, affine transform, ...) -> define the search space, i.e. the parameter space of the eligible warpings (the set of all parameter values giving rise to an eligible transformation):
  - translation -> 2D search space
  - translation + rotation -> 3D search space
  - translation + rotation + isotropic scaling -> 4D search space
  - affine transform -> 6D search space
  - projective transform -> 8D search space
- Finally, the search technique must be devised. Trying all possible alignments (a full search) is often impractical, so hierarchical coarse-to-fine techniques based on image pyramids are often used.

Other solution: feature point correspondences I
There are mainly two paradigms to compare two images:
- Pixel-based:
  - compare all pairs of corresponding pixels (located at the same place in the image, possibly after warping one image),
  - then compute a global score based on the individual comparisons.
- Feature-based:
  - find informative feature points in each image,
  - then associate each feature point of one image with a feature point of the other image,
  - and compute the transformation/warping that enables the feature points in the left image to fit their corresponding points in the right image.

Other solution: feature point correspondences II

Feature-based approach
Feature points have also been referred to as critical points, interest points, key points, extremal points, anchor points, landmarks, control points, tie points, corners, vertices, and junctions in the literature.
- We need to decide what a feature point is (corners, junctions, edges, blob centers, ...):
  - compute a cornerness function and suppress non-maxima,
  - designed to be invariant to some image variations.
- Then we have to characterize and describe them (position, image gradient or moment, cornerness, ...) in order to find the best correspondence between the feature points of the two images.
- We need to decide which kind of warping is admissible.
- We need to decide how to find the best correspondences: robust methods (RANSAC, ICP).

Pixel-based versus feature-based approach I

Pattern information usage:
- Pixel-based: uses all pixels in the pattern in a uniform way -> no need to analyze or understand the pattern.
- Feature-based: finds and uses pattern features (the most informative parts of the pattern) -> a sensitive operation.

Occlusion or pose variation:
- Pixel-based: sensitive.
- Feature-based: can be designed to be insensitive.

Sub-pixel accuracy:
- Pixel-based: interpolation of the similarity/dissimilarity measure.
- Feature-based: naturally accurate at the sub-pixel level.

Admissible warping:
- Pixel-based: the choice has to be made at the beginning of the process (orientation and scaling).
- Feature-based: mostly insensitive to differences in orientation and scaling.

Noise and lighting conditions:
- Pixel-based: sensitive.
- Feature-based: naturally much more insensitive.

Pixel-based versus feature-based approach II

Rigid pattern warping:
- Pixel-based: mostly limited to rigid pattern warping.
- Feature-based: enables non-rigid warping.

Dimensionality of the search space:
- Pixel-based: mostly limited to low dimensionality (the search time is exponential in the search-space dimensionality).
- Feature-based: higher-dimensionality search spaces are more easily reachable.

Implementation:
- Pixel-based: easy to implement, natural implementation on GPUs.
- Feature-based: much more difficult to implement and/or to optimize.

Complexity:
- Pixel-based: complexity proportional to the image size; needs specific search strategies to reach real-time.
- Feature-based: complexity roughly proportional to the number of feature points (depends more on the content of the scene than on the image size).

Table of contents
- Introduction
- Applications
- Some solutions and their corresponding approach
- Pattern matching components
  - Pixel-based approach: similarity/dissimilarity measures
  - Feature detection
  - Image warping or transform
  - Transform parameter space search strategy
- Implementation speed-up

Pixel-based approach: similarity/dissimilarity measures
Given two sequences of measurements X = {x_i | i = 1, ..., n} and Y = {y_i | i = 1, ..., n}:
- X and Y can represent measurements from two objects or phenomena. Here, in our case, we assume they represent images, and x_i and y_i are the intensities of corresponding pixels in the two images.
- The similarity (dissimilarity) between them is a measure that quantifies the dependency (independency) between the sequences.

Pixel-based approach: similarity/dissimilarity measures

Similarity / correlation / score measures: Pearson correlation coefficient, Tanimoto measure, stochastic sign change, deterministic sign change, minimum ratio, Spearman's rho, Kendall's tau, greatest deviation, ordinal measure, correlation ratio, energy of joint probability distribution, material similarity, Shannon mutual information, Rényi mutual information, Tsallis mutual information, F-information measures.

Dissimilarity / distance measures: L1 norm, median of absolute differences, square L2 norm, median of square differences, normalized square L2 norm, incremental sign distance, intensity-ratio variance, intensity-mapping-ratio variance, rank distance, joint entropy, exclusive F information.


Pearson correlation coefficient
The correlation coefficient between sequences X = {x_i | i = 1, ..., n} and Y = {y_i | i = 1, ..., n} is given by

$$ r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\; \sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}} \qquad (3) $$

where $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$ and $\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i$. It can also be written as

$$ r = \frac{1}{n} \sum_{i=1}^{n} \left(\frac{x_i - \bar{x}}{\sigma_x}\right) \left(\frac{y_i - \bar{y}}{\sigma_y}\right) \qquad (4) $$

or, with $\bar{X}$ and $\bar{Y}$ the vectors of standardized intensities,

$$ r = \frac{1}{n} \bar{X}^t \bar{Y} \qquad (5) $$
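
As a sanity check of Eq. (4), a short NumPy sketch (the helper name is mine): standardize both intensity sequences and average their products.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient of two same-size images, Eq. (4):
    the mean product of the standardized intensities. np.std uses the 1/n
    (population) convention, which matches the 1/n factor of Eq. (4)."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    return np.mean((x - x.mean()) / x.std() * ((y - y.mean()) / y.std()))
```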

Pearson correlation coefficient - FFT
To speed up the search step when the score is, in some way, related to a correlation coefficient, we can use the FFT algorithm:
- Y represents the 2-D image inside which a 2-D template is to be found;
- X represents the template, padded with zeros to be the same size as Y.
The best-matching template window in the image is located at the peak of

$$ C[X, Y] = F^{-1}\{F\{X\}^{*}\, F\{Y\}\} \qquad (6) $$

Phase correlation: the information about the displacement of one image with respect to another is included in the phase component of the cross-power spectrum of the images:

$$ C_p[X, Y] = F^{-1}\!\left\{\frac{F\{X\}^{*}\, F\{Y\}}{\left|F\{X\}^{*}\, F\{Y\}\right|}\right\} \qquad (7) $$
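
A sketch of Eq. (7) with NumPy's FFT (the function name is mine, and the displacement is assumed to be a pure cyclic translation, so shifts are recovered modulo the image size):

```python
import numpy as np

def phase_correlation(x, y):
    """Phase correlation, Eq. (7): normalize the cross-power spectrum so
    that only its phase, which carries the displacement, remains; its
    inverse FFT spikes at the shift of y relative to x."""
    cross = np.conj(np.fft.fft2(x)) * np.fft.fft2(y)
    c = np.fft.ifft2(cross / np.abs(cross))
    return np.unravel_index(np.argmax(c.real), c.shape)

# img = np.random.default_rng(1).random((64, 64))
# phase_correlation(img, np.roll(img, (5, 12), axis=(0, 1)))  # -> (5, 12)
```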

Pearson correlation coefficient map
(a) A template X. (b) An image Y containing the template X. (c) The correlation image C[X, Y], with the intensity at a pixel showing the correlation coefficient between the template and the window centered at that pixel in the image. (d) The real part of the image C_p[X, Y], showing the phase correlation result with the location of the spike encircled.

Spearman rank correlation or Spearman's rho I
The Spearman rank correlation or Spearman's rho (ρ) between sequences X = {x_i | i = 1, ..., n} and Y = {y_i | i = 1, ..., n} is given by

$$ \rho = 1 - \frac{6 \sum_{i=1}^{n} \left[R(x_i) - R(y_i)\right]^2}{n (n^2 - 1)} \qquad (8) $$

where R(x_i) and R(y_i) represent the ranks of x_i and y_i in images X and Y.

Remark: to eliminate possible ties among discrete intensities in images, the images are smoothed with a Gaussian of a small standard deviation, such as 1 pixel, to produce unique floating-point intensities.
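
Eq. (8) in a few lines of NumPy (names are mine; ties are assumed to be already broken, e.g. by the Gaussian smoothing of the remark above):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rho, Eq. (8), for tie-free intensity sequences."""
    def ranks(a):
        r = np.empty(a.size)
        r[np.argsort(a)] = np.arange(1, a.size + 1)  # rank of each element
        return r
    rx, ry = ranks(np.ravel(x)), ranks(np.ravel(y))
    n = rx.size
    return 1.0 - 6.0 * np.sum((rx - ry) ** 2) / (n * (n * n - 1))
```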

Spearman rank correlation or Spearman's rho II
Comparison with the Pearson correlation coefficient:
- ρ is less sensitive to outliers and, thus, less sensitive to impulse noise and occlusion.
- ρ is less sensitive to nonlinear intensity differences between images than the Pearson correlation coefficient.
- Spearman's ρ consistently produces a higher discrimination power than the Pearson correlation coefficient.
- Computationally, ρ is much slower than r, primarily due to the need for ordering the intensities in X and Y.

Kendall's tau I
If x_i and y_i, for i = 1, ..., n, show the intensities of corresponding pixels in X and Y, then for i ≠ j two possibilities exist:
- either concordance: sign(x_j - x_i) = sign(y_j - y_i),
- or discordance: sign(x_j - x_i) = -sign(y_j - y_i).
Assuming that, out of the C(n, 2) possible pairs, N_c are concordant and N_d are discordant, Kendall's τ is defined by:

$$ \tau = \frac{N_c - N_d}{n (n - 1) / 2} \qquad (9) $$

If the bivariate (X, Y) is normally distributed, Kendall's τ is related to the Pearson correlation coefficient r by:

$$ r = \sin(\pi \tau / 2) \qquad (10) $$
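
A brute-force sketch of Eq. (9) (names are mine). The O(n^2) pair enumeration is exactly why the next slide calls τ one of the costliest measures, so this is only usable on small windows.

```python
import numpy as np

def kendall_tau(x, y):
    """Kendall's tau, Eq. (9): (concordant - discordant) / C(n, 2)."""
    x, y = np.ravel(x).astype(float), np.ravel(y).astype(float)
    n = x.size
    i, j = np.triu_indices(n, k=1)                   # all pairs with i < j
    s = np.sign(x[j] - x[i]) * np.sign(y[j] - y[i])  # +1 conc., -1 disc.
    return np.sum(s) / (n * (n - 1) / 2.0)
```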

Kendall's tau II
Comparison with other similarity measures:
- The Pearson correlation coefficient can more finely distinguish images that represent different scenes than Kendall's τ.
- Conversely, Kendall's τ can more finely distinguish similar images from each other than the Pearson correlation coefficient.
- Spearman's ρ and Kendall's τ have the same discrimination power when comparing images of different scenes.
- Kendall's τ is one of the costliest similarity measures.

Spearman's rho and Kendall's tau maps
(a) Spearman's rho. (b) Kendall's tau.

Feature point detection
- Feature points in an image carry critical information about scene structure -> they are widely used in image analysis.
- In image registration, knowledge about corresponding points in two images is required to spatially align the images.
- It is important that detected points be independent of noise, blurring, contrast, and geometric changes, so that the same points can be obtained in images of the same scene taken under different environmental conditions and sensor parameters.
- A large number of point detectors have been developed throughout the years...

Feature point categories
- Correlation-based detectors
- Edge-based detectors
- Model-based detectors
- Uniqueness-based detectors
- Curvature-based detectors
- Laplacian-based detectors
- Gradient-based detectors
- Hough transform-based detectors
- Symmetry-based detectors
- Filtering-based detectors
- Transform-domain detectors
- Pattern recognition-based detectors
- Moment-based detectors
- Entropy-based detectors


Correlation-based detectors I
Let θ_i be the angle between the x-axis and the line connecting pixel (x, y) to the ith pixel on the smallest circle around it, and let I_1(θ_i) be the intensity at that pixel. If Ī_j(θ_i) represents the normalized intensity at θ_i in the jth circle, then

$$ C(x, y) = \sum_{i=1}^{n} \prod_{j=1}^{m} \bar{I}_j(\theta_i) \qquad (11) $$

is used to measure the strength of a vertex or a junction at (x, y). Pixel (x, y) is then considered a corner if C(x, y) is locally maximum.

Correlation-based detectors II

Laplacian-based detectors I
- A number of detectors use either the Laplacian of Gaussian (LoG) or the difference of Gaussians (DoG) to detect points in an image.
- The DoG operator is an approximation to the LoG operator: the best approximation to the LoG operator of standard deviation σ is the difference of Gaussians of standard deviations σ and 1.6σ, that is,

$$ \nabla^2 G(\sigma) \approx \frac{1.6 \left[G(1.6\sigma) - G(\sigma)\right]}{\sigma^2} $$

- Local extrema of the LoG, or of its approximation the DoG, detect the centers of bright or dark blobs in an image. They are therefore not as influenced by noise as points representing corners and junctions, and points detected by the LoG operator are generally more resistant to noise than points detected by vertex and junction detectors.
- SIFT (Scale Invariant Feature Transform) uses the difference of Gaussians (DoG) to find points in an image.
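
A single-scale sketch of the DoG detector (it assumes SciPy; the threshold and the 3x3 non-maximum suppression are arbitrary choices of mine, and SIFT itself searches for extrema across a whole stack of scales rather than one):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_points(image, sigma=2.0, threshold=0.01):
    """Detect blob centers as local extrema of DoG = G(1.6*sigma) - G(sigma)."""
    img = np.asarray(image, dtype=float)
    dog = gaussian_filter(img, 1.6 * sigma) - gaussian_filter(img, sigma)
    resp = np.abs(dog)                      # bright and dark blobs alike
    peaks = (resp == maximum_filter(resp, size=3)) & (resp > threshold)
    return np.argwhere(peaks)               # (row, col) of detected points
```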

Laplacian-based detectors II

Pattern and image warping or transform
- The eligible warpings define the search space:
  - translation
  - rotation
  - isotropic / anisotropic scaling
  - affine / projective transform
  - non-linear warping
- The insensitivity properties guide the choice of a score/distance measure and the choice of a feature point detector:
  - noise
  - lighting conditions
  - small rotations or scalings
  - template thinning
- Should the warping be applied to the pattern, to the image, to both, or to the feature points?
- Image resampling and sub-pixel accuracy?

Transform parameter space search strategy
- Similarity/dissimilarity measure optimization:
  - full search
  - steepest descent
  - conjugate gradient
  - quasi-Newton methods
  - Levenberg-Marquardt
  - simulated annealing
- Transform computation from feature point correspondences (see the sketch below):
  - RANSAC
  - ICP (Iterative Closest Point)
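
To make the RANSAC idea concrete, a toy sketch for the simplest transform, a pure 2-D translation (all names and the inlier tolerance are mine); similarity, affine, or projective models follow the same hypothesize-and-verify loop with larger minimal samples:

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=2.0, seed=0):
    """Estimate a 2-D translation from correspondences src[i] -> dst[i],
    tolerating outliers: hypothesize from one random pair, count inliers,
    keep the best hypothesis, then refit on its consensus set."""
    rng = np.random.default_rng(seed)
    best_t, best_count = None, -1
    for _ in range(n_iter):
        i = rng.integers(len(src))          # minimal sample: a single pair
        t = dst[i] - src[i]
        count = np.sum(np.linalg.norm(src + t - dst, axis=1) < tol)
        if count > best_count:
            best_t, best_count = t, count
    inliers = np.linalg.norm(src + best_t - dst, axis=1) < tol
    return (dst[inliers] - src[inliers]).mean(axis=0), inliers
```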

Table of contents
- Introduction
- Applications
- Some solutions and their corresponding approach
- Pattern matching components
- Implementation speed-up
  - FFT
  - Multiresolution: coarse-to-fine
  - Hybrid approach: feature extraction in one image only

Multiresolution - coarse-to-fine approach
- Compute down-scaled pyramids of the image and of the pattern.
- Proceed to a full search of the most reduced (coarsest) pattern within the most reduced image: this yields a number of candidates at the coarsest scale.
- For each candidate at a given scale:
  - upscale the image and the candidate, and look for the best matching pattern location in a neighbourhood of the candidate;
  - reduce the number of candidates.
- If the finest scale has not yet been reached, proceed to the next scale level (a sketch follows this list).
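
A sketch of the pyramid search, reusing distance_map from the naive-solution slide (the 2x2 block averaging, the three levels, and keeping a single surviving candidate are simplifications of mine; the slide keeps several candidates per level):

```python
import numpy as np

def halve(img):
    """One pyramid level down: 2x2 block averaging."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    a = img[:h, :w]
    return (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4.0

def coarse_to_fine(image, template, levels=3, radius=2):
    """Full search at the coarsest level, local refinement at finer ones."""
    imgs, tpls = [np.asarray(image, float)], [np.asarray(template, float)]
    for _ in range(levels - 1):
        imgs.append(halve(imgs[-1]))
        tpls.append(halve(tpls[-1]))
    d = distance_map(imgs[-1], tpls[-1])      # full search, coarsest level
    l, k = np.unravel_index(np.argmin(d), d.shape)
    for lev in range(levels - 2, -1, -1):     # walk back up the pyramid
        img, tpl = imgs[lev], tpls[lev]
        l, k = 2 * l, 2 * k                   # project the candidate up
        best = (np.inf, l, k)
        for ll in range(max(0, l - radius), min(img.shape[0] - tpl.shape[0], l + radius) + 1):
            for kk in range(max(0, k - radius), min(img.shape[1] - tpl.shape[1], k + radius) + 1):
                w = img[ll:ll + tpl.shape[0], kk:kk + tpl.shape[1]]
                best = min(best, (np.sqrt(np.sum((w - tpl) ** 2)), ll, kk))
        _, l, k = best
    return l, k  # top-left corner of the best match at full resolution
```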

Hybrid approach: feature extraction in one image only
- Search for some feature points in the pattern.
- Scan the transform parameter space following a given strategy:
  - transform the feature points according to the current eligible warping parameters;
  - superimpose the transformed pattern feature points on the reference image;
  - at each pattern feature point location in the reference image, check whether a compatible point exists in the reference image and measure its similarity/dissimilarity score;
  - compute a global measure of similarity/dissimilarity by adding all the individual scores.
- Find the optimum of this measure over the search space.

Bibliography
- A. Goshtasby. Image Registration: Principles, Tools and Methods. Springer-Verlag London, 2012. DOI: 10.1007/978-1-4471-2458-0.
- R. Brunelli. Template Matching Techniques in Computer Vision: Theory and Practice. John Wiley & Sons, 2009.
- R. Szeliski. Image Alignment and Stitching: A Tutorial. Technical Report MSR-TR-2004-92, Microsoft Research, 2006.