CS 534: Computer Vision Model Fitting

Spring 2004. Ahmed Elgammal, Dept. of Computer Science.

Outline:
- Model fitting is important
- Least-squares fitting
- Maximum likelihood estimation (MLE)
- MAP estimation
- Robust estimation
- Missing values problem
- EM algorithm

Model Fitting

Model fitting is a fundamental problem in computer vision: given data, find the model parameters that best fit the data. It is an optimization problem. The model can be as simple as a 2D line or as complex as a 3D articulated object.

Fitting Models: Issues

- What is the model?
- How to measure a good fit? What is your metric?
- Effect of noise on the fitting.
- Multiple instances of the same model (or of different models): which data points belong to which object? How many objects are there?

Simple example: fitting a line in 2D to a set of points. The same issues apply to more complex problems. Given a set of points $\{(x_i, y_i)\}$, find the line parameters.

Least-squares: $y = ax + b$. Find $a, b$ which minimize

$\sum_i (y_i - a x_i - b)^2$

The residual measures how far point $i$ is from the model. In matrix form:

$\begin{bmatrix} x_1 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}, \qquad A\mathbf{x} = \mathbf{b}, \qquad \min_{\mathbf{x}} \lVert A\mathbf{x} - \mathbf{b} \rVert^2$

This is linear least-squares; we have seen an example of this before in calibration.

Question: what does the residual mean? What are we minimizing? Minimizing $\sum_i (y_i - a x_i - b)^2$ measures the vertical distance from each point $(x_i, y_i)$ to the point $(x_i, a x_i + b)$ on the line, so it minimizes vertical distances. This is a very poor model: e.g., we cannot fit vertical or near-vertical lines.
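Below is a minimal sketch of this linear least-squares fit, assuming NumPy; the function and variable names are illustrative, not from the slides.

```python
# A minimal sketch of the linear least-squares fit above, using NumPy.
import numpy as np

def fit_line_ls(xs, ys):
    """Fit y = a*x + b by minimizing sum_i (y_i - a*x_i - b)^2."""
    A = np.column_stack([xs, np.ones_like(xs)])   # rows [x_i, 1]
    (a, b), *_ = np.linalg.lstsq(A, ys, rcond=None)
    return a, b

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([0.1, 1.9, 4.1, 5.9])               # roughly y = 2x
print(fit_line_ls(xs, ys))                        # close to (2, 0)
```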

A better model: represent the line as $x \cos\theta + y \sin\theta + \rho = 0$, or equivalently $ax + by + c = 0$ with $a^2 + b^2 = 1$. We still use least-squares, now minimizing the residual

$\sum_i (a x_i + b y_i + c)^2$

where $a x_i + b y_i + c$ is the perpendicular distance from point $i$ to the line. This is called total least-squares.
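A hedged sketch of the total least-squares fit, again assuming NumPy; the closed-form route via the SVD of the centered points is a standard derivation, and fit_line_tls is an illustrative name.

```python
# Total least-squares: fit a*x + b*y + c = 0 with a^2 + b^2 = 1, minimizing
# squared perpendicular distances. For fixed (a, b) the optimal c puts the
# line through the centroid, so center the points and take the direction of
# smallest scatter (the last right singular vector) as the unit normal (a, b).
import numpy as np

def fit_line_tls(xs, ys):
    mx, my = xs.mean(), ys.mean()
    P = np.column_stack([xs - mx, ys - my])   # centered points
    _, _, Vt = np.linalg.svd(P)
    a, b = Vt[-1]                             # unit normal of the line
    c = -(a * mx + b * my)                    # line passes through the mean
    return a, b, c
```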

Maximum Likelihood Parameter Estimation (MLE)

Fitting as a probabilistic inference problem, still using the 2D line example. Assume $x$ is correct (deterministic, no errors), while $y$ has a measurement error that is normally distributed around the true value $y(x)$:

$P(y \mid x) \propto e^{-(y - y(x))^2 / (2\sigma^2)}, \qquad y \sim N(y(x), \sigma^2)$

Assume the errors are independent, with the same standard deviation $\sigma$ for all points. The probability that the data comes from the model (that the data and the model prediction agree to within the error) is

$P = \prod_{i=1}^{N} e^{-(y_i - y(x_i))^2 / (2\sigma^2)}$

Find the parameters $(a, b)$ that maximize $P$, i.e., maximize the likelihood P(measurements | a, b) = P(measurements | model parameters).

Maximum Likelihood = Least Squares

$\log P = -\sum_{i=1}^{N} \frac{(y_i - y(x_i))^2}{2\sigma^2} + \text{const}$

(the constant term is irrelevant: it does not depend on $a, b$). Finding the parameters $(a, b)$ that maximize $P$ is the same as minimizing $-\log P$, i.e., minimizing

$\sum_{i=1}^{N} (y_i - y(x_i))^2 = \sum_{i=1}^{N} (y_i - a x_i - b)^2$

which is least-squares as we know it.

We can also assume that both $x$ and $y$ contain measurement errors, i.e., both are random variables:

$(x, y) = (u, v) + w, \qquad w \sim N(0, \Sigma)$

where $(u, v)$ is the true point.

MAP Estimate

The maximum likelihood estimate (MLE) maximizes P(measurement | model parameters), but we are actually interested in maximizing P(model parameters | measurement). So what is the difference? Bayes' rule:

P(model | measurement) = P(measurement | model) * P(model) / P(measurement)

i.e., posterior probability = measurement likelihood * model prior / measurement probability. Maximizing

P(model | measurement) ∝ P(measurement | model) * P(model)

is Maximum A Posteriori (MAP) estimation. MAP is useful if we have reasons to prefer one model over the others, i.e., if we have prior knowledge about the model parameters. For example, in line fitting, certain line orientations may be more probable than others.
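To make the MLE/MAP difference concrete, the MAP objective for the line-fitting likelihood above can be written out as below; the specific Gaussian prior mentioned afterwards is an illustrative assumption, not something the slides specify.

```latex
\hat{\theta}_{\mathrm{MAP}}
  = \arg\max_{\theta}\, P(\theta \mid y)
  = \arg\max_{\theta}\, P(y \mid \theta)\, P(\theta)
  = \arg\min_{\theta}\,\Bigl[\sum_{i=1}^{N}\frac{\bigl(y_i - y(x_i;\theta)\bigr)^2}{2\sigma^2}
      \;-\; \log P(\theta)\Bigr]
```

With a zero-mean Gaussian prior $P(\theta) \propto e^{-\lVert\theta\rVert^2/(2\tau^2)}$, the extra term is the quadratic penalty $\lVert\theta\rVert^2/(2\tau^2)$: the prior simply biases the least-squares fit toward the preferred parameter values.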

Robustness

What is the effect of noisy data (outliers)? The least-squares estimate (and similarly the MLE) is extremely sensitive to outliers. The problem is a single far-away point: the error for that point is so large that it drags the line away from the other points.

M-estimators

Least-squares minimizes the sum of squared residuals (distances) over the data points, $r(x_i; \theta)$, e.g., $(a x_i + b y_i + c)^2$. The residual for a far-away point (an outlier) is huge. Instead, we want to reduce the effect of the residuals of far-away points. How? Replace (distance)^2 with something that looks like (distance)^2 for small distances but is roughly constant for large distances.

M-estimators

Minimize $\sum_i \rho(r_i; \sigma)$ with

$\rho(r; \sigma) = \frac{r^2}{r^2 + \sigma^2}$

where $r_i$ is the residual (distance) for each point. This leads to a nonlinear optimization problem, solved by an iterative procedure from an initial solution; it can get stuck in a local minimum, depending on the initial guess. The choice of $\sigma$ is critical: too small, and the fit is insensitive to all points (it just sticks to the initial guess); too large, and it behaves like least squares (the outlier has a big contribution).

[Figures: M-estimator fits with the right σ, with σ too small (fit insensitive to all points, stuck at the initial guess), and with σ too large (similar to least squares; the outlier has a big contribution).]

Since the choice of $\sigma$ is critical, a practical strategy is to start with a big $\sigma$ (solution similar to least squares) and reduce $\sigma$ as you go.
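One common way to carry out this iterative minimization is iteratively reweighted least squares (IRLS); the sketch below uses the $\rho$ from the slides with the vertical-distance line model and reuses fit_line_ls from the earlier sketch. The weight formula (proportional to $\rho'(r)/r$) follows from differentiating $\rho$, but the exact annealing schedule for $\sigma$ is an assumption for illustration.

```python
# M-estimation by IRLS for y = a*x + b, with rho(r, sigma) = r^2/(r^2 + sigma^2)
# as on the slides. Differentiating rho gives IRLS weights
# w_i = sigma^2 / (r_i^2 + sigma^2)^2 (up to a constant), so outliers get tiny
# weights. Starting big and shrinking sigma is the slides' annealing strategy;
# the halving schedule and the floor sigma_min are assumptions.
import numpy as np

def fit_line_irls(xs, ys, sigma=2.0, iters=20, sigma_min=0.1):
    a, b = fit_line_ls(xs, ys)                  # start from plain least squares
    A = np.column_stack([xs, np.ones_like(xs)])
    for _ in range(iters):
        r = ys - (a * xs + b)                   # residuals under current fit
        w = sigma**2 / (r**2 + sigma**2)**2     # small weight for outliers
        sw = np.sqrt(w)
        (a, b), *_ = np.linalg.lstsq(sw[:, None] * A, sw * ys, rcond=None)
        sigma = max(0.5 * sigma, sigma_min)     # anneal: reduce sigma as you go
    return a, b
```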

RANSAC: RANdom SAmple Consensus

Search for a random sample that leads to a fit on which many of the data points agree. An extremely useful concept: it can fit models even if up to 50% of the points are outliers.

Repeat:
- Choose a subset of points randomly.
- Fit the model to this subset.
- See how many points agree with this model (how many points fit that model).
- Use only the points which agree to re-fit a better model.
Finally, choose the best fit.

RANSAC has four parameters:
- n: the smallest number of points required
- k: the number of iterations required
- t: the threshold used to identify a point that fits well
- d: the number of nearby points required

Until k iterations have occurred:
- Pick n sample points uniformly at random and fit to that set of n points.
- For each data point outside the sample, test its distance to the model; if the distance < t, the point is close.
- If there are d or more close points, this is a good fit; refit the line using all these points.
Use the best fit.
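A sketch of this loop for 2D line fitting, following the four parameters n, k, t, d; it reuses fit_line_tls from the total least-squares sketch so that distances are perpendicular, and it scores candidates by inlier count, which is one common choice rather than the only one.

```python
# RANSAC for a 2D line, following the slide's parameters:
#   n: points per sample, k: iterations, t: inlier distance threshold,
#   d: minimum number of close points for a fit to count as good.
import numpy as np

def ransac_line(xs, ys, n=2, k=100, t=0.1, d=10, seed=0):
    rng = np.random.default_rng(seed)
    best_fit, best_count = None, 0
    for _ in range(k):                              # until k iterations occur
        idx = rng.choice(len(xs), size=n, replace=False)
        a, b, c = fit_line_tls(xs[idx], ys[idx])    # fit the random sample
        dist = np.abs(a * xs + b * ys + c)          # perpendicular distances
        close = dist < t                            # points that agree
        if close.sum() >= d and close.sum() > best_count:
            best_fit = fit_line_tls(xs[close], ys[close])  # refit on inliers
            best_count = close.sum()
    return best_fit                                 # use the best fit
```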

Missing Variable Problems

In many vision problems, if some variables were known, the maximum likelihood inference problem would be easy:
- Fitting: if we knew which line each token came from, it would be easy to determine the line parameters.
- Segmentation: if we knew the segment each pixel came from, it would be easy to determine the segment parameters.
- Fundamental matrix estimation: if we knew which feature corresponded to which, it would be easy to determine the fundamental matrix.
- etc.
This sort of thing happens in statistics, too.

Consider line fitting. What is given? The point locations. What is missing? Which points belong to which line. If we knew the line assignment for each point, we could fit the lines (estimate the parameters using MLE). If we knew the parameters of the two lines, we could figure out the line assignment for each point. A chicken-and-egg problem.

Strategy: estimate appropriate values for the missing variables; plug these in and estimate the parameters; re-estimate the missing variables; continue. E.g., guess which line gets which point; now fit the lines; now reallocate points to lines using our knowledge of the lines; now refit; etc. We have seen this line of thought before (k-means).

EM Algorithm: Expectation-Maximization

Iterate until convergence:
- E-step (expectation): replace the missing variables with their expected values, given fixed values of the parameters.
- M-step (maximization): fix the missing variables and choose the parameters that maximize the likelihood given those values.
E.g., for line fitting, iterate until convergence: allocate each point to a line with a weight, which is the probability of the point given the line; then refit the lines to the weighted set of points. The line assignment of each point is treated as a missing (hidden) variable.

EM Algorithm: Mixture Model

Probability of point $x$ given line $l$: $p(x \mid \theta_l)$. Probability of point $x$ given all lines:

$p(x \mid \Theta) = \sum_l \alpha_l \, p(x \mid \theta_l)$

where $\alpha_l$ is the probability that point $x$ lies on line $l$ (the mixture weights). The whole parameter set is $\Theta = (\alpha_1, \ldots, \alpha_N, \theta_1, \ldots, \theta_N)$, and the likelihood for all points is

$p(X \mid \Theta) = \prod_i \sum_l \alpha_l \, p(x_i \mid \theta_l)$

Example: line fitting using EM. The solution is very sensitive to the initial guess (assignment): local minima.
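A minimal sketch of EM for a two-line mixture, assuming Gaussian vertical-distance residuals with a fixed, known sigma; the initialization, fixed iteration count, and the small epsilon for numerical safety are assumptions. As the slide notes, the result is sensitive to the initial guess.

```python
# EM for a mixture of two lines y = a_l*x + b_l with Gaussian residuals.
# E-step: responsibilities w[i, l] = p(line l | point i).
# M-step: weighted least-squares refit of each line, plus new mixture weights.
import numpy as np

def em_two_lines(xs, ys, sigma=0.5, iters=50):
    lines = np.array([[1.0, 0.0], [-1.0, 0.0]])   # initial (a, b) guesses
    alpha = np.array([0.5, 0.5])                  # mixture weights alpha_l
    A = np.column_stack([xs, np.ones_like(xs)])
    for _ in range(iters):
        # E-step: unnormalized posterior of each line for each point
        r = ys[:, None] - (xs[:, None] * lines[:, 0] + lines[:, 1])
        p = alpha * np.exp(-r**2 / (2 * sigma**2))
        w = p / (p.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: refit each line to the weighted points, update alphas
        for l in range(2):
            sw = np.sqrt(w[:, l])
            lines[l], *_ = np.linalg.lstsq(sw[:, None] * A, sw * ys, rcond=None)
        alpha = w.mean(axis=0)
    return lines, alpha
```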

Segmentation with EM

K = 2, 3, 4, 5. Figure from "Color and Texture Based Image Segmentation Using EM and Its Application to Content Based Image Retrieval", S. J. Belongie et al., Proc. Int. Conf. Computer Vision, 1998, (c) 1998 IEEE.

Sources

- Forsyth and Ponce, Computer Vision: A Modern Approach, chapters 15-16.
- R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd edition, Wiley, New York, 2000.
- Slides by D. Forsyth.