Exact solution, the Direct Linear Transform


Estimation: basic questions

We are going to be interested in solving estimation problems such as the following:

- 2D homography. Given a point set x_i in P^2 and corresponding points x_i' in P^2, find the homography H such that H(x_i) = x_i'.
- Camera projection. Given a point set X_i in P^3 and corresponding points x_i in P^2, find the mapping P^3 -> P^2.
- The fundamental matrix. Given a point set x_i in one image and corresponding points x_i' in a second image, find the fundamental matrix F between the images. The fundamental matrix is a singular 3x3 matrix F that satisfies x_i'^T F x_i = 0 for all i.

What is required for an exact, unique solution, i.e. how many corresponding points are needed? A homography H has 8 degrees of freedom. Each point pair gives 2 independent equations x_i' = H x_i. Thus we need at least 4 points for an exact solution.

How can we use more data to improve the solution? What is meant by "better"? We need to define a metric. Which metrics are simple to calculate? Which are theoretically best? How do we handle low-quality data, i.e. outliers?

Exact solution, the Direct Linear Transform

Study the problem of determining a homography H: P^2 -> P^2 from point correspondences x_i <-> x_i'. The transformation is given by x_i' = H x_i. Rewriting this gives x_i' x H x_i = 0, since x_i' and H x_i are parallel vectors in R^3.

Let h^{jT} be the j-th row of H and x_i' = (x_i', y_i', w_i')^T. Then we may write

  H x_i = ( h^{1T} x_i, h^{2T} x_i, h^{3T} x_i )^T

and

  x_i' x H x_i = ( y_i' h^{3T} x_i - w_i' h^{2T} x_i,
                   w_i' h^{1T} x_i - x_i' h^{3T} x_i,
                   x_i' h^{2T} x_i - y_i' h^{1T} x_i )^T = 0.

This equation is on the form A_i h = 0, where

  A_i = [     0^T       -w_i' x_i^T    y_i' x_i^T
          w_i' x_i^T        0^T       -x_i' x_i^T
         -y_i' x_i^T    x_i' x_i^T       0^T      ]

is a 3x9 matrix and h = (h^1, h^2, h^3)^T is a 9-vector with the row-wise elements of H.

The equation A_i h = 0 is linear in h. Each system A_i h = 0 has 2 linearly independent equations, i.e. one row can be removed. Removing the third row gives us

  [     0^T      -w_i' x_i^T    y_i' x_i^T
    w_i' x_i^T       0^T       -x_i' x_i^T ] h = 0,

i.e. Ā_i h = 0, where Ā_i is a 2x9 matrix. If any of the points x_i' is an ideal point, i.e. w_i' = 0, another row has to be removed instead. The equation is valid for all homogeneous representations (x_i', y_i', w_i') of x_i', e.g. for w_i' = 1.

Each point pair produces 2 equations in the elements of H. With 4 point pairs, the matrix A becomes 12x9 and the matrix Ā becomes 8x9. Both matrices have rank 8, i.e. they have a one-dimensional null space. The solution h can be determined from the null space of A.
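The construction above can be sketched in code. The following is a minimal sketch, not part of the lecture: it stacks the two independent rows of each A_i for four point pairs and reads off h from the null space via the SVD. The function name `dlt_homography` and the use of NumPy are my own choices.

```python
import numpy as np

def dlt_homography(x, xp):
    """Exact DLT: estimate H from point pairs x_i <-> x_i' with x_i' ~ H x_i.

    x, xp: (n, 3) arrays of homogeneous points. Stacks the two linearly
    independent rows of each A_i and takes h from the null space of A.
    """
    rows = []
    for xi, xpi in zip(x, xp):
        xs, ys, ws = xpi
        zero = np.zeros(3)
        # the two independent rows of the 3x9 block A_i (third row removed)
        rows.append(np.concatenate([zero, -ws * xi, ys * xi]))
        rows.append(np.concatenate([ws * xi, zero, -xs * xi]))
    A = np.array(rows)
    # the right singular vector of the smallest singular value spans the null space
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)   # h reshaped row-wise into H
```

With exact correspondences from four points in general position, the recovered H equals the true one up to scale and sign.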

DLT: over-determined solution (SVD)

If we have more than 4 point pairs, the equation A h = 0 becomes over-determined. Without errors in the points ("noise"), the rank of A will still be 8. With noise, the rank will be 9 and the only solution of A h = 0 is h = 0, i.e. H is undefined.

One solution to this problem is to add a constraint on h, e.g. ||h|| = 1. In that case the problem becomes

  min ||A h|| subject to ||h|| = 1.

Study the singular value decomposition (SVD) of A, A = U D V^T. The matrix D is diagonal and contains the non-negative singular values of A, sorted in descending order. The matrices U and V are orthogonal. The solution of the minimization problem is the right singular vector v_9 of V corresponding to the smallest singular value.

Example

[Two numerical examples followed: the stacked matrices A and the estimated homographies H for four point pairs, first with coordinates near the origin and then with coordinates of magnitude around 100-900.]
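The SVD characterization can be checked numerically. A small sketch of my own, assuming NumPy: for a full-rank ("noisy") A, the right singular vector of the smallest singular value minimizes ||A h|| over all unit vectors, and the attained minimum is that singular value.

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(12, 9))   # stand-in for a stacked, noisy DLT matrix

# min ||A h|| subject to ||h|| = 1: the right singular vector
# belonging to the smallest singular value
_, S, Vt = np.linalg.svd(A)
h = Vt[-1]
residual = np.linalg.norm(A @ h)
```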

Inhomogeneous solution

If we can fix one of the elements of h we can remove that element and solve for the 8 remaining. If we e.g. assume that h_9 = H_33 = 1, the point equations become

  [    0 0 0      -x_i w_i' -y_i w_i' -w_i w_i'    x_i y_i'  y_i y_i'        [ -w_i y_i'
    x_i w_i' y_i w_i' w_i w_i'      0 0 0         -x_i x_i' -y_i x_i' ] h̃ =    w_i x_i' ],

where h̃ contains the first 8 elements of h. With 4 point pairs we get an equation M h̃ = b, where M is 8x8, that can be solved exactly. With more than 4 point pairs we may solve min ||M h̃ - b|| with a least squares method. Observe that this method works poorly if the correct solution has H_33 = 0.

Solutions from lines and points

A line correspondence l_i <-> l_i' also gives 2 equations in the elements of H, so a similar problem may be formulated from e.g. 4 line pairs, or 2 point pairs and 2 line pairs.

Algebraic distance

The DLT algorithm minimizes ||ε|| = ||A h||. Each point pair contributes with an error vector ε_i that is called the algebraic error vector associated with the point pair x_i <-> x_i' and the homography H. The norm of ε_i is called the algebraic distance d_alg and is

  d_alg(x_i', H x_i)^2 = ||ε_i||^2 = || [     0^T      -w_i' x_i^T    y_i' x_i^T
                                          w_i' x_i^T       0^T       -x_i' x_i^T ] h ||^2.

In general, d_alg for two vectors x_1 and x_2 is defined as

  d_alg(x_1, x_2)^2 = a_1^2 + a_2^2, where a = (a_1, a_2, a_3)^T = x_1 x x_2.

Given a set of point correspondences, the total error becomes

  ||A h||^2 = Σ_i ||ε_i||^2 = Σ_i d_alg(x_i', H x_i)^2.

The algebraic distance is easy to minimize, but is difficult to interpret geometrically. Furthermore it is transformation dependent, and calculations based on the algebraic distance should be normalized.

Normalized DLT for 2D homographies

Given n point pairs {x_i <-> x_i'}, determine the homography H such that x_i' = H x_i:

1. Determine the similarity transformation T such that the points {x̃_i = T x_i} have a center of gravity at the origin and a mean distance of sqrt(2) to the origin.
2. Determine the similarity transformation T' such that the points {x̃_i' = T' x_i'} have a center of gravity at the origin and a mean distance of sqrt(2) to the origin.
3. Determine the homography H̃ for the point correspondences {x̃_i <-> x̃_i'}.
4. Re-normalize such that H = T'^{-1} H̃ T.
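Step 1 of the normalized DLT (a similarity taking the points to centroid zero and mean distance sqrt(2)) can be written down directly. A sketch under my own naming, assuming NumPy and Cartesian input points:

```python
import numpy as np

def normalizing_similarity(pts):
    """Similarity T (3x3, homogeneous) such that the transformed points have
    their center of gravity at the origin and mean distance sqrt(2) to it.

    pts: (n, 2) array of Cartesian points.
    """
    c = pts.mean(axis=0)                               # center of gravity
    mean_dist = np.linalg.norm(pts - c, axis=1).mean() # mean distance to it
    s = np.sqrt(2.0) / mean_dist                       # isotropic scale
    return np.array([[s, 0.0, -s * c[0]],
                     [0.0, s, -s * c[1]],
                     [0.0, 0.0, 1.0]])
```

In use, one computes T and T' from the two point sets, runs the DLT on the transformed points, and undoes the normalization with H = T'^{-1} H̃ T.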

Perturbation sensitivity

Assume we want to use a grid pattern (x, y) in [100, 900]^2 as a reference coordinate system for the transformation between two images. Assume the images are approximately the same, i.e. H ≈ I. How much will small measurement errors affect the estimation of another point p?

[Figure: result of 100 Monte Carlo simulations where H was determined from point correspondences {x_i <-> x_i'} perturbed with white noise of standard deviation σ = 0.1 pixels. Three panels: the setup, the distribution of H p for unnormalized estimation of H, and the distribution of H p for normalized estimation of H.]

Geometric distance

We will now study a few error measures based on the geometric distance between measured and estimated point coordinates. Use the notation x for a measured coordinate, x̂ for an estimated coordinate, and x̄ for the true coordinate of a point. An estimated homography is denoted Ĥ.

Errors in one image only

If we have errors in one image only, an appropriate error measure is the Euclidean distance between the measured points x_i' and the transformed exact points H x̄_i. This is called the transfer error

  Σ_i d(x_i', H x̄_i)^2,

where d(x, y) is the Euclidean distance between the Cartesian points represented by x and y.

Errors in both images

If we have measurement errors in both images we need to take both errors into account. One solution is to sum the geometric errors from the forward transformation H and the backward transformation H^{-1}. This is called the symmetric transfer error

  Σ_i d(x_i, H^{-1} x_i')^2 + d(x_i', H x_i)^2.

An alternative solution is to require a perfect matching and sum the errors in both images. This is called the reprojection error

  Σ_i d(x_i, x̂_i)^2 + d(x_i', x̂_i')^2  subject to  x̂_i' = Ĥ x̂_i for all i.
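The symmetric transfer error is straightforward to evaluate. A sketch of my own helper, assuming NumPy, Cartesian point arrays, and an invertible H:

```python
import numpy as np

def transfer_errors(H, x, xp):
    """Per-pair symmetric transfer error d(x_i, H^-1 x_i')^2 + d(x_i', H x_i)^2.

    x, xp: (n, 2) arrays of Cartesian points; H: invertible 3x3 homography.
    """
    def apply(M, pts):
        # lift to homogeneous coordinates, map, and de-homogenize
        ph = np.column_stack([pts, np.ones(len(pts))]) @ M.T
        return ph[:, :2] / ph[:, 2:3]
    fwd = np.sum((xp - apply(H, x)) ** 2, axis=1)                # d(x', H x)^2
    bwd = np.sum((x - apply(np.linalg.inv(H), xp)) ** 2, axis=1) # d(x, H^-1 x')^2
    return fwd + bwd
```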

Statistical error

If we assume the measurement error is Gaussian distributed with variance σ^2, we may describe the measured coordinates as x = x̄ + Δx, where the error Δx is normally distributed with variance σ^2. Furthermore, if we assume the errors are independent, the probability density function (pdf) for a point measurement x given the true point x̄ is

  Pr(x) = (1 / (2πσ^2)) e^{-d(x, x̄)^2 / (2σ^2)}.

Maximum likelihood estimates

In the case of errors in one image only we are interested in the probability of observing the correspondences {x̄_i <-> x_i'}. If the observations are independent, the pdf becomes

  Pr({x_i'} | H) = Π_i (1 / (2πσ^2)) e^{-d(x_i', H x̄_i)^2 / (2σ^2)},

i.e. the probability that we will observe {x_i'} given that H is the true homography. If we take the logarithm we get the log-likelihood function

  log Pr({x_i'} | H) = -(1 / (2σ^2)) Σ_i d(x_i', H x̄_i)^2 + c,

where c is a constant. The maximum likelihood estimate (MLE) of the homography, Ĥ, maximizes the log-likelihood function and thus minimizes Σ_i d(x_i', H x̄_i)^2, i.e. the geometric transfer error.

For errors in both images we get the pdf for the true correspondences {x̄_i <-> x̄_i' = H x̄_i} as

  Pr({x_i, x_i'} | H, {x̄_i}) = Π_i (1 / (2πσ^2)) e^{-(d(x_i, x̄_i)^2 + d(x_i', H x̄_i)^2) / (2σ^2)},

whose MLE consists of both a homography Ĥ and point correspondences {x̂_i <-> x̂_i'} and minimizes

  Σ_i d(x_i, x̂_i)^2 + d(x_i', x̂_i')^2, where x̂_i' = Ĥ x̂_i,

i.e. the reprojection error.

Mahalanobis distance

If we know the covariance matrix Σ of our observations we get the MLE by minimizing the Mahalanobis distance

  ||x - x̄||_Σ^2 = (x - x̄)^T Σ^{-1} (x - x̄).

If the errors in both images are independent, the corresponding error measure becomes

  ||x - x̄||_Σ^2 + ||x' - x̄'||_Σ'^2,

where Σ and Σ' are the covariance matrices for the measurements in the two images. A special case is if the measurements are independent but with different variance. Then the covariance matrix Σ becomes diagonal.

Iterative minimization

To minimize a geometric distance an iterative method is often needed. If an inhomogeneous formulation is possible an unconstrained algorithm may be used, e.g. Gauss-Newton. Otherwise a constrained algorithm, e.g. SQP, is the best choice.

For the transfer error the vector of unknowns is h and the objective function becomes Σ_i d(x_i', H x̄_i)^2, i.e. the residual function is

  r(h) = ( r_1(h), ..., r_n(h) )^T, where r_i(h) = [ x_i' - (h^{1T} x̄_i) / (h^{3T} x̄_i)
                                                     y_i' - (h^{2T} x̄_i) / (h^{3T} x̄_i) ].

For a homogeneous formulation a normalization constraint on h is necessary, e.g.

  h_1^2 + ... + h_9^2 = h^T h = 1.
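The Mahalanobis distance reduces to the scaled Euclidean distance for isotropic noise, which is easy to verify. A sketch (my own helper name), assuming NumPy:

```python
import numpy as np

def mahalanobis2(x, xbar, Sigma):
    """Squared Mahalanobis distance (x - xbar)^T Sigma^{-1} (x - xbar).

    Uses a linear solve instead of forming the explicit inverse.
    """
    d = np.asarray(x, dtype=float) - np.asarray(xbar, dtype=float)
    return float(d @ np.linalg.solve(Sigma, d))
```

With Sigma = σ^2 I this is ||x - x̄||^2 / σ^2, recovering the exponent of the Gaussian pdf above up to the factor 1/2.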

Iterative minimization (cont.)

For the reprojection error we have to estimate x̂_i and x̂_i' in addition to h. The components of the constraint x̂_i' = H x̂_i have to be normalized; for instance, the implicit constraint ŵ_i = ŵ_i' = 1 may be used together with x̂_i' x H x̂_i = 0. Then the residual function contains the differences between measured and estimated coordinates in both images,

  r_i = ( x_i - x̂_i, y_i - ŷ_i, x_i' - x̂_i', y_i' - ŷ_i' )^T,

with constraints x̂_i' x H x̂_i = 0 for all i.

Robust estimation

How do we handle observations with large errors (outliers)? One way is to use the Random Sample Consensus (RANSAC) algorithm. Given a model and a data set S containing outliers:

1. Pick randomly s data points from the set S and calculate the model from these points. For a line, pick s = 2 points.
2. Determine the consensus set S_i of the sample, i.e. the set of points being within t units from the model. The set S_i defines the inliers of S.
3. If the number of inliers is larger than a threshold T, recalculate the model based on all points in S_i and terminate.
4. Otherwise, repeat with a new random subset.
5. After N tries, choose the largest consensus set S_i, recalculate the model based on all points in S_i and terminate.

[Figure: a line-fitting example with sample points a-d and their consensus sets A-D.]

How to choose the distance limit t?

If we assume the distance d from the model is normally distributed with standard deviation σ, the limit t may be chosen as t^2 = F_m^{-1}(α) σ^2, where F_m is the cumulative distribution function of the χ^2 distribution with m degrees of freedom. Such a measurement satisfies d^2 < t^2 with probability α. A few examples for α = 0.95:

  Degrees of freedom | Model                     | t^2
  1                  | line, fundamental matrix  | 3.84 σ^2
  2                  | homography, camera matrix | 5.99 σ^2
  3                  | trifocal tensor           | 7.81 σ^2

How many samples N?

The number of samples N should be chosen such that the probability of having picked at least one sample without outliers is p. Assume w is the probability for an inlier, i.e. ε = 1 - w is the probability for an outlier. Then we need at least N samples of s points each, where

  (1 - w^s)^N = 1 - p  <=>  N = log(1 - p) / log(1 - (1 - ε)^s).

How to choose an acceptable size T of the consensus set?

A rule of thumb is to terminate if the size of the consensus set is equal to the number of expected inliers in the set, i.e. for n points

  T = (1 - ε) n.
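The steps above can be sketched for the line case (s = 2). This is my own minimal implementation, assuming NumPy; for simplicity it always runs N trials and refits the largest consensus set by total least squares, rather than terminating early at the threshold T.

```python
import numpy as np

def ransac_line(pts, t, N, rng):
    """Basic RANSAC for a 2D line, sampling s = 2 points per trial.

    Returns (line, inliers): line = (a, b, c) with a*x + b*y + c = 0 and
    a^2 + b^2 = 1, refit by total least squares to the largest consensus set.
    """
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(N):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        normal = np.array([q[1] - p[1], p[0] - q[0]])  # perpendicular to q - p
        nrm = np.linalg.norm(normal)
        if nrm == 0.0:                                 # degenerate sample, skip
            continue
        normal /= nrm
        dist = np.abs((pts - p) @ normal)              # point-to-line distances
        inliers = dist < t                             # consensus set of the sample
        if inliers.sum() > best.sum():
            best = inliers
    sel = pts[best]                                    # refit on the winning set
    c = sel.mean(axis=0)
    _, _, Vt = np.linalg.svd(sel - c)
    n2 = Vt[-1]                                        # direction of least variance
    return np.concatenate([n2, [-n2 @ c]]), best
```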

Adaptive RANSAC

It is possible to estimate T and N dynamically. Given a point set of n points:

1. Let N = infinity, sample_count = 0.
2. Repeat while sample_count < N:
   - Pick a subset of s elements and count the number of inliers k.
   - Let ε = 1 - k/n.
   - Let N = log(1 - p) / log(1 - (1 - ε)^s) for e.g. p = 0.99.
   - Let sample_count = sample_count + 1.

This is called the adaptive RANSAC algorithm.
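The N update can be checked with a couple of round numbers. A sketch of the update formula (the function name is my own), using only the standard library:

```python
import math

def required_samples(k, n, s, p=0.99):
    """Adaptive RANSAC update of N.

    With k inliers among n points, eps = 1 - k/n is the estimated outlier
    ratio, and N = log(1 - p) / log(1 - (1 - eps)**s) samples of s points
    give probability p of at least one outlier-free sample.
    """
    w = k / n                      # estimated inlier probability, 1 - eps
    return math.log(1.0 - p) / math.log(1.0 - w ** s)
```

For example, with half the points being inliers (w = 0.5) and s = 2, about 16 samples suffice at p = 0.99; with s = 4 the count grows to about 71, which is why small sample sizes s are preferred.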