Deterministic Hypothesis Generation for Robust Fitting of Multiple Structures

Kwang Hee Lee, Chanki Yu, and Sang Wook Lee, Member, IEEE

Abstract: We present a novel algorithm for generating robust and consistent hypotheses for multiple-structure model fitting. Most of the existing methods utilize random sampling, which produces varying results, especially when the outlier ratio is high. When multiple structures are present, the inliers of the other structures are regarded as outliers for the structure to which a model is being fitted. Global optimization has recently been investigated to provide stable and unique solutions, but the computational cost of such algorithms is prohibitively high for most image data of reasonable size. The algorithm presented in this paper uses a maximum feasible subsystem (MaxFS) algorithm to generate consistent initial hypotheses only from partial datasets in spatially overlapping local image regions. Our assumption is that each genuine structure will exist as a dominant structure in at least one of the local regions. To refine the initial hypotheses estimated from partial datasets and to remove the residual-tolerance dependency of the MaxFS algorithm, iterative re-weighted L1 (IRL1) minimization is performed over all the image data. The initial weights of the IRL1 framework are determined from the initial hypotheses generated in the local regions. Our approach is significantly more efficient than those that apply global optimization to all the image data. Experimental results demonstrate that the presented method can generate more reliable and consistent hypotheses than random-sampling methods for estimating single and multiple structures from data with a large amount of outliers. We clearly expose the influence of the algorithm parameter settings on the results in our experiments.

Index Terms: Fitting of multiple structures, hypothesis generation, maximum feasible subsystem (MaxFS), iterative re-weighted L1 (IRL1) minimization

1 INTRODUCTION

In many computer vision problems, observations or measurements are frequently contaminated with outliers and noise, and thus robust estimation is needed for model fitting. The hypothesize-and-verify framework is the core of many robust geometric fitting methods. The Random Sample Consensus (RANSAC) algorithm [24] is a widely used robust estimation technique, and most of the state-of-the-art methods are based on random sampling. They comprise two steps: (1) random hypothesis generation and (2) verification. These steps are performed iteratively. Many hypotheses of a geometric model are randomly generated from minimal subsets of the data; the goal of random sampling is to generate many putative hypotheses for the given geometric model. In the verification step, the hypotheses are evaluated according to a robustness criterion to find the best model. Random sampling-based methods have some drawbacks. In the majority of random sampling-based methods, the number of iterations needed to guarantee a desired confidence is determined by a priori knowledge such as the inlier ratio and inlier scale. For single-structure data with an unknown inlier ratio, it is crucial to determine an adequate number of iterations: the more heavily contaminated the data are, the lower the probability of hitting an all-inlier subset. However, since the true inlier ratio is a priori unknown in many practical situations, it has to be specified by the user. The standard stopping criterion in RANSAC is based on the assumption that a model generated from an uncontaminated minimal subset is consistent with all inliers. In practice, however, this assumption may be violated, either increasing the runtime or yielding incorrect solutions, since the inliers themselves are perturbed by noise [18].
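As a point of reference for the iteration-count issue discussed above, the standard single-structure stopping criterion can be written as a one-line calculation. The Python sketch below is purely illustrative: it is the textbook RANSAC trial-count bound, not a component of the presented method, and the function name and defaults are ours (p is the desired confidence, w the assumed inlier ratio, m the minimal sample size).

import math

def ransac_trials(p=0.99, w=0.5, m=4):
    # N >= log(1 - p) / log(1 - w^m): the number of random minimal samples needed
    # so that at least one of them is all-inlier with probability p.
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - w ** m))

# Example: a homography (m = 4) with a 30% inlier ratio already requires 567 samples,
# and the bound cannot be evaluated at all when the inlier ratio w is unknown.
print(ransac_trials(p=0.99, w=0.3, m=4))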
Furthermore, the existence of multiple structures makes the problem more difficult, since the inliers belonging to the other structures are regarded as outliers (pseudo-outliers). To the best of our knowledge, there is no stopping criterion that guarantees consistent and reliable results for multi-structure data. When the number of iterations is insufficient, random sampling-based techniques produce varying results for the same data and parameter settings. Despite their robustness, the random sampling-based methods provide no guarantee of consistency in their solutions due to their randomized nature [2]. Since many extensions of random sampling-based frameworks also follow the same heuristic of random sampling, none of them can guarantee deterministic solutions [2]. Various approaches to improving the efficiency of random hypothesis generation for the estimation of a single structure have been investigated [18, 29, 21, 22, 19, 20, 27]. They have been developed to increase the frequency of hitting all-inlier samples. Unfortunately, these methods are limited to the single-structure problem. In recent years, to deal with multiple-structure data, some guided sampling methods [15, 16, 23] have been proposed. To perform guided sampling, a series of tentative hypotheses is first generated from minimal subsets of the data; guided sampling based on preference analysis is then performed. The quality of the initial hypotheses depends on the proportion of pseudo-outliers and gross outliers. Furthermore, since [15, 16, 23] have no clear stopping criterion, it is difficult to determine the optimal number of iterations needed. Although the methods mentioned above improve the probability of hitting an all-inlier subset in some ways, consistent performance cannot be guaranteed for data with an unknown inlier ratio or an unknown number of structures.

Many multiple-structure model fitting methods also start with random hypothesis generation [9, 10, 11, 13, 12, 14, 17, 18]. Some of these algorithms classify the dataset based on randomly generated hypotheses and then find the model parameters [9, 10, 11]. Due to the same nature of random sampling, however, varying results may be produced from the same dataset.

Global optimization has recently been actively investigated for model fitting problems in computer vision [6, 2, 3, 25, 26]. Li developed a global optimization method for the algebraic DLT (Direct Linear Transformation) problem with fixed bounded variables [2]. He suggested an exact bilinear MIP (Mixed Integer Programming) formulation and obtained globally optimal solutions using an LP (Linear Programming)-based BnB (Branch-and-Bound) algorithm. In [3], Yu et al. directly solved a Big-M based MILP problem. While these methods guarantee a globally optimal solution, a high computational cost is required in general. For a large dataset, the global optimization methods require a great deal more running time than RANSAC. Furthermore, the presence of image features from multiple structures makes their computational cost even higher.

In this paper, we present a novel approach to reliable and consistent hypothesis generation for multiple-structure model fitting. Unlike previous random sampling methods, our method generates hypotheses using deterministic optimization techniques, and thus produces consistent results for a given set of images. The whole image dataset is split into spatially overlapping circular regions of subsets, and a maximum feasible subsystem (MaxFS) problem is solved to generate a consistent initial hypothesis in each region. Because of the reduction of data size achieved by using local regions, the MaxFS algorithm can generate the initial hypothesis from each local region with reasonable efficiency. Since the MaxFS algorithm yields a globally optimal solution only for the image subset in a spatially local region, and since the result is influenced by a residual tolerance value, an iterative re-weighted L1 (IRL1) minimization is carried out using all the data in the image to compensate for fitting errors from the subset and to remove the residual-tolerance dependency. Our method is developed to find multiple structures under the assumption that a good hypothesis for each genuine structure will be found in at least one of the spatially local regions.

It may be noted that the use of spatial restriction for hypothesis generation is not unprecedented. There have been approaches that use spatial coherence in local regions for estimating single and multiple structures/motions [27, 28, 12]. They are based on random sampling and exploit spatial coherence mainly to increase the chance of finding all-inlier samples. Those methods therefore retain the limitations of random sampling when the inlier ratio is unavailable. Our algorithm uses spatially overlapping local regions to generate the initial hypotheses efficiently and to search exhaustively for all genuine structures. It provides consistent hypotheses regardless of the inlier ratio, the noise level and the number of structures.

Recently, in our previous work [36], a deterministic fitting method for multi-structure data was proposed. To reduce the high computational cost, the MaxFS algorithm is performed on small subsets, as in the method presented in this paper. However, while the initial subset is obtained by sorting keypoint matching scores in [36], the presented method does not require application-specific knowledge. In [36], there are dependencies between the generated hypotheses, since a sequential fitting-and-removing procedure is used; therefore, it is impossible to generate hypotheses in parallel.
On the other hand, the presented method can be immediately parallelized since there is no dependency between the generated hypotheses.

The rest of the paper is organized as follows: Section 2 introduces two deterministic methods for geometric fitting. Section 3 describes our algorithm based on the MaxFS and IRL1 frameworks. Section 4 shows the experimental results on synthetic and real data, and we conclude in Section 5.

2 DETERMINISTIC METHODS FOR ROBUST GEOMETRIC FITTING

In this section, we briefly describe the two main optimization techniques that we employ in our method.

2.1 Maximum Feasible Subsystem (MaxFS) Problem

The aim of a MaxFS algorithm is to find the largest-cardinality set of constraints that is feasible [4, 2, 3]. In other words, it finds a feasible subsystem containing the largest number of inequalities of an infeasible linear system Ax <= b with real matrix A in R^{k x n}, real vector b in R^k and variable x in R^n. The objectives of MaxFS and RANSAC are the same; however, unlike RANSAC, MaxFS guarantees a global solution. The MaxFS problem admits the following mixed integer linear programming (MILP) formulation, obtained by introducing a binary variable y_i for each of the inequalities:

\min_{x, y} \sum_{i=1}^{k} y_i
subject to \sum_{j=1}^{n} a_{ij} x_j \le b_i + M y_i, \quad x \in R^n, \; y_i \in \{0, 1\}, \; i = 1, \dots, k,   (1)

where M is a large positive number that converts an infeasible inequality into a feasible one when y_i = 1. The case y_i = 0 indicates that the inequality is feasible. Note that if y_i = 1, the i-th constraint is automatically deactivated. This MILP formulation is known as the Big-M method [1]. Generally, MILP problems are solved using LP-based BnB (Linear Programming-based Branch and Bound) or LP-based BnC (LP-based Branch and Cut) methods. LP-based BnB and BnC guarantee global optimality of their solutions [3, 30]. The MILP problem is expensive in terms of computational cost and is known to be NP-hard.
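The Big-M MILP of Eq. (1) can be set up directly with an off-the-shelf MILP solver. The sketch below is our own illustration, not the authors' implementation (the paper uses the GUROBI solver); it assumes SciPy's milp interface and a generic infeasible system A x <= b, and the function and variable names are ours.

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def maxfs_big_m(A, b, M=1e4):
    """Maximum feasible subsystem of A x <= b via the Big-M MILP of Eq. (1)."""
    k, n = A.shape
    # decision vector z = [x (n continuous); y (k binary)], objective = sum of y_i
    cost = np.concatenate([np.zeros(n), np.ones(k)])
    # A x - M y <= b : setting y_i = 1 deactivates the i-th inequality
    cons = LinearConstraint(np.hstack([A, -M * np.eye(k)]), -np.inf, b)
    integrality = np.concatenate([np.zeros(n), np.ones(k)])
    bounds = Bounds(np.concatenate([np.full(n, -np.inf), np.zeros(k)]),
                    np.concatenate([np.full(n, np.inf), np.ones(k)]))
    res = milp(c=cost, constraints=cons, integrality=integrality, bounds=bounds)
    x = res.x[:n]
    feasible = res.x[n:] < 0.5     # y_i = 0 marks the retained (feasible) constraints
    return x, feasible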

Fig. 1. Overview of our approach. (a) Overlapping circular regions for hypothesis generation. (b) Detection of dominant structures in local regions. (c) IRL1 minimization for refining the initial hypotheses.

Thus, only relatively small problems can be solved practically. While the exact Big-M MILP formulation is useful for small models, it is not effective on larger models for reasons of computational inefficiency [1]. To solve the geometric fitting problem on heavily contaminated multi-structure data, MaxFS demands a much higher computational cost than RANSAC-style methods.

2.2 Iterative Re-weighted L1 (IRL1) Minimization

The most common deterministic method for geometric fitting is least-squares estimation. The best fit minimizes the sum of the squared residuals (L2-norm). This method is optimal for Gaussian noise, but it mostly fails in the presence of outliers. Using the sum of the absolute residuals (L1-norm) gives relatively better results than the L2-norm in the presence of outliers, since L1 minimization puts less weight on large residuals than L2 minimization. Nevertheless, L1 minimization still cannot guarantee robustness in the case of severely contaminated data or multiple-structure data with outliers, because its influence function has no cut-off point. Although these methods always have a global solution, reliable output cannot be guaranteed in the presence of severe outliers.

Iterative re-weighted L1 (IRL1) minimization was presented by Candès, Wakin and Boyd [31]. The IRL1 algorithm solves a sequence of weighted L1-norm problems in which the weights are determined according to the estimated coefficient magnitudes. The IRL1 minimization procedure is summarized in Algorithm 1, where W^t is a diagonal weighting matrix at the t-th iteration with i-th diagonal element w_i^t, and a is a stability parameter that affects the stability of the algorithm. In [31], experimental results show that it often outperforms standard L1 minimization. Although each iteration of the algorithm solves a convex optimization problem, the overall algorithm does not; one therefore cannot expect it to always find a global minimum. Consequently, it is important to determine a good starting point for the algorithm. In [31], initializing with the solution of standard L1 minimization was introduced. However, this is unsuitable for problems with heavily contaminated multi-structure data.

Algorithm 1. IRL1 algorithm [31]
Input: y = Ax
Output: x
1: Initialize: set the weights w_i^(0) = 1 for i = 1, ..., d
2: Repeat
3:   t = t + 1
4:   x^t = argmin_x || W^t x ||_1  subject to  y = A x
5:   w_i^(t+1) = 1 / (|x_i^t| + a)
6: Until convergence or a fixed number of iterations
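To make Algorithm 1 concrete, the sketch below shows one way it can be implemented, with each weighted L1 problem recast as a linear program. This is an assumed illustration using SciPy's linprog, not code from [31] or from the paper, and the function name and defaults are ours.

import numpy as np
from scipy.optimize import linprog

def irl1(A, y, a=1e-3, n_iter=5):
    """Algorithm 1: min ||W x||_1 s.t. y = A x, with w_i updated to 1/(|x_i| + a)."""
    m, d = A.shape
    w = np.ones(d)                                   # step 1: w_i^(0) = 1
    x = np.zeros(d)
    for _ in range(n_iter):                          # steps 2-6
        # variables z = [x; u]; minimize sum_i w_i u_i  s.t.  -u <= x <= u,  A x = y
        cost = np.concatenate([np.zeros(d), w])
        A_ub = np.block([[ np.eye(d), -np.eye(d)],
                         [-np.eye(d), -np.eye(d)]])
        A_eq = np.hstack([A, np.zeros((m, d))])
        res = linprog(cost, A_ub=A_ub, b_ub=np.zeros(2 * d), A_eq=A_eq, b_eq=y,
                      bounds=[(None, None)] * d + [(0, None)] * d)
        x = res.x[:d]                                # step 4: weighted L1 solution
        w = 1.0 / (np.abs(x) + a)                    # step 5: weight update
    return x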

3 HYPOTHESIS GENERATION USING DETERMINISTIC OPTIMIZATION TECHNIQUES

We present a deterministic algorithm to generate reliable and consistent hypotheses for robust fitting of multiple structures. The whole data space is split into densely overlapping circular regions (see Fig. 1). In each region, an initial hypothesis is generated from the maximum set of inliers. However, the initial hypothesis may be slightly biased even if it is generated from pure inliers, since the estimated inliers come only from the local region. To estimate the best fit for each genuine structure from the initial hypothesis, IRL1 minimization is performed over all of the data in the image (see Fig. 1(c)). Our method is summarized in Algorithm 2.

Algorithm 2. MaxFS-IRL1 algorithm
Input: input data D, the number of data points in each subset k, the fractional constant alpha, the Big-M value M, the residual tolerance epsilon, the variance of the weight function sigma
Output: hypothesis set {Theta*_j}, j = 1, ..., l
1: Split D into l subsets D_j
2: For j = 1 to l
3:   Estimate the initial parameters Theta_hat^MaxFS using MaxFS, Eq. (6) (from D_j)
4:   Initialize the weights from Theta_hat^MaxFS, Eq. (7)
5:   Refine the parameters Theta*_j using IRL1 minimization (from D):
6:   Repeat
7:     Solve the weighted L1 minimization problem, Eq. (8)
8:     Update the weights, Eq. (9)
9:   Until convergence or a fixed number of iterations
10: End

3.1 Determination of Spatially Overlapping Circular Regions

Our algorithm splits the whole image region into many overlapping small circular regions. The input data D = {x_i}, i = 1, ..., N, is the union of l subsets,

D = \bigcup_{j=1}^{l} D_j,   (2)

where D_j is the set of data in circular region j. Neighboring subsets D_m and D_n share common data in their overlapping region (D_m \cap D_n \ne \emptyset). It is assumed that every structure in the image appears as a dominant structure in at least one of the local regions, and thus it suffices for our algorithm to find only one dominant structure per circular region. The remaining structures are missed when the dominant structure is initially detected, yet they can be found in the other local regions. Figure 1(b) shows an example where an undetected structure in one window (Fig. 1(b), middle) becomes a dominant structure in a neighboring window (Fig. 1(b), bottom).

The circles' sizes and positions should be determined depending on the number of data points to be included in the circular regions. The number of data points k that a circle covers should be larger than the minimum number required for fitting the desired model during hypothesis generation. If k becomes larger, the result is more reliable but the computational cost is higher. The smaller the interval between the circles, the slimmer the chance of missing small structures. To maintain even performance over the regions, the circular windows include approximately the same number of data points; thus, the computational cost for initial hypothesis generation is about the same in each region.

Fig. 2. (a) Center x_c^j and radius R_j of circular region j and step size s. (b) Input image and data points. (c) Data density map estimated with the 2D KDE method. (d) Estimation of r_maxdensity. (e) Estimation of s and x_c^j. (f) Spatially overlapping circular regions.

Our algorithm places the centers of the circular windows (the x_c^j's) at a regular interval s in the horizontal and vertical directions of the image, as illustrated in Fig. 2, but the radii of the circles (the R_j's) vary to keep the number of data points approximately the same. Once k is set, the smallest circle that contains k data points appears where the data density is highest, and we take its radius r_maxdensity as a reference for determining the step size s and the radius R_j of each individual circle. We compute the density of the data using the 2D Kernel Density Estimate (KDE) described in [7]; Figure 2(c) shows the data density map for the image data points shown in Fig. 2(b). In Fig. 2(d), the red circle represents the highest-density region with k nearest neighbors, and the dotted red line indicates r_maxdensity. The step size s is determined as

s = r_maxdensity \cdot \alpha,   (3)

where alpha is a fractional constant. The subset D_j around the circle center x_c^j is defined as

D_j = \{ x_i \in D : \| x_i - x_c^j \| \le R_j \}, \quad R_j = \min(r_j, 2 r_maxdensity),   (4)

where r_j is the minimum radius of the circle centered at x_c^j that contains the k nearest neighbors.
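The region construction of Section 3.1 can be sketched as follows. This is a rough illustration under our own assumptions: SciPy's Gaussian KDE stands in for the diffusion KDE of [7], the grid extent is taken from the data, and the rule for skipping nearly empty windows is our own heuristic rather than part of the paper.

import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import gaussian_kde

def circular_regions(pts, k=30, alpha=1.0):
    """pts: (N, 2) data points. Returns a list of (center, radius, member indices)."""
    tree = cKDTree(pts)
    knn_dist, _ = tree.query(pts, k=k)               # k-NN ball radius around every point
    density = gaussian_kde(pts.T)(pts.T)             # stand-in for the 2D KDE of [7]
    r_maxdensity = knn_dist[np.argmax(density), -1]  # smallest circle holding k points
    s = alpha * r_maxdensity                         # Eq. (3): grid step size
    (xmin, ymin), (xmax, ymax) = pts.min(axis=0), pts.max(axis=0)
    regions = []
    for cx in np.arange(xmin, xmax + s, s):
        for cy in np.arange(ymin, ymax + s, s):
            c = np.array([cx, cy])
            r_j = tree.query(c, k=k)[0][-1]          # radius of the k-NN ball at this center
            R_j = min(r_j, 2.0 * r_maxdensity)       # Eq. (4): cap the radius
            members = tree.query_ball_point(c, R_j)
            if len(members) >= k // 2:               # assumed heuristic: drop nearly empty windows
                regions.append((c, R_j, members))
    return regions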

Figure 2(e) shows the center points of the circular regions (blue crosses), and an example of spatially overlapping circular regions (green circles) is shown in Fig. 2(f). Since r_maxdensity is the shortest possible radius, r_j is always longer than r_maxdensity and thus substantial overlap between adjacent circles is guaranteed. The constant alpha controls the extra degree of overlap. The maximum R_j is set to 2 r_maxdensity, since r_j becomes meaninglessly long where the data density is very low. The number of data points k and the constant alpha are the most important parameters of our algorithm, and we experimentally investigate the influence of their settings on the results in Section 4.

3.2 Initial Hypothesis Generation Using the MaxFS Algorithm

We determine the maximum set of feasible inliers in each local region, and an initial hypothesis is generated from it. A simple way of deterministically solving this problem would be to perform an exhaustive search, which is intractable due to the combinatorial explosion of subsets. Random-sampling methods, on the other hand, cannot guarantee consistent results due to their randomized nature, and their stopping criteria depend on prior knowledge such as the inlier ratio. Unlike random-sampling methods, the MaxFS algorithm guarantees the maximum set of feasible inliers even though the inlier ratio is unknown. Moreover, by splitting the problem into many small parts, the MaxFS algorithm can be performed quickly and efficiently.

We use the algebraic Direct Linear Transformation (DLT) to estimate the hypothesis parameters [5]. We can then formulate the DLT-based geometric fitting problem as a MaxFS problem. Each subset D_j is partitioned into the inlier set D_j^I and the outlier set D_j^O, with D_j^I \subseteq D_j, D_j^O \subseteq D_j, D_j^I \cup D_j^O = D_j and D_j^I \cap D_j^O = \emptyset. A maximum residual tolerance epsilon > 0 provides a bound for the algebraic residual d_i = |a_i^T \Theta| at point i, where a_i^T is a row vector of A in the homogeneous equation A\Theta = 0:

d_i = | a_i^T \Theta | \le \epsilon, \quad \epsilon > 0.   (5)

The MaxFS formulation of Eq. (5) is as follows:

\{\hat{\Theta}^{MaxFS}, \hat{y}\} = \arg\min_{\Theta, y} \sum_{i=1}^{k} y_i
subject to | a_i^T \Theta | \le \epsilon + M y_i, \quad c^T \Theta = 1, \quad \Theta \in R^n, \; y_i \in \{0, 1\}, \; i = 1, \dots, k,   (6)

where M is a large positive number (called the Big-M value). The case y_i = 0 indicates that the i-th datum is an inlier. If y_i = 1, the datum is an outlier and the corresponding constraint is deactivated automatically. We use a linear constraint c^T \Theta = 1, rather than the commonly used ||\Theta|| = 1, where c is a problem-dependent vector determined by the user [5]. Note that our MaxFS algorithm solves Eq. (6) for every subset D_j. A series of hypotheses {\hat{\Theta}_1^{MaxFS}, ..., \hat{\Theta}_l^{MaxFS}} is then generated from the maximum inlier sets, where l is the number of hypotheses.

In the hypothesis generation stage, although the MaxFS algorithm obtains a global solution for the local subset, the parameter vectors estimated in spatially restricted regions can be biased with respect to the true structures. To refine the initial hypotheses estimated from the local datasets, IRL1 minimization is performed over all the data in the image.

3.3 Hypothesis Refinement Using IRL1 Minimization

In each local region, given an initial hypothesis generated by the MaxFS algorithm, the initial weights of all the data can be determined as

w_i^{(0)} = \exp(- r_i(\hat{\Theta}^{MaxFS}) / \sigma), \quad i = 1, \dots, N,   (7)

where r_i(\hat{\Theta}^{MaxFS}) is the residual of datum x_i with respect to the initial hypothesis and sigma controls the width of the weight function. Unlike the original IRL1 minimization algorithm [31], our IRL1 minimization uses the MaxFS algorithm to generate the initial weights.
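Below is a hedged sketch of Eqs. (6) and (7): the DLT-based MaxFS on one local subset, again via SciPy's MILP interface rather than the GUROBI solver used in the paper, followed by the exponential initial weights computed over all the data. The algebraic residual |a_i^T Theta| is assumed in the weight computation, and all function and argument names are ours.

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def maxfs_dlt(A_loc, c_vec, eps=0.5, M=1e4):
    """Eq. (6): MaxFS for |a_i^T Theta| <= eps with c^T Theta = 1 on one local subset."""
    k, n = A_loc.shape
    cost = np.concatenate([np.zeros(n), np.ones(k)])            # minimize sum of y_i
    # |a_i^T Theta| <= eps + M y_i  written as two inequalities per data point
    ineq = LinearConstraint(np.vstack([np.hstack([ A_loc, -M * np.eye(k)]),
                                       np.hstack([-A_loc, -M * np.eye(k)])]),
                            -np.inf, np.full(2 * k, eps))
    eq = LinearConstraint(np.concatenate([c_vec, np.zeros(k)]).reshape(1, -1), 1.0, 1.0)
    integrality = np.concatenate([np.zeros(n), np.ones(k)])
    bounds = Bounds(np.concatenate([np.full(n, -np.inf), np.zeros(k)]),
                    np.concatenate([np.full(n, np.inf), np.ones(k)]))
    res = milp(c=cost, constraints=[ineq, eq], integrality=integrality, bounds=bounds)
    return res.x[:n]                                             # Theta_hat^MaxFS

def initial_weights(A_all, theta, sigma=3.0):
    """Eq. (7): w_i^(0) = exp(-r_i/sigma), with the algebraic residual r_i = |a_i^T Theta|."""
    return np.exp(-np.abs(A_all @ theta) / sigma)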
Since the initial hypothesis generated by the MaxFS algorithm is robust to outliers, it may provide much better initial weights than the original IRL1 minimization [31], which uses standard L1 minimization to generate the initial weights. After the initial weights are set, the IRL1 minimization iteratively performs the two-step procedure shown below. The first step solves the following weighted version of L1 minimization:

\{\hat{\Theta}^t, \hat{y}\} = \arg\min_{\Theta, y} \sum_{i=1}^{N} w_i^t y_i
subject to | a_i^T \Theta | \le y_i, \quad c^T \Theta = 1, \quad \Theta \in R^n, \; y_i \in [0, \infty), \; i = 1, \dots, N,   (8)

where t is the index of the current iteration. In the next step, the weights of all the data are updated as

w_i^{(t+1)} = \exp(- r_i(\hat{\Theta}^t) / \sigma), \quad i = 1, \dots, N,   (9)

where r_i(\hat{\Theta}^t) is the residual of datum x_i with respect to the current hypothesis and sigma controls the width of the weight function. In each iteration, Eq. (8) and Eq. (9) are performed alternately (see Algorithm 2, lines 6-9).
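The refinement loop of Eqs. (8) and (9) can likewise be written as a sequence of linear programs. The sketch below is our own illustration of this step, not the authors' code; it assumes the algebraic residuals |a_i^T Theta| and the normalization c^T Theta = 1 introduced above, and the names are ours.

import numpy as np
from scipy.optimize import linprog

def irl1_refine(A_all, c_vec, w0, sigma=3.0, n_iter=5):
    """Eqs. (8)-(9): weighted L1 fitting over ALL data, alternated with weight updates."""
    N, n = A_all.shape
    w, theta = w0.copy(), None
    for _ in range(n_iter):
        # variables z = [Theta (n); y (N)]; minimize sum_i w_i y_i
        cost = np.concatenate([np.zeros(n), w])
        # |a_i^T Theta| <= y_i  as two inequalities, plus the equality c^T Theta = 1
        A_ub = np.vstack([np.hstack([ A_all, -np.eye(N)]),
                          np.hstack([-A_all, -np.eye(N)])])
        A_eq = np.concatenate([c_vec, np.zeros(N)]).reshape(1, -1)
        res = linprog(cost, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq, b_eq=[1.0],
                      bounds=[(None, None)] * n + [(0, None)] * N)
        theta = res.x[:n]                                  # Eq. (8): weighted L1 solution
        w = np.exp(-np.abs(A_all @ theta) / sigma)         # Eq. (9): weight update
    return theta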

Fig. 3. Examples of line fitting. (Top row) Input data with different outlier ratios (50%, 66%, 75%, 20%, 33.3%, 42.8%). (Middle row) The hypotheses generated with the proposed method. (Bottom row) Fitting results.

In other words, the current parameter vector \hat{\Theta}^t is estimated by solving Eq. (8), and then the weights of all the data are updated based on the current parameter vector via Eq. (9). The algorithm is repeated until convergence or for a fixed number of iterations. Note that an advantage of IRL1 minimization is that the results are not influenced by the residual tolerance, unlike with RANSAC or MaxFS. Therefore, our IRL1 minimization not only refines a hypothesis biased by local fitting, but also eliminates the residual-tolerance dependency.

3.4 Complementary Roles of MaxFS and IRL1

The hypotheses estimated by MaxFS alone may deviate from the true structures, since they are estimated in local regions. We use the IRL1 procedure to globally refine these hypotheses. The initial weights of the conventional IRL1 procedure are usually generated from standard L1 minimization, which often fails when there is a large amount of outliers. We employ the MaxFS algorithm to generate much more reliable initial weights than standard L1 minimization. It is our intention to carefully combine the two algorithms so that they complement each other's limitations. The contribution of each component can be seen in Fig. 11 and Fig. 12.

4 EXPERIMENTAL RESULTS

We implemented our algorithm in MATLAB using the LP/MILP solver GUROBI [30], which provides functions for LP/MILP, and a desktop with an Intel i5-2500 3.30 GHz CPU (4 cores) and 3 GB RAM was used for the experiments. We used the 4 cores only for solving the mixed integer linear program in each MaxFS problem. We measured the actual elapsed computation time and tested the proposed method on several synthetic and real datasets.

4.1 Experiments with Synthetic Datasets: Single and Multiple Line Fitting

The first set of results is produced from single and multiple 2D line fitting. We performed the DLT-based MaxFS and IRL1 algorithms for each data subset, and the results are shown in Figs. 3 and 4. The residual tolerance epsilon was set to 1 and the Big-M value of Eq. (6) was fixed to 10000. The variance of the weight function sigma was fixed to 10. The number of data points in each circular region k was fixed to 40. In the single-line test, each line includes 100 inliers with Gaussian noise plus various numbers of gross outliers. The noise level was fixed at 0.03, and the number of gross outliers varies from 100 to 300. In the multiple-line fitting tests, the same ratio of gross outliers and the same Gaussian noise were added. The final fitting from the generated candidate hypotheses was performed using Li's method [8]. Figure 3 shows the input synthetic data (top row), the corresponding hypotheses (middle row) and the final fitting results (bottom row). The results show that our algorithm generates good hypotheses for the true structures. Even when the ratio of gross outliers increases, the proportion of good hypotheses is fairly consistent. Figure 4 shows (first column) the initial hypotheses from the MaxFS algorithm, (second column) the hypotheses after the first reweighted iteration, (third column) the hypotheses after the third reweighted iteration and (last column) the fitting results. We used the IRL1 minimization over all of the data to reduce the errors arising from the subset data. In the first row, the noise level was fixed at 0.01 and the number of gross outliers was fixed at 200. In the second row, the noise level was fixed at 0.03 and the number of gross outliers was the same. With the residuals of the whole dataset being used for estimation, the hypotheses are refined toward the genuine structures.
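For readers who want to reproduce an experiment of this kind, the snippet below generates data in the spirit of the single-line test described above (100 noisy inliers plus uniformly distributed gross outliers). The particular line, domain and seed are hypothetical choices of ours, not values taken from the paper.

import numpy as np

def synthetic_line_data(n_inliers=100, n_outliers=200, noise=0.03, seed=0):
    rng = np.random.default_rng(seed)
    t = rng.uniform(-1.0, 1.0, n_inliers)
    inliers = np.column_stack([t, 0.5 * t + 0.2])          # hypothetical line y = 0.5 x + 0.2
    inliers += rng.normal(0.0, noise, inliers.shape)       # Gaussian noise on the inliers
    outliers = rng.uniform(-1.0, 1.0, (n_outliers, 2))     # gross outliers over the domain
    return np.vstack([inliers, outliers])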

Fig. 4. Hypothesis refinement through reweighted L1 iterations. (First column) Initial hypotheses from the MaxFS framework. (Second column) Hypotheses after the first reweighted iteration. (Third column) Hypotheses after the third reweighted iteration. (Last column) Fitting results.

4.2 Experiments with Real Datasets

We tested four methods, including our MaxFS-IRL1 algorithm, on several real datasets. For performance evaluation and comparison, we measured the actual elapsed computation time. Images and keypoint correspondences were acquired from the Oxford VGG dataset [32] and the AdelaideRMF dataset [34, 35]. The image pairs used in the experiments are shown in Figs. 5 and 6. Yellow crosses indicate randomly generated gross outliers, and the other colored squares indicate the inliers of each structure. Each dataset included various rates of outliers. We used manually labeled keypoint correspondences obtained by SIFT matching. Given the true inlier sets and the number of structures, for each structure a single hypothesis having the minimum re-projection error between the ground-truth inliers and the estimated hypothesis parameters was selected. The quality of the estimated hypotheses was evaluated by averaging the re-projection errors over all structures.

4.2.1 Analysis: MaxFS-IRL1 Framework

We performed the MaxFS-IRL1 method to estimate planar homographies and affine fundamental matrices. For our MaxFS-IRL1 algorithm, the Big-M value in Eq. (6) was set to 10000 for both applications. For estimation we used the Direct Linear Transformation (DLT), and the residual was taken as the Sampson distance [5]. We experimentally examined the effects of the parameters k (subset size) and alpha (fractional constant) on different datasets with 70% outliers. First, we investigated the effect of k in the range [10, 50] with fixed s = 100 for homography estimation and s = 50 for affine fundamental matrix estimation. Figs. 7(a) and 7(b) show the re-projection errors and the computation time for homography estimation with the MaxFS-IRL1 method, respectively. Only the results from three datasets are shown in the plots. Figs. 8(a) and 8(b) show the re-projection errors and the computation time for affine fundamental matrix estimation, respectively. It can be seen in Figs. 7 and 8 that high accuracy is achieved for k from about 30 and above for homography estimation and from about 20 and above for affine fundamental matrix estimation, and that the computation time gradually increases with k. For the MaxFS-IRL1 algorithm, we set k to 30 for homography estimation and to 20 for affine fundamental matrix estimation to attain both accuracy and computational efficiency. Second, we investigated the effect of alpha in the range [0.25, 1.5] with the predetermined k. Figs. 7(c) and 7(d) show the re-projection errors and the computation time for homography estimation with the MaxFS-IRL1 method, respectively. Figs. 8(c) and 8(d) show the re-projection errors and the computation time for affine fundamental matrix estimation with the MaxFS-IRL1 method, respectively. Based on these results, we set alpha to 1 for homography estimation and to 0.5 for affine fundamental matrix estimation, considering both accuracy and computational efficiency.

We also empirically examined the effects of the parameters epsilon (residual tolerance) and sigma (variance of the weight function). First, we investigated the effect of epsilon with fixed sigma = 1, in the range [0.1, 0.3, ..., 2.9] for the homography-estimation image pairs (Fig. 5) and in the range [0.0005, 0.001, ..., 0.005] for the affine-fundamental-matrix image pairs (Fig. 6).
Figs. 9(a) and 9(b) show the re-projection errors from the MaxFS algorithm and from the MaxFS-IRL1 method for one of the structures in the CollegeIII data with 60% outliers included and in the Carchipscube data with 40% outliers included. The results clearly show that our algorithm is stable within a wide range of the inlier tolerance epsilon. We set epsilon = 0.5 for homography estimation and epsilon = 0.001 for affine fundamental matrix estimation based on these test results. Second, while sigma was varied in the range [1, 2, ..., 5] with fixed epsilon, we recorded the re-projection error at each iteration. Figs. 9(c) and 9(d) show the results for the Neem data with 50% outliers included and for the Cubebreadtoychips data with 30% outliers included. In most experiments, the results were stable and accurate when sigma < 5. Considering the convergence speed and the accuracy, we set sigma = 3 by default for both applications.

Fig. 5. Test image sets used for the homography estimation experiments: (a) Johnsona, (b) CollegeII, (c) CollegeIII, (d) Neem, (e) Unionhouse, (f) Elderhalla, (g) Sene. Yellow crosses indicate randomly generated gross outliers and the other colored squares indicate the inliers of each structure.

Fig. 6. Test image sets used for the affine fundamental matrix estimation experiments: (a) Biscuitbook, (b) Carchipscube, (c) Breadcartoychips, (d) Gamebiscuit, (e) Cubetoy, (f) Dinobooks, (g) Cube, (h) Biscuitbookbox. Yellow crosses indicate randomly generated gross outliers and the other colored squares indicate the inliers of each structure.

4.2.2 Comparison with Other Methods

Our algorithm was compared with uniform random sampling (Random Sampling) [24, 33], NAPSAC [27] and the state-of-the-art algorithm Multi-GS [23, 34]. We implemented NAPSAC in MATLAB. For performance evaluation, we measured the elapsed computation time and the number of generated hypotheses (L), and computed the re-projection errors (maximum error and standard deviation). The results for the four algorithms are summarized in Tables 1 and 2, with the best results shown in bold. For Random Sampling, NAPSAC and Multi-GS, 50 random runs were performed. Unlike these competitors, our method runs until completion of the algorithm, so its elapsed computation time was not limited but simply measured. To ensure a fair comparison, the three competing methods were run for periods of time similar to the elapsed computation time of our algorithm. In this experiment, all of the parameters for our MaxFS-IRL1 framework were selected based on Section 4.2.1.

Homography Estimation: We tested the performance of the proposed method for estimating planar homographies on real image data. For homography estimation, the Big-M value of Eq. (6) was fixed to 10000, the residual tolerance epsilon was fixed to 0.5, the variance of the weight function sigma was fixed to 3, the number of IRL1 iterations was fixed to 5, the fractional constant alpha was fixed to 1 and the number of data points in each circular region k was fixed to 30.

Table 1 summarizes the performance of the four methods for estimating planar homographies on seven datasets (see Fig. 5(a-g)) with various outlier ratios. The results demonstrate that our method yields more reliable and consistent results with reasonable computational efficiency.

Affine Fundamental Matrix Estimation: We also tested the performance of the proposed method for estimating an affine fundamental matrix on real image data. For the affine fundamental matrix estimation, the Big-M value was fixed to 10000, the residual tolerance epsilon was fixed to 0.001, the variance of the weight function sigma was fixed to 3, the number of IRL1 iterations was again fixed to 5, the fractional constant alpha was fixed to 0.5 and the number of data points in each circular region k was fixed to 20.

Fig. 7. The effect of the parameters k (subset size) and alpha (fractional constant) on different datasets with 70% outliers for homography estimation with the MaxFS-IRL1 method. (a) The re-projection errors obtained with different k. (b) The computation time measured with different k. (c) The re-projection errors obtained with different alpha. (d) The computation time measured with different alpha.

Fig. 8. The effect of the parameters k (subset size) and alpha (fractional constant) on different datasets with 70% outliers for affine fundamental matrix estimation with the MaxFS-IRL1 method. (a) The re-projection errors obtained with different k. (b) The computation time measured with different k. (c) The re-projection errors obtained with different alpha. (d) The computation time measured with different alpha.

Table 2 shows the performance of the algorithms for estimating the affine fundamental matrix on seven datasets (see Fig. 6(a-g)) with varied outlier ratios. The results show that our algorithm generates high-quality hypotheses with reasonable efficiency for the datasets with high outlier ratios and thus finds all the true structures stably on all of the datasets. In most cases, our method gives the smallest errors on the test datasets, except for the Biscuitbook dataset with a relatively low outlier ratio. Since our method performs IRL1 minimization over all of the data in order to generate a hypothesis, while the other methods generate hypotheses from minimal subsets, its hypothesis is reliable but may not yield the minimum errors for the true inliers. Note that the random sampling-based methods produced large variations in their results.

4.2.3 Performance under Increasing Outlier Rates

We compared the performance of the four algorithms under different outlier rates. We increased the outlier ratio from 0% to 80% for the CollegeIII data and the Biscuitbookbox data. Figs. 10(a) and 10(c) show the re-projection errors produced by the four methods on the two test datasets as the outlier ratio increases, and Figs. 10(b) and 10(d) show the corresponding standard deviations of the re-projection errors. Our algorithm outperforms the other algorithms when the outlier ratio is high, over similar periods of elapsed computation time. Since the probability of producing an all-inlier subset with random sampling-based approaches decreases as the outlier ratio increases, their maximum errors and standard deviations increase substantially. On the other hand, more robust results are obtained with our MaxFS-IRL1 method, since an initial hypothesis generated by MaxFS is rarely influenced by the outlier ratio.

4.2.4 Combination of MaxFS and IRL1

In Figs. 11 and 12, we compare the results obtained with the conventional IRL1 method and with our MaxFS-IRL1 method on the CollegeIII dataset. For the conventional IRL1 method, the initial weights were generated from standard L1 minimization on the local datasets, so that it competes on par with the MaxFS-IRL1 method.

Fig. 9. (a) and (b) show the re-projection errors from the MaxFS algorithm and the MaxFS-IRL1 method for one of the structures in the CollegeIII data with 60% outliers included and in the Carchipscube data with 40% outliers included. (c) and (d) show the re-projection errors obtained with different sigma at each iteration for the Neem data with 50% outliers included and the Cubebreadtoychips data with 30% outliers included.

Fig. 10. The performance of the four algorithms under different outlier rates. Graphs (a) and (c) show the re-projection errors produced by the four methods on the two test datasets as the outlier ratio increases. (b) and (d) show the corresponding standard deviations of the re-projection errors.

We can see the contribution of each stage in Figs. 11 and 12. Figures 11(a) and 11(d) show the initial fitting results from standard L1 minimization and from the MaxFS algorithm for the CollegeIII data with no outliers included. Figures 11(b) and 11(e) show the fitting results after the first reweighted iteration. Figures 11(c) and 11(f) show the final fitting results from the conventional IRL1 method and the MaxFS-IRL1 method. Figures 12(a-c) show the fitting results from the conventional IRL1 method at each iteration step, and Figures 12(d-f) show those from the MaxFS-IRL1 method for the CollegeIII data with 30% outliers included. As these results show, when the outlier ratio is low, standard L1 minimization can provide good results. When there are severe outliers, however, standard L1 minimization frequently fails. To obtain good fitting results with the IRL1 method, it is important to determine good initial weights for the algorithm. Therefore, the MaxFS and IRL1 algorithms are complementary, and we show that their combination is highly effective for the task of robust multiple-structure fitting.

5 CONCLUSION

We present a new deterministic approach to reliable and consistent hypothesis generation for multiple-structure model fitting. For reliable hypothesis generation with reasonable computational efficiency, we employ a MaxFS (maximum feasible subsystem) algorithm, a global optimization technique, only in spatially localized image regions, and refine the hypotheses using an IRL1 (iterative re-weighted L1) minimization method. To search out all of the structures thoroughly, the local circular regions spatially overlap with their neighbors, and the number of data points in each local region is kept approximately even for computational efficiency. The model parameters of the major structure are estimated in each local image region, and those of the remaining structures can be found in one of the neighboring regions. The IRL1 minimization is performed over all the image data to refine the initial hypotheses estimated from subsets and to remove the residual-tolerance dependency of the MaxFS algorithm. Our experiments show that, without prior knowledge of the inlier ratio, the inlier scale or the number of structures, our method generates consistent hypotheses which are more reliable than those of the random sampling-based methods as the outlier ratio increases.

Fig. 11. (a) and (d) show the initial fitting results from standard L1 minimization and from the MaxFS algorithm for the CollegeIII data with no outliers included. (b) and (e) show the fitting results after the first reweighted iteration. (c) and (f) show the final fitting results from the conventional IRL1 method and the MaxFS-IRL1 method.

Fig. 12. (a) and (d) show the initial fitting results from standard L1 minimization and from the MaxFS algorithm for the CollegeIII data with 30% outliers included. (b) and (e) show the fitting results after the first reweighted iteration. (c) and (f) show the final fitting results from the conventional IRL1 method and the MaxFS-IRL1 method.

TABLE 1
PERFORMANCE OF VARIOUS METHODS ON HOMOGRAPHY ESTIMATION FOR SEVERAL REAL DATASETS

Data (Outlier Ratio [%])   Metric               Random    NAPSAC    Multi-GS   MaxFS-IRL1
Johnsona (0)               Elapsed time [sec]   5         5         5          3.7
                           Max Error            1.087     1.5245    1.7234     0.852
                           Std                  0.0563    0.35      0.973      0
                           L                    2028      10926     855        24
Neem (20)                  Elapsed time [sec]   10        10        10         9.8
                           Max Error            1.78      1.6387    1.0993     0.9294
                           Std                  0.0508    0.324     0.0352     0
                           L                    8386      7529      496        2
CollegeIII (30)            Elapsed time [sec]   35        35        35         33.5
                           Max Error            1.9025    2.2848    1.9946     1.4536
                           Std                  0.0643    0.05      0.0983     0
                           L                    25043     23068     2628       5
CollegeII (40)             Elapsed time [sec]   50        50        50         49.9
                           Max Error            1.5704    1.8473    1.574      1.485
                           Std                  0.036     0.0668    0.035      0
                           L                    35775     3389      2589       35
Unionhouse (50)            Elapsed time [sec]   10        10        10         7.6
                           Max Error            0.6507    0.824     0.65       0.587
                           Std                  0.048     0.055     0.065      0
                           L                    8559      7868      2239       9
Elderhalla (60)            Elapsed time [sec]   20        20        20         8.62
                           Max Error            2.9963    2.82      1.9426     1.6893
                           Std                  0.2685    0.278     0.07       0
                           L                    6922      6080      272        20
Sene (70)                  Elapsed time [sec]   40        40        40         39.09
                           Max Error            1.0424    0.9275    0.6983     0.49
                           Std                  0.0953    0.023     0.033      0
                           L                    29208     28875     306        35

TABLE 2
PERFORMANCE OF VARIOUS METHODS ON AFFINE FUNDAMENTAL MATRIX ESTIMATION FOR SEVERAL REAL DATASETS

Data (Outlier Ratio [%])   Metric               Random    NAPSAC    Multi-GS   MaxFS-IRL1
Biscuitbook (20)           Elapsed time [sec]   20        20        20         2.7
                           Max Error            1.047     1.04      1.0529     1.059
                           Std                  0.0047    0.0056    0.038      0
                           L                    20444     204       839        108
Carchipscube (30)          Elapsed time [sec]   20        20        20         7.08
                           Max Error            0.5542    0.5023    0.5356     0.4927
                           Std                  0.07      0.0036    0.02       0
                           L                    29832     27005     3295       74
Breadcartoychips (40)      Elapsed time [sec]   30        30        30         28.8
                           Max Error            0.789     0.7454    0.7344     0.74
                           Std                  0.0205    0.0084    0.00       0
                           L                    3450      28929     3023       59
Gamebiscuit (50)           Elapsed time [sec]   35        35        35         35.53
                           Max Error            0.796     0.6752    0.6863     0.6744
                           Std                  0.0085    0.0038    0.0066     0
                           L                    3028      28927     294        78
Cubetoy (50)               Elapsed time [sec]   30        30        30         28.58
                           Max Error            0.6469    0.637     0.6257     0.6076
                           Std                  0.0085    0.0058    0.0048     0
                           L                    24468     2630      2328       94
Dinobooks (60)             Elapsed time [sec]   40        40        40         40.39
                           Max Error            2.8726    2.3352    2.2649     2.2078
                           Std                  0.693     0.0567    0.04       0
                           L                    25307     2309      2599       56
Cube (70)                  Elapsed time [sec]   30        30        30         32.4
                           Max Error            0.530     0.5486    0.597      0.5053
                           Std                  0.0079    0.0086    0.004      0
                           L                    26585     27652     2558       59

REFERENCES
[1] J. W. Chinneck. Feasibility and Infeasibility in Optimization: Algorithms and Computational Methods, 1st edition, Springer, Heidelberg, 2007.
[2] H. Li. Consensus set maximization with guaranteed global optimality for robust geometry estimation. In ICCV 2009.
[3] C. Yu, Y. Seo, and S. W. Lee. Photometric stereo from maximum feasible Lambertian reflections. In ECCV 2010.
[4] Y. Zheng, S. Sugimoto and M. Okutomi. Deterministically maximizing feasible subsystem for robust model fitting with unit norm constraint. In CVPR 2011: 1825-1832.
[5] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision, 2nd ed. Cambridge University Press, 2004.
[6] O. Enqvist, F. Kahl. Two view geometry estimation with outliers. In BMVC 2009.

[7] Z. I. Botev, J. F. Grotowski and D. P. Kroese. Kernel density estimation via diffusion. Annals of Statistics, Volume 38, Number 5, 2010: 2916-2957.
[8] H. Li. Two-view motion segmentation from linear programming relaxation. In CVPR 2007.
[9] R. Toldo, A. Fusiello. Robust multiple structures estimation with J-Linkage. In ECCV 2008: 537-547.
[10] T.-J. Chin, H. Wang, D. Suter. Robust fitting of multiple structures: The statistical learning approach. In ICCV 2009: 413-420.
[11] W. Zhang, J. Kosecka. Nonparametric estimation of multiple structures with outliers. DVW 2006, LNCS 4358: 60-74.
[12] Y. Kanazawa, H. Kawakami. Detection of planar regions with uncalibrated stereo using distributions of feature points. In BMVC 2004.
[13] M. Zuliani, C. Kenney and B. Manjunath. The multiRANSAC algorithm and its application to detect planar homographies. In ICIP 2005, Volume 3.
[14] H. S. Wong, T.-J. Chin, J. Yu and D. Suter. Efficient multi-structure robust fitting with incremental top-k lists comparison. In ACCV 2010: 553-564.
[15] T.-J. Chin, J. Yu and D. Suter. Accelerated hypothesis generation for multistructure data via preference analysis. IEEE Trans. Pattern Anal. Mach. Intell. 34(4), 2012: 625-638.
[16] H. S. Wong, T.-J. Chin, J. Yu and D. Suter. Dynamic and hierarchical multi-structure geometric model fitting. In ICCV 2011: 1044-1051.
[17] A. Delong, A. Osokin, H. Isack, and Y. Boykov. Fast approximate energy minimization with label costs. In CVPR 2010.
[18] O. Chum, J. Matas and J. Kittler. Locally optimized RANSAC. In DAGM 2003.
[19] B. J. Tordoff and D. W. Murray. Guided-MLESAC: Faster image transform estimation by using matching priors. IEEE Trans. Pattern Anal. Mach. Intell. 27, 2005: 1523-1535.
[20] O. Chum and J. Matas. Matching with PROSAC - progressive sample consensus. In CVPR 2005.
[21] K. Ni, H. Jin and F. Dellaert. GroupSAC: Efficient consensus in the presence of groupings. In ICCV 2009.
[22] T. Sattler, B. Leibe and L. Kobbelt. SCRAMSAC: Improving RANSAC's efficiency with a spatial consistency filter. In ICCV 2009.
[23] T.-J. Chin, J. Yu, and D. Suter. Accelerated hypothesis generation for multi-structure robust fitting. In ECCV 2010.
[24] M. A. Fischler and R. C. Bolles. RANSAC: A paradigm for model fitting with applications to image analysis and automated cartography. Comm. of the ACM 24, 1981: 381-395.
[25] R. Hartley and F. Kahl. Global optimization through rotation space search. IJCV, 82(1): 64-79, 2009.
[26] F. Kahl. Multiple view geometry and the L-infinity norm. In ICCV 2005: 1002-1009.
[27] D. R. Myatt, Philip H. S. Torr, Slawomir J. Nasuto, J. Mark Bishop, R. Craddock. NAPSAC: High noise, high dimensional robust estimation - it's in the bag. In BMVC 2002.
[28] K. Schindler and D. Suter. Two-view multibody structure-and-motion with outliers through model selection. IEEE Trans. Pattern Anal. Mach. Intell. 28(6): 983-995, 2006.
[29] R. Raguram, J. M. Frahm and M. Pollefeys. Exploiting uncertainty in random sample consensus. In ICCV 2009: 2074-2081.
[30] Gurobi Optimization. http://www.gurobi.com/, 2010.
[31] E. J. Candès, M. B. Wakin and S. P. Boyd. Enhancing sparsity by reweighted L1 minimization. The Journal of Fourier Analysis and Applications, vol. 14, no. 5, pp. 877-905, 2008.
[32] http://www.robots.ox.ac.uk/~vgg/data/
[33] http://www.csse.uwa.edu.au/~pk/research/matlabfns/
[34] http://cs.adelaide.edu.au/~tjchin/doku.php
[35] http://cs.adelaide.edu.au/~hwong/doku.php?id=data
[36] Kwang Hee Lee, Sang Wook Lee. Deterministic fitting of multiple structures using iterative MaxFS with inlier scale estimation. In ICCV 2013.