Visual Recognition using Mappings that Replicate Margins

Visual Recognition using Mappings that Replicate Margins

Lior Wolf and Nathan Manor
The Blavatnik School of Computer Science
Tel-Aviv University

Abstract

We consider the problem of learning to map between two vector spaces given pairs of matching vectors, one from each space. This problem naturally arises in numerous vision problems, for example, when mapping between the images of two cameras, or when the annotations of each image are multidimensional. We focus on the common asymmetric case, where one vector space X is more informative than the other Y, and find a transformation from Y to X. We present a new optimization problem that aims to replicate in the transformed Y the margins that dominate the structure of X. This optimization problem is convex, and efficient algorithms are presented. Links to various existing methods such as CCA and SVM are drawn, and the effectiveness of the method is demonstrated in several visual domains.

1. Introduction

A great deal of attention is devoted to recognition tasks in which the prediction value is a single label. Such problems are straightforward to benchmark, and they fit well within conventional learning techniques. Some attention was given, however, to the type of tasks that require the prediction of a multidimensional outcome. In [13], efforts to produce textual descriptors of a video are presented, and in [3] images are mapped to tags and vice versa. In [7] one view of a road scene is compared to another, and in [24] biological data is mapped to images. Typically, in learning from visual data, the images are converted to vectors for reasons of mathematical and algorithmic convenience. Similarly, textual descriptions, collections of tags, and biological data are often transformed into vectors.
Multidimensional perceptual problems, involving images on one side and other forms of structured data on the other, are therefore most readily treated by either recovering a mapping from one vector space X to another vector space Y (multidimensional regression), or by mapping the two vector spaces X, Y onto one common vector space (as in CCA, see Section 2). Often, these mappings are recovered based on a set of matching training pairs {(x_i, y_i)} such that x_i ∈ X and y_i ∈ Y.

In this work we consider a margin-based solution for the asymmetric mapping problem. Maximum margin methods have proven to be effective for classification, regression, and recently also in metric learning [23]. Here, we recover a mapping T : Y → X that replicates or mimics in the space of the transformed points T Y the margins that exist in X. Note that we do not maximize the margins, since this might distort the space T Y. By definition, margins require the division of the samples into two groups. When computing the map, we consider many such divisions and the resulting margins.

Many previous attempts to extend vector-to-vector mappings to use discriminative techniques have assumed that a pair (x_i, y_j) such that i ≠ j (i.e., the points are not paired in the training set) forms an example of an incompatible pair. In many applications we found this assumption to be harmful. The reason might be that since the data is distributed unevenly, some of the presumably non-matching pairs are very similar in nature to the matching pairs. Since non-matching pairs are typically not provided, and since they cannot be deduced from the data, we seek another source of discriminative information.

To compute the transformation T we rely on the existence of a group of hyperplanes that separate the samples in the space of X. These hyperplanes are obtained in various ways in accordance with the application at hand. If such hyperplanes are not available, random hyperplanes are used successfully.

2. Previous work

The problem of learning the statistical connection between matching vectors from two vector spaces is well studied.
The vectors are often combined to form two matrices X and Y with the same order of columns. Once the model linking the two matrices is learned, several tasks can be performed. For example, decide, given two new vectors x_new and y_new, whether they are matching according to the model. A second task is the multidimensional regression problem, where the vector x_new is to be predicted given a vector y_new. This second task is the harder one, in the sense that by solving it the first task can be solved, but not the other way around.

Multidimensional regression problems are often solved by extending one dimensional problems such as linear ridge regression [10], Support Vector Regression [20, 14], or Kernel Regression [9]. Sometimes such solutions are applied one coordinate at a time, and sometimes the entire output is predicted at once [19].

While our method is an asymmetric method, and therefore belongs to the family of regression methods, we do not try to find a transformation that maps y to x. Instead, we aim to replicate a specific aspect of the distribution of the points in X in the transformed points of the space Y. This structure is defined by a set of hyperplanes in X and multiple labels for each training sample. Often, we take the set of hyperplanes to be uniformly sampled, and the labels to be the projections of the datapoint in X by these random hyperplanes. In this case, our method can be shown to be related to symmetrical methods such as Canonical Correlation Analysis (CCA).

In CCA [8], two transformations are found that cast samples from X and Y to a common target vector space such that the matching vectors from the two spaces are transformed to similar vectors in the sense of maximal correlation coefficient. Additional constraints enforce the components of the resulting vectors to be pairwise uncorrelated and of unit variance. The CCA formulation is, therefore:

Problem 1. (non-regularized CCA)

  max over U_X, U_Y of Σ_{i=1}^n x_i^T U_X U_Y^T y_i, subject to
  Σ_{i=1}^n U_X^T x_i x_i^T U_X = Σ_{i=1}^n U_Y^T y_i y_i^T U_Y = I

The dimension of the target vector space is typically the minimum of the two dimensions d_M and d_P, and shall be denoted by l. Thus U_X is a matrix of dimensions d_M × l and U_Y is of dimensions d_P × l. For the matching task above, the vectors x_new and y_new are considered a match if the distance ||U_X^T x_new − U_Y^T y_new|| is small.

Since the feature vectors of both vector spaces are often of dimensions significantly higher than the number of training samples, statistical regularization must be used to avoid overfitting. A regularized version of CCA suggested by [22] can then be used.
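As a concrete illustration, the regularized CCA computation can be sketched via the standard whitening-plus-SVD reduction (a minimal sketch, not the paper's code: it assumes centered data matrices with one sample per row, enforces the Problem 1 constraints up to a 1/n scaling, and the function names are ours):

```python
import numpy as np

def reg_cca(X, Y, eta_x=1e-3, eta_y=1e-3):
    """Regularized CCA: X is n x dX, Y is n x dY, rows are matching
    samples (assumed centered). Returns projections U_X (dX x l),
    U_Y (dY x l) maximizing the correlation of matched projections."""
    n = X.shape[0]
    Cxx = X.T @ X / n + eta_x * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + eta_y * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # inverse matrix square root via eigendecomposition
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, _, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    l = min(X.shape[1], Y.shape[1])  # target dimension, as in the text
    return Wx @ U[:, :l], Wy @ Vt.T[:, :l]
```

With Y an invertible linear image of X, the leading canonical correlation recovered this way is (up to the small ridge terms) equal to one.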
Two regularization parameters need to be determined, η_X and η_Y, to regularize the computations in each of the vector spaces. Another modification of CCA is kernel CCA (e.g., [2, 27]), which uses the kernel trick for capturing non-linear transformations.

CCA minimizes the distances between matching vectors in X and Y, while disregarding the distances between non-matching vectors. An alternative optimization goal combines the minimization of distances between matching pairs with the maximization of distances between non-matching pairs, thus attempting to achieve maximal discriminative ability. This approach has been developed in [12]. We have extensively tested this approach, and have come to the conclusion that the combined energy (including the maximization of the distances for non-matching pairs) does not outperform CCA when the non-matching pairs are obtained by mixing the matching pairs. The reason for the lack of improvement is that by using such a procedure many samples that are very similar to matching samples are obtained and labeled non-matching. One solution we have tried is to deliberately sample pairs (x_i, y_j) that are unlike the matching pairs, i.e., we exclude from the sampling of non-matching pairs those pairs (x_i, y_j) for which x_i is similar to x_j or y_i is similar to y_j. This does not seem to help: if the threshold for exclusion is too high, the obtained non-matching pairs are not very informative; if, however, the threshold is set low, the pairs are too similar to matching pairs.

Previous attempts to formulate a max-margin CCA-like problem include the Maximum Margin Robot [16]. Rather than maximizing the sum of correlations, the Maximum Margin Robot maximizes the minimal correlation. Robustness to outliers is maintained through the inclusion of slack variables. This formalization produces the following quadratic programming problem:

Problem 2. (Maximum Margin Robot)

  min over T, ξ of (1/2)||T||_F^2 + C 1^T ξ, subject to
  for all i = 1..n: y_i^T T x_i ≥ 1 − ξ_i, ξ_i ≥ 0

Similarly to our method, a transformation T is recovered. However, it is assumed that both X and Y are of the same dimension.
The Nearest Neighbor Transfer [24] is a simple alternative that does not require optimization. Given a new vector x_new, this simple method chooses out of the training vectors of X the closest one, argmin_{i=1..n} ||x_i − x_new||, and predicts the vector y_i in Y. For the task of selecting a matching vector for x_new out of a set of vectors {y_new_k ∈ Y}_k, the method selects the vector most similar to this y_i: argmin_k ||y_new_k − y_i||.

Many of the above vector-to-vector mapping techniques are symmetric, i.e., replacing the roles of the two vector spaces does not change the outcome. This is in contrast to the nature of many applications, where often one vector space is much more reliable than the other. Our method, presented in Section 3, maps from the less reliable vector space Y to the more reliable space X, such that strong distinctions in X are presented in the mapped data from Y.
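The Nearest Neighbor Transfer rule above amounts to two argmin computations; a minimal sketch (assuming numpy arrays with one sample per row; the function names are ours):

```python
import numpy as np

def nn_transfer(X_train, Y_train, x_new):
    """Nearest Neighbor Transfer: return the y paired with the
    training x closest to x_new (no optimization required)."""
    i = np.argmin(np.linalg.norm(X_train - x_new, axis=1))
    return Y_train[i]

def nn_match(X_train, Y_train, x_new, y_candidates):
    """Select, out of a set of candidate y vectors, the index of the
    one most similar to the y predicted for x_new."""
    y_pred = nn_transfer(X_train, Y_train, x_new)
    return int(np.argmin(np.linalg.norm(y_candidates - y_pred, axis=1)))
```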

3. Problem formulation

We are given n pairs of matching vectors (x_i, y_i), i = 1..n. For example, x_i ∈ X might depict the encoding of an image indexed in one camera, while y_i ∈ Y depicts a matching frame from another view of the same scene. As mentioned above, our framework is asymmetric and transforms data from the less informative space to the more reliable one. Assume that X is the reliable vector space and Y is the less reliable one. For example, a wide field of view may be better suited for identifying the scene than a narrow field of view.

Moreover, assume that we have an additional set of k hyperplanes in the space X, each known to provide a good separation of the n samples x_1, ..., x_n into two classes, associated with labels L_ij ∈ {1, −1, 0} for the i-th hyperplane and the j-th sample (each hyperplane separates the samples differently). The source of these hyperplanes and labels is application dependent, and in [26] we provide several examples of how to obtain them. Denote the set of hyperplanes as w_1, ..., w_k.

We look for a transformation T : Y → X, and scalars b_1, ..., b_k, such that for each i, the hyperplane w_i separates the n samples T y_1, ..., T y_n around b_i similarly to the labels L_ij, i.e., we encourage similarity between L_ij and the sign of the projections w_i^T T y_j − b_i for all i = 1..k and j = 1..n for which L_ij ≠ 0. Typically, the hyperplanes and the labels are such that L_ij = sign(w_i^T x_j − b_i^0) for some scalars b_1^0, ..., b_k^0; however, b_i^0 and b_i often differ, and are not part of the input.

Defining the similarity between labels and projections using correlations, one would obtain an extension of CCA in which an extra set of hyperplanes is used. Alternatively, inspired by the success of large-margin methods such as SVM [4] and LMNN [23], our method minimizes the hinge loss. In contrast to SVM and LMNN, the margins are not maximized in our framework; instead, they are replicated, i.e., we seek to have the same margins in the space of transformed samples T Y as exist in the space X.
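When no application-specific separators are available, the random-hyperplane fallback mentioned in the introduction can be sketched as follows (our own minimal illustration, not the paper's code; in particular, taking the per-hyperplane bias b_i^0 to be the median projection is an assumption made here so that each split is roughly balanced):

```python
import numpy as np

def random_hyperplane_labels(X, k=2000, seed=0):
    """Draw k random unit-norm hyperplanes in the reliable space X and
    label each training sample by the side on which it falls:
    L[i, j] = sign(w_i^T x_j - b0_i)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((k, d))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    proj = W @ X.T                    # k x n matrix of projections
    b0 = np.median(proj, axis=1)      # bias choice: median (assumption)
    L = np.sign(proj - b0[:, None])   # labels in {-1, 0, 1}
    return W, b0, L
```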
For each of the k separators w_i, i = 1..k, we compute or otherwise obtain a goal margin m_i. Typically, this margin is computed in the vector space X on the training samples x_j as m_i = min_{j: L_ij > 0} w_i^T x_j − max_{j: L_ij < 0} w_i^T x_j. Alternatively, any definition of soft margins can be employed, e.g., in the non-separable case.

Using the above definitions, we define the following optimization problem, which yields a mapping T : Y → X that replicates the margins in X.

Problem 3. (MRM: Margin Replicating Mapping)

  min over T, b of Σ_i Σ_j [m_i − L_ij (w_i^T T y_j − b_i)]_+

where [z]_+ = max(z, 0). This is an unconstrained piecewise linear convex optimization problem that can be rewritten as linear programming by adding slack hinges:

Problem 4. (MRM as linear programming)

  min over T, b, ξ of Σ_i Σ_j ξ_ij, subject to
  ξ_ij ≥ m_i − L_ij (w_i^T T y_j − b_i), ξ_ij ≥ 0

In Sec. 4 we develop a gradient descent algorithm for MRM. Its first iteration (when starting at zero) is in the linear regions of the objective function, and no hinges are active. We show that in this case there is a deep relation to CCA (see Section 5); therefore, in a sense, the first iteration follows CCA and then evolves.

4. Optimization

For such convex cost functions, the gradient descent method is guaranteed to converge to the global minimum; however, this convergence is slow for a large number of separators. To speed up the convergence, we have developed a suitable sample and backtrack method, depicted in Figure 1. The method is modeled after the dagging and backtracking modules of boosting methods [18, 5]. In the future, to further speed up the computation, we plan to incorporate within the procedure a column-generation linear-programming module [6].

The reason that gradient descent methods would converge slowly on Problem 3 is that for every combination of a separator and a sample there is a term of the form max(·, 0); thus, there are many non-differentiability break points. During the gradient descent procedure (or, more precisely, sub-gradient descent), we cannot take a step size larger than the distance to the next break point (to avoid the risk of overshooting), and the progress is therefore slow.
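A toy sketch of the goal margins, the Problem 3 objective, and a plain stochastic subgradient loop over sampled hinge terms (a simplification for illustration only: the actual procedure of Figure 1 adds line search and backtracking, and all names and default values here are ours):

```python
import numpy as np

def goal_margin(w, X, L):
    """Goal margin for separator w on samples X (rows): smallest positive
    projection minus largest negative projection."""
    proj = X @ w
    return proj[L > 0].min() - proj[L < 0].max()

def mrm_objective(T, b, W, Y, L, m):
    """Sum of hinges [m_i - L_ij (w_i^T T y_j - b_i)]_+ over L_ij != 0."""
    P = W @ T @ Y.T                       # k x n projections w_i^T T y_j
    hinges = np.maximum(0.0, m[:, None] - L * (P - b[:, None]))
    return hinges[L != 0].sum()

def mrm_subgradient(Y, W, m, L, steps=500, batch=8, lr=0.05, seed=0):
    """Each step samples a small batch of (separator, sample) hinge terms
    and takes a subgradient step on the active ones."""
    rng = np.random.default_rng(seed)
    k, n = L.shape
    T = np.zeros((W.shape[1], Y.shape[1]))
    b = np.zeros(k)
    pairs = [(i, j) for i in range(k) for j in range(n) if L[i, j] != 0]
    for _ in range(steps):
        for p in rng.integers(0, len(pairs), size=batch):
            i, j = pairs[p]
            if L[i, j] * (W[i] @ T @ Y[j] - b[i]) < m[i]:  # hinge active
                T += lr * L[i, j] * np.outer(W[i], Y[j])
                b[i] -= lr * L[i, j]
    return T, b
```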
In order to reduce the effective number of break points, we sample a random subset of the hinge terms, thereby making the gradient step both longer and faster to compute. Each iteration samples a different subset, ensuring diversity of search directions and a faster convergence.

5. Connection to other methods

In this section we show that both CCA and SVM are special cases of MRM. Also, SVR and MMR are closely related. First, note that unlike most learning algorithms, the MRM objective is bounded, and thus does not intrinsically require regularization. Yet, regularized MRM can be defined:

Problem 5. (bounded MRM)

  min over T, b of Σ_i Σ_j [m_i − L_ij (w_i^T T y_j − b_i)]_+, subject to
  Σ_i Σ_j (w_i^T T y_j)^2 ≤ 1

This is exactly Problem 3, where the solution is bounded. Since the range of projections T is convex, a global solution can be found by using a gradient descent algorithm similar to the one of Sec. 4. In our experience the bound is superfluous; however, since the linking to other methods is done asymptotically, such bounding simplifies the arguments.

  function [T, b] = MRM({y_j}, {w_i}, {m_i}, {L_ij})
    Initialize T := 0 and b := 0
    Repeat c times
      Select at random, for i = 1..p:
        j_i, k_i s.t. L_{k_i j_i} > 0
        j'_i, k_i s.t. L_{k_i j'_i} < 0
      Set T' := T                        // to allow backtracking
      Set dT := Σ_i (w_{k_i})(y_{j_i} − y_{j'_i})^T
      Set ξ_i := m_{k_i} − w_{k_i}^T T y_{j_i} + b_{k_i}
      Set ψ_i := m_{k_i} + w_{k_i}^T T y_{j'_i} − b_{k_i}
      Set dξ_i := w_{k_i}^T dT y_{j_i}
      Set dψ_i := −w_{k_i}^T dT y_{j'_i}
      Let r be a random number in [0, 3]
      Repeat 10^r times                  // try both long and short loops
        Minimize λ := min_i [min(ξ_i/dξ_i, ψ_i/dψ_i)]
          Save the argmin index i_0. Put s_0 := 1 if ξ, −1 if ψ
        Set T := T + λ dT
        Set ξ_i := ξ_i − λ dξ_i and ψ_i := ψ_i − λ dψ_i
        If s_0 = 1:  w := w_{k_{i_0}}, y := y_{j_{i_0}}
        If s_0 = −1: w := w_{k_{i_0}}, y := y_{j'_{i_0}}
        Set dT := dT − s_0 w y^T
        Set dξ_i := dξ_i − s_0 (w^T w_{k_i})(y^T y_{j_i})
        Set dψ_i := dψ_i + s_0 (w^T w_{k_i})(y^T y_{j'_i})
      End inner loop
      Rebalance b via convex 1D optimization.
      If the global objective function evaluated on T is worse than that of T',
        then backtrack: Set T := T'
    End main loop
    Return T and b

  Figure 1. Pseudocode of a method for computing the transformation T and biases b in the Margin Replicating Mapping method.

Unlike MRM, the methods we compare to, namely CCA (Problem 1), Maximum Margin Robot (Problem 2), and SVR, do not employ hyperplanes as input. The linking is therefore performed for the case where random hyperplanes are used. The margins are set in accordance with the desired reduction. Specifically, for the random hyperplane case, Problem 5 would reduce to CCA if all m_i approach infinity, and would reduce to an algorithm which closely resembles MMR if we set m_i = 1. Lastly, MRM reduces to SVM for a unidimensional X taking the role of labels.

5.1. Connection to SVM and SVR

In the 1D case, where the dimension of X is 1, w becomes a scalar, and the MRM problem reduces to:

Problem 6. (1D MRM)

  min over t, b of Σ_j [m − L_j (t y_j − b)]_+

Problem 6 is equivalent to Problem 7 below when γ = 0, as can be seen by dividing the objective above by the positive constant m and substituting u := t/m.

Problem 7. (SVM as unconstrained minimization)

  min over u, b of γ u^2 + Σ_j [1 − L_j (u y_j − b)]_+

Problem 7 is exactly the optimization problem of SVM, formulated via the hinge loss. Note that both problems, SVM and MRM, are often bounded even without regularization. Yet, SVM is typically regularized, to make sure the largest margin is selected among all classifiers for which the penalty is minimized. Similarly, MRM can also be regularized. However, experimentally, we found that it results in little gain for high dimensional X. In such cases, the penalties for misclassified T y_j are strong enough, even for low norm of T, to stabilize the solution. Since we disregard the regularization term of MRM, we obtain an algorithm which has no parameters other than the input data and the set of hyperplanes on X. Even in the case of random hyperplanes, T is forced to match the desired structure, as occurs in Co-Kriging based solutions to SVR [21]. In contrast to semi-parametric methods such as [15], MRM assumes no prior on the bias terms b and learns them from the input. Of course, such priors, whenever available, might improve the algorithm's performance.

5.2. Connection to CCA

Links between MRM and the non-regularized CCA (Problem 1) are presented in [26]. Similarly to CCA, the proposed variant receives zero mean inputs (centralized data). Then, given large m_i's (forcing the hinge terms to become loose), MRM will behave similarly to CCA. Note that the first iteration of MRM ignores all hinges, since the hinge functions are linear near the origin. Thus, the first iteration of MRM acts similarly to the case of m_i = ∞. Further iterations refine the produced transformation. This observation was verified on synthetic data [26].

6. MRM modifications

The MRM framework is flexible, allowing for several adaptations. The first two modifications below have been implemented and tested.
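The 1D reduction of Section 5.1 can be checked numerically: dividing the Problem 6 objective by m (and rescaling the bias accordingly) reproduces the γ = 0 hinge objective of Problem 7. A toy check with made-up numbers (function names are ours):

```python
import numpy as np

def mrm_1d(t, b, y, L, m):
    """Problem 6 objective: sum_j [m - L_j (t*y_j - b)]_+ ."""
    return np.maximum(0.0, m - L * (t * y - b)).sum()

def svm_hinge(u, b, y, L, gamma=0.0):
    """Problem 7 objective: gamma*u^2 + sum_j [1 - L_j (u*y_j - b)]_+ ."""
    return gamma * u ** 2 + np.maximum(0.0, 1.0 - L * (u * y - b)).sum()
```

With u = t/m and the bias rescaled to b/m, the two objectives agree term by term: mrm_1d(t, b, y, L, m) / m equals svm_hinge(t/m, b/m, y, L, gamma=0.0).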
The weighted MRM typically produces results that are similar to those of the vanilla MRM.

Kernel MRM is often used to speed up experiments involving high dimensional data. The implementation of the third, sparse MRM, is left for future work.

6.1. Weighted MRM

This variant specifies a confidence level for each label L_ij. One natural choice of confidence is to use, for a hyperplane w_i and example j, the value L_ij = w_i^T x_j − b_i^0 as the signed confidence value (recall that b_i^0 is the bias for hyperplane w_i in the space X). The sign of this value, sign(L_ij), is the label, and its absolute value |L_ij| a measure of confidence (in practice it is beneficial to trim high confidence values). The following problem is optimized:

Problem 8. (Weighted MRM)

  min over T, b of Σ_i Σ_j |L_ij| [m_i − sign(L_ij)(w_i^T T y_j − b_i)]_+

6.2. Kernelized MRM

This variant uses the kernel trick to allow for non-linear mappings, and to reduce computational time for high dimensional vector spaces. Let φ : X → H_X and ψ : Y → H_Y be two transformations that embed the input samples in high dimensional Hilbert spaces. We implicitly recover a transformation τ : H_Y → H_X, without performing operations in H_X or in H_Y. Since the recovered T is a linear combination of outer products, it can be represented via products of the form φ(w_i) ψ(y_j)^T. During the algorithm run, the transformation T is applied to the datapoints in H_Y, and the result is correlated with w_l, thus obtaining scalars of the form K_φ(w_l, w_i) K_ψ(y_j, y_i). We used this kernelization to speed up the linear case when the dimensionality is much larger than the number of examples. While the complexity of each iteration in the algorithm of Figure 1 (dominated by re-calculations of the objective function) is O(nkd), the complexity of each iteration in the kernelized algorithm is O(kn) if W^T W and Y^T Y are precomputed (this precomputation takes O(dn^2 + dk^2)).

6.3. Sparse MRM

To achieve an intrinsic feature selection, the columns of T might be encouraged to become zero, by binding appropriate cost terms (as suggested by [17]). We wish to eliminate entire columns, and therefore apply these terms to the maximal absolute value of each column:

Problem 9. (Sparse MRM)

  minimize over T, ξ, ψ of Σ_i Σ_j |L_ij| ξ_ij + C Σ_r ψ_r, subject to
  for all c, r: −ψ_r ≤ T_cr ≤ ψ_r
  ξ_ij ≥ m_i − sign(L_ij)(w_i^T T y_j − b_i), ξ_ij ≥ 0

7. Results

We present results for the MRM algorithm for various applications. These include camera to camera mapping, and mimicking the performance of face recognition techniques. Other examples are provided in [26]. Below, 2000 random hyperplanes were used in all examples.

7.1. Matching between cameras

In several applications it is useful to compare the output of multiple cameras. We employ two such datasets. The first was presented in [7] and contains the output of detected cars in two traffic cameras. The other was collected by us.

In the first dataset, the task is to identify a car based on a single view from the other camera, while training is on such matched pairs, but of different cars. The training set contains about 120 cars, and the test set contains 50 different car types. This yields a training set with 2922 samples, each a 1200 dimensional vector describing the three channels of rescaled 20x20 pixel images. Training time was 17 minutes on a standard PC. Results are comparable to [7], which employs a sophisticated image matching technique. See Fig. 2.

Figure 2. Precision/Recall statistics for matching car images between two cameras. We overlay our results on top of the results of Figure 5 in [7]. MRM performs similarly to the best image-matching technique of [7], even though a simple representation is used. Regularized CCA (best parameter found) yields poor results with this poorly discriminative data, and Linear Ridge Regression performed even worse (not shown).

We also received 4 minutes of two synchronized video sequences with partly overlapping fields of view. We used half of the video to apply MRM with random hyperplanes. To test, we randomly picked from the second half of the video one frame from one camera, and 10 frames from the other camera, out of which 9 are random distractors.

Figure 3. Sample frames from two synchronized videos, views (a) and (b). Frames from view (b) are mapped to view (a). The views overlap only partly, and are taken at very different angles. The red crosses mark the most prominent locations in the view of (a): pixels (i.e., columns in the mapping matrix T) whose norm is > 10% of the strongest. These turn out to be at the expected locations within the region viewed by both cameras. We also ran the opposite direction and got analogous marks in (b). Here the algorithm has chosen to focus on the area where the vast majority of motion occurs. Note that the two frames here are not matched in time.

Figure 3 depicts two sample frames. In this experiment, both feature spaces were simply 1200 dimensional vectors obtained by resizing the images to 40x30 pixels. Training MRM with 290 frames has taken 2 minutes. On the frame from the camera we used to map from, we mark with red crosses the output coordinates whose weight (the norm of the relevant column in the mapping matrix) is at least 10% of the maximum (the most important pixel in the spatial domain). Note that the marks are spatially continuous, as desirable, although we did not employ such constraints.

Figure 4(a,b) shows the performance of the various algorithms: Nearest Neighbor Transfer (Section 2), regularized CCA, and our MRM method. MRM considerably outperforms all other methods. We have also tried to analyze the opposite (less plausible) direction: mapping from the wide field of view into the narrow one. Analogous graphs appear in Fig. 4(c,d). As expected, the results are less impressive; however, MRM still outperforms the two other methods.

7.2. Mimicking algorithm performance

One task that is framed naturally in the vector-to-vector learning framework is the task of mimicking an unknown algorithm. We test this application in the context of face recognition, using the LFW database [11]. We wish to recognize face images based on the well-proven LBP feature set [1]. LBP represents each face image as a vector.
For the task of comparing two face images and deciding whether they belong to the same person, we consider the vector v_12 of absolute differences |v_1 − v_2|, where v_1 and v_2 are the LBP encodings of these images. Training a linear SVM on this vector, using 9 splits of the LFW benchmark for training and one for testing, and repeating ten times, yields an average performance of 69.5%, which is slightly higher than applying a simple threshold to the Euclidean distance between v_1 and v_2 (67.2%).

Figure 4. Performance charts for the video to video matching task. (a) Histogram of the ranking of the true frame within the list of distractors, as given by each algorithm. As can be seen, MRM typically ranks the correct frame the highest. (b) ROC curves for same vs. not-same queries in the same settings, i.e., the accuracy with which each method is able to distinguish between matching pairs and non-matching pairs of frames. (c,d) Performance charts for the video to video matching task, now mapping in the opposite direction.

To further improve performance, we employ the output of more advanced algorithms. We employ the raw outputs of the 16 different classifiers employed in [25] to obtain a result of 78.5%. Out of these 16 methods, only one can be directly computed from the vector of absolute LBP differences v_ij (the Euclidean distance of LBP descriptors). Using MRM, we map these vectors of absolute differences to the 16D output obtained by the methods of [25]. Thus, the input to MRM contains 4800 samples (pairs of faces; only 8 training splits, since one is used as the background sample set of [25]), each consisting of a vector of 3717 absolute differences and a vector of 16 classifier outputs. Training takes 26 minutes. We then train SVM on the values of T v_ij and obtain an average performance level of 74.6%, which is not as high as the 16 classifiers together, but is much easier to compute: since both T and the learned SVM are linear, the final classifier is linear as well.
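The final remark, that a linear mapping T followed by a linear SVM collapses into one linear classifier on the original features, can be illustrated directly (stand-in random values only; the shapes loosely follow the 16D experiment and do not reproduce it):

```python
import numpy as np

# Hypothetical shapes: d-dim absolute-difference features mapped by T
# to a 16-dim space, then scored by a linear SVM (w, c). Both stages
# are linear, so they compose into a single linear classifier.
rng = np.random.default_rng(1)
d = 20
T = rng.standard_normal((16, d))    # learned mapping (stand-in values)
w = rng.standard_normal(16)         # linear SVM weights (stand-in)
c = 0.3                             # SVM bias (stand-in)
v = np.abs(rng.standard_normal(d))  # |v_1 - v_2| feature vector

score_two_stage = w @ (T @ v) - c
w_final = T.T @ w                   # collapsed classifier weights
score_collapsed = w_final @ v - c
```

The two scores coincide for every input, so at test time only the single vector w_final needs to be stored and applied.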
CCA and linear ridge regression, applied in a similar setting, both produce results lower than what is obtained with SVM itself (69.5%), for a wide range of regularization parameters.

We have mimicked the performance of the face recognition algorithm based on the raw output of the classifiers (real numbers). If the algorithms are given as a black box, and only the final prediction is available, a confidence level can be estimated empirically, by adding noise to the input data. This is left for future research.

8. Conclusions

The problem of perceptual inference based on pairs of matching vectors is applicable to a wide range of computer vision problems. Despite some effort, the classical regularized CCA algorithm seems to perform better than many of the more recent contributions. This might stem from the addition of discriminative information of non-matching pairs that is either much less relevant than the information obtained from matching pairs, or is obtained using assumptions that are often false. Here we employ margins; however, unlike previous work, this is done by using discriminative information provided in the feature space itself.

Another contribution is that we refrain from maximizing the margins in the transformed space, and instead aim to replicate the input margins. Since the input margins are given in one of the two spaces, and replicated in the other, the problem is asymmetric by nature. We note that many of the applications in vector-to-vector learning are indeed asymmetric. While our method maps one space to the other, we do not perform classical regression analysis. Our optimization framework allows the mapped points to considerably differ from the matching points in the other space. The emphasis is put on similar margins, not similar coordinates, and the optimization computes a new set of bias terms.

Acknowledgments

Lior Wolf is supported by the Israel Science Foundation (grant No. 1214/06), and The Ministry of Science and Technology Russia-Israel Scientific Research Cooperation. This research was carried out in partial fulfillment of the requirements for the Ph.D. degree of Nathan Manor.

References

[1] T. Ahonen, A. Hadid, and M. Pietikäinen. Face description with local binary patterns: Application to face recognition. PAMI, 28(12), Dec.
[2] F. R. Bach and M. I. Jordan. Kernel independent component analysis. JMLR.
[3] K. Barnard and D. Forsyth. Learning the semantics of words and pictures. In CVPR.
[4] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In COLT.
[5] T. Bylander and L. Tate. Using validation sets to avoid overfitting in adaboost. In Artificial Intelligence Research Society Conference.
[6] A. Demiriz, K. P. Bennett, and J. Shawe-Taylor. Linear programming boosting via column generation. Mach. Learn., 46(1-3).
[7] A. Ferencz, E. G. Learned-Miller, and J. Malik. Learning to locate informative features for visual identification. IJCV.
[8] H. Hotelling. Relations between two sets of variates. Biometrika, Vol. 28.
[9] T. Hastie, R. Tibshirani, and J. Friedman. The elements of statistical learning: data mining, inference and prediction. Springer.
[10] A. E. Hoerl and R. Kennard. Ridge regression: biased estimation for nonorthogonal problems. Technometrics.
[11] G. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. University of Massachusetts, Amherst, TR 07-49.
[12] D. Lin and X. Tang. Inter-modality face recognition. ECCV.
[13] H.-H. Nagel. Steps toward a cognitive vision system. AI Mag.
[14] B. Schölkopf, A. Smola, R. C. Williamson, and P. L. Bartlett. New support vector algorithms. Neural Computation, 12.
[15] A. J. Smola, T. T. Frieß, and B. Schölkopf. Semiparametric support vector and linear programming machines. In NIPS.
[16] S. Szedmak, T. De Bie, and D. R. Hardoon. A metamorphosis of canonical correlation analysis into multivariate maximum margin learning. In ESANN.
[17] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society.
[18] K. M. Ting and I. H. Witten. Stacking bagged and dagged models. In ICML.
[19] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector learning for interdependent and structured output spaces. In ICML.
[20] V. Vapnik. Statistical learning theory. Wiley.
[21] E. Vazquez and E. Walter. Multi-output support vector regression. In 13th IFAC Symposium on System Identification.
[22] H. D. Vinod. Canonical ridge and econometrics of joint production. Journal of Econometrics.
[23] K. Q. Weinberger, J. Blitzer, and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. In NIPS.
[24] L. Wolf and Y. Donner. An experimental study of employing visual appearance as a phenotype. CVPR.
[25] L. Wolf, T. Hassner, and Y. Taigman. Descriptor based methods in the wild. In Faces in Real-Life Images Workshop at ECCV.
[26] L. Wolf and N. Manor. Visual recognition using mappings that replicate margins. Extended Technical Report.
[27] L. Wolf and A. Shashua. Learning over sets using kernel principal angles. J. Mach. Learn. Res., 4.


More information

CS 534: Computer Vision Model Fitting

CS 534: Computer Vision Model Fitting CS 534: Computer Vson Model Fttng Sprng 004 Ahmed Elgammal Dept of Computer Scence CS 534 Model Fttng - 1 Outlnes Model fttng s mportant Least-squares fttng Maxmum lkelhood estmaton MAP estmaton Robust

More information

Classifier Selection Based on Data Complexity Measures *

Classifier Selection Based on Data Complexity Measures * Classfer Selecton Based on Data Complexty Measures * Edth Hernández-Reyes, J.A. Carrasco-Ochoa, and J.Fco. Martínez-Trndad Natonal Insttute for Astrophyscs, Optcs and Electroncs, Lus Enrque Erro No.1 Sta.

More information

Collaboratively Regularized Nearest Points for Set Based Recognition

Collaboratively Regularized Nearest Points for Set Based Recognition Academc Center for Computng and Meda Studes, Kyoto Unversty Collaboratvely Regularzed Nearest Ponts for Set Based Recognton Yang Wu, Mchhko Mnoh, Masayuk Mukunok Kyoto Unversty 9/1/013 BMVC 013 @ Brstol,

More information

Discriminative Dictionary Learning with Pairwise Constraints

Discriminative Dictionary Learning with Pairwise Constraints Dscrmnatve Dctonary Learnng wth Parwse Constrants Humn Guo Zhuoln Jang LARRY S. DAVIS UNIVERSITY OF MARYLAND Nov. 6 th, Outlne Introducton/motvaton Dctonary Learnng Dscrmnatve Dctonary Learnng wth Parwse

More information

Parallelism for Nested Loops with Non-uniform and Flow Dependences

Parallelism for Nested Loops with Non-uniform and Flow Dependences Parallelsm for Nested Loops wth Non-unform and Flow Dependences Sam-Jn Jeong Dept. of Informaton & Communcaton Engneerng, Cheonan Unversty, 5, Anseo-dong, Cheonan, Chungnam, 330-80, Korea. seong@cheonan.ac.kr

More information

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points;

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points; Subspace clusterng Clusterng Fundamental to all clusterng technques s the choce of dstance measure between data ponts; D q ( ) ( ) 2 x x = x x, j k = 1 k jk Squared Eucldean dstance Assumpton: All features

More information

A Binarization Algorithm specialized on Document Images and Photos

A Binarization Algorithm specialized on Document Images and Photos A Bnarzaton Algorthm specalzed on Document mages and Photos Ergna Kavalleratou Dept. of nformaton and Communcaton Systems Engneerng Unversty of the Aegean kavalleratou@aegean.gr Abstract n ths paper, a

More information
