Robust and Effective Metric Learning Using Capped Trace Norm


Zhouyuan Huo, Feiping Nie, Heng Huang
Department of Computer Science and Engineering, University of Texas at Arlington, Texas, USA

ABSTRACT

Metric learning aims at automatically learning a metric from pair or triplet based constraints in data, and it can be potentially beneficial whenever the notion of metric between instances plays a nontrivial role. In Mahalanobis distance metric learning, the distance matrix $M$ lies in the symmetric positive semi-definite cone, and in order to avoid overfitting and to learn a better Mahalanobis distance from weakly supervised constraints, low-rank regularization has often been imposed on the matrix $M$ to learn the correlations between features and samples. As approximations of the rank minimization function, the trace norm and the Fantope have been utilized to regularize metric learning objectives and achieve good performance. However, these low-rank regularization models are either not tight enough to approximate rank minimization or time-consuming when tuning an optimal rank. In this paper, we introduce a novel metric learning model using capped trace norm based regularization, which uses a singular value threshold to constrain the metric matrix $M$ to be low-rank explicitly, such that the rank of $M$ is stable when the large singular values vary. The capped trace norm regularization can also be viewed as an adaptive Fantope regularization. We only minimize the singular values that are smaller than the threshold value, and the rank of $M$ is not required to be exactly $k$; thus our method is more stable and more applicable in practice when we do not know the optimal rank of $M$. We derive an efficient optimization algorithm to solve the proposed new model, and a convergence proof for the algorithm is also provided in this paper. We evaluate our method on a variety of challenging benchmarks, such as the LFW and PubFig datasets. Face verification experiments are performed, and the results show that our method consistently outperforms state-of-the-art metric learning algorithms.

Figure 1: An illustration of metric learning on the Outdoor Scene Recognition (OSR) dataset.

CCS Concepts: Information systems → Data mining; Computing methodologies → Machine learning.

Keywords: Metric Learning; Capped Trace Norm; Low-Rank Approximation; Face Verification

1. INTRODUCTION

Metric learning has shown promising results by learning a proper Mahalanobis distance for many data mining tasks. The goal of metric learning is to learn an optimal linear or nonlinear projection for original high-dimensional features from supervised or weakly supervised constraints, and there has been a lot of work in this field [28, 1, 26, 9, 27, 24].
Metric learning has been widely used in applications where the metric between samples plays an important role, such as image classification, face verification and recognition in computer vision [6, 17, 13, 14], learning to rank in information retrieval [16, 15], bioinformatics [25], etc. In image mining and retrieval, many metric learning algorithms learn an optimal Mahalanobis distance from weakly supervised constraints between images. The main constraint paradigms include pair constraints [1], triplet constraints [27], and quadruplet constraints [13]. To avoid overfitting and to learn correlations among samples, many regularizations have been proposed for the projection matrix. Among these, low-rank regularization has proved effective and efficient for learning potential correlations from training data, e.g., trace norm and Fantope regularization [14]. In this paper, we propose a novel metric learning model using the capped trace norm as the low-rank regularization for Mahalanobis distance metric learning.

Different from the trace norm, which minimizes the sum of all singular values, or Fantope regularization, which minimizes the sum of the $k$ smallest singular values, the capped trace norm penalizes only the singular values that are smaller than a threshold adaptively learned in the optimization. As a result, the non-relevant information associated with the smallest singular values can be filtered out, making the metric learning model more robust. Meanwhile, we only need to input an approximate rank value; thus our regularization term is tighter than the trace norm and is more stable and applicable in practical problems. We also derive an efficient optimization algorithm with a rigorous convergence analysis. In the experiments, we impose our novel low-rank regularization on different metric learning formulations and compare with other state-of-the-art metric learning methods. Experimental results show that our method outperforms the related methods on benchmark datasets.

2. RELATED WORK

The goal of metric learning is to learn an adaptive distance, such as the Mahalanobis distance $d_M(x_i, x_j) = \sqrt{(x_i - x_j)^T M (x_i - x_j)}$, for the problem of interest using the information brought by training examples. Most metric learning methods use weakly supervised constraints. There are three main paradigms of constraints: pairwise, triplet, and quadruplet. A pairwise constraint states whether the two objects in a pair are similar or dissimilar (sometimes called positive pairs and negative pairs). Pairwise constraints are represented by the sets $S$ and $D$:

$$S = \{(x_i, x_j) : x_i \text{ and } x_j \text{ are similar}\}, \qquad D = \{(x_i, x_j) : x_i \text{ and } x_j \text{ are dissimilar}\}. \tag{1}$$

Information-Theoretic Metric Learning (ITML) is one of many methods using pairwise constraints on training examples [28, 1, 11, 5], and it is formulated as follows:

$$\begin{aligned} \min_{M \in S^d_+} \ & D_{ld}(M, M_0) + \gamma \sum_{i,j} \xi_{ij} \\ \text{s.t.} \ & d^2_M(x_i, x_j) \le u + \xi_{ij}, \ \forall (x_i, x_j) \in S, \\ & d^2_M(x_i, x_j) \ge l - \xi_{ij}, \ \forall (x_i, x_j) \in D, \end{aligned} \tag{2}$$

where $u$ and $l$ are the upper bound for similar samples and the lower bound for dissimilar samples respectively, $\xi_{ij}$ is a safety-margin slack for each pair, and $D_{ld}(M, M_0) = \mathrm{Tr}(M M_0^{-1}) - \log\det(M M_0^{-1}) - d$ is the LogDet divergence, where $d$ is the dimension of the input space and $M_0$ is a positive definite matrix.

Triplet constraints are also widely used in metric learning, denoted by the set $R$:

$$R = \{(x_i, x_j, x_k) : x_i \text{ is more similar to } x_j \text{ than to } x_k\}. \tag{3}$$

Large Margin Nearest Neighbor (LMNN) [27] is one of the most widely used metric learning methods based on triplet constraints over training examples. The LMNN model solves:

$$\begin{aligned} \min_{M \in S^d_+} \ & (1-\mu) \sum_{(i,j) \in S} d^2_M(x_i, x_j) + \mu \sum_{(i,j,k) \in R} \xi_{ijk} \\ \text{s.t.} \ & d^2_M(x_i, x_k) - d^2_M(x_i, x_j) \ge 1 - \xi_{ijk}, \ \forall (x_i, x_j, x_k) \in R, \end{aligned} \tag{4}$$

where $\mu \in [0, 1]$ controls the relative weight between the two terms, and

$$\begin{aligned} S &= \{(x_i, x_j) : y_i = y_j \text{ and } x_j \text{ belongs to the } k\text{-neighborhood of } x_i\}, \\ R &= \{(x_i, x_j, x_k) : (x_i, x_j) \in S, \ y_i \ne y_k\}. \end{aligned} \tag{5}$$

LMNN has proved very effective for learning a Mahalanobis distance in practice, and it has been extended in many directions for different applications [20, 10, 9]. However, LMNN is sometimes prone to overfitting, and it is also sensitive to the Euclidean distance used to compute the neighbors of each sample at the beginning.

In [13], a novel quadruplet constraint was proposed to model similarity derived from complex semantic label relations, for example the degree of presence of a smile, from least smiling to most smiling. The quadruplet constraint scheme is:

$$A = \{(x_i, x_j, x_k, x_l) : d^2_M(x_k, x_l) - d^2_M(x_i, x_j) \ge \delta\}, \tag{6}$$

where $\delta$ is a soft margin. Quadruplets are able to encompass pair and triplet constraints. A pairwise constraint can be represented as $(x_i, x_i, x_i, x_j)$ with $\delta = l$, so that $d^2_M(x_i, x_j) \ge l$ and $(x_i, x_j)$ form a dissimilar pair; or as $(x_i, x_j, x_i, x_i)$ with $\delta = -u$, so that $d^2_M(x_i, x_j) \le u$ and $(x_i, x_j)$ form a similar pair. Similarly, a triplet constraint can be represented as $(x_i, x_j, x_i, x_k)$.
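To make the constraint notation concrete, here is a minimal Python/numpy sketch (the function names are ours, not from the paper) of the squared Mahalanobis distance and of the margin violation of one quadruplet constraint from the set $A$:

```python
import numpy as np

def mahalanobis_sq(M, xi, xj):
    """Squared Mahalanobis distance d_M^2(x_i, x_j) = (x_i - x_j)^T M (x_i - x_j)."""
    v = xi - xj
    return float(v @ M @ v)

def quadruplet_violation(M, xi, xj, xk, xl, delta=1.0):
    """Margin violation of one quadruplet constraint: zero if and only if
    d_M^2(x_k, x_l) - d_M^2(x_i, x_j) >= delta."""
    return max(0.0, delta + mahalanobis_sq(M, xi, xj)
                          - mahalanobis_sq(M, xk, xl))
```

Under this encoding, a pair or triplet constraint is simply a quadruplet with repeated points, as described above.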
To solve the problem of overfitting, many regularizations over the matrix $M$ have been proposed. In [22], a squared Frobenius norm is imposed on $M$, forming an SVM-like structure for metric learning:

$$\begin{aligned} \min_{W} \ & \|M\|^2_F + C \sum_{i,j,k} \xi_{ijk} \\ \text{s.t.} \ & d^2_M(x_i, x_k) - d^2_M(x_i, x_j) \ge 1 - \xi_{ijk}, \ \forall (x_i, x_j, x_k) \in R, \end{aligned} \tag{7}$$

where $M = A^T W A$, the matrix $A$ is fixed and known, and the diagonal matrix $W$ is learned.

There are also works imposing a low-rank structure on $M$. The most direct way is to let $M = L^T L$, where $M \in \mathbb{R}^{d \times d}$, $L \in \mathbb{R}^{k \times d}$ and $k$ is smaller than $d$, so that $M$ is a matrix of rank at most $k$. In [16], a robust structural metric learning method was proposed that uses the nuclear norm as a convex approximation of low-rank regularization; it can be expressed as a convex optimization problem:

$$\begin{aligned} \min_{M \in S^d_+} \ & \mathrm{Tr}(M) + \frac{C}{n} \sum_{q \in X} \xi_q \\ \text{s.t.} \ & \forall q \in X, \ \forall y \in Y: \ \langle M, \phi(q, y_q) - \phi(q, y) \rangle_F \ge \Delta(y_q, y) - \xi_q, \end{aligned} \tag{8}$$

where $X \subset \mathbb{R}^d$ is the training set of $n$ points, $Y$ is the set of all permutations over $X$, $C > 0$ is a slack trade-off parameter, $\phi$ is a feature encoding of an input-output pair, and $\Delta(y_q, y)$ is the designed margin. In [14], a tighter rank minimization approximation, Fantope regularization, was proposed and imposed on $M$; it holds explicit control over the rank of $M$. The regularizer is $\mathrm{Reg}(M) = \sum_{i=1}^{k} \sigma_i(M)$, where the $\sigma_i(M)$ are the $k$ smallest singular values of $M$.
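For reference, the two spectral regularizers discussed above can each be computed in a few lines of numpy; this is an illustrative sketch, not code from any of the cited papers:

```python
import numpy as np

def trace_norm(M):
    """Trace (nuclear) norm: the sum of all singular values of M."""
    return float(np.linalg.svd(M, compute_uv=False).sum())

def fantope_reg(M, k):
    """Fantope regularizer of [14]: the sum of the k smallest singular values."""
    sigma = np.linalg.svd(M, compute_uv=False)  # returned in descending order
    return float(np.sort(sigma)[:k].sum())
```

Both depend on the full spectrum of $M$, but the Fantope regularizer additionally requires fixing the target rank $k$ in advance, which is exactly the difficulty the next section addresses.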

3. METRIC LEARNING USING CAPPED TRACE NORM

In this paper, we introduce a novel low-rank regularization based metric learning method, so that we can avoid the problem of overfitting and learn an effective structure from limited training data. As mentioned in the last section, there are already many different types of regularization over $M \in S^d_+$ in the metric learning literature. The Frobenius norm regularization proposed in [22] can avoid overfitting; however, the definition of $M$ in that work restricts the generation of $M$, and it cannot learn correlations between features. In weakly supervised metric learning, the algorithm does not have access to the labels of the training data and is only provided with side information in the form of pair or triplet constraints. In this case, low-rank regularization is an effective way to learn correlations between data.

The trace norm (also called the nuclear norm) has been used as the convex relaxation of rank minimization; however, there is still a gap between the trace norm and rank minimization. Because the trace norm is the sum of all singular values, if one of the large singular values changes, the trace norm changes correspondingly, while the rank of the original matrix stays constant. Setting $M = L^T L$ or imposing Fantope regularization are both explicit ways to control the rank of $M$, and the performance can be good if a well-fitted rank is found. However, in practice we do not know the rank of a matrix accurately, and this parameter has to be tuned very carefully, because a small deviation of the parameter $k$ from its optimal value may have a large influence on the final performance. Selecting the best $k$ from a large range is a tedious process.

In this paper, we use the capped trace norm as the low-rank regularization [29, 30]. It can be represented as

$$\mathrm{Reg}(M) = \sum_{i} \min\{\sigma_i(M), \varepsilon\},$$

where $\varepsilon$ is a threshold value. In this regularization, we only minimize the singular values that are smaller than $\varepsilon$ and ignore the other, large singular values. Thus, when the large singular values vary, our regularization behaves like a rank regularization and stays constant as well. In practical problems it is difficult to estimate the rank of the matrix $M$, but the value of $\varepsilon$ in the capped trace norm can be decided easily [4, 8]. Because quadruplet constraints can encompass pair and triplet constraints, in this paper we use quadruplet constraints to form a new robust metric learning model:

$$\min_{M \in S^d_+} \sum_{q \in A} \xi_q\big(M, \, x_{ij} x_{ij}^T - x_{kl} x_{kl}^T\big) + \frac{\gamma}{2} \sum_i \min\{\sigma_i(M), \varepsilon\}, \tag{9}$$

where $x_{ij} = x_i - x_j$, $x_{kl} = x_k - x_l$, and $\xi_q$ denotes the hinge loss of the quadruplet constraint $q = (x_i, x_j, x_k, x_l) \in A$, i.e., $\xi_q = [\delta + \langle M, x_{ij} x_{ij}^T - x_{kl} x_{kl}^T \rangle]_+$. We will also use this model in the optimization and convergence analysis sections, but the conclusions are the same when pair or triplet constraints are used.

3.1 Optimization Algorithm

Objective function (9) is non-smooth and non-convex, and it is hard to optimize. In this section, we first use a re-weighted method to transform the original objective function into a convex subproblem, and then apply the proximal gradient method to solve this new subproblem. In the next section, we prove that the objective function converges: the values of the original objective function (9) are non-increasing after each step, and a local optimum is obtained.

Following the re-weighted algorithm described in [19, 18], let $M = U \Sigma V^T$, with the singular values $\sigma_i$ in ascending order and $k$ of them smaller than $\varepsilon$. We define

$$D = \frac{1}{2} \sum_{i=1}^{k} \sigma_i^{-1} u_i u_i^T. \tag{10}$$

The original problem (9) can then be transformed to:

$$\min_{M \in S^d_+} \sum_{q \in A} \xi_q\big(M, \, x_{ij} x_{ij}^T - x_{kl} x_{kl}^T\big) + \frac{\gamma}{2} \mathrm{Tr}(M^T D M). \tag{11}$$

When we fix $D$, this problem is convex, and we use the proximal gradient method to optimize it iteratively. In each iteration, $M$ is updated by performing a subgradient descent step; the subgradient of problem (11) with respect to $M$ is

$$\partial M = \sum_{q \in A^+} \big( x_{ij} x_{ij}^T - x_{kl} x_{kl}^T \big) + \gamma D M, \tag{12}$$

where $A^+$ denotes the subset of constraints in $A$ whose loss in function (11) is larger than 0. After each step, $M$ is projected onto the positive semidefinite cone:

$$M = \Pi_{S^d_+}\big(M - \eta \, \partial M\big). \tag{13}$$

The optimization procedure of our method is summarized in Algorithm 1 below.

Algorithm 1: Algorithm to solve problem (9).
  Input: $A$, $X \in \mathbb{R}^{d \times n}$
  Output: $M \in S^d_+$
  while not converged do
    Update $D = \frac{1}{2} \sum_{i=1}^{k} \sigma_i^{-1} u_i u_i^T$
    Update $\partial M = \sum_{q \in A^+} (x_{ij} x_{ij}^T - x_{kl} x_{kl}^T) + \gamma D M$
    Update $M = \Pi_{S^d_+}(M - \eta \, \partial M)$
  end while
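The following is a minimal numpy sketch of one iteration of Algorithm 1, under the assumption that $\xi_q$ is the hinge loss with margin delta as written above; the function name, the fixed step size eta, and the numerical floor 1e-12 are our additions:

```python
import numpy as np

def capped_trace_step(M, quads, gamma, eps, eta, delta=1.0):
    """One iteration of Algorithm 1 (a minimal sketch).
    quads is a list of quadruplets (x_i, x_j, x_k, x_l)."""
    # Re-weighting matrix D from the singular values below the cap eps (Eq. 10);
    # for symmetric PSD M, eigenvalues coincide with singular values.
    sigma, U = np.linalg.eigh(M)                 # ascending order
    inv = np.where(sigma < eps, 1.0 / np.maximum(sigma, 1e-12), 0.0)
    D = 0.5 * (U * inv) @ U.T

    # Subgradient of the active hinge terms plus the re-weighted regularizer (Eq. 12).
    G = gamma * D @ M
    for xi, xj, xk, xl in quads:
        Cq = np.outer(xi - xj, xi - xj) - np.outer(xk - xl, xk - xl)
        if delta + np.sum(M * Cq) > 0:           # constraint q belongs to A^+
            G += Cq

    # Gradient step followed by projection onto the PSD cone (Eq. 13).
    M = M - eta * G
    w, V = np.linalg.eigh((M + M.T) / 2)
    return (V * np.maximum(w, 0)) @ V.T          # clamp negative eigenvalues
```

Iterating this step until the objective stops decreasing yields the monotone behavior proved in the next subsection.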
3.2 Convergence Analysis

Using the algorithm above, we can solve the original non-smooth and non-convex objective function (9). In this section, we prove the convergence of the optimization algorithm and show that a local solution is obtained in the end. In the capped trace norm optimization, the number of thresholded singular values varies across iterations, which makes the convergence proof difficult.

THEOREM 1. Under Algorithm 1, the objective function (9) converges; that is, the values of objective function (9) are monotonically non-increasing.

In order to prove Theorem 1, we first need the following lemmas.

LEMMA 1. According to [23], any two Hermitian matrices $A, B \in \mathbb{R}^{d \times d}$ satisfy the inequality

$$\sum_{i=1}^{d} \sigma_i(A)\,\sigma_{d-i+1}(B) \;\le\; \mathrm{Tr}(A^T B) \;\le\; \sum_{i=1}^{d} \sigma_i(A)\,\sigma_i(B), \tag{14}$$

where $\sigma_i(A)$ and $\sigma_i(B)$ are the singular values sorted in the same order.

LEMMA 2. Let $M = U \Sigma V^T$, where the singular values $\sigma_i$ of $M$ are in ascending order and $k$ of them are less than $\varepsilon$. Let $\hat{M} = \hat{U} \hat{\Sigma} \hat{V}^T$, where the singular values $\hat{\sigma}_i$ of $\hat{M}$ are in ascending order and $\hat{k}$ of them are less than $\varepsilon$; $\hat{M}$ denotes the updated variable after $M$, and $\varepsilon$ is a constant value. Then, with $D$ defined as in (10), it holds that

$$\sum_i \min\{\hat{\sigma}_i(\hat{M}), \varepsilon\} - \mathrm{Tr}(\hat{M}^T D \hat{M}) \;\le\; \sum_i \min\{\sigma_i(M), \varepsilon\} - \mathrm{Tr}(M^T D M). \tag{15}$$

Proof: It is obvious that

$$\frac{\sigma_i}{2} - \hat{\sigma}_i + \frac{\hat{\sigma}_i^2}{2\sigma_i} \;=\; \frac{1}{2\sigma_i}\,(\sigma_i - \hat{\sigma}_i)^2 \;\ge\; 0, \tag{16}$$

hence the following inequality holds:

$$\hat{\sigma}_i - \frac{1}{2}\frac{\hat{\sigma}_i^2}{\sigma_i} \;\le\; \frac{1}{2}\sigma_i. \tag{17}$$

Since the $\hat{k}$ singular values of $\hat{M}$ that are less than $\varepsilon$ are, in ascending order, exactly the first $\hat{k}$ smallest singular values $\hat{\sigma}_i$, it holds no matter whether $\hat{k} \ge k$ or $\hat{k} < k$ that:

$$\sum_{i=1}^{\hat{k}} \hat{\sigma}_i - \hat{k}\varepsilon \;\le\; \sum_{i=1}^{k} \hat{\sigma}_i - k\varepsilon. \tag{18}$$

Combining the two inequalities (17) and (18), we get:

$$\sum_{i=1}^{\hat{k}} \hat{\sigma}_i - \frac{1}{2}\sum_{i=1}^{k}\frac{\hat{\sigma}_i^2}{\sigma_i} - \hat{k}\varepsilon \;\le\; \frac{1}{2}\sum_{i=1}^{k}\sigma_i - k\varepsilon. \tag{19}$$

Then, adding $d\varepsilon$ to both sides of inequality (19), where $d$ is the dimension of the matrix $M$, we obtain:

$$\sum_{i=1}^{\hat{k}} \hat{\sigma}_i + (d - \hat{k})\varepsilon - \frac{1}{2}\sum_{i=1}^{k}\frac{\hat{\sigma}_i^2}{\sigma_i} \;\le\; \sum_{i=1}^{k}\sigma_i + (d - k)\varepsilon - \frac{1}{2}\sum_{i=1}^{k}\sigma_i. \tag{20}$$

As per the definition $D = \frac{1}{2}\sum_{i=1}^{k}\sigma_i^{-1} u_i u_i^T$, writing $\Lambda = \mathrm{diag}(\sigma_1^{-1}, \ldots, \sigma_k^{-1}, 0, \ldots, 0)$, we know that:

$$\mathrm{Tr}(M^T D M) \;=\; \frac{1}{2}\mathrm{Tr}\Big(\sum_{i=1}^{k}\sigma_i^{-1} u_i u_i^T M M^T\Big) \;=\; \frac{1}{2}\mathrm{Tr}(U\Lambda U^T U\Sigma^2 U^T) \;=\; \frac{1}{2}\mathrm{Tr}(U\Lambda\Sigma^2 U^T) \;=\; \frac{1}{2}\sum_{i=1}^{k}\sigma_i. \tag{21}$$

Via Lemma 1, we know that:

$$\mathrm{Tr}(\hat{M}^T D \hat{M}) \;=\; \frac{1}{2}\mathrm{Tr}(U\Lambda U^T \hat{U}\hat{\Sigma}^2\hat{U}^T) \;\ge\; \frac{1}{2}\sum_{i=1}^{k}\frac{\hat{\sigma}_i^2}{\sigma_i}. \tag{22}$$

Combining Eq. (21) with inequalities (20) and (22), we have:

$$\sum_{i=1}^{\hat{k}} \hat{\sigma}_i + (d - \hat{k})\varepsilon - \mathrm{Tr}(\hat{M}^T D \hat{M}) \;\le\; \sum_{i=1}^{k}\sigma_i + (d - k)\varepsilon - \mathrm{Tr}(M^T D M). \tag{23}$$

Finally, since $\sum_i \min\{\hat{\sigma}_i, \varepsilon\} = \sum_{i=1}^{\hat{k}}\hat{\sigma}_i + (d - \hat{k})\varepsilon$ and $\sum_i \min\{\sigma_i, \varepsilon\} = \sum_{i=1}^{k}\sigma_i + (d - k)\varepsilon$, the desired inequality holds:

$$\sum_i \min\{\hat{\sigma}_i(\hat{M}), \varepsilon\} - \mathrm{Tr}(\hat{M}^T D \hat{M}) \;\le\; \sum_i \min\{\sigma_i(M), \varepsilon\} - \mathrm{Tr}(M^T D M). \tag{24}$$

LEMMA 3. Function (11) is convex with domain $S^d_+$, and its gradient is Lipschitz continuous with a large enough constant $L$. Moreover, the positive semidefinite cone is a closed convex cone. As per [2], when we use the proximal gradient method and select a proper learning rate, the value of this function decreases in each iteration.

Now we are able to prove Theorem 1 by using the Lemmas above.

Proof: Via Lemma 3, after we use the proximal gradient descent method to minimize function (11) in Algorithm 1, it is guaranteed that:

$$\sum_{q \in A} \xi_q\big(\hat{M}, x_{ij} x_{ij}^T - x_{kl} x_{kl}^T\big) + \frac{\gamma}{2}\mathrm{Tr}(\hat{M}^T D \hat{M}) \;\le\; \sum_{q \in A} \xi_q\big(M, x_{ij} x_{ij}^T - x_{kl} x_{kl}^T\big) + \frac{\gamma}{2}\mathrm{Tr}(M^T D M). \tag{25}$$

Via Lemma 2, we can easily see that:

$$\frac{\gamma}{2}\Big(\sum_i \min\{\hat{\sigma}_i(\hat{M}), \varepsilon\} - \mathrm{Tr}(\hat{M}^T D \hat{M})\Big) \;\le\; \frac{\gamma}{2}\Big(\sum_i \min\{\sigma_i(M), \varepsilon\} - \mathrm{Tr}(M^T D M)\Big). \tag{26}$$

Finally, we combine inequalities (25) and (26) to achieve:

$$\sum_{q \in A} \xi_q\big(\hat{M}, x_{ij} x_{ij}^T - x_{kl} x_{kl}^T\big) + \frac{\gamma}{2}\sum_i \min\{\hat{\sigma}_i(\hat{M}), \varepsilon\} \;\le\; \sum_{q \in A} \xi_q\big(M, x_{ij} x_{ij}^T - x_{kl} x_{kl}^T\big) + \frac{\gamma}{2}\sum_i \min\{\sigma_i(M), \varepsilon\}. \tag{27}$$

So the value of the proposed objective function does not increase under our optimization algorithm, which proves Theorem 1: our optimization algorithm is monotonically non-increasing. Because $M$ is a positive semidefinite matrix, the objective function (9) is at least zero, so the objective is also lower bounded. Therefore we can conclude that our optimization algorithm converges, and a local optimum is obtained in the end.
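As a quick illustration of the trace inequality (14) that drives the proof (for symmetric PSD matrices, singular values coincide with eigenvalues), the bound can be checked numerically; this sketch is for intuition only and is not part of the proof:

```python
import numpy as np

# Numerical sanity check of Lemma 1 (Eq. 14) on random symmetric PSD matrices.
rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d)); A = A @ A.T
B = rng.standard_normal((d, d)); B = B @ B.T
sa = np.sort(np.linalg.eigvalsh(A))            # ascending
sb = np.sort(np.linalg.eigvalsh(B))
lower = np.sum(sa * sb[::-1])                  # opposite-order pairing
upper = np.sum(sa * sb)                        # same-order pairing
assert lower <= np.trace(A.T @ B) + 1e-9 <= upper + 2e-9
```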

4. EXPERIMENTAL RESULTS

We evaluate our proposed model on different datasets, including a synthetic dataset, widely used face recognition datasets, and other image data mining datasets. There are two main goals in our experiments: first, to show that our model outperforms state-of-the-art metric learning methods; second, to show that our proposed capped trace norm is more stable when applied to practical problems than Fantope regularization.

4.1 Synthetic Data

In this experiment, we evaluate our proposed metric learning on a synthetic dataset, where each constraint is a quadruplet.

Dataset: We follow the experimental setting of [14] to generate a synthetic dataset. We define a target symmetric positive semidefinite matrix $T \in S^d_+$ of the block form

$$T = \begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix},$$

where $A \in S^e_+$ is a random symmetric positive definite matrix with $\mathrm{rank}(A) = e$ and $e < d$. The matrix $A$ is the product of one random symmetric matrix and its transpose, so $\mathrm{rank}(T) = \mathrm{rank}(A) = e$. $X \in \mathbb{R}^{d \times n}$ is a feature matrix whose elements are generated from a Gaussian distribution $\mathcal{N}(0, 1)$, and each sample is a feature vector $x_i \in \mathbb{R}^d$. The Mahalanobis distance between two feature vectors $x_i$ and $x_j$ is given by $d^2_T(x_i, x_j) = (x_i - x_j)^T T (x_i - x_j)$. To build a training constraint set $A$, we randomly sample pairs of distances as quadruplets and obtain the ground truth using $d^2_T$, so that $\forall (x_i, x_j, x_k, x_l) \in A$, $d^2_T(x_i, x_j) < d^2_T(x_k, x_l)$, denoting that the distance of the sample pair $(i, j)$ is smaller than that of the sample pair $(k, l)$. The training set $A$ is used to learn the Mahalanobis metric matrix $M$. A validation set $V$ and a test set $T$ are generated in the same way as $A$, and they are used to tune parameters and for test evaluation respectively.

Setting: In the experiment, we set $e = 10$, $d = 100$, $n = 10^6$, and the constraint sets $A$, $V$, and $T$ have equal size. After we learn a metric $M$, we evaluate it on the test set $T$ and measure the accuracy of satisfying the constraints. In this experiment, we compare with three other methods: metric learning with no regularization, metric learning with trace norm regularization, and metric learning with Fantope regularization. The parameter $\gamma$ is tuned over the range $\{10^{-2}, 10^{-1}, 1, 10, 10^{2}\}$, and the rank of the Mahalanobis metric $M$ is tuned over $[5, 20]$.

Compared Methods: Because we use quadruplet constraints in this experiment, the general model used to solve this problem is:

$$\min_{M \in S^d_+} \sum_{q \in A} \xi_q\big(M, \, x_{ij} x_{ij}^T - x_{kl} x_{kl}^T\big) + \gamma\,\mathrm{Reg}(M). \tag{28}$$

Metric learning with no regularization: we set $\gamma = 0$ in problem (28). Metric learning with trace norm regularization: $\gamma > 0$ and $\mathrm{Reg}(M) = \sum_i \sigma_i(M)$, where the $\sigma_i(M)$ are the singular values of $M$. Metric learning with Fantope regularization: $\gamma > 0$ and $\mathrm{Reg}(M) = \sum_{i=1}^{k} \sigma_i(M)$, where the $\sigma_i(M)$ are the $k$ smallest singular values of $M$. Metric learning with capped trace norm regularization (ours): $\gamma > 0$ and $\mathrm{Reg}(M) = \sum_i \min\{\sigma_i(M), \varepsilon\}$, where the $\sigma_i(M)$ are the singular values of $M$.

Evaluation Metrics: After learning a metric matrix $M$ from the training constraints $A$, we evaluate it by computing the number of dominant singular values, namely $\mathrm{rank}(M)$. We then test it on the testing constraints $T$ and compute the accuracy of satisfied constraints.

Results: Table 1 shows the results of this experiment. As we can see, metric learning with Fantope regularization and with capped trace norm regularization performs much better than the other two methods. Our method obtains results comparable to metric learning with Fantope regularization, and the Fantope result is obtained only after tuning its parameter $k$ very carefully. Figure 2 shows the accuracies of metric learning with Fantope regularization and of our method as the selected rank changes. Our method outperforms Fantope regularization for every randomly selected rank except 10, and it behaves more stably when there is not enough time to tune the rank of the Mahalanobis metric $M$. In practice, it is common not to know the exact rank of a matrix, so our method is more applicable to practical problems.

Method       | Accuracy | rank(M)
ML           | 85.62%   | 53
ML+Trace     | 88.44%   | 41
ML+Fantope   | 95.50%   | 10
ML+Capped    | 95.43%   |

Table 1: Synthetic experiment results.

Figure 2: Accuracy vs. the number of rank k.
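A small-scale sketch of this synthetic setup (with $n$ shrunk for illustration; all variable names are ours) could look like:

```python
import numpy as np

rng = np.random.default_rng(0)
e, d, n = 10, 100, 10_000
R = rng.standard_normal((e, e))
A = R @ R.T                                  # random PSD block of rank e
T = np.zeros((d, d)); T[:e, :e] = A          # target metric, rank(T) = e
X = rng.standard_normal((d, n))              # feature matrix

def d2(T, xi, xj):
    v = xi - xj
    return float(v @ T @ v)

def sample_quadruplet():
    """Draw (x_i, x_j, x_k, x_l) ordered so that pair (i, j) is closer
    than pair (k, l) under the ground-truth metric T."""
    i, j, k, l = rng.choice(n, size=4, replace=False)
    a, b = d2(T, X[:, i], X[:, j]), d2(T, X[:, k], X[:, l])
    return (i, j, k, l) if a < b else (k, l, i, j)

train = [sample_quadruplet() for _ in range(1000)]
```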
4.2 Face Verification

In this section, we evaluate our method, pairwise constraints with capped trace norm regularization, on two challenging face recognition datasets: Labeled Faces in the Wild (LFW) [7] and the Public Figures Face Database (PubFig) [12]. In our experiments, we focus on the face verification task, namely deciding whether two face images are from the same person, and the results show that our method outperforms state-of-the-art metric learning algorithms.

Labeled Faces in the Wild

Dataset: The Labeled Faces in the Wild dataset is considered the current state-of-the-art face recognition benchmark. It contains 13,233 unconstrained face images of 5,749 individuals, and 1,680 of the pictured people appear in two or more distinct photos in this dataset. We use two different feature representations in this experiment: the LFW SIFT feature dataset and the LFW attribute feature dataset. We use the face representation proposed by [5], which extracts SIFT descriptors [6] at 9 automatically detected facial landmarks over three scales, so each image is a feature vector of size 3,456. To make this dataset tractable for distance metric learning algorithms and to save time, we perform principal component analysis to reduce the dimension, selecting the 100 largest principal components in this experiment [11]. We also use the high-level describable visual attributes (gender, race, age, hair color) of [12]. These face image features are insensitive to pose, illumination, expression and other imaging conditions, and can avoid some obvious mistakes, for example men being confused for women or a child for the middle-aged. Each image is represented by a vector $x \in \mathbb{R}^d$, where $d$ is the number of attributes describing the image and each entry of $x$ is the score of presence of a specific attribute.

Setting: To compare with other state-of-the-art methods on face verification, we follow the experimental setting of [11]. The data are organized in 10 folds, and each fold consists of 300 similar constraints (the two faces in a pair are from the same person) and 300 dissimilar constraints (the two faces in a pair are from different persons). The average over the results of the 10 folds is used as the final evaluation metric. In the experiment, we can only access the pairwise constraints given by similar or dissimilar pairs; labels or additional training data are not allowed.
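The PCA step mentioned above, projecting the 3,456-dimensional SIFT vectors onto the 100 largest principal components, can be sketched as follows (an assumption-level sketch; the paper's exact preprocessing pipeline may differ):

```python
import numpy as np

def pca_reduce(X, n_components=100):
    """Project the columns of X (features x samples) onto the top
    principal components."""
    Xc = X - X.mean(axis=1, keepdims=True)        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :n_components].T @ Xc             # (n_components, n_samples)
```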

In the experiment, we tune the parameter $\mu$ over the range $\{10^{-2}, 10^{-1}, 1, 10, 10^{2}\}$, and the rank parameter $k$ of the matrix $M$ over $\{30, 35, 40, 45, 50, 55, 60, 65, 70\}$ on the LFW SIFT dataset and over $\{10, 15, 20, 25, 30, 35, 40, 45, 50\}$ on the LFW attribute dataset. For the other methods, we follow their already-tuned parameter settings from [11]. To demonstrate the stability of our method, we split the LFW SIFT feature dataset and the LFW attribute feature dataset into 5 folds, and evaluate our capped trace norm model and metric learning with Fantope regularization on these splits. We run the experiments 5 times using different training/testing splits and compute the mean and standard deviation for validation.

Compared Methods:

IDENTITY: We compute the Euclidean distance directly as a baseline.

MAHALANOBIS: The traditional Mahalanobis distance between the images in a pair, where the metric matrix is the inverse of the covariance matrix.

KISSME: A metric learning method based on a statistical inference perspective. It learns a distance metric from equivalence constraints and can be used on large-scale datasets [11].

ITML: The metric learning method proposed in [1]. It uses the LogDet divergence as regularization, so no explicit positive semi-definite projection is needed.

LDML: [5] offers a metric learning method that uses logistic discriminant to learn a metric from a set of labeled image pairs.

Given the setting of this experiment, for the methods Fantope and Cap, the general model for this task is:

$$\min_{M \in S^d_+} \sum_{(i,j) \in S} \big[\langle M, x_{ij} x_{ij}^T \rangle - u\big]_+ + \sum_{(i,j) \in D} \big[l - \langle M, x_{ij} x_{ij}^T \rangle\big]_+ + \frac{\gamma}{2}\,\mathrm{Reg}(M), \tag{29}$$

where $D$ and $S$ denote the dissimilar pair set and the similar pair set respectively, $u$ is the upper bound on the Mahalanobis distance between two samples in a similar pair, and $l$ is the lower bound on the distance between two samples in a dissimilar pair.

MLFantope: Metric learning with Fantope regularization [14]. As per its definition, $\mathrm{Reg}(M) = \sum_{i=1}^{k} \sigma_i(M)$.

MLCap: Pairwise-constraint metric learning with capped trace norm regularization, $\mathrm{Reg}(M) = \sum_i \min\{\sigma_i(M), \varepsilon\}$.

Figure 3: Sample results on the LFW dataset. (a) Test results on the LFW SIFT feature dataset; (b) test results on the LFW attribute feature dataset. In each panel, the first two rows are face pairs from the same person correctly verified by our method but not by metric learning with Fantope regularization, and the last two rows are face pairs from different persons correctly verified by our method but not by metric learning with Fantope regularization.

Evaluation Metrics: To measure face verification accuracy for all compared methods, we report a Receiver Operating Characteristic (ROC) curve. To compare the performance of each method, we compute the Equal Error Rate (EER) of the respective method and use 1 − EER as the evaluation criterion; the method with the lowest EER, i.e., the highest 1 − EER, is the most accurate one.
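A simple way to compute the EER from verification scores is to sweep all score thresholds and locate the point where the false positive and false negative rates cross; this is an illustrative sketch (the function name is ours):

```python
import numpy as np

def equal_error_rate(scores, labels):
    """EER of a verification scorer: the operating point where the false
    positive rate equals the false negative rate. Higher score means
    'more similar'; labels are 1 for same-person pairs, 0 otherwise."""
    order = np.argsort(scores)[::-1]           # descending scores
    y = np.asarray(labels)[order]
    P, N = y.sum(), (1 - y).sum()
    tp = np.cumsum(y)                          # true positives at each cut
    fp = np.cumsum(1 - y)                      # false positives at each cut
    fpr = fp / N
    fnr = 1 - tp / P
    i = np.argmin(np.abs(fpr - fnr))           # threshold where FPR ~ FNR
    return float((fpr[i] + fnr[i]) / 2)
```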
Results: We plot the ROC curve for each method in Figure 4, and 1 − EER values are also computed to evaluate their performance. Figure 4(a) shows the results on the LFW SIFT feature dataset. The Mahalanobis distance performs quite well compared with the Euclidean distance, increasing the performance from 67.5% to 74.8%. The KISSME method is the state-of-the-art method on this feature type, reaching a 1 − EER of 80.06% and outperforming the widely used metric learning methods ITML and LDML. The pairwise-constraint low-rank methods, Fantope and Cap, clearly perform better than KISSME, reaching 81.4% and 81.7% respectively. Our method increases the performance over the Euclidean distance by 14.2% and over the KISSME method by 0.9%. Figure 4(b) presents the performance of each method on the LFW attribute feature dataset. Our method outperforms KISSME, and also works better than metric learning with Fantope regularization by 0.4%. Figure 3 shows sample results on this dataset.

We run 5-fold cross-validation experiments on the Fantope method and our method. In Figure 5, we plot verification accuracy with respect to rank selection. When the rank parameter $k$ is selected only roughly, our method clearly obtains better results than metric learning with Fantope regularization. On large-scale datasets it is not practical to tune parameters as carefully as on small datasets, and sometimes there is no exact rank at all. Our method is much more applicable to this kind of situation and performs better when we input only an approximation of the matrix rank.

Figure 4: Face verification results on the LFW dataset: (a) ROC curves of the compared methods (CAP, FANTOPE, KISSME, ITML, LDML, IDENTITY, MAHAL) on the LFW SIFT feature dataset; (b) ROC curves on the LFW attribute feature dataset.

Figure 5: Verification accuracy with respect to rank selection on (a) the LFW SIFT feature dataset and (b) the LFW attribute feature dataset.

Public Figures Face Database

Dataset: The PubFig database is a large, real-world face dataset consisting of 58,797 images of 200 people collected from the internet. Images in the dataset were downloaded using search queries on a variety of image search engines. Compared to LFW, there is a larger number of images per person and more variation in pose, lighting conditions, and expressions. Serving as a complement to the LFW dataset, the PubFig dataset consists of high-level features of visual face traits that are not sensitive to pose, illumination, or other imaging conditions.

Setting: The face verification benchmark dataset consists of 20,000 pairs of images of 140 people. The data is divided into 10 folds with mutually disjoint sets of 14 people each, and each fold contains 1,000 intra-personal and 1,000 extra-personal pairs. Similar to the experimental setting on the LFW dataset, to demonstrate the stability of our method we split the PubFig feature dataset into 5 folds and evaluate our capped trace norm model and metric learning with Fantope regularization on these splits. We run the experiments 5 times using different training/testing splits and compute the average and standard deviation.

Compared Methods: We use the same compared methods as in the LFW experiment.

Evaluation Metrics: An ROC curve is plotted for each method, and we also compare verification accuracy when tuning the rank parameter $k$ for the methods Fantope and Cap.

Results: Figure 6 shows the performance of each compared method on the PubFig dataset. We calculate the Equal Error Rate for each, and it is clear that our method outperforms the other related metric learning methods. By imposing capped trace norm regularization, our method increases the performance of the traditional Mahalanobis distance by about 6%, and of the state-of-the-art KISSME method by over 1%. We also perform another experiment to show the parameter tuning procedure. In Figure 7, we tune the rank parameter $k$ from 5 to 40, and we can see that our method is more stable than metric learning with Fantope regularization, whose performance is greatly subject to the selection of the rank $k$. In this figure, we use trace norm regularization as a baseline.

Figure 6: ROC curves of the different methods on the PubFig dataset.

Figure 7: Verification accuracy vs. rank k on the PubFig dataset.

4.3 Image Classification

In this section, we evaluate our methods on image classification, where the task is assigning an image to a predefined class; this task can also be viewed as object recognition. In the experiments, we use image datasets with attribute features. An attribute is an important type of semantic property shared among different objects or activities.

It is a representation at a higher level than the raw feature representation directly extracted from images or videos. In recent years, several attribute datasets have been used by various researchers to study the use of attributes for different vision applications. We use two image classification datasets: the PubFig dataset and the Outdoor Scene Recognition (OSR) dataset.

Public Figures Face Database

Dataset: We use a subset of the face images of the PubFig database, consisting of 771 images from 8 face categories.

Setting: We use the same experimental setup as [14]. Each person contributes 30 images as training data to learn the Mahalanobis metric matrix $M$ and build the classifier, and the other images are used as testing data to evaluate classification performance. We run this experiment 5 times; the 30 training images per person are selected randomly each time, and the average performance is used as the evaluation criterion.

Compared Methods:

KNN: We use the k-nearest-neighbor classifier and compute the Euclidean distance to measure the similarity between any two images. This method serves as a baseline.

SVM: The support vector machine (SVM) is a widely used classifier; we compare against it using the code from LibLinear [3].

LMNN: One of the most widely used Mahalanobis distance metric learning methods, which uses the label information to generate triplet constraints. We impose a low-rank constraint on the matrix $M$ in LMNN; the general form is given in Eq. (30) below (a code sketch of this objective follows the method list):

$$\min_{M \in S^d_+} \ \gamma\,\mathrm{Reg}(M) + (1-\mu)\sum_{(i,j) \in S} \langle M, x_{ij} x_{ij}^T \rangle + \mu \sum_{(i,j,k) \in R} \big[1 + \langle M, x_{ij} x_{ij}^T - x_{ik} x_{ik}^T \rangle\big]_+ \tag{30}$$

LMNN+Trace norm: We impose the trace norm as the low-rank regularization approximation, $\mathrm{Reg}(M) = \sum_i \sigma_i(M)$, where the $\sigma_i(M)$ are the singular values of $M$.

LMNN+Fantope: Instead of trace norm regularization, we penalize the rank of the matrix explicitly, with $\mathrm{Reg}(M) = \sum_{i=1}^{k} \sigma_i(M)$, where the $\sigma_i(M)$ are the $k$ smallest singular values of $M$.

LMNN+Capped trace norm: We minimize the singular values of $M$ smaller than the threshold $\varepsilon$ learned in the optimization, so $\mathrm{Reg}(M) = \sum_i \min\{\sigma_i(M), \varepsilon\}$.

Evaluation Metrics: In this experiment, we compute the classification accuracy of each method as the evaluation criterion.
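The following numpy sketch evaluates the LMNN objective with the capped trace norm plugged in as $\mathrm{Reg}(M)$, as in Eq. (30); the variable and function names are ours:

```python
import numpy as np

def d2(M, xi, xj):
    """Squared Mahalanobis distance under metric M."""
    v = xi - xj
    return float(v @ M @ v)

def lmnn_capped_objective(M, S, R, X, gamma, mu, eps):
    """Value of the LMNN objective with capped trace norm (Eq. 30).
    S holds (i, j) target-neighbor pairs, R holds (i, j, k) triplets,
    X is the d x n feature matrix."""
    pull = sum(d2(M, X[:, i], X[:, j]) for i, j in S)
    push = sum(max(0.0, 1.0 + d2(M, X[:, i], X[:, j]) - d2(M, X[:, i], X[:, k]))
               for i, j, k in R)
    sigma = np.linalg.eigvalsh(M)                 # eigs = singular values for PSD M
    reg = float(np.minimum(sigma, eps).sum())     # capped trace norm
    return gamma * reg + (1 - mu) * pull + mu * push
```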
Results: Figure 8 shows the performance of the compared methods on PubFig. The 1NN method using the Euclidean distance works poorly on this task, with an accuracy of only 55%. The SVM improves greatly on the 1NN method, by about 20%. When we use LMNN to learn a Mahalanobis distance for this task and use a 1NN classifier, the accuracy reaches 77%, better than the SVM. When we impose the low-rank regularizations (trace norm, Fantope regularization, and capped trace norm respectively), it is clear that our method works best, improving the LMNN method by about 1.5%. Sample query results are presented in Figure 10(a): we plot the 5 nearest neighbors for each query, and our method does a better job than metric learning with Fantope regularization on this task.

In Figure 9, we show the performance of metric learning with Fantope regularization and with the capped trace norm method as the rank parameter $k$ is tuned. The performance of Fantope regularization is very sensitive to the choice of $k$, and it is sometimes even worse than trace norm regularization. When $k$ is selected between 40 and 100, our method performs much more stably than Fantope regularization, because Fantope regularization explicitly constrains the rank of $M$ to be $k$, whereas our method learns an adaptive threshold during the optimization and the rank of $M$ is not required to be $k$.

Figure 8: Classification accuracy of each method on the PubFig dataset.

Figure 9: Classification accuracy vs. rank k on the PubFig dataset.

Figure 10: Results of the 5 nearest neighbors when querying an image, (a) on the PubFig dataset and (b) on the OSR dataset. In each panel, the first row shows the results of our method and the second row those of metric learning with Fantope regularization; a green line means the neighbor is in the same class as the query image, and a red line means they differ.

Outdoor Scene Recognition

Dataset: We use the Outdoor Scene Recognition (OSR) dataset from [21], which contains 2,688 images from 8 scene categories, described by high-level attribute features.

Setting: In this experiment, we also use 30 images per category as training data, and the remaining images are used as testing data. Each time, we select the training data randomly; we repeat this procedure 5 times and use the average accuracy as the performance of each method.

Compared Methods: We compare the same six methods as in the last section, namely KNN, SVM, LMNN, LMNN+Trace norm, LMNN+Fantope and LMNN+Capped trace norm.

Evaluation Metrics: Classification accuracy is computed as the performance measure of each method.

Results: In Figure 11, the 1NN method reaches an accuracy of about 65%, and the SVM method gives a large increase of about 10%. Although the result of the 1NN method using the Mahalanobis distance learned by LMNN is a little worse than the SVM, LMNN with low-rank regularization always outperforms the SVM. We can also see that our LMNN with capped trace norm regularization works better than the trace norm and Fantope regularizations, reaching an accuracy of about 76.5% and improving the LMNN method by about 1%. Figure 10(b) shows two sample query results of the two compared methods; it is a little hard to distinguish coast from mountain when both have sky in the background, and it is clear that our method learns a better Mahalanobis distance for classification and image search. We also compare the performance of the LMNN methods with low-rank regularization, using LMNN with trace norm regularization as a baseline. In Figure 12, it is clear that our method always works better than metric learning with Fantope regularization.

Figure 11: Classification accuracy of each method on the OSR dataset.

Figure 12: Classification accuracy vs. rank k on the OSR dataset.

When we set $k = 40$, we find that the performance of Fantope regularization is even worse than the baseline.

5. CONCLUSION

In this paper, we propose a novel low-rank regularization, capped trace norm regularization, to impose on metric learning methods. Capped trace norm regularization is a better rank minimization approximation than the trace norm. It works more stably than Fantope regularization and can also be seen as an adaptive Fantope regularization. We also introduce an efficient optimization algorithm and prove the convergence of our objective function. Experimental results show that our method outperforms state-of-the-art metric learning methods.

6. REFERENCES

[1] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Information-theoretic metric learning. In Proceedings of the 24th International Conference on Machine Learning. ACM.
[2] L. Vandenberghe. Proximal gradient method. EE236C lecture notes.
[3] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9.
[4] H. Gao, F. Nie, W. Cai, and H. Huang. Robust capped norm nonnegative matrix factorization. In 24th ACM International Conference on Information and Knowledge Management (CIKM 2015).
[5] M. Guillaumin, J. Verbeek, and C. Schmid. Is that you? Metric learning approaches for face identification. In Computer Vision, 2009 IEEE 12th International Conference on. IEEE.
[6] M. Guillaumin, J. Verbeek, and C. Schmid. Multiple instance metric learning from automatically labeled bags of faces. In Computer Vision, ECCV 2010. Springer.
[7] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst.
[8] W. Jiang, F. Nie, and H. Huang. Robust dictionary learning with capped l1 norm. In Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015).
[9] D. Kedem, S. Tyree, F. Sha, G. R. Lanckriet, and K. Q. Weinberger. Non-linear metric learning. In Advances in Neural Information Processing Systems.
[10] D. Kedem, Z. E. Xu, and K. Q. Weinberger. Gradient boosted large margin nearest neighbors.
[11] M. Koestinger, M. Hirzer, P. Wohlhart, P. M. Roth, and H. Bischof. Large scale metric learning from equivalence constraints. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE.
[12] N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar. Attribute and simile classifiers for face verification. In Computer Vision, 2009 IEEE 12th International Conference on. IEEE.
[13] M. Law, N. Thome, and M. Cord. Quadruplet-wise image similarity learning. In Proceedings of the IEEE International Conference on Computer Vision.
[14] M. T. Law, N. Thome, and M. Cord. Fantope regularization in metric learning. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on. IEEE.
[15] D. Lim, G. Lanckriet, and B. McFee. Robust structural metric learning. In Proceedings of the 30th International Conference on Machine Learning.
[16] B. McFee and G. R. Lanckriet. Metric learning to rank. In Proceedings of the 27th International Conference on Machine Learning (ICML-10).
[17] T. Mensink, J. Verbeek, F. Perronnin, and G. Csurka. Metric learning for large scale image classification: Generalizing to new classes at near-zero cost. In Computer Vision, ECCV 2012. Springer.
[18] F. Nie, H. Huang, X. Cai, and C. Ding. Efficient and robust feature selection via joint l2,1-norms minimization. NIPS.
[19] F. Nie, J. Yuan, and H. Huang. Optimal mean robust principal component analysis. In Proceedings of the 31st International Conference on Machine Learning (ICML-14).
[20] S. Parameswaran and K. Q. Weinberger. Large margin multi-task metric learning. In Advances in Neural Information Processing Systems.
[21] D. Parikh and K. Grauman. Relative attributes. In Computer Vision (ICCV), 2011 IEEE International Conference on. IEEE.
[22] M. Schultz and T. Joachims. Learning a distance metric from relative comparisons. In Advances in Neural Information Processing Systems (NIPS).
[23] C. Theobald. An inequality for the trace of the product of two symmetric matrices. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 77. Cambridge University Press.
[24] H. Wang, F. Nie, and H. Huang. Robust distance metric learning via simultaneous l1-norm minimization and maximization. In The 31st International Conference on Machine Learning (ICML 2014).
[25] J. Wang, X. Gao, Q. Wang, and Y. Li. ProDis-ContSHC: learning protein dissimilarity measures and hierarchical context coherently for protein-protein comparison in protein database retrieval. BMC Bioinformatics, 13(Suppl 7):S2.
[26] J. Wang, A. Kalousis, and A. Woznica. Parametric local metric learning for nearest neighbor classification. In Advances in Neural Information Processing Systems.
[27] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. The Journal of Machine Learning Research, 10.
[28] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning with application to clustering with side-information. Advances in Neural Information Processing Systems, 15.
[29] T. Zhang. Multi-stage convex relaxation for learning with sparse regularization. NIPS.
[30] T. Zhang. Analysis of multi-stage convex relaxation for sparse regularization. JMLR, 2010.


Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance Tsnghua Unversty at TAC 2009: Summarzng Mult-documents by Informaton Dstance Chong Long, Mnle Huang, Xaoyan Zhu State Key Laboratory of Intellgent Technology and Systems, Tsnghua Natonal Laboratory for

More information

Detection of an Object by using Principal Component Analysis

Detection of an Object by using Principal Component Analysis Detecton of an Object by usng Prncpal Component Analyss 1. G. Nagaven, 2. Dr. T. Sreenvasulu Reddy 1. M.Tech, Department of EEE, SVUCE, Trupath, Inda. 2. Assoc. Professor, Department of ECE, SVUCE, Trupath,

More information

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 1. SSDH: Semi-supervised Deep Hashing for Large Scale Image Retrieval

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 1. SSDH: Semi-supervised Deep Hashing for Large Scale Image Retrieval IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY SSDH: Sem-supervsed Deep Hashng for Large Scale Image Retreval Jan Zhang, and Yuxn Peng arxv:607.08477v2 [cs.cv] 8 Jun 207 Abstract Hashng

More information

BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET

BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET 1 BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET TZU-CHENG CHUANG School of Electrcal and Computer Engneerng, Purdue Unversty, West Lafayette, Indana 47907 SAUL B. GELFAND School

More information

Heterogeneous Visual Features Fusion via Sparse Multimodal Machine

Heterogeneous Visual Features Fusion via Sparse Multimodal Machine 013 IEEE Conference on Computer Vson and Pattern Recognton Heterogeneous Vsual Features Fuson va Sparse Multmodal Machne Hua ang, Fepng Ne, Heng Huang, Chrs Dng Department of Electrcal Engneerng and Computer

More information

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces Range mages For many structured lght scanners, the range data forms a hghly regular pattern known as a range mage. he samplng pattern s determned by the specfc scanner. Range mage regstraton 1 Examples

More information

An Optimal Algorithm for Prufer Codes *

An Optimal Algorithm for Prufer Codes * J. Software Engneerng & Applcatons, 2009, 2: 111-115 do:10.4236/jsea.2009.22016 Publshed Onlne July 2009 (www.scrp.org/journal/jsea) An Optmal Algorthm for Prufer Codes * Xaodong Wang 1, 2, Le Wang 3,

More information

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data A Fast Content-Based Multmeda Retreval Technque Usng Compressed Data Borko Furht and Pornvt Saksobhavvat NSF Multmeda Laboratory Florda Atlantc Unversty, Boca Raton, Florda 3343 ABSTRACT In ths paper,

More information

Feasibility Based Large Margin Nearest Neighbor Metric Learning

Feasibility Based Large Margin Nearest Neighbor Metric Learning ESANN 18 proceedngs, European Symposum on Artfcal Neural Networks, Computatonal Intellgence and Machne Learnng. Bruges (Belgum), 5-7 Aprl 18, 6doc.com publ., ISBN 978-875877-6. Feasblty Based Large Margn

More information

EYE CENTER LOCALIZATION ON A FACIAL IMAGE BASED ON MULTI-BLOCK LOCAL BINARY PATTERNS

EYE CENTER LOCALIZATION ON A FACIAL IMAGE BASED ON MULTI-BLOCK LOCAL BINARY PATTERNS P.G. Demdov Yaroslavl State Unversty Anatoly Ntn, Vladmr Khryashchev, Olga Stepanova, Igor Kostern EYE CENTER LOCALIZATION ON A FACIAL IMAGE BASED ON MULTI-BLOCK LOCAL BINARY PATTERNS Yaroslavl, 2015 Eye

More information

CS 534: Computer Vision Model Fitting

CS 534: Computer Vision Model Fitting CS 534: Computer Vson Model Fttng Sprng 004 Ahmed Elgammal Dept of Computer Scence CS 534 Model Fttng - 1 Outlnes Model fttng s mportant Least-squares fttng Maxmum lkelhood estmaton MAP estmaton Robust

More information

Improvement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration

Improvement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration Improvement of Spatal Resoluton Usng BlockMatchng Based Moton Estmaton and Frame Integraton Danya Suga and Takayuk Hamamoto Graduate School of Engneerng, Tokyo Unversty of Scence, 6-3-1, Nuku, Katsuska-ku,

More information

Semi-Supervised Discriminant Analysis Based On Data Structure

Semi-Supervised Discriminant Analysis Based On Data Structure IOSR Journal of Computer Engneerng (IOSR-JCE) e-issn: 2278-0661,p-ISSN: 2278-8727, Volume 17, Issue 3, Ver. VII (May Jun. 2015), PP 39-46 www.osrournals.org Sem-Supervsed Dscrmnant Analyss Based On Data

More information

Lecture 4: Principal components

Lecture 4: Principal components /3/6 Lecture 4: Prncpal components 3..6 Multvarate lnear regresson MLR s optmal for the estmaton data...but poor for handlng collnear data Covarance matrx s not nvertble (large condton number) Robustness

More information

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization Problem efntons and Evaluaton Crtera for Computatonal Expensve Optmzaton B. Lu 1, Q. Chen and Q. Zhang 3, J. J. Lang 4, P. N. Suganthan, B. Y. Qu 6 1 epartment of Computng, Glyndwr Unversty, UK Faclty

More information

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like:

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like: Self-Organzng Maps (SOM) Turgay İBRİKÇİ, PhD. Outlne Introducton Structures of SOM SOM Archtecture Neghborhoods SOM Algorthm Examples Summary 1 2 Unsupervsed Hebban Learnng US Hebban Learnng, Cntd 3 A

More information

RECOGNIZING GENDER THROUGH FACIAL IMAGE USING SUPPORT VECTOR MACHINE

RECOGNIZING GENDER THROUGH FACIAL IMAGE USING SUPPORT VECTOR MACHINE Journal of Theoretcal and Appled Informaton Technology 30 th June 06. Vol.88. No.3 005-06 JATIT & LLS. All rghts reserved. ISSN: 99-8645 www.jatt.org E-ISSN: 87-395 RECOGNIZING GENDER THROUGH FACIAL IMAGE

More information

Orthogonal Complement Component Analysis for Positive Samples in SVM Based Relevance Feedback Image Retrieval

Orthogonal Complement Component Analysis for Positive Samples in SVM Based Relevance Feedback Image Retrieval Orthogonal Complement Component Analyss for ostve Samples n SVM Based Relevance Feedback Image Retreval Dacheng Tao and Xaoou Tang Department of Informaton Engneerng The Chnese Unversty of Hong Kong {dctao2,

More information

Adaptive Transfer Learning

Adaptive Transfer Learning Adaptve Transfer Learnng Bn Cao, Snno Jaln Pan, Yu Zhang, Dt-Yan Yeung, Qang Yang Hong Kong Unversty of Scence and Technology Clear Water Bay, Kowloon, Hong Kong {caobn,snnopan,zhangyu,dyyeung,qyang}@cse.ust.hk

More information

Lecture 5: Multilayer Perceptrons

Lecture 5: Multilayer Perceptrons Lecture 5: Multlayer Perceptrons Roger Grosse 1 Introducton So far, we ve only talked about lnear models: lnear regresson and lnear bnary classfers. We noted that there are functons that can t be represented

More information

Smoothing Spline ANOVA for variable screening

Smoothing Spline ANOVA for variable screening Smoothng Splne ANOVA for varable screenng a useful tool for metamodels tranng and mult-objectve optmzaton L. Rcco, E. Rgon, A. Turco Outlne RSM Introducton Possble couplng Test case MOO MOO wth Game Theory

More information

Data Mining: Model Evaluation

Data Mining: Model Evaluation Data Mnng: Model Evaluaton Aprl 16, 2013 1 Issues: Evaluatng Classfcaton Methods Accurac classfer accurac: predctng class label predctor accurac: guessng value of predcted attrbutes Speed tme to construct

More information

Semi-Supervised Kernel Mean Shift Clustering

Semi-Supervised Kernel Mean Shift Clustering IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. XX, NO. XX, JANUARY XXXX 1 Sem-Supervsed Kernel Mean Shft Clusterng Saket Anand, Student Member, IEEE, Sushl Mttal, Member, IEEE, Oncel

More information

Support Vector Machines. CS534 - Machine Learning

Support Vector Machines. CS534 - Machine Learning Support Vector Machnes CS534 - Machne Learnng Perceptron Revsted: Lnear Separators Bnar classfcaton can be veed as the task of separatng classes n feature space: b > 0 b 0 b < 0 f() sgn( b) Lnear Separators

More information

The Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique

The Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique //00 :0 AM Outlne and Readng The Greedy Method The Greedy Method Technque (secton.) Fractonal Knapsack Problem (secton..) Task Schedulng (secton..) Mnmum Spannng Trees (secton.) Change Money Problem Greedy

More information

Edge Detection in Noisy Images Using the Support Vector Machines

Edge Detection in Noisy Images Using the Support Vector Machines Edge Detecton n Nosy Images Usng the Support Vector Machnes Hlaro Gómez-Moreno, Saturnno Maldonado-Bascón, Francsco López-Ferreras Sgnal Theory and Communcatons Department. Unversty of Alcalá Crta. Madrd-Barcelona

More information

Towards Semantic Knowledge Propagation from Text to Web Images

Towards Semantic Knowledge Propagation from Text to Web Images Guoun Q (Unversty of Illnos at Urbana-Champagn) Charu C. Aggarwal (IBM T. J. Watson Research Center) Thomas Huang (Unversty of Illnos at Urbana-Champagn) Towards Semantc Knowledge Propagaton from Text

More information

Laplacian Eigenmap for Image Retrieval

Laplacian Eigenmap for Image Retrieval Laplacan Egenmap for Image Retreval Xaofe He Partha Nyog Department of Computer Scence The Unversty of Chcago, 1100 E 58 th Street, Chcago, IL 60637 ABSTRACT Dmensonalty reducton has been receved much

More information

Face Detection with Deep Learning

Face Detection with Deep Learning Face Detecton wth Deep Learnng Yu Shen Yus122@ucsd.edu A13227146 Kuan-We Chen kuc010@ucsd.edu A99045121 Yzhou Hao y3hao@ucsd.edu A98017773 Mn Hsuan Wu mhwu@ucsd.edu A92424998 Abstract The project here

More information

Learning-Based Top-N Selection Query Evaluation over Relational Databases

Learning-Based Top-N Selection Query Evaluation over Relational Databases Learnng-Based Top-N Selecton Query Evaluaton over Relatonal Databases Lang Zhu *, Wey Meng ** * School of Mathematcs and Computer Scence, Hebe Unversty, Baodng, Hebe 071002, Chna, zhu@mal.hbu.edu.cn **

More information

Large-scale Web Video Event Classification by use of Fisher Vectors

Large-scale Web Video Event Classification by use of Fisher Vectors Large-scale Web Vdeo Event Classfcaton by use of Fsher Vectors Chen Sun and Ram Nevata Unversty of Southern Calforna, Insttute for Robotcs and Intellgent Systems Los Angeles, CA 90089, USA {chensun nevata}@usc.org

More information

Classification / Regression Support Vector Machines

Classification / Regression Support Vector Machines Classfcaton / Regresson Support Vector Machnes Jeff Howbert Introducton to Machne Learnng Wnter 04 Topcs SVM classfers for lnearly separable classes SVM classfers for non-lnearly separable classes SVM

More information

A PATTERN RECOGNITION APPROACH TO IMAGE SEGMENTATION

A PATTERN RECOGNITION APPROACH TO IMAGE SEGMENTATION 1 THE PUBLISHING HOUSE PROCEEDINGS OF THE ROMANIAN ACADEMY, Seres A, OF THE ROMANIAN ACADEMY Volume 4, Number 2/2003, pp.000-000 A PATTERN RECOGNITION APPROACH TO IMAGE SEGMENTATION Tudor BARBU Insttute

More information

Available online at Available online at Advanced in Control Engineering and Information Science

Available online at   Available online at   Advanced in Control Engineering and Information Science Avalable onlne at wwwscencedrectcom Avalable onlne at wwwscencedrectcom Proceda Proceda Engneerng Engneerng 00 (2011) 15000 000 (2011) 1642 1646 Proceda Engneerng wwwelsevercom/locate/proceda Advanced

More information

Deep Classification in Large-scale Text Hierarchies

Deep Classification in Large-scale Text Hierarchies Deep Classfcaton n Large-scale Text Herarches Gu-Rong Xue Dkan Xng Qang Yang 2 Yong Yu Dept. of Computer Scence and Engneerng Shangha Jao-Tong Unversty {grxue, dkxng, yyu}@apex.sjtu.edu.cn 2 Hong Kong

More information

Determining the Optimal Bandwidth Based on Multi-criterion Fusion

Determining the Optimal Bandwidth Based on Multi-criterion Fusion Proceedngs of 01 4th Internatonal Conference on Machne Learnng and Computng IPCSIT vol. 5 (01) (01) IACSIT Press, Sngapore Determnng the Optmal Bandwdth Based on Mult-crteron Fuson Ha-L Lang 1+, Xan-Mn

More information

Using Neural Networks and Support Vector Machines in Data Mining

Using Neural Networks and Support Vector Machines in Data Mining Usng eural etworks and Support Vector Machnes n Data Mnng RICHARD A. WASIOWSKI Computer Scence Department Calforna State Unversty Domnguez Hlls Carson, CA 90747 USA Abstract: - Multvarate data analyss

More information

TN348: Openlab Module - Colocalization

TN348: Openlab Module - Colocalization TN348: Openlab Module - Colocalzaton Topc The Colocalzaton module provdes the faclty to vsualze and quantfy colocalzaton between pars of mages. The Colocalzaton wndow contans a prevew of the two mages

More information

SRBIR: Semantic Region Based Image Retrieval by Extracting the Dominant Region and Semantic Learning

SRBIR: Semantic Region Based Image Retrieval by Extracting the Dominant Region and Semantic Learning Journal of Computer Scence 7 (3): 400-408, 2011 ISSN 1549-3636 2011 Scence Publcatons SRBIR: Semantc Regon Based Image Retreval by Extractng the Domnant Regon and Semantc Learnng 1 I. Felc Raam and 2 S.

More information

An Image Fusion Approach Based on Segmentation Region

An Image Fusion Approach Based on Segmentation Region Rong Wang, L-Qun Gao, Shu Yang, Yu-Hua Cha, and Yan-Chun Lu An Image Fuson Approach Based On Segmentaton Regon An Image Fuson Approach Based on Segmentaton Regon Rong Wang, L-Qun Gao, Shu Yang 3, Yu-Hua

More information

TPL-Aware Displacement-driven Detailed Placement Refinement with Coloring Constraints

TPL-Aware Displacement-driven Detailed Placement Refinement with Coloring Constraints TPL-ware Dsplacement-drven Detaled Placement Refnement wth Colorng Constrants Tao Ln Iowa State Unversty tln@astate.edu Chrs Chu Iowa State Unversty cnchu@astate.edu BSTRCT To mnmze the effect of process

More information

Classification of Face Images Based on Gender using Dimensionality Reduction Techniques and SVM

Classification of Face Images Based on Gender using Dimensionality Reduction Techniques and SVM Classfcaton of Face Images Based on Gender usng Dmensonalty Reducton Technques and SVM Fahm Mannan 260 266 294 School of Computer Scence McGll Unversty Abstract Ths report presents gender classfcaton based

More information

SVM-based Learning for Multiple Model Estimation

SVM-based Learning for Multiple Model Estimation SVM-based Learnng for Multple Model Estmaton Vladmr Cherkassky and Yunqan Ma Department of Electrcal and Computer Engneerng Unversty of Mnnesota Mnneapols, MN 55455 {cherkass,myq}@ece.umn.edu Abstract:

More information

5 The Primal-Dual Method

5 The Primal-Dual Method 5 The Prmal-Dual Method Orgnally desgned as a method for solvng lnear programs, where t reduces weghted optmzaton problems to smpler combnatoral ones, the prmal-dual method (PDM) has receved much attenton

More information

Discriminative classifiers for object classification. Last time

Discriminative classifiers for object classification. Last time Dscrmnatve classfers for object classfcaton Thursday, Nov 12 Krsten Grauman UT Austn Last tme Supervsed classfcaton Loss and rsk, kbayes rule Skn color detecton example Sldng ndo detecton Classfers, boostng

More information

MULTI-VIEW ANCHOR GRAPH HASHING

MULTI-VIEW ANCHOR GRAPH HASHING MULTI-VIEW ANCHOR GRAPH HASHING Saehoon Km 1 and Seungjn Cho 1,2 1 Department of Computer Scence and Engneerng, POSTECH, Korea 2 Dvson of IT Convergence Engneerng, POSTECH, Korea {kshkawa, seungjn}@postech.ac.kr

More information

The Comparison of Calibration Method of Binocular Stereo Vision System Ke Zhang a *, Zhao Gao b

The Comparison of Calibration Method of Binocular Stereo Vision System Ke Zhang a *, Zhao Gao b 3rd Internatonal Conference on Materal, Mechancal and Manufacturng Engneerng (IC3ME 2015) The Comparson of Calbraton Method of Bnocular Stereo Vson System Ke Zhang a *, Zhao Gao b College of Engneerng,

More information

Efficient Text Classification by Weighted Proximal SVM *

Efficient Text Classification by Weighted Proximal SVM * Effcent ext Classfcaton by Weghted Proxmal SVM * Dong Zhuang 1, Benyu Zhang, Qang Yang 3, Jun Yan 4, Zheng Chen, Yng Chen 1 1 Computer Scence and Engneerng, Bejng Insttute of echnology, Bejng 100081, Chna

More information