Learning to Name Faces: A Multimodal Learning Scheme for Search-Based Face Annotation


Learning to Name Faces: A Multimodal Learning Scheme for Search-Based Face Annotation

Dayong Wang, Steven C.H. Hoi, Pengcheng Wu, Jianke Zhu, Ying He, Chunyan Miao
School of Computer Engineering, Nanyang Technological University, Singapore.
College of Computer Science, Zhejiang University, Hangzhou, China.
{s090023, chhoi, wupe0003, yhe, ascymiao}@ntu.edu.sg, jkzhu@zju.edu.cn

ABSTRACT
Automated face annotation aims to automatically detect human faces from a photo and further name the faces with the corresponding human names. In this paper, we tackle this open problem by investigating a search-based face annotation (SBFA) paradigm for mining large amounts of web facial images freely available on the WWW. Given a query facial image for annotation, the idea of SBFA is to first search for the top-n similar facial images from a web facial image database and then exploit these top-ranked similar facial images and their weak labels for naming the query facial image. To fully mine this information, this paper proposes a novel framework of Learning to Name Faces (L2NF), a unified multimodal learning approach for search-based face annotation, which consists of the following major components: (i) we enhance the weak labels of top-ranked similar images by exploiting the "label smoothness" assumption; (ii) we construct multimodal representations of a facial image by extracting different types of features; (iii) we optimize the distance measure for each type of feature using distance metric learning techniques; and finally (iv) we learn the optimal combination of multiple modalities for annotation through a learning to rank scheme. We conduct a set of extensive empirical studies on two real-world facial image databases, in which encouraging results show that the proposed algorithms significantly boost the naming accuracy of the search-based face annotation task.

Categories and Subject Descriptors
H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval; I.2.6 [Artificial Intelligence]: Learning

General Terms
Algorithms, Experimentation

Keywords
web facial images, auto face annotation, supervised learning

1. INTRODUCTION
Automated face annotation aims to automatically detect human faces from a photo image and name each facial image with the corresponding human name, which is sometimes termed "face naming" or "face tagging" in existing studies. It is an important yet very challenging problem in multimedia information retrieval, and it is highly desirable for many real-world applications, such as online photo management or face annotation for video summarization. One possible approach is to directly apply classical face recognition methods [5, 32, 49]. For example, one can apply supervised machine learning techniques to train face classification models from a collection of well-controlled labeled facial images and then apply the models to name a new facial image. However, such "model-based face annotation" techniques suffer from some common drawbacks, e.g., it is difficult and expensive to collect large amounts of high-quality training data, and it is nontrivial to add new training data.

Recent years have witnessed an emerging and promising direction for tackling the automated face annotation challenge, i.e., the "Search-Based Face Annotation" (SBFA) paradigm [38, 39], which attempts to explore content-based image retrieval (CBIR) techniques [18, 43] for mining massive WWW facial images freely available on the internet, such as those on popular social sharing web sites (e.g., Flickr or Facebook). Due to the noisy nature of web images, the raw labels of web facial images are often noisy without extra manual effort, in that some facial images are tagged with incorrect or incomplete names. We refer to such a raw facial image database as a "weakly labeled web facial image database".

Figure 1: The framework of Search-Based Face Annotation.
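As a minimal illustration of the search-and-vote idea depicted in Figure 1, the Python sketch below retrieves the top-n nearest faces by brute-force distance search and names the query by weighted majority voting over their weak labels. This corresponds to the simple WMV baseline compared against in Section 4 rather than the proposed L2NF method; the feature dimensions, the heat-kernel weighting, and the random data are illustrative placeholders.

```python
import numpy as np

def sbfa_baseline_annotate(query_feat, db_feats, db_labels, top_n=20, top_t=5):
    """Toy search-based face annotation: retrieve the top-n neighbours by
    Euclidean distance, then name the query by weighted majority voting.
    db_feats: (N, d) facial features; db_labels: (N, m) weak 0/1 label matrix."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)   # brute-force stand-in for CBIR
    nn = np.argsort(dists)[:top_n]                          # indices of the top-n similar faces
    weights = np.exp(-dists[nn] ** 2)                       # turn distances into similarities
    scores = db_labels[nn].T @ weights                      # accumulate weak name labels
    return np.argsort(-scores)[:top_t]                      # ranked candidate name indices

# Illustrative run on random data (50 names, 1,000 database faces).
rng = np.random.default_rng(0)
db_feats = rng.normal(size=(1000, 128))
db_labels = np.eye(50)[rng.integers(0, 50, size=1000)]      # one weak name per image
print(sbfa_baseline_annotate(rng.normal(size=128), db_feats, db_labels))
```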

Figure 1 illustrates the basic framework of the "Search-based Face Annotation" paradigm, which consists of three main stages: (i) given a query facial image, a pre-processing stage is typically involved, including face detection, face alignment, and facial feature extraction, so that the input facial image is represented as feature vectors in the facial feature space; (ii) we retrieve the top-n similar instances of the query facial image from the large-scale weakly labeled web facial image database using content-based image retrieval techniques; and (iii) finally, we aim to name the query image by mining the top-ranked similar images and their corresponding weak name labels. Such a paradigm was inspired by search-based image annotation [40] for generic image annotation, as face annotation can generally be viewed as a special case of image annotation [11, 12, 34, 37, 42], which has been extensively studied but still remains a technically challenging problem. In the following, we explain the main challenges of this task to motivate the proposed new technique.

As shown in Figure 1, there are two key challenging tasks in the search-based face annotation framework: (i) how to efficiently retrieve the top-n most similar facial images from a large facial image database given a query facial image, that is, how to develop an effective content-based facial image retrieval solution; and (ii) how to effectively exploit the short list of candidate facial images and their weak labels for naming the faces automatically. In general, these two tasks can be solved separately, though the second task can be affected by the results of the first. As one can tackle the first task by adapting existing CBIR techniques [26, 43, 9], in this paper we focus on the second challenge, which is critical due to the noisily labeled nature of web facial images.

In this paper, we propose a novel framework of "Learning to Name Faces" (L2NF) for search-based face annotation, which attempts to learn both the optimal weight vector for combining different query-neighbor similarity functions for face naming and the refined labels for enhancing the initial weak labels, simultaneously in a unified learning framework. In particular, the key challenge in naming the query facial image is to effectively measure the similarity between the query image and its nearest instances by combining diverse facial feature representations and their proper distance measurements. To tackle this challenge, we propose a multimodal learning scheme that (i) first constructs multiple diverse facial features for representing the faces, (ii) further optimizes the distance measure on each feature space (modality) using distance metric learning, and (iii) finally learns the optimal fusion of the multiple representations by adapting the structural SVM algorithm. Besides, we suggest a graph-based label refinement scheme to enhance the weak labels of top-ranked similar facial images by exploiting the "label smoothness" assumption.

The main contributions of this work include:

- We propose a novel "Learning to Name Faces" (L2NF) scheme, which tackles the face naming problem by exploring multimodal learning on weakly labeled facial image data.
- We conduct extensive experiments to evaluate the proposed algorithm for face annotation on large-scale web facial image databases and obtain encouraging results.

The remainder of this paper is organized as follows. Section 2 reviews related work. Section 3 presents the proposed algorithms of Learning to Name Faces (L2NF). Section 4 shows the experimental results of our empirical studies. Section 5 discusses the limitations, and finally Section 6 concludes this paper.

2. RELATED WORK
Automated face annotation can be directly solved by general face recognition and verification techniques, which have been extensively studied for many years [47, 22]. However, the success of such a "model-based face annotation" scheme often relies on a large set of high-quality facial images collected in well-controlled environments, which can be difficult and expensive to obtain. This drawback has been partially addressed in recent benchmark studies of unconstrained face detection and verification techniques on facial image testbeds collected from the web, such as LFW [20, 7].

Focusing on the facial image domain, the studies on automated face annotation can be further classified into four groups. The first group of studies aims to handle collections of personal/family photos [10], where rich context clues, such as personal/family names, social context, GPS tags, and time stamps, are available. Some techniques have already been successfully deployed in commercial applications, e.g., Apple iPhoto, Google Picasa, and the Facebook face auto-tagging solution.

The second group of works considers refining text-based facial image retrieval results, where a human name is used as the input query [29, 24, 19, 14, 15]. Such problems are closely related to image re-ranking problems, and part of the top-ranked facial images are tagged with the name query. For example, Ozkan and Duygulu proposed a graph-based model for finding the densest sub-graph as the most related result [29], which was improved by adding an extra constraint such that a face can only appear once in an image [14] or by introducing the images of "friends" of the query name in a query expansion scheme [28]. Following the graph-based approach, Le and Satoh [24] proposed to represent the importance of each returned image. Recently, the generative approach has also been adopted for this problem and achieved better performance [14, 15].

The third group of works has attempted to directly annotate each web facial image with the names extracted from its caption information. For example, Berg et al. [4] proposed a probability model combined with a clustering algorithm to estimate the relationship between the facial images and the names in their captions. Guillaumin et al. [14] proposed to iteratively update the assignment between the facial images and the detected names in captions based on a minimum cost matching algorithm, which was further improved by using supervised distance metric learning techniques to grasp the important discriminative features in low dimensional spaces in their subsequent work [15]. Recently, Bu et al. [6] proposed to estimate the distance between faces and names with a "commute distance".

The last group of studies is "search-based face annotation" (SBFA), which was inspired by search-based image annotation and is fundamentally different from the previous three groups of research. In particular, the SBFA framework aims to solve the generic content-based face annotation problem, where facial images are directly used as the input query images and the task is to return the corresponding human names for the query images. There are rather few studies in this group. For example, by attempting to mine large-scale noisy web facial images with weak labels, Wang et al. [38] proposed an Unsupervised Label Refinement (ULR) algorithm to enhance the initial weak label matrix over the entire facial image database. In their subsequent work [39], they further proposed the Weak Label Regularized Local Coordinate Coding (WLRLCC) algorithm, which aims to fully exploit the top-ranked similar images of the query facial image via a unified optimization scheme of learning both local coordinate coding and refined labels.

Recently, a few emerging works have attempted to attack the automated face annotation problem via the "search-based face annotation" (SBFA) paradigm [38, 39]. It was generally inspired by search-based image annotation, which attempts to infer the correlation or joint probabilities between query images and annotation keywords based on existing object recognition techniques and semi-supervised learning algorithms by mining massive free web images on the WWW [11, 12, 8, 34, 17, 40, 31, 36, 30]. Several studies have attempted to develop efficient content-based indexing and search techniques to facilitate image tagging tasks. For example, Russell et al. [31] developed a large collection of web images with ground truth labels to facilitate object recognition tasks. More studies in this area aim to address the final annotation process by exploring effective label propagation algorithms. For example, Wright et al. [41] proposed a classification algorithm based on the sparse representation technique, which predicts the label information based on class-based feature reconstruction. Tang et al. [34] presented a sparse graph-based semi-supervised learning (SGSSL) approach to annotate web images. Wang et al. [37] proposed another sparse coding based annotation framework, where a label-based graph is used to learn a linear transformation matrix for feature dimension reduction, and sparse reconstruction is employed for the subsequent label propagation step. Wu et al. [42] proposed to select heterogeneous features with structural grouping sparsity and suggested a Multi-label Boosting scheme (denoted "MtBGS" for short) for feature regression, where a group sparse coefficient vector is obtained for each class (category) and further used for predicting new instances. Wu et al. [43] proposed a multi-reference re-ranking scheme (denoted "MRR" for short) for improving the retrieval process.

Our work differs from the above existing works for search-based face annotation in several aspects. First of all, the ULR algorithm aims to refine the noisy labels over the entire facial image database, which is extremely time-consuming for a large-scale database. Unlike the ULR algorithm, our work tackles such a computationally expensive task by mining only the top-ranked similar images for each query, which follows the similar approach of the WLRLCC algorithm. Further, both the ULR and WLRLCC algorithms are designed to explore only one single type of facial feature descriptor, e.g., the GIST features, while our work is designed to explore more clues by constructing multiple types of facial feature descriptors and further learning to optimize the fusion of the multimodal representations.

3. L2NF: LEARNING TO NAME FACES
In this section, we present the proposed "Learning to Name Faces" (L2NF) framework for search-based face annotation in detail.

3.1 Preliminaries
Throughout the paper, we denote matrices or sets by upper case letters, e.g., $X$ and $D$; we denote vectors by bold lower case letters, e.g., $\mathbf{x}, \mathbf{x}_i, \mathbf{x}_j$; and we denote scalars by normal letters, e.g., $x_i, X_{ij}$, where $x_i$ is the $i$-th element of vector $\mathbf{x}$, which can also be denoted as $x[i]$, and $X_{ij}$ is the element in the $i$-th row and $j$-th column of matrix $X$, which can also be denoted as $X[i,j]$.

Let us denote by $Q = \{q_i \mid i = 1,2,\ldots,N_t\}$ the set of query facial images, and assume there are a total of $m$ names (classes) in the whole retrieval database, denoted by $C = \{c_1, c_2, \ldots, c_m\}$. Each query facial image $q_i \in Q$ is associated with one name (class label) $c_{q_i} \in C$. Notice that we assume there is only one name for each person. We denote by $\mathbf{y}_{q_i} \in \{0,1\}^m$ the label vector for the query instance $q_i$, which contains only one non-zero item: $\|\mathbf{y}_{q_i}\|_0 = 1$. If the query instance $q_i$ is annotated with the $k$-th name ($c_{q_i} = c_k$), then $\mathbf{y}_{q_i}[k] = 1$.

In the SBFA framework, the name (label) of the query face is predicted based on its nearest facial images. Assume the top-n retrieval results of the query image $q_i$ are $\{(d_{ij}, \mathbf{y}_{ij}) \mid j = 1,2,\ldots,n\}$, where $d_{ij}$ is the $j$-th similar image in the retrieval result and $\mathbf{y}_{ij} \in \{0,1\}^m$ is its corresponding label vector. We denote by $Y_i = [\mathbf{y}_{i1}, \mathbf{y}_{i2}, \ldots, \mathbf{y}_{in}] \in \mathbb{R}^{m \times n}$ the label matrix for the $i$-th query $q_i$. For each query-neighbor pair $(q_i, d_{ij})$, we can create one query-neighbor similarity based feature vector:
$$\mathbf{x}_{ij} = \Phi(q_i, d_{ij}) = [\phi_k(q_i, d_{ij})]_{k=1}^{N_f}$$
where $\phi_k(\cdot,\cdot)$ represents the $k$-th query-neighbor similarity function and $N_f$ is the number of query-neighbor similarity functions. Typically, a query-neighbor similarity function is related to three factors: (1) the facial feature representation, (2) the distance metric, and (3) the mapping function between the distance value and the similarity value. For example, we can extract Local Binary Patterns (LBP) as the facial feature, apply the L2-norm (Euclidean) distance as the distance metric, and use the radial basis function with $\gamma = 0.1$ as the similarity-mapping function: $\exp(-\frac{1}{\gamma^2}\|q_i^{(lbp)} - d_{ij}^{(lbp)}\|^2)$. To estimate the similarity more accurately by exploring more information, we can leverage multiple diverse query-neighbor similarity functions. More details about the construction of query-neighbor similarity functions will be presented in Section 3.4. Based on the predefined query-neighbor similarity functions and the resulting query-neighbor similarity based feature vectors, for the $i$-th query instance $q_i$ we denote its query-neighbor similarity matrix by $X_i = [\mathbf{x}_{ik}]$ with $k = 1,2,\ldots,n$ and $X_i \in \mathbb{R}^{N_f \times n}$.

3.2 Problem Overview
The basic idea of the SBFA paradigm is to exploit the weak labels of top-ranked similar facial images for naming the query face. The crux of the face naming task lies in how to effectively estimate the confidence values for the weak label vectors of the top-ranked similar instances. Given a query image $q_i$ and its top-n retrieval results $\{(d_{ij}, \mathbf{y}_{ij}) \mid j = 1,2,\ldots,n\}$, we denote by $v_{ij}$ the confidence value for its $j$-th similar image $d_{ij}$. Then, the estimated label vector of $q_i$, denoted $\hat{\mathbf{y}}_{q_i}$, can be generated as:
$$\hat{\mathbf{y}}_{q_i} = \sum_j \mathbf{y}_{ij} v_{ij} = Y_i \mathbf{v}_i \quad (1)$$
where $\mathbf{v}_i = [v_{i1}, v_{i2}, \ldots, v_{in}]^\top$. Obviously, the confidence value $v_{ij}$ is related to both the query $q_i$ and the $j$-th similar instance $d_{ij}$. In our problem, we assume it linearly depends on the query-neighbor similarity based feature vector $\mathbf{x}_{ij}$, that is, $v_{ij} = \mathbf{x}_{ij}^\top \mathbf{w}$. Hence, the confidence vector $\mathbf{v}_i$ can be obtained as follows:
$$\mathbf{v}_i = X_i^\top \mathbf{w} \quad (2)$$
where $\mathbf{w} \in \mathbb{R}^{N_f}$ is a weight vector for multimodal fusion, which aims to combine the different features of $X_i$ generated by the $N_f$ diverse similarity functions. In other words, each confidence value $v_{ij}$ is a weighted linear combination of the corresponding query-neighbor similarity based feature vector $\mathbf{x}_{ij}$.

Remark. The aforementioned assumption is not difficult to understand: each item in $\mathbf{x}_{ij}$ (e.g., the $k$-th item $\mathbf{x}_{ij}[k]$) is related only to the corresponding similarity function (e.g., $\phi_k(\cdot,\cdot)$). A large $\mathbf{x}_{ij}[k]$ indicates that the $j$-th retrieved instance is more similar to the query instance based on the $k$-th similarity function. Hence, it is more likely that the query instance shares the same label vector with the $j$-th retrieved instance. As a result, the similarity value $\mathbf{x}_{ij}[k]$ is correlated with the confidence value $v_{ij}$. Typically, different similarity functions can perform very differently in practice; hence they should be combined appropriately by a proper weight vector $\mathbf{w}$.
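As a small numerical illustration of these definitions, the sketch below builds a query-neighbor similarity matrix $X_i$ from a few RBF similarity functions over two hypothetical feature spaces and evaluates the prediction $\hat{\mathbf{y}}_{q} = Y_i X_i^\top \mathbf{w}$ obtained by combining Eq.(1) and Eq.(2). The feature dimensions, the $\gamma$ values, and the uniform weight vector are assumptions for illustration only; in L2NF the similarity functions also use learned metrics (Section 3.4) and the weights are learned (Section 3.5).

```python
import numpy as np

def rbf_similarity(q, d, gamma):
    """phi_k(q, d) = exp(-||q - d||^2 / gamma^2) for one feature space."""
    return np.exp(-np.sum((q - d) ** 2) / gamma ** 2)

def similarity_matrix(query_feats, neighbour_feats, gammas):
    """X_i in R^{N_f x n}: one row per (feature space, gamma) pair,
    one column per retrieved neighbour."""
    rows = []
    for k, Q in enumerate(query_feats):                      # k-th feature space
        for g in gammas:
            rows.append([rbf_similarity(Q, D[k], g) for D in neighbour_feats])
    return np.array(rows)

# Toy setup: 2 feature spaces (LBP-like and GIST-like), n = 5 neighbours, m = 10 names.
rng = np.random.default_rng(1)
query_feats = [rng.normal(size=64), rng.normal(size=32)]
neighbour_feats = [[rng.normal(size=64), rng.normal(size=32)] for _ in range(5)]
Y = np.eye(10)[rng.integers(0, 10, size=5)].T                # m x n weak label matrix
X = similarity_matrix(query_feats, neighbour_feats, gammas=[0.1, 1.0, 10.0])
w = np.ones(X.shape[0]) / X.shape[0]                         # placeholder fusion weights
y_hat = Y @ X.T @ w                                          # combine Eq. (1) and Eq. (2)
print(np.argsort(-y_hat))                                    # ranked name list
```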

Figure 2: Illustration of face naming in "Search-based Face Annotation". (a) A visual explanation of Eq.(3). For a query instance $q_i$, its top-n similar samples are $\{(d_{ij}, \mathbf{y}_{ij})\}_{j=1,2,\ldots,n}$, and the predicted label vector is $\hat{\mathbf{y}}_{q_i}$. $\mathbf{y}_{ij}$ is the label vector of the $j$-th nearest sample $d_{ij}$, and $\mathbf{x}_{ij}$ is the feature vector between the query instance $q_i$ and the similar example $d_{ij}$. The $k$-th item of $\mathbf{x}_{ij}$ is constructed with the $k$-th query-neighbor similarity function $\phi_k(q_i, d_{ij})$. The inner product $\mathbf{x}_{ij}^\top \mathbf{w}$ is the confidence value $v_{ij}$ for the $j$-th label vector $\mathbf{y}_{ij}$. (b) Three factors that affect the annotation performance of SBFA and the corresponding improvement solutions.

By combining Eq.(1) and Eq.(2), the estimated label vector $\hat{\mathbf{y}}_{q_i}$ for the query image $q_i$ can be computed as follows:
$$\hat{\mathbf{y}}_{q_i} = Y_i \mathbf{v}_i = Y_i X_i^\top \mathbf{w} \quad (3)$$
where $Y_i$ and $X_i$ vary across query instances, and $\mathbf{w}$ is independent of the query instances. We show a visual example in Figure 2 (a) to help understand this formula. To generate the final annotation result (a sorted candidate name list), we rank all the $m$ names by sorting the predicted label vector $\hat{\mathbf{y}}_{q_i}$ in descending order, as shown in Figure 2 (b). We denote by $\hat{\pi}_{q_i}$ the ranked name list, in which the item $\hat{\pi}_{q_i}[j] \in C$ is the $j$-th annotated name. Given that the correct name of the query instance $q_i$ is $c_{q_i}$, a good annotation system should ensure $c_{q_i}$ appears at a top-ranked position, ideally the first position. Hence, our problem aims to minimize the ranking position of the correct name $c_{q_i}$, which can be formulated as follows:
$$\min_{\mathbf{w}, Y_i, X_i} \sum_{i=1}^{N_t} \mathrm{loss}(c_{q_i}, \hat{\pi}_{q_i}) \quad (4)$$
where $\hat{\pi}_{q_i} = \mathrm{rank}(\hat{\mathbf{y}}_{q_i})$ and $\hat{\mathbf{y}}_{q_i} = Y_i X_i^\top \mathbf{w}$. In general, the loss value should be zero if the correct name $c_{q_i}$ is at the first position of $\hat{\pi}_{q_i}$, and the loss of $c_{q_i}$ at a top-ranked position should be smaller than that of $c_{q_i}$ at a lower-ranked position.

The goal of the whole learning to name faces scheme is to minimize the loss values over all the query instances by addressing the following three key factors, as shown in Figure 2 (b): (i) the noisy label matrix $Y_i$, (ii) the query-neighbor similarity matrix $X_i$, and (iii) the combination weight vector $\mathbf{w}$. In particular, we attempt to address each of them as follows:

- To address the noisy nature of web images, we propose to refine the initial weak label information $Y_i$ by a graph-based refinement scheme exploiting the "label smoothness" assumption;
- To address the variances of web facial images captured under various conditions (illumination, pose, age, gender, etc.), we construct multiple diverse query-neighbor similarity functions and further improve the similarity measurements by employing distance metric learning techniques;
- To find the optimal multimodal fusion, we propose a supervised learning to rank scheme to optimize the weight vector $\mathbf{w}$ by applying structural SVM algorithms on a set of training query samples ($N_t$ query images and their corresponding top-n retrieval results).

In the following, we introduce the solutions to these three problems respectively.

3.3 Weak Label Refinement
In this section, we aim to refine the initial weak label matrix for each query independently. In particular, for a query $q$ (the subscript of the query index is omitted), its top-n similar samples are $\{d_1, \ldots, d_n\}$ and the corresponding noisy label matrix is denoted by $\tilde{Y}$. We enhance the initial label matrix $\tilde{Y}$ in a manifold learning scheme based on the key assumption of "label smoothness", which means that the more similar the visual contents of two facial images are, the more likely they share the same labels [38]. In particular, for two images $d_i$ and $d_j$ among the top-n nearest samples, we can compute their similarity value vector based on the query-neighbor similarity functions: $\Phi(d_i, d_j) \in \mathbb{R}^{N_f}$. Using the weight vector $\mathbf{w}$ learned in Section 3.5, we obtain the similarity value between $d_i$ and $d_j$ as $S_{ij} = \mathbf{w}^\top \Phi(d_i, d_j)$. A large value of $S_{ij}$ indicates that $d_i$ is more similar to $d_j$; hence, a larger value of $S_{ij}$ implies that the label vectors of $d_i$ and $d_j$ are more likely to be the same. Based on the above motivation, we obtain the following formulation to enhance the initial weak label matrix $\tilde{Y}$:
$$\min_{Y \geq 0} \sum_{i,j} S_{ij} \|Y_{:i} - Y_{:j}\|^2 + \beta \|(Y - \tilde{Y}) \odot M\|_F^2 \quad (5)$$
where $\odot$ denotes the Hadamard product of two matrices, and $M$ is the mask matrix indicating the non-zero values in $\tilde{Y}$. In Eq.(5), the first term enforces the "label smoothness" assumption. Following the previous work [39], the second term is a regularization term that prevents the refined label matrix from deviating too much from the initial weak matrix $\tilde{Y}$. Notice that the label refinement problem in Eq.(5) depends on the weight vector $\mathbf{w}$ obtained from Eq.(9), while learning the weight vector $\mathbf{w}$ depends on the input data $Y_i$, as shown in Eq.(9). In our problem, we update the label matrices $Y_i$, $i = 1, \ldots, N_t$, and the weight vector $\mathbf{w}$ iteratively.
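A minimal sketch of how the refinement problem in Eq.(5) could be solved numerically is given below, using plain projected gradient descent with a non-negativity projection. The paper does not commit to this particular solver; the neighbour similarity matrix S (which in L2NF would be $S_{ij} = \mathbf{w}^\top \Phi(d_i, d_j)$) is random here for illustration, and the step size and iteration count are arbitrary assumptions.

```python
import numpy as np

def refine_labels(Y0, S, beta=10.0, lr=0.01, iters=200):
    """Projected gradient descent on
       sum_ij S_ij ||Y_:i - Y_:j||^2 + beta * ||(Y - Y0) * M||_F^2,  subject to Y >= 0.
    Y0: m x n initial weak label matrix; S: n x n neighbour similarity matrix."""
    M = (Y0 != 0).astype(float)            # mask of initially labelled entries
    S = 0.5 * (S + S.T)                    # symmetrise the similarity graph
    L = np.diag(S.sum(axis=1)) - S         # graph Laplacian of the neighbour graph
    Y = Y0.astype(float).copy()
    for _ in range(iters):
        grad = 4.0 * Y @ L + 2.0 * beta * (Y - Y0) * M
        Y = np.maximum(Y - lr * grad, 0.0) # gradient step + non-negativity projection
    return Y

# Illustrative run on random data: 4 names, 8 retrieved neighbours.
rng = np.random.default_rng(2)
Y0 = np.eye(4)[rng.integers(0, 4, size=8)].T
S = rng.random((8, 8))
print(refine_labels(Y0, S).round(2))
```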

3.4 Multimodal Representation Construction
In this section, we aim to construct the multimodal representation between the query instances and their corresponding top-ranked similar samples, which is based on the set of query-neighbor similarity functions $\Phi = \{\phi_k\}_{k=1,2,\ldots,N_f}$. Generally, we can represent one facial image in different feature spaces, e.g., the LBP feature, the GIST feature, and the Gabor feature. Suppose there are $K$ kinds of features in total; we denote by $(q^{(k)}, d_i^{(k)})$ the query-neighbor feature pair between the query image $q$ and its $i$-th nearest sample $d_i$ in the $k$-th feature space. Following existing works on distance metric learning [45], we can define a distance metric $M^{(k)}$ in the $k$-th feature space; hence, the distance between $q^{(k)}$ and $d_i^{(k)}$ can be expressed by
$$d_{M^{(k)}}(q^{(k)}, d_i^{(k)}) = \sqrt{(q^{(k)} - d_i^{(k)})^\top M^{(k)} (q^{(k)} - d_i^{(k)})}$$
and the inner product between $q^{(k)}$ and $d_i^{(k)}$ can be expressed by
$$\langle q^{(k)}, d_i^{(k)} \rangle_{M^{(k)}} = (q^{(k)})^\top M^{(k)} d_i^{(k)}.$$

Based on the $k$-th feature space and the distance metric $M^{(k)}$, there are two ways to compute the similarity value between two instances. One way is to use the heat kernel to transform the distance value into a similarity value, which is widely used in semi-supervised learning [3]. In detail, the similarity value between $q^{(k)}$ and $d_i^{(k)}$ can be computed as follows:
$$\phi(q, d_i; k, \gamma) = \exp\big(-d^2_{M^{(k)}}(q^{(k)}, d_i^{(k)})/\gamma^2\big), \quad \gamma \in \Gamma$$
where the query-neighbor similarity function $\phi$ depends on the feature type $k$ and the parameter $\gamma$. As a result, we obtain $K \cdot |\Gamma|$ query-neighbor similarity functions, where $\Gamma$ is the set of all values of $\gamma$ used in the experiments.

The other way to compute the similarity value is to use the sparse representation technique, which has been adopted to construct adjacency matrices in some recent works [34]. In detail, in the $k$-th feature space with $M^{(k)}$ as the distance metric, we can obtain the sparse representation $s^{(k)}$ for $q^{(k)}$ based on the dictionary $D^{(k)} = [d_1^{(k)}, \ldots, d_n^{(k)}]$ with the kernelized sparse coding algorithm [13], which can be formulated as follows:
$$s^{(k)}_\lambda = \arg\min_{s^{(k)} \geq 0} \; \langle q^{(k)}, q^{(k)} \rangle_{M^{(k)}} + (s^{(k)})^\top K^{(k)}_{DD} s^{(k)} - 2 (s^{(k)})^\top K^{(k)}_{Dq} + \lambda \|s^{(k)}\|_1$$
where $s^{(k)}_\lambda$ is the obtained sparse representation with parameter $\lambda$, $K^{(k)}_{DD}$ is an $n \times n$ matrix with $\{K^{(k)}_{DD}\}_{ij} = \langle d_i^{(k)}, d_j^{(k)} \rangle_{M^{(k)}}$, and $K^{(k)}_{Dq}$ is an $n \times 1$ vector with $\{K^{(k)}_{Dq}\}_i = \langle q^{(k)}, d_i^{(k)} \rangle_{M^{(k)}}$. The $i$-th item in the sparse representation $s^{(k)}_\lambda$ reflects the representative ability of the $i$-th dictionary instance $d_i^{(k)}$ for the encoded instance $q^{(k)}$. Hence, the similarity value between $q^{(k)}$ and $d_i^{(k)}$ is computed as follows:
$$\phi(q, d_i; k, \lambda) = s^{(k)}_\lambda[i], \quad \lambda \in \Lambda$$
where the query-neighbor similarity function $\phi$ depends on the feature type $k$ and the parameter $\lambda$. As a result, we obtain $K \cdot |\Lambda|$ query-neighbor similarity functions, where $\Lambda$ is the set of all values of $\lambda$ used in the experiments.

Finally, for each feature space, we must choose a distance metric $M^{(k)}$. We can use the original feature space by setting $M^{(k)}$ to the identity matrix. To keep all the data points within the same class close and to separate data points from different classes far apart, it is better to adopt distance metric learning (DML) techniques to learn a better distance metric for each feature space respectively. Generally, any supervised DML algorithm can be used, since the query-neighbor similarity function $\phi$ is independent of the DML algorithm. In our problem, we adopt the "Metric Learning to Rank" (MLR) algorithm [27], which learns a metric such that rankings of data induced by the learned distance are optimized against a ranking loss measure (e.g., ROC area (AUC) or MAP). In this setting, the "relevant" results (in the same class) should lie close in space to the query, and the "irrelevant" results should be pushed far away.

3.5 Optimal Fusion of Multiple Modalities
In this section, we aim to find the optimal weight vector $\mathbf{w}$ for the multimodal fusion. In particular, given a label matrix $Y_i$ and a query-neighbor similarity matrix $X_i$, we can directly obtain the annotation result of $q_i$ with the fusion vector $\mathbf{w}$ according to Eq.(3). Hence, finding the optimal fusion vector $\mathbf{w}$ that achieves the best ranked name list in Eq.(4) is equivalent to learning a multimodal annotation function with parameter $\mathbf{w}$,
$$f(\mathbf{w}): Y_i X_i \mapsto \Pi,$$
based on a set of training samples $\{(q_i, \mathbf{y}_{q_i}, Y_i, X_i)\}$ with $i = 1,2,\ldots,N_t$ by minimizing the annotation errors. The input space contains all the products of a label matrix $Y_i$ and a query-neighbor similarity matrix $X_i$. The output space $\Pi$ contains all possible annotation results (ranked name lists). Obviously, the result of the function $f$ is a structural output instead of a scalar value. Hence, the problem can be formulated as a structural SVM problem [35, 21], which has been extensively studied and has been used for ranking problems in [46, 27, 44]. To specialize the general structural SVM algorithm for a particular problem, we need to define two functions: the loss function $\Delta$ and the feature combination function $\Psi$.

3.5.1 Loss Function
The loss function is denoted $\Delta(\pi, \hat{\pi})$ in our problem, where $\pi$ is the ground-truth ranked name list generated by the ground-truth label vector $\mathbf{y}$, while $\hat{\pi}$ is the predicted ranked name list generated by the predicted label vector $\hat{\mathbf{y}}$. Notice that we omit the subscript for the query index for clarity. The "hit rate" at the top $t$ annotated names is used as the performance metric, which measures the likelihood of having the correct name among the top $t$ annotated names. In real-world applications, we prefer a high hit rate with a small $t$, that is, the correct names should appear at top-ranked positions in the ranked name list $\hat{\pi}$. For one query facial image, suppose it has $t_1$ correct names ($t_1 = 1$ in our problem since we assume there is only one name for each person); all the correct names are at the top positions of the ground-truth name list $\pi$, followed by all the incorrect names. For the predicted name list $\hat{\pi}$, if we consider only its top $t_2$ names, the loss function can be formulated as follows:
$$\Delta(\pi, \hat{\pi}) = 1 - \frac{1}{t_1} \sum_{i=1}^{t_1} \sum_{j=1}^{t_2} h_1(\pi_i, \hat{\pi}_j)\,\frac{1}{j} \quad (6)$$
where $h_1(\cdot,\cdot)$ is a judgement function that equals 1 if the $i$-th name $\pi_i$ in $\pi$ is the same as the $j$-th name $\hat{\pi}_j$ in $\hat{\pi}$, and 0 otherwise. For example, if $t_2 = 1$, we focus only on the first annotated name, which means that if the first name in $\hat{\pi}$ is correct, the loss value is 0; otherwise, the loss value is 1. If $t_2 = m$, the loss function in Eq.(6) becomes a special case of the MAP loss.
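The ranking loss of Eq.(6) with $t_1 = 1$ has a direct implementation, shown below as a sketch; the toy ranked lists and the choice of $t_2$ are for illustration only.

```python
def ranking_loss(correct_name, pi_pred, t2):
    """Eq. (6) with t1 = 1: the loss is 1 - 1/j if the single correct name
    appears at position j within the top-t2 predicted names, and 1 otherwise."""
    for j, name in enumerate(pi_pred[:t2], start=1):
        if name == correct_name:
            return 1.0 - 1.0 / j
    return 1.0

# Examples: correct name ranked 1st, 2nd, and outside the top-t2 window.
print(ranking_loss("Alice", ["Alice", "Bob", "Carol"], t2=2))   # 0.0
print(ranking_loss("Alice", ["Bob", "Alice", "Carol"], t2=2))   # 0.5
print(ranking_loss("Alice", ["Bob", "Carol", "Alice"], t2=2))   # 1.0
```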

3.5.2 Structural-based Feature Combination
Typically, the feature combination function aims to combine a set of feature vectors based on a ranking result. In our problem, the ranking result is denoted by $\pi$, which contains all the $m$ names in the name set $C = \{c_1, \ldots, c_m\}$. For a query facial image $q$, its name vector is $\mathbf{y} \in \{0,1\}^m$ with $\|\mathbf{y}\|_0 = 1$, since each facial image has only one correct name. Hence, $y_k = 1$ indicates that the $k$-th name $c_k$ in $C$ is the correct name for the query image $q$. We denote by $I_1$ the index set of all the correct names, which contains only one item ($k$) in the above case. Similarly, we denote by $I_2$ the index set of all the incorrect names, which contains $m-1$ items $\{1, \ldots, k-1, k+1, \ldots, m\}$. Following the previous work [46], we define the feature combination function $\Psi$ to combine the input label matrix $Y$ and the input query-neighbor similarity matrix $X$ based on a ranked name list $\pi$, as shown in Eq.(7):
$$\Psi(Y, X, \pi) = \frac{1}{|I_1||I_2|} \sum_{i \in I_1} \sum_{j \in I_2} h_2(c_i, c_j, \pi)\,[X Y_{i:}^\top - X Y_{j:}^\top] \quad (7)$$
where $Y_{k:}$ is the $k$-th row of the label matrix $Y$, and $h_2(\cdot,\cdot,\cdot)$ is a ranking judgement function: if the name $c_i$ is ranked before $c_j$ in the ranked name list $\pi$, then $h_2(c_i, c_j, \pi) = 1$; otherwise, $h_2(c_i, c_j, \pi) = -1$ if $c_j$ is ranked before $c_i$.

Remark. For one group of input data $(Y, X, \mathbf{w})$, we can compute the label vector with Eq.(3), $\mathbf{y} = Y X^\top \mathbf{w}$, which is then used to generate the ranked name list $\pi$ following the previous discussion. As shown in [44], based on the feature combination function in Eq.(7), we can obtain the same ranked name list by solving the following problem:
$$\pi^* = \arg\max_{\pi \in \Pi} F(\mathbf{w}, Y, X, \pi) = \arg\max_{\pi \in \Pi} \mathbf{w}^\top \Psi(Y, X, \pi) \quad (8)$$
where $F(\mathbf{w}, Y, X, \pi)$ is the discriminant function. This indicates that we can learn the weight vector $\mathbf{w}$ by maximizing the discriminant function $F(\mathbf{w}, Y, X, \pi)$ over a set of correct ranked name lists, and predict the label vector of an unseen query with Eq.(3).

Using the loss function in Eq.(6) and the feature combination function in Eq.(7), we obtain the objective function for learning the weight vector $\mathbf{w}$ based on the structural SVM, shown as follows:
$$\min_{\mathbf{w},\, \xi = [\xi_1, \ldots, \xi_{N_t}]} \; \frac{1}{2}\mathbf{w}^\top \mathbf{w} + \frac{C}{N_t}\sum_{i=1}^{N_t} \xi_i \quad (9)$$
$$\text{s.t. } \xi_i \geq 0 \text{ and, } \forall \pi_{q_i} \in \Pi_i^*,\; \forall \pi \in \Pi \setminus \Pi_i^*: \quad \mathbf{w}^\top \Psi(Y_i, X_i, \pi_{q_i}) \geq \mathbf{w}^\top \Psi(Y_i, X_i, \pi) + \Delta(\pi_{q_i}, \pi) - \xi_i$$

In the above formulation, the objective function of Eq.(9) is similar to that of the general SVM algorithm, where $C$ is a regularization parameter that trades off between the training error and the model complexity. For the constraints, if the value of the discriminant function $F$ in Eq.(8) for an incorrect ranking $\pi \in \Pi \setminus \Pi_i^*$ is greater than that for a true ranking $\pi_{q_i} \in \Pi_i^*$, the slack variable $\xi_i$ must be at least $\Delta(\pi_{q_i}, \pi)$, which indicates that the sum of slacks $\sum_i \xi_i$ upper bounds the empirical risk of the training samples based on the loss function defined in Eq.(6). Since the number of constraints in Eq.(9) is extremely large, we adopt the cutting plane algorithm [21, 46] to efficiently solve the optimization in Eq.(9), as shown in Algorithm 1. More details about the cutting plane algorithm can be found in [21].

Algorithm 1: Cutting plane algorithm for Eq.(9)
Input: $(q_i, \mathbf{y}_{q_i}, X_i, Y_i)$, $i = 1,2,\ldots,N_t$; $C$; $\epsilon$
Output: weight vector $\mathbf{w}$
1  $W_i \leftarrow \emptyset$ for all $i = 1, \ldots, N_t$
2  repeat
3      for $i = 1$ to $N_t$ do
4          $E(\pi; \mathbf{w}) \equiv \Delta(\pi_{q_i}, \pi) + \mathbf{w}^\top \Psi(Y_i, X_i, \pi)$
5          $\hat{\pi} = \arg\max_{\pi \in \Pi} E(\pi; \mathbf{w})$
6          if $E(\hat{\pi}; \mathbf{w}) > \xi_i + \epsilon$ then
7              $W_i \leftarrow W_i \cup \{\hat{\pi}\}$
8              Get $(\mathbf{w}, \xi)$ by solving Eq.(9) over $W = \bigcup_i W_i$
9          end
10     end
11 until no $W_i$ has changed during the iteration;

3.6 Algorithm for Learning to Name Faces
In the above, we have separately discussed the three key factors that affect the final annotation result of the proposed SBFA framework, namely the label matrix $Y$, the query-neighbor similarity matrix $X$, and the weight vector $\mathbf{w}$, which collectively determine the annotation result as $\mathbf{y} = Y X^\top \mathbf{w}$. In this section, we present the overall training procedure for unifying all three factors, and describe how to apply the models learned by L2NF for on-the-fly face annotation of a novel query facial image.

Algorithm 2: L2NF algorithm for training the models
Input: training set $(q_i, \mathbf{y}_{q_i}, d_{ij}, \mathbf{y}_{ij})$ in $K$ feature spaces with $i = 1, \ldots, N_t$ and $j = 1, \ldots, n$; name set $C$ with $m$ names; parameters $\beta$, $\Lambda$, and $\Gamma$
Output: weight vector $\mathbf{w}$ and similarity function set $\Phi$
1  for $k = 1$ to $K$ do
2      Learn the optimal distance metric $M^{(k)}$ as described in Section 3.4
3  end
4  Build the query-neighbor similarity functions $\Phi$ with varied combinations of $\lambda \in \Lambda$, $\gamma \in \Gamma$, and $M^{(k)}$, $k = 1,2,\ldots,K$
5  Construct the query-neighbor feature matrices $X_i$, $i = 1, \ldots, N_t$
6  repeat
7      Get the weight vector $\mathbf{w}$ by solving Eq.(9)
8      for $i = 1$ to $N_t$ do
9          Refine the label matrix $Y_i$ by solving Eq.(5)
10     end
11 until CONVERGENCE;

Algorithm 2 shows the overall algorithmic framework for training the models by L2NF. At the beginning, we optimize the distance metric for each facial feature space using the distance metric learning technique discussed in Section 3.4. After obtaining the set of optimal distance metrics $M^{(k)}$, we construct the set of query-neighbor similarity functions $\Phi$ based on the set of multiple diverse facial feature representations and their distance measures. Using the query-neighbor similarity functions $\Phi$, we generate the query-neighbor feature matrices $X_i$ for each query $q_i$ in the training query set. Finally, we optimize both the weight vector $\mathbf{w}$ and the refined label matrices $Y_i$ by an iterative scheme. At the end of the whole training scheme, we obtain the final model, which consists of the set of query-neighbor similarity functions $\Phi$ and the optimal weight vector $\mathbf{w}$ for multimodal fusion.
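To make the structural SVM ingredients of Section 3.5 concrete, the sketch below implements the feature combination map $\Psi(Y, X, \pi)$ of Eq.(7) and the discriminant $F(\mathbf{w}, Y, X, \pi) = \mathbf{w}^\top \Psi(Y, X, \pi)$ of Eq.(8) for the single-correct-name case. The cutting-plane training loop of Algorithm 1 itself is omitted, and the toy inputs are random placeholders.

```python
import numpy as np

def feature_map(Y, X, pi, correct_idx):
    """Psi(Y, X, pi) of Eq. (7) when I1 = {correct_idx} and I2 is every other name.
    Y: (m, n) label matrix, X: (N_f, n) similarity matrix, pi: ranked name indices."""
    m = Y.shape[0]
    rank = {name: r for r, name in enumerate(pi)}
    psi = np.zeros(X.shape[0])
    incorrect = [j for j in range(m) if j != correct_idx]
    for j in incorrect:
        h2 = 1.0 if rank[correct_idx] < rank[j] else -1.0   # +1 if the true name precedes c_j
        psi += h2 * (X @ (Y[correct_idx] - Y[j]))           # X Y_{i:}^T - X Y_{j:}^T
    return psi / len(incorrect)

def discriminant(w, Y, X, pi, correct_idx):
    """F(w, Y, X, pi) = w^T Psi(Y, X, pi) as in Eq. (8)."""
    return float(w @ feature_map(Y, X, pi, correct_idx))

# Toy usage: evaluate F for the ranking induced by sorting y_hat = Y X^T w (cf. Eq. (3)).
rng = np.random.default_rng(4)
Y = np.eye(6)[rng.integers(0, 6, size=8)].T       # 6 names, 8 neighbours
X = rng.random((10, 8))                           # 10 similarity functions
w = rng.random(10)
pi_star = list(np.argsort(-(Y @ X.T @ w)))        # ranked name list from Eq. (3)
print(discriminant(w, Y, X, pi_star, correct_idx=pi_star[0]))
```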

The above training framework, as shown in Algorithm 2, can be performed in an off-line learning manner. After completing the training, we can apply the model to online face annotation for naming a novel query facial image on-the-fly. Algorithm 3 summarizes the proposed algorithm for on-the-fly annotation of an unseen query facial image.

Algorithm 3: On-the-fly face annotation by L2NF
Input: novel query $q$ in $K$ feature spaces; query-neighbor similarity functions $\Phi$; optimal weight vector $\mathbf{w}$
Output: annotation result $\pi$
1  Retrieve the top-n similar images $\{(d_i, \mathbf{y}_i)\}_{i=1,2,\ldots,n}$
2  Construct the query-neighbor similarity matrix $X$ based on the query-neighbor similarity functions $\Phi$
3  Obtain $Y$ by refining the initial label matrix with Eq.(5)
4  $\mathbf{y} = Y X^\top \mathbf{w}$
5  Get the ranked name list $\pi$ by sorting $\mathbf{y}$ in descending order

Specifically, given a new query facial image, we first find a short list of the most similar faces based on CBIR techniques. After that, we construct the query-neighbor similarity matrix $X$ based on the query-neighbor similarity functions $\Phi$. We then refine the initial label matrix $Y$ of the current query using Eq.(5). Finally, we compute the label vector $\mathbf{y}$ by Eq.(3) and obtain the final annotation result $\pi$ by sorting the label vector $\mathbf{y}$ in descending order.

4. EXPERIMENTAL RESULTS
4.1 Experimental Testbed
Several web facial image databases are available on the WWW and have been used in previous research works, e.g., LFW [20], Labeled Yahoo! News [16], and FAN-Large. Although the number of persons in these three databases is large, the number of images for each person is quite small. For example, there are 13,233 images of 5,749 people in the LFW database. The recent PubFig database [23] is different from these databases: it was constructed by collecting online news sources, and it contains 200 persons and 58,797 images. Due to copyright issues, only the image URL addresses are released. As some URL links are no longer available, 41,609 images were collected by our crawler in total. For each downloaded image, we crop the face region out according to the provided face position rectangle and resize all the face images to the same size. We construct the query set by randomly collecting 10 images per person from the whole PubFig database. Hence, there are a total of 2,000 test query images used for performance evaluation, while the remaining 39,609 images are used as the retrieval database. To construct the training set, we randomly collect 2,000 images in the same way from the retrieval database, with the remaining 37,609 images as the retrieval database for the training set. Several facial image examples are shown in the first row of Figure 3.

Figure 3: Face image examples in the PubFig database (the first row) and the WDB database (the second row).

To evaluate the L2NF framework on weakly labeled web facial images, we use another western celebrity database, the "weakly labeled web facial image database" (WDB for short), which was released in [39]. There are a total of 1,600 query images with ground truth in the WDB database. In our experiment, we divide these queries into two parts of equal size and randomly choose one part for model training. In the WDB database, there are a total of four retrieval databases of different sizes. In our experiment, we use two sub-databases of different scales: "WDB-040K" and "WDB-600K". "WDB-040K" is a smaller database with 53,448 images belonging to 400 persons, while "WDB-600K" is a large-scale database with 714,454 images belonging to 6,000 persons. All the facial images were aligned into the same well-defined positions by the face alignment algorithm in [48], as shown in the second row of Figure 3.

To construct the query-neighbor similarity functions, we adopt three kinds of features as the facial descriptors: the LBP feature [1, 2], the GIST feature [33, 39], and the Gabor feature [25]. In particular, the 2891-dimensional LBP feature is extracted by dividing the face images into 7x7 blocks. To reduce the computational complexity, the LBP feature is further projected into a lower 500-dimensional feature space using Principal Component Analysis (PCA). Both the GIST features and the Gabor features are extracted over the whole aligned facial images. The parameter set for the heat kernel is Γ = {-1, 0, 1, 2, 3, 4}, while the parameter set for the sparse representation is Λ = {0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5}. The parameters β and C of L2NF are set to 10 and 1000, respectively. For the distance metric learning algorithm (MLR), the parameter C is set to 10.

Following previous works, we adopt the hit rate at the top-t annotated results as the performance metric, which measures the likelihood of having the true label among the top-t annotated names for a query facial image. We compare the proposed L2NF framework with several existing works proposed for web-scale face annotation or general image annotation, including WLRLCC [39], SGSSL [34], MtBGS [42], MRR [43], and a simple baseline algorithm that adopts weighted majority voting (WMV). We also extend the WLRLCC algorithm and the WMV algorithm into a multimodal scheme, by equally combining the face naming results from different facial feature spaces, denoted "WLRLCC_mm" and "WMV_mm".
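As a sketch of the dimensionality reduction step described above (block-wise LBP histograms projected from 2,891 to 500 dimensions with PCA), the snippet below performs the projection with a plain SVD-based PCA in NumPy; the random inputs stand in for real LBP histograms, and the exact preprocessing used in the paper may differ.

```python
import numpy as np

def pca_project(features, n_components=500):
    """Centre the data and project it onto the top principal components,
    computed via SVD (a plain stand-in for the PCA step described above)."""
    mean = features.mean(axis=0)
    centered = features - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]                  # (n_components, d)
    return centered @ components.T, mean, components

# Illustrative: 1,000 fake 2,891-dimensional block-LBP histograms -> 500 dimensions.
rng = np.random.default_rng(5)
lbp_feats = rng.random((1000, 2891))
reduced, mean, components = pca_project(lbp_feats, n_components=500)
print(reduced.shape)                                # (1000, 500)
```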

4.2 Experiments on "WDB-040K"
This experiment aims to evaluate the face naming performance of the proposed L2NF framework on the "WDB-040K" database in comparison with the aforementioned seven existing algorithms. For the facial image retrieval task in L2NF, we adopt the JEC algorithm to combine the distances from the different face descriptors [26], which allows each individual distance to contribute equally. The same retrieval scheme is used to find the top-ranked similar images for the multimodal extensions "WLRLCC_mm" and "WMV_mm". For the single-modal solutions, we use the GIST feature as the facial descriptor, which is similar to the experimental setting in [39]. From the 1,600 query facial images, we randomly select half to learn the distance metrics of the different facial features and the multimodal combination weight vector $\mathbf{w}$. This procedure is repeated 10 times and the average performance is computed over the 10 trials, as shown in Table 1.

Table 1: Face naming performance (hit rate at top-T, T = 1,...,5; mean and standard deviation over 10 trials) on the "WDB-040K" database for WMV, SGSSL, MtBGS, MRR, WLRLCC, WMV_mm, WLRLCC_mm, and the proposed L2NF.

Several observations can be drawn from the results. First, among the single-modal solutions, the WLRLCC algorithm achieves the best performance using only one type of facial feature (GIST). In detail, the simple baseline WMV achieves about 60.9% at T = 1, which is boosted to 76.7% by WLRLCC. Second, if multiple facial features are available, the performance of the multimodal WLRLCC_mm increases to 80.9%, and that of the multimodal WMV_mm to 65.6%. This indicates that using multiple facial representations is helpful for the face naming task, which validates the importance of this study. More specifically, the improvements of these two algorithms (WLRLCC_mm and WMV_mm) in the multimodal scheme are mainly gained from two aspects: (i) the retrieval result becomes better when multiple features and distance measures are combined by JEC [26]; for example, for the WMV algorithm, if we use the multiple features for the retrieval step but only the GIST feature for the annotation step, its performance is 64.2%, which is higher than that obtained using only the GIST feature for both the retrieval and annotation steps (60.9%); and (ii) the combination enlarges the probability that the correct name is chosen. Both aspects also benefit the L2NF framework. Last but not least, the proposed L2NF framework further improves the face naming performance to 86.6%, which indicates that the constructed multimodal representations are discriminative and that the learned fusion vector can efficiently combine the various query-neighbor similarity functions in different facial feature spaces. The performance improvement is mainly gained from three aspects: the refined label matrix, the multimodal representation constructed with distance metric learning techniques, and the learned optimal combination of the various query-neighbor similarity functions. More details will be discussed in Section 4.5.

4.3 Experiments on "WDB-600K" and "PubFig"
This experiment aims to evaluate the face naming performance of the proposed L2NF framework on two different and larger facial image databases: "WDB-600K" and "PubFig". The two databases were collected under very different approaches and settings, which helps us evaluate the generalization of the proposed technique on real-world data under different scenarios. For clarity, we mainly focus on the evaluation of the algorithms using multimodal representations. The experimental results are shown in Figure 4 and Table 2.

Figure 4: Face naming performance (hit rate at top-t) on the two databases: "WDB-600K" and "PubFig".

Table 2: Face naming performance (hit rate at top-T, T = 1,...,5) of WLRLCC, WMV, WMV_mm, WLRLCC_mm, and L2NF on the "WDB-600K" and "PubFig" databases.

We can make several observations from the results. First of all, similar to the previous observations, the proposed L2NF framework consistently achieves the best annotation performance among all the compared algorithms. This shows that, across different databases, the proposed L2NF algorithm is always helpful for improving the annotation performance. Secondly, the performance on "WDB-600K" is lower than that on "WDB-040K", which is consistent with the previous observation in [39], since increasing the number of persons leads to a larger database with more images, which makes the retrieval task more challenging. Finally, it is interesting to observe that the overall annotation performance on the PubFig database is worse than the results on the WDB-series databases. There are several reasons for this observation: (i) the number of images per person varies a lot in the PubFig database, in which several persons have only about 20 images; this is insufficient for a data-driven scheme, hence the annotation performance is reduced; and (ii) all the facial images in the PubFig database are cropped according to the face position without adopting any face alignment algorithm, which makes the facial descriptors sensitive to face views.

4.4 Evaluation on Training Query Sizes
This experiment aims to evaluate the impact of the training query size in the L2NF framework based on the "WDB-040K" database. In the previous experiments, we adopted half of the 1,600 query images as the training set. In this experiment, we evaluate the annotation performance under varied numbers of training query images. Specifically, instead of using all the training samples (800 in total), we build three smaller training sets by randomly collecting 200, 400, and 600 query images, respectively. The experimental results for top-1 performance are shown in Figure 5. From the results, it is clear that the face naming performance increases when more training samples are available, and the final performance tends to saturate when the training query size exceeds 600. Finally, even with a small number of training queries, e.g., only 200 training samples, the L2NF algorithm achieves a good performance (83.4%), which remains much better than the state-of-the-art "WLRLCC_mm" scheme.

Figure 5: Face naming performance (hit rate at top-t = 1) of L2NF with varied sizes of training queries.
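The hit rate at top-T used in Tables 1 and 2 and Figures 4 and 5 has a straightforward implementation, sketched below; the toy ranked lists and ground-truth names are illustrative only.

```python
def hit_rate_at_t(ranked_lists, true_names, t):
    """Fraction of queries whose correct name appears among the top-t predictions."""
    hits = sum(1 for ranked, truth in zip(ranked_lists, true_names)
               if truth in ranked[:t])
    return hits / len(true_names)

# Toy evaluation: 4 queries, candidate name indices already ranked per query.
ranked_lists = [[3, 1, 0], [2, 5, 4], [0, 3, 1], [5, 2, 3]]
true_names = [1, 4, 0, 2]
for t in (1, 2, 3):
    print(t, hit_rate_at_t(ranked_lists, true_names, t))
```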

4.5 Analysis of the Performance Gains
This experiment aims to analyze how different factors affect the face naming performance of the proposed L2NF scheme, as shown in Figure 2 (b). In particular, there are three key factors: the refined label matrix, the constructed multiple representations, and the optimal weight vector for multimodal fusion.

Table 3: Evaluation and analysis of the performance gains: top-1 hit rate of L2NF(w=1, M=I), L2NF(w=w*, M=I), L2NF(w=1, M=M*), and L2NF(w=w*, M=M*).

First of all, to examine the efficacy of the refined label matrix $Y$, we compare it with the initial raw label matrix $\tilde{Y}$ using the simplest baseline algorithm WMV, so as to exclude the other factors affecting annotation performance. Our result indicates that the refined label matrix boosts the performance from 60.9% (without refinement) to 62.0% (after refinement). Further, we examine the efficacy of the other two factors, as shown in Table 3. We denote by M* the learned distance metric and by w* the optimal multimodal fusion vector. When the weight vector is fixed to 1 and the distance metrics are based on the Euclidean distance (M set to the identity matrix), the top-1 performance of the resulting L2NF algorithm, denoted L2NF(w=1, M=I), is 79.4%. This value is boosted to 84.3% if we adopt the optimized metric M*, denoted L2NF(w=1, M=M*), and further boosted to 86.6% if we also use the optimal weight vector w* for multimodal fusion, denoted L2NF(w=w*, M=M*). In conclusion, the proposed L2NF framework is able to leverage all three factors in a systematic and synergistic scheme to achieve state-of-the-art performance.

5. LIMITATIONS
Despite the promising results on the benchmark search-based face annotation tasks, our work still has some limitations, particularly regarding two important assumptions made in our scheme. (i) We assume each name corresponds to a unique single person, which makes our problem more clearly defined. However, this is not always true in real-life scenarios; for example, two persons may share the same name, or one person may have multiple names. Such practical duplicate-name issues may be partially solved by extending our algorithms, e.g., by learning the similarity between any two names in both the name space and the visual space. (ii) We assume the top retrieved web facial images are related to the query name. This clearly holds for celebrities who have many photos on the internet. However, when the query facial image is not of a well-known person, there may not exist many relevant facial images on the WWW. This is a common limitation of all existing data-driven annotation techniques. Finally, although the performance of L2NF is much better, more facial features are used, which means higher computational cost and storage requirements. We may overcome this limitation by adopting hashing techniques in our future work.

6. CONCLUSIONS
This paper investigated an emerging paradigm of search-based face annotation for automated face naming by mining large-scale web facial images freely available on the WWW. To fully exploit the top-ranked similar facial images and their weak labels for face annotation, we proposed a novel framework of "Learning to Name Faces" (L2NF) that explores multimodal learning on weakly labeled facial image data. In particular, our framework makes three major contributions: (i) we suggest enhancing the initial weak labels by a graph-based refinement scheme based on the "label smoothness" assumption; (ii) we propose to explore multiple facial feature representations and further optimize the distance metric on each facial feature space using distance metric learning techniques; and (iii) finally, we propose to learn the optimal multimodal fusion of diverse facial features by formulating the problem as a learning to rank task, which can be efficiently solved by the existing structural SVM algorithm. We conducted a set of extensive empirical studies on two benchmark real-world facial image databases, in which encouraging results show that the proposed L2NF model significantly boosts the annotation performance of the search-based face annotation task.

Acknowledgements
This research was supported in part by an MOE tier 1 project grant (RG33/11), a Microsoft Research grant, and the Interactive and Digital Media Programme Office (IDMPO) and National Research Foundation (NRF) hosted at the Media Development Authority (MDA) under Grant No. MDA/IDM/2012/8/8-2 VOL 01. Jianke Zhu was supported by the National Natural Science Foundation of China.

7. REFERENCES
[1] T. Ahonen, A. Hadid, and M. Pietikäinen. Face recognition with local binary patterns. In ECCV '04, 2004.
[2] T. Ahonen, A. Hadid, and M. Pietikäinen. Face description with local binary patterns: Application to face recognition. IEEE TPAMI, 28(12), 2006.
[3] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7, 2006.
[4] T. L. Berg, A. C. Berg, J. Edwards, and D. Forsyth. Who's in the picture. In NIPS '05, Cambridge, MA. MIT Press.
[5] T. L. Berg, A. C. Berg, J. Edwards, M. Maire, R. White, Y. W. Teh, E. G. Learned-Miller, and D. A. Forsyth. Names and faces in the news. In IEEE CVPR '04, 2004.
[6] J. Bu, B. Xu, C. Wu, C. Chen, J. Zhu, D. Cai, and X. He. Unsupervised face-name association via commute distance. In ACM MM '12, 2012.
[7] Z. Cao, Q. Yin, X. Tang, and J. Sun. Face recognition with learning-based descriptor. In IEEE CVPR '10, 2010.


More information

Local Quaternary Patterns and Feature Local Quaternary Patterns

Local Quaternary Patterns and Feature Local Quaternary Patterns Local Quaternary Patterns and Feature Local Quaternary Patterns Jayu Gu and Chengjun Lu The Department of Computer Scence, New Jersey Insttute of Technology, Newark, NJ 0102, USA Abstract - Ths paper presents

More information

FINDING IMPORTANT NODES IN SOCIAL NETWORKS BASED ON MODIFIED PAGERANK

FINDING IMPORTANT NODES IN SOCIAL NETWORKS BASED ON MODIFIED PAGERANK FINDING IMPORTANT NODES IN SOCIAL NETWORKS BASED ON MODIFIED PAGERANK L-qng Qu, Yong-quan Lang 2, Jng-Chen 3, 2 College of Informaton Scence and Technology, Shandong Unversty of Scence and Technology,

More information

Unsupervised Learning

Unsupervised Learning Pattern Recognton Lecture 8 Outlne Introducton Unsupervsed Learnng Parametrc VS Non-Parametrc Approach Mxture of Denstes Maxmum-Lkelhood Estmates Clusterng Prof. Danel Yeung School of Computer Scence and

More information

Face Recognition University at Buffalo CSE666 Lecture Slides Resources:

Face Recognition University at Buffalo CSE666 Lecture Slides Resources: Face Recognton Unversty at Buffalo CSE666 Lecture Sldes Resources: http://www.face-rec.org/algorthms/ Overvew of face recognton algorthms Correlaton - Pxel based correspondence between two face mages Structural

More information

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 1. SSDH: Semi-supervised Deep Hashing for Large Scale Image Retrieval

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 1. SSDH: Semi-supervised Deep Hashing for Large Scale Image Retrieval IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY SSDH: Sem-supervsed Deep Hashng for Large Scale Image Retreval Jan Zhang, and Yuxn Peng arxv:607.08477v2 [cs.cv] 8 Jun 207 Abstract Hashng

More information

TPL-Aware Displacement-driven Detailed Placement Refinement with Coloring Constraints

TPL-Aware Displacement-driven Detailed Placement Refinement with Coloring Constraints TPL-ware Dsplacement-drven Detaled Placement Refnement wth Colorng Constrants Tao Ln Iowa State Unversty tln@astate.edu Chrs Chu Iowa State Unversty cnchu@astate.edu BSTRCT To mnmze the effect of process

More information

Query Clustering Using a Hybrid Query Similarity Measure

Query Clustering Using a Hybrid Query Similarity Measure Query clusterng usng a hybrd query smlarty measure Fu. L., Goh, D.H., & Foo, S. (2004). WSEAS Transacton on Computers, 3(3), 700-705. Query Clusterng Usng a Hybrd Query Smlarty Measure Ln Fu, Don Hoe-Lan

More information

The Research of Support Vector Machine in Agricultural Data Classification

The Research of Support Vector Machine in Agricultural Data Classification The Research of Support Vector Machne n Agrcultural Data Classfcaton Le Sh, Qguo Duan, Xnmng Ma, Me Weng College of Informaton and Management Scence, HeNan Agrcultural Unversty, Zhengzhou 45000 Chna Zhengzhou

More information

GSLM Operations Research II Fall 13/14

GSLM Operations Research II Fall 13/14 GSLM 58 Operatons Research II Fall /4 6. Separable Programmng Consder a general NLP mn f(x) s.t. g j (x) b j j =. m. Defnton 6.. The NLP s a separable program f ts objectve functon and all constrants are

More information

BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET

BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET 1 BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET TZU-CHENG CHUANG School of Electrcal and Computer Engneerng, Purdue Unversty, West Lafayette, Indana 47907 SAUL B. GELFAND School

More information

Optimizing Document Scoring for Query Retrieval

Optimizing Document Scoring for Query Retrieval Optmzng Document Scorng for Query Retreval Brent Ellwen baellwe@cs.stanford.edu Abstract The goal of ths project was to automate the process of tunng a document query engne. Specfcally, I used machne learnng

More information

An Entropy-Based Approach to Integrated Information Needs Assessment

An Entropy-Based Approach to Integrated Information Needs Assessment Dstrbuton Statement A: Approved for publc release; dstrbuton s unlmted. An Entropy-Based Approach to ntegrated nformaton Needs Assessment June 8, 2004 Wllam J. Farrell Lockheed Martn Advanced Technology

More information

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching A Fast Vsual Trackng Algorthm Based on Crcle Pxels Matchng Zhqang Hou hou_zhq@sohu.com Chongzhao Han czhan@mal.xjtu.edu.cn Ln Zheng Abstract: A fast vsual trackng algorthm based on crcle pxels matchng

More information

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15 CS434a/541a: Pattern Recognton Prof. Olga Veksler Lecture 15 Today New Topc: Unsupervsed Learnng Supervsed vs. unsupervsed learnng Unsupervsed learnng Net Tme: parametrc unsupervsed learnng Today: nonparametrc

More information

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS246: Mining Massive Datasets Jure Leskovec, Stanford University CS46: Mnng Massve Datasets Jure Leskovec, Stanford Unversty http://cs46.stanford.edu /19/013 Jure Leskovec, Stanford CS46: Mnng Massve Datasets, http://cs46.stanford.edu Perceptron: y = sgn( x Ho to fnd

More information

CS 534: Computer Vision Model Fitting

CS 534: Computer Vision Model Fitting CS 534: Computer Vson Model Fttng Sprng 004 Ahmed Elgammal Dept of Computer Scence CS 534 Model Fttng - 1 Outlnes Model fttng s mportant Least-squares fttng Maxmum lkelhood estmaton MAP estmaton Robust

More information

Determining the Optimal Bandwidth Based on Multi-criterion Fusion

Determining the Optimal Bandwidth Based on Multi-criterion Fusion Proceedngs of 01 4th Internatonal Conference on Machne Learnng and Computng IPCSIT vol. 5 (01) (01) IACSIT Press, Sngapore Determnng the Optmal Bandwdth Based on Mult-crteron Fuson Ha-L Lang 1+, Xan-Mn

More information

Machine Learning. Support Vector Machines. (contains material adapted from talks by Constantin F. Aliferis & Ioannis Tsamardinos, and Martin Law)

Machine Learning. Support Vector Machines. (contains material adapted from talks by Constantin F. Aliferis & Ioannis Tsamardinos, and Martin Law) Machne Learnng Support Vector Machnes (contans materal adapted from talks by Constantn F. Alfers & Ioanns Tsamardnos, and Martn Law) Bryan Pardo, Machne Learnng: EECS 349 Fall 2014 Support Vector Machnes

More information

Private Information Retrieval (PIR)

Private Information Retrieval (PIR) 2 Levente Buttyán Problem formulaton Alce wants to obtan nformaton from a database, but she does not want the database to learn whch nformaton she wanted e.g., Alce s an nvestor queryng a stock-market

More information

Detection of an Object by using Principal Component Analysis

Detection of an Object by using Principal Component Analysis Detecton of an Object by usng Prncpal Component Analyss 1. G. Nagaven, 2. Dr. T. Sreenvasulu Reddy 1. M.Tech, Department of EEE, SVUCE, Trupath, Inda. 2. Assoc. Professor, Department of ECE, SVUCE, Trupath,

More information

Face Detection with Deep Learning

Face Detection with Deep Learning Face Detecton wth Deep Learnng Yu Shen Yus122@ucsd.edu A13227146 Kuan-We Chen kuc010@ucsd.edu A99045121 Yzhou Hao y3hao@ucsd.edu A98017773 Mn Hsuan Wu mhwu@ucsd.edu A92424998 Abstract The project here

More information

Hermite Splines in Lie Groups as Products of Geodesics

Hermite Splines in Lie Groups as Products of Geodesics Hermte Splnes n Le Groups as Products of Geodescs Ethan Eade Updated May 28, 2017 1 Introducton 1.1 Goal Ths document defnes a curve n the Le group G parametrzed by tme and by structural parameters n the

More information

BAYESIAN MULTI-SOURCE DOMAIN ADAPTATION

BAYESIAN MULTI-SOURCE DOMAIN ADAPTATION BAYESIAN MULTI-SOURCE DOMAIN ADAPTATION SHI-LIANG SUN, HONG-LEI SHI Department of Computer Scence and Technology, East Chna Normal Unversty 500 Dongchuan Road, Shangha 200241, P. R. Chna E-MAIL: slsun@cs.ecnu.edu.cn,

More information

Improving Web Image Search using Meta Re-rankers

Improving Web Image Search using Meta Re-rankers VOLUME-1, ISSUE-V (Aug-Sep 2013) IS NOW AVAILABLE AT: www.dcst.com Improvng Web Image Search usng Meta Re-rankers B.Kavtha 1, N. Suata 2 1 Department of Computer Scence and Engneerng, Chtanya Bharath Insttute

More information

Analysis of Continuous Beams in General

Analysis of Continuous Beams in General Analyss of Contnuous Beams n General Contnuous beams consdered here are prsmatc, rgdly connected to each beam segment and supported at varous ponts along the beam. onts are selected at ponts of support,

More information

Improvement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration

Improvement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration Improvement of Spatal Resoluton Usng BlockMatchng Based Moton Estmaton and Frame Integraton Danya Suga and Takayuk Hamamoto Graduate School of Engneerng, Tokyo Unversty of Scence, 6-3-1, Nuku, Katsuska-ku,

More information

A Bilinear Model for Sparse Coding

A Bilinear Model for Sparse Coding A Blnear Model for Sparse Codng Davd B. Grmes and Rajesh P. N. Rao Department of Computer Scence and Engneerng Unversty of Washngton Seattle, WA 98195-2350, U.S.A. grmes,rao @cs.washngton.edu Abstract

More information

Course Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms

Course Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms Course Introducton Course Topcs Exams, abs, Proects A quc loo at a few algorthms 1 Advanced Data Structures and Algorthms Descrpton: We are gong to dscuss algorthm complexty analyss, algorthm desgn technques

More information

Manifold-Ranking Based Keyword Propagation for Image Retrieval *

Manifold-Ranking Based Keyword Propagation for Image Retrieval * Manfold-Rankng Based Keyword Propagaton for Image Retreval * Hanghang Tong,, Jngru He,, Mngjng L 2, We-Yng Ma 2, Hong-Jang Zhang 2 and Changshu Zhang 3,3 Department of Automaton, Tsnghua Unversty, Bejng

More information

Unsupervised Learning and Clustering

Unsupervised Learning and Clustering Unsupervsed Learnng and Clusterng Supervsed vs. Unsupervsed Learnng Up to now we consdered supervsed learnng scenaro, where we are gven 1. samples 1,, n 2. class labels for all samples 1,, n Ths s also

More information

FEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur

FEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur FEATURE EXTRACTION Dr. K.Vjayarekha Assocate Dean School of Electrcal and Electroncs Engneerng SASTRA Unversty, Thanjavur613 41 Jont Intatve of IITs and IISc Funded by MHRD Page 1 of 8 Table of Contents

More information

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance Tsnghua Unversty at TAC 2009: Summarzng Mult-documents by Informaton Dstance Chong Long, Mnle Huang, Xaoyan Zhu State Key Laboratory of Intellgent Technology and Systems, Tsnghua Natonal Laboratory for

More information

Smoothing Spline ANOVA for variable screening

Smoothing Spline ANOVA for variable screening Smoothng Splne ANOVA for varable screenng a useful tool for metamodels tranng and mult-objectve optmzaton L. Rcco, E. Rgon, A. Turco Outlne RSM Introducton Possble couplng Test case MOO MOO wth Game Theory

More information

Learning an Image Manifold for Retrieval

Learning an Image Manifold for Retrieval Learnng an Image Manfold for Retreval Xaofe He*, We-Yng Ma, and Hong-Jang Zhang Mcrosoft Research Asa Bejng, Chna, 100080 {wyma,hjzhang}@mcrosoft.com *Department of Computer Scence, The Unversty of Chcago

More information

The Codesign Challenge

The Codesign Challenge ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.

More information

Face Recognition Based on SVM and 2DPCA

Face Recognition Based on SVM and 2DPCA Vol. 4, o. 3, September, 2011 Face Recognton Based on SVM and 2DPCA Tha Hoang Le, Len Bu Faculty of Informaton Technology, HCMC Unversty of Scence Faculty of Informaton Scences and Engneerng, Unversty

More information

Quality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation

Quality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation Intellgent Informaton Management, 013, 5, 191-195 Publshed Onlne November 013 (http://www.scrp.org/journal/m) http://dx.do.org/10.36/m.013.5601 Qualty Improvement Algorthm for Tetrahedral Mesh Based on

More information

MULTI-VIEW ANCHOR GRAPH HASHING

MULTI-VIEW ANCHOR GRAPH HASHING MULTI-VIEW ANCHOR GRAPH HASHING Saehoon Km 1 and Seungjn Cho 1,2 1 Department of Computer Scence and Engneerng, POSTECH, Korea 2 Dvson of IT Convergence Engneerng, POSTECH, Korea {kshkawa, seungjn}@postech.ac.kr

More information

Classifying Acoustic Transient Signals Using Artificial Intelligence

Classifying Acoustic Transient Signals Using Artificial Intelligence Classfyng Acoustc Transent Sgnals Usng Artfcal Intellgence Steve Sutton, Unversty of North Carolna At Wlmngton (suttons@charter.net) Greg Huff, Unversty of North Carolna At Wlmngton (jgh7476@uncwl.edu)

More information

Kernel Collaborative Representation Classification Based on Adaptive Dictionary Learning

Kernel Collaborative Representation Classification Based on Adaptive Dictionary Learning Internatonal Journal of Intellgent Informaton Systems 2018; 7(2): 15-22 http://www.scencepublshnggroup.com/j/js do: 10.11648/j.js.20180702.11 ISSN: 2328-7675 (Prnt); ISSN: 2328-7683 (Onlne) Kernel Collaboratve

More information

Programming in Fortran 90 : 2017/2018

Programming in Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Exercse 1 : Evaluaton of functon dependng on nput Wrte a program who evaluate the functon f (x,y) for any two user specfed values

More information

User Authentication Based On Behavioral Mouse Dynamics Biometrics

User Authentication Based On Behavioral Mouse Dynamics Biometrics User Authentcaton Based On Behavoral Mouse Dynamcs Bometrcs Chee-Hyung Yoon Danel Donghyun Km Department of Computer Scence Department of Computer Scence Stanford Unversty Stanford Unversty Stanford, CA

More information

Compiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz

Compiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz Compler Desgn Sprng 2014 Regster Allocaton Sample Exercses and Solutons Prof. Pedro C. Dnz USC / Informaton Scences Insttute 4676 Admralty Way, Sute 1001 Marna del Rey, Calforna 90292 pedro@s.edu Regster

More information

An Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation

An Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation 17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 An Iteratve Soluton Approach to Process Plant Layout usng Mxed

More information

Human Face Recognition Using Generalized. Kernel Fisher Discriminant

Human Face Recognition Using Generalized. Kernel Fisher Discriminant Human Face Recognton Usng Generalzed Kernel Fsher Dscrmnant ng-yu Sun,2 De-Shuang Huang Ln Guo. Insttute of Intellgent Machnes, Chnese Academy of Scences, P.O.ox 30, Hefe, Anhu, Chna. 2. Department of

More information

The Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique

The Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique //00 :0 AM Outlne and Readng The Greedy Method The Greedy Method Technque (secton.) Fractonal Knapsack Problem (secton..) Task Schedulng (secton..) Mnmum Spannng Trees (secton.) Change Money Problem Greedy

More information

LECTURE : MANIFOLD LEARNING

LECTURE : MANIFOLD LEARNING LECTURE : MANIFOLD LEARNING Rta Osadchy Some sldes are due to L.Saul, V. C. Raykar, N. Verma Topcs PCA MDS IsoMap LLE EgenMaps Done! Dmensonalty Reducton Data representaton Inputs are real-valued vectors

More information

Towards Semantic Knowledge Propagation from Text to Web Images

Towards Semantic Knowledge Propagation from Text to Web Images Guoun Q (Unversty of Illnos at Urbana-Champagn) Charu C. Aggarwal (IBM T. J. Watson Research Center) Thomas Huang (Unversty of Illnos at Urbana-Champagn) Towards Semantc Knowledge Propagaton from Text

More information

Announcements. Supervised Learning

Announcements. Supervised Learning Announcements See Chapter 5 of Duda, Hart, and Stork. Tutoral by Burge lnked to on web page. Supervsed Learnng Classfcaton wth labeled eamples. Images vectors n hgh-d space. Supervsed Learnng Labeled eamples

More information

Image Alignment CSC 767

Image Alignment CSC 767 Image Algnment CSC 767 Image algnment Image from http://graphcs.cs.cmu.edu/courses/15-463/2010_fall/ Image algnment: Applcatons Panorama sttchng Image algnment: Applcatons Recognton of object nstances

More information

Transductive Regression Piloted by Inter-Manifold Relations

Transductive Regression Piloted by Inter-Manifold Relations Huan Wang IE, The Chnese Unversty of Hong Kong, Hong Kong Shucheng Yan Thomas Huang ECE, Unversty of Illnos at Urbana Champagn, USA Janzhuang Lu Xaoou Tang IE, The Chnese Unversty of Hong Kong, Hong Kong

More information

Related-Mode Attacks on CTR Encryption Mode

Related-Mode Attacks on CTR Encryption Mode Internatonal Journal of Network Securty, Vol.4, No.3, PP.282 287, May 2007 282 Related-Mode Attacks on CTR Encrypton Mode Dayn Wang, Dongda Ln, and Wenlng Wu (Correspondng author: Dayn Wang) Key Laboratory

More information

Performance Evaluation of Information Retrieval Systems

Performance Evaluation of Information Retrieval Systems Why System Evaluaton? Performance Evaluaton of Informaton Retreval Systems Many sldes n ths secton are adapted from Prof. Joydeep Ghosh (UT ECE) who n turn adapted them from Prof. Dk Lee (Unv. of Scence

More information

Adaptive Transfer Learning

Adaptive Transfer Learning Adaptve Transfer Learnng Bn Cao, Snno Jaln Pan, Yu Zhang, Dt-Yan Yeung, Qang Yang Hong Kong Unversty of Scence and Technology Clear Water Bay, Kowloon, Hong Kong {caobn,snnopan,zhangyu,dyyeung,qyang}@cse.ust.hk

More information

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields 17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 A mathematcal programmng approach to the analyss, desgn and

More information

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces Range mages For many structured lght scanners, the range data forms a hghly regular pattern known as a range mage. he samplng pattern s determned by the specfc scanner. Range mage regstraton 1 Examples

More information

Laplacian Eigenmap for Image Retrieval

Laplacian Eigenmap for Image Retrieval Laplacan Egenmap for Image Retreval Xaofe He Partha Nyog Department of Computer Scence The Unversty of Chcago, 1100 E 58 th Street, Chcago, IL 60637 ABSTRACT Dmensonalty reducton has been receved much

More information

CAN COMPUTERS LEARN FASTER? Seyda Ertekin Computer Science & Engineering The Pennsylvania State University

CAN COMPUTERS LEARN FASTER? Seyda Ertekin Computer Science & Engineering The Pennsylvania State University CAN COMPUTERS LEARN FASTER? Seyda Ertekn Computer Scence & Engneerng The Pennsylvana State Unversty sertekn@cse.psu.edu ABSTRACT Ever snce computers were nvented, manknd wondered whether they mght be made

More information

Learning a Class-Specific Dictionary for Facial Expression Recognition

Learning a Class-Specific Dictionary for Facial Expression Recognition BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 16, No 4 Sofa 016 Prnt ISSN: 1311-970; Onlne ISSN: 1314-4081 DOI: 10.1515/cat-016-0067 Learnng a Class-Specfc Dctonary for

More information

Classification of Face Images Based on Gender using Dimensionality Reduction Techniques and SVM

Classification of Face Images Based on Gender using Dimensionality Reduction Techniques and SVM Classfcaton of Face Images Based on Gender usng Dmensonalty Reducton Technques and SVM Fahm Mannan 260 266 294 School of Computer Scence McGll Unversty Abstract Ths report presents gender classfcaton based

More information

Kent State University CS 4/ Design and Analysis of Algorithms. Dept. of Math & Computer Science LECT-16. Dynamic Programming

Kent State University CS 4/ Design and Analysis of Algorithms. Dept. of Math & Computer Science LECT-16. Dynamic Programming CS 4/560 Desgn and Analyss of Algorthms Kent State Unversty Dept. of Math & Computer Scence LECT-6 Dynamc Programmng 2 Dynamc Programmng Dynamc Programmng, lke the dvde-and-conquer method, solves problems

More information

A New Approach For the Ranking of Fuzzy Sets With Different Heights

A New Approach For the Ranking of Fuzzy Sets With Different Heights New pproach For the ankng of Fuzzy Sets Wth Dfferent Heghts Pushpnder Sngh School of Mathematcs Computer pplcatons Thapar Unversty, Patala-7 00 Inda pushpndersnl@gmalcom STCT ankng of fuzzy sets plays

More information

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints Australan Journal of Basc and Appled Scences, 2(4): 1204-1208, 2008 ISSN 1991-8178 Sum of Lnear and Fractonal Multobjectve Programmng Problem under Fuzzy Rules Constrants 1 2 Sanjay Jan and Kalash Lachhwan

More information

y and the total sum of

y and the total sum of Lnear regresson Testng for non-lnearty In analytcal chemstry, lnear regresson s commonly used n the constructon of calbraton functons requred for analytcal technques such as gas chromatography, atomc absorpton

More information

Incremental Learning with Support Vector Machines and Fuzzy Set Theory

Incremental Learning with Support Vector Machines and Fuzzy Set Theory The 25th Workshop on Combnatoral Mathematcs and Computaton Theory Incremental Learnng wth Support Vector Machnes and Fuzzy Set Theory Yu-Mng Chuang 1 and Cha-Hwa Ln 2* 1 Department of Computer Scence and

More information

S1 Note. Basis functions.

S1 Note. Basis functions. S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type

More information

Gender Classification using Interlaced Derivative Patterns

Gender Classification using Interlaced Derivative Patterns Gender Classfcaton usng Interlaced Dervatve Patterns Author Shobernejad, Ameneh, Gao, Yongsheng Publshed 2 Conference Ttle Proceedngs of the 2th Internatonal Conference on Pattern Recognton (ICPR 2) DOI

More information

Edge Detection in Noisy Images Using the Support Vector Machines

Edge Detection in Noisy Images Using the Support Vector Machines Edge Detecton n Nosy Images Usng the Support Vector Machnes Hlaro Gómez-Moreno, Saturnno Maldonado-Bascón, Francsco López-Ferreras Sgnal Theory and Communcatons Department. Unversty of Alcalá Crta. Madrd-Barcelona

More information

Backpropagation: In Search of Performance Parameters

Backpropagation: In Search of Performance Parameters Bacpropagaton: In Search of Performance Parameters ANIL KUMAR ENUMULAPALLY, LINGGUO BU, and KHOSROW KAIKHAH, Ph.D. Computer Scence Department Texas State Unversty-San Marcos San Marcos, TX-78666 USA ae049@txstate.edu,

More information

Learning-Based Top-N Selection Query Evaluation over Relational Databases

Learning-Based Top-N Selection Query Evaluation over Relational Databases Learnng-Based Top-N Selecton Query Evaluaton over Relatonal Databases Lang Zhu *, Wey Meng ** * School of Mathematcs and Computer Scence, Hebe Unversty, Baodng, Hebe 071002, Chna, zhu@mal.hbu.edu.cn **

More information

Categories and Subject Descriptors B.7.2 [Integrated Circuits]: Design Aids Verification. General Terms Algorithms

Categories and Subject Descriptors B.7.2 [Integrated Circuits]: Design Aids Verification. General Terms Algorithms 3. Fndng Determnstc Soluton from Underdetermned Equaton: Large-Scale Performance Modelng by Least Angle Regresson Xn L ECE Department, Carnege Mellon Unversty Forbs Avenue, Pttsburgh, PA 3 xnl@ece.cmu.edu

More information

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. *, NO. *, Dictionary Pair Learning on Grassmann Manifolds for Image Denoising

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. *, NO. *, Dictionary Pair Learning on Grassmann Manifolds for Image Denoising IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. *, NO. *, 2015 1 Dctonary Par Learnng on Grassmann Manfolds for Image Denosng Xanhua Zeng, We Ban, We Lu, Jale Shen, Dacheng Tao, Fellow, IEEE Abstract Image

More information