Heterogeneous Visual Features Fusion via Sparse Multimodal Machine
2013 IEEE Conference on Computer Vision and Pattern Recognition

Hua Wang, Feiping Nie, Heng Huang, Chris Ding
Department of Electrical Engineering and Computer Science, Colorado School of Mines, Golden, Colorado 80401, USA
Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, Texas 76019, USA

Abstract

To better understand, search, and classify image and video information, many visual feature descriptors have been proposed to describe elementary visual characteristics, such as shape, color, and texture. How to integrate these heterogeneous visual features and identify the important ones for specific vision tasks has become an increasingly critical problem. In this paper, we propose a novel Sparse Multimodal Learning (SMML) approach to integrate such heterogeneous features by using joint structured sparsity regularizations to learn the feature importance for the vision tasks from both group-wise and individual points of view. A new optimization algorithm is also introduced to solve the non-smooth objective with rigorously proved global convergence. We applied our SMML method to five broadly used object categorization and scene understanding image data sets for both single-label and multi-label image classification tasks. For each data set we integrate six different types of popularly used image features. Compared to existing scene and object categorization methods using either a single modality or multiple modalities of features, our approach always achieves better performance.

1. Introduction

Scene categorization and visual recognition analyze and classify images into semantically meaningful categories. It is without doubt a difficult task in computer vision research, because any scene/object category can be characterized by a high degree of diversity and potential ambiguity. To enhance visual understanding, computer vision researchers have proposed many feature representation methods to describe the visual objects in
different types of images [12, 18, 3]. However, it is usually not clear what the best feature descriptor to solve a given application problem is. Because different features describe different aspects of the visual characteristics, one descriptor can be better than others under certain circumstances. How to integrate heterogeneous features has become a challenging as well as attractive problem. Considering that different feature representations give rise to different kernel functions, Multiple Kernel Learning (MKL) approaches [19, 9, 13] have recently been studied and employed to integrate heterogeneous features or data and select multi-modal features. In particular, [5] surveyed several MKL methods, as well as their variants using boosting, for computer vision tasks and applied them to object classification. However, such models train one weight for each type of features, i.e., when multiple types of features are combined, all features from the same type are weighted equally. This crucial drawback often causes low performance. To address the integration of heterogeneous image features for visual recognition tasks, in this paper we propose a novel Sparse Multimodal Learning (SMML) method utilizing new mixed structured sparsity norms. In our method, we concatenate all features of one image together as its feature vector and learn the weight of each feature in the classification decision functions. The main difficulty of integrating heterogeneous image features is to simultaneously consider the feature group property (e.g., GIST is good at detecting natural scenes, so the GIST features should have large weights for classes related to outdoor natural scenes) and the individual feature property, i.e., although a type of features may not be useful for categorizing certain specific classes, a small number of individual features from that type can still be discriminative for those classes. (This work was partially supported by NSF IIS.)
We propose to utilize two structured sparsity regularizers to capture both the group and individual properties of different modalities (types) of features. Meanwhile, the sparse weight matrix provides a natural feature selection result. Our new
objective, employing the hinge loss and the two non-smooth regularizers, is highly non-smooth and difficult to solve in general. Thus, we derive a new efficient algorithm to solve it with rigorously proved global convergence. We applied our new sparse multimodal learning method to five broadly used object categorization and scene understanding image data sets for both single-label and multi-label image classification tasks. For each data set we integrate six different types of popularly used image features. In all experimental results, our new method always achieves better object and scene categorization performance than traditional classification methods utilizing each single type of descriptor and the widely used MKL-based feature integration methods.

2. Sparse Multimodal Learning

In recent research, sparse regularizations have been widely investigated and applied in different computer vision and machine learning studies [1, 16, 11]. Sparse representations are typically achieved by imposing non-smooth norms as regularizers in the optimization problems. Because structured sparsity regularizers can capture the structures existing in data points and features, they are useful for discovering the underlying patterns. We will use new joint structured sparsity regularizers to explore both the group-wise and the individual importance of each feature for different classes.

2.1. Joint Structured Sparsity Regularizations

In the supervised learning setting, we are given n training images {(x_i, y_i)}_{i=1}^n, where x_i = [(x_i^1)^T, ..., (x_i^k)^T]^T ∈ R^d is the input vector including all features from a total of k modalities, and each modality j has d_j features (d = Σ_{j=1}^k d_j). y_i ∈ R^c is the class label vector of data point x_i, where c is the number of classes. Let X = [x_1, ..., x_n] ∈ R^{d×n} and Y = [y_1, ..., y_n] ∈ R^{c×n}. In this paper, we write matrices as boldface uppercase letters and vectors as boldface lowercase letters. For a matrix W = (w_ij), its i-th row and j-th column are denoted by w^i and w_j, respectively.
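The multimodal data layout described above, with the features of each image from k modalities concatenated into one vector, can be sketched as follows. This is a minimal illustration with hypothetical helper names, not the authors' code:

```python
import numpy as np

def stack_modalities(feature_blocks):
    """Stack per-modality feature matrices into one d x n data matrix X.
    feature_blocks: list of k arrays, each of shape (d_j, n), holding one
    modality's features for all n images; d = sum of the d_j."""
    n = feature_blocks[0].shape[1]
    assert all(B.shape[1] == n for B in feature_blocks), "same image count"
    return np.vstack(feature_blocks)

def block_sizes(feature_blocks):
    """The modality sizes d_j, needed later to address the modality blocks
    of the weight matrix."""
    return [B.shape[0] for B in feature_blocks]
```

Keeping the list of block sizes alongside X makes it easy to slice the weight matrix back into per-modality blocks w_i^j.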
Different from MKL, we directly learn a d × c feature weight matrix W = [w_1^1, ..., w_c^1; ...; w_1^k, ..., w_c^k] ∈ R^{d×c}, where w_p^q ∈ R^{d_q} indicates the weights of all features from the q-th modality in the classification decision function of the p-th class. Typically we can use a convex loss function L(X, W) to measure the loss incurred by W on the training images. We choose the hinge loss, because hinge-loss-based SVMs have shown state-of-the-art classification performance. Compared to MKL approaches, which learn their weights in an unsupervised way, our method learns all feature weights with a supervised learning model. Because the heterogeneous features come from different visual descriptors, we cannot use the loss function alone, which would be equivalent to assigning uniform weights to all features. A regularizer R(W) has to be added to impose the interrelationships of modalities and features:

    min_{W,b} Σ_{i=1}^c Σ_{j=1}^n ( 1 − y_{ij} (w_i^T x_j + b_i) )_+ + γ R(W),    (1)

where γ > 0 is a trade-off parameter and the function (a)_+ is defined as (a)_+ = max(0, a). For brevity, we denote f_i(w_i, b_i) = Σ_{j=1}^n ( 1 − y_{ij} (w_i^T x_j + b_i) )_+ in the sequel.

In heterogeneous feature fusion, from the multimodal viewpoint, the features of a specific modality can be more or less discriminative for specific classes. For instance, color features substantially improve the detection of stop signs, while they are almost irrelevant for finding cars in images. Thus, we propose a new group l1-norm (G1-norm) as the regularizer in Eq. (1), defined as ||W||_{G1} = Σ_{i=1}^c Σ_{j=1}^k ||w_i^j||_2 [17]. Then Eq. (1) becomes:

    min_{W,b} Σ_{i=1}^c f_i(w_i, b_i) + γ_1 ||W||_{G1}.    (2)

Because the group l1-norm uses the l2-norm within each modality and the l1-norm between modalities, it enforces sparsity between different modalities, i.e., if one modality of features is not discriminative for certain tasks, the objective in Eq. (2) will assign zeros (in the ideal case; in practice, very small values) to its weights for the corresponding tasks; otherwise, its weights are large. This group l1-norm regularizer captures the global relationships between modalities.
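The Eq. (2) objective, hinge loss plus group l1-norm, can be evaluated directly. Below is a minimal NumPy sketch, illustrative only and not the paper's implementation; the function names and data layout are assumptions:

```python
import numpy as np

def group_l1_norm(W, block_sizes):
    """||W||_G1: for each modality block j (rows of W) and each class (columns),
    take the l2-norm of w_i^j, then sum all of them."""
    total, start = 0.0, 0
    for d_j in block_sizes:
        block = W[start:start + d_j, :]               # modality j, all classes
        total += np.linalg.norm(block, axis=0).sum()  # l2 per class, l1 over blocks
        start += d_j
    return total

def hinge_objective(W, b, X, Y, block_sizes, gamma1):
    """Eq. (2): sum_i f_i(w_i, b_i) + gamma1 * ||W||_G1.
    X: d x n data, Y: c x n labels in {-1, +1}, b: length-c bias vector."""
    margins = Y * (W.T @ X + b[:, None])              # c x n
    hinge = np.maximum(0.0, 1.0 - margins).sum()
    return hinge + gamma1 * group_l1_norm(W, block_sizes)
```

A weight matrix whose block ||w_i^j||_2 is exactly zero contributes nothing for that modality/class pair, which is the sparsity-between-modalities effect described above.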
However, in certain cases, even if most features in one modality are not discriminative for certain visual categories, a small number of features from that modality can still be highly discriminative. From the multi-task learning point of view, such important features should be shared by all tasks. Thus, we add one more regularizer, the l2,1-norm ||W||_{2,1} = Σ_{p=1}^d ||w^p||_2, into Eq. (2) and get the final objective:

    min_{W,b} Σ_{i=1}^c f_i(w_i, b_i) + γ_1 ||W||_{G1} + γ_2 ||W||_{2,1}.    (3)

Because the l2,1-norm regularizer imposes sparsity between all features and non-sparsity between classes, the features that are discriminative for all classes will get large weights. Because of the imposed sparsity, the weights of most features are close to zero and only the features important to the classification tasks have large weights. Thus, our SMML results automatically perform the feature selection procedure.
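The l2,1-norm sums the l2-norms of the rows of W (one row per feature, one column per class), so it is a one-liner; a small sketch with hypothetical names, not the authors' code:

```python
import numpy as np

def l21_norm(W):
    """||W||_{2,1}: l2-norm across classes (within each row), l1-norm across
    features (over rows). Rows driven to zero correspond to discarded features."""
    return np.linalg.norm(W, axis=1).sum()

# Eq. (3) adds gamma2 * l21_norm(W) on top of the Eq. (2) objective,
# so a feature useful for all classes keeps one large shared row.
```

Because the l2 part couples the classes within a row, a row either stays jointly large or is driven jointly toward zero, which is the across-task sharing effect described above.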
2.2. A New Efficient Optimization Algorithm

The objective in Eq. (3) is highly non-smooth and cannot be easily solved in general. Thus, we derive a new efficient algorithm, summarized in Algorithm 1, whose convergence to the global optimum is guaranteed by the following theorem.

Theorem 1. In Algorithm 1, the value of the objective in Eq. (3) monotonically decreases in each iteration.

Proof: In each iteration t, according to Step 3 of Algorithm 1, we have:

    W_{t+1} = argmin_W Σ_{i=1}^c f_i(w_i, b_i) + γ_1 Σ_{i=1}^c w_i^T D̃_i^t w_i + γ_2 tr(W^T D^t W),    (4)

by which we can derive

    Σ_i f_i((w_i)_{t+1}, (b_i)_{t+1}) + γ_1 Σ_i Σ_j ||(w_i^j)_{t+1}||^2 / (2 ||(w_i^j)_t||) + γ_2 Σ_p ||(w^p)_{t+1}||^2 / (2 ||(w^p)_t||)
    ≤ Σ_i f_i((w_i)_t, (b_i)_t) + γ_1 Σ_i Σ_j ||(w_i^j)_t||^2 / (2 ||(w_i^j)_t||) + γ_2 Σ_p ||(w^p)_t||^2 / (2 ||(w^p)_t||).    (5)

Because it can be verified that for the function g(x) = x − x^2/(2α), given any x, α > 0, g(x) ≤ g(α) holds, we can derive:

    γ_1 Σ_i Σ_j ( ||(w_i^j)_{t+1}|| − ||(w_i^j)_{t+1}||^2 / (2 ||(w_i^j)_t||) ) ≤ γ_1 Σ_i Σ_j ( ||(w_i^j)_t|| − ||(w_i^j)_t||^2 / (2 ||(w_i^j)_t||) );    (6)

    γ_2 Σ_p ( ||(w^p)_{t+1}|| − ||(w^p)_{t+1}||^2 / (2 ||(w^p)_t||) ) ≤ γ_2 Σ_p ( ||(w^p)_t|| − ||(w^p)_t||^2 / (2 ||(w^p)_t||) ).    (7)

Adding Eqs. (5)-(7) on both sides, we have

    Σ_i f_i((w_i)_{t+1}, (b_i)_{t+1}) + γ_1 ||W_{t+1}||_{G1} + γ_2 ||W_{t+1}||_{2,1} ≤ Σ_i f_i((w_i)_t, (b_i)_t) + γ_1 ||W_t||_{G1} + γ_2 ||W_t||_{2,1}.    (8)

Therefore, the algorithm decreases the objective value in each iteration. Because problem (3) is convex, Algorithm 1 converges to the global optimum.

Algorithm 1: An efficient iterative algorithm to solve the optimization problem in Eq. (3).
    Input: X = [x_1, x_2, ..., x_n] ∈ R^{d×n}, Y = [y_1, y_2, ..., y_n]^T ∈ R^{n×c}.
    1. Let t = 1. Initialize W_t ∈ R^{d×c}.
    while not converged do
        2. Calculate the block diagonal matrices D̃_i^t (1 ≤ i ≤ c), where the j-th diagonal block of D̃_i^t is (1 / (2 ||(w_i^j)_t||)) I_j and I_j is the d_j × d_j identity matrix. Calculate the diagonal matrix D^t, whose p-th diagonal element is 1 / (2 ||(w^p)_t||).
        3. For each class i (1 ≤ i ≤ c), calculate (w_i)_{t+1} = D_i^{-1/2} w̃_i, where w̃_i = argmin_{w̃} f_i(w̃, b_i; X̃_i) + w̃^T w̃, with X̃_i = D_i^{-1/2} X and D_i = γ_1 D̃_i^t + γ_2 D^t.
        4. t = t + 1.
    end
    Output: W_t ∈ R^{d×c}.
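The reweighting scheme of Algorithm 1 can be sketched in code. Since the hinge-loss subproblem in Step 3 would need an SVM solver, the sketch below swaps in a least-squares loss so that the reweighted subproblem has the closed form w_i = (X X^T + D_i)^{-1} X y_i; this substitution, and all names, are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def smml_reweighted(X, Y, block_sizes, gamma1, gamma2, n_iter=30, eps=1e-8):
    """Iteratively reweighted sketch of Algorithm 1 (least-squares loss swap).
    X: d x n feature matrix, Y: c x n target matrix. Returns W of shape (d, c)."""
    d, n = X.shape
    c = Y.shape[0]
    W = np.ones((d, c))                 # W_t, initialized as in Step 1
    G = X @ X.T                         # d x d Gram matrix, reused each iteration
    for _ in range(n_iter):
        # Diagonal matrix D^t from row norms (l2,1 term), shared by all classes
        row_norms = np.maximum(np.linalg.norm(W, axis=1), eps)
        D_rows = 1.0 / (2.0 * row_norms)
        W_new = np.empty_like(W)
        for i in range(c):
            # Block-diagonal D~_i^t from modality block norms (G1 term)
            D_blocks = np.empty(d)
            start = 0
            for d_j in block_sizes:
                blk_norm = max(np.linalg.norm(W[start:start + d_j, i]), eps)
                D_blocks[start:start + d_j] = 1.0 / (2.0 * blk_norm)
                start += d_j
            # Step 3 with least-squares loss: closed-form reweighted solve
            D_i = np.diag(gamma1 * D_blocks + gamma2 * D_rows)
            W_new[:, i] = np.linalg.solve(G + D_i, X @ Y[i])
        W = W_new
    return W
```

The eps guard keeps the diagonal weights finite when a block or row norm shrinks toward zero, a standard safeguard in iteratively reweighted methods.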
3. Experimental Results

In this section, we experimentally evaluate the proposed Sparse Multi-Modal Learning (SMML) approach in both single-label and multi-label image classification tasks.

3.1. Evaluation in Single-Label Image Classification

We first evaluate the proposed approach in single-label image classification, in which each image belongs to one and only one class. We experiment with the following three benchmark single-label multi-modal image data sets, which are broadly used in computer vision studies.

The NUS-WIDE-Object data set contains 30,000 images and 31 classes. Following [4], we select a subset of 26 classes for our experiments. The Animal data set contains images of 50 animals (classes). The MSRC-v1 data set contains 240 images with 9 classes. Following [2], we refine the data set to get 7 classes, including tree, building, airplane, cow, face, car, and bicycle, where each refined class has 30 images. All three data sets are described by a set of 6 different image descriptors, which are listed in Table 1.

Experimental setups. We classify the images in the above three data sets using the proposed methods by integrating the six types of image features of each of them. We compare the proposed method against several recent multiple
kernel learning (MKL) methods that are able to make use of multiple types of data: (1) the SVM l∞ MKL method [13], (2) the SVM l1 MKL method [9], (3) the SVM l2 MKL method [8], (4) the least squares SVM (LSSVM) l∞ MKL method [19], (5) the LSSVM l1 MKL method [14], and (6) the LSSVM l2 MKL method [20]. Besides, we also compare our method to three recent multi-modal image classification methods published in the computer vision community, including the Gaussian process (GP) method [7], the LPBoost-β method [5], and the LPBoost-B method [5], which have demonstrated state-of-the-art object categorization performance. In addition, we also report the classification performances of SVM on each individual type of features and on a straightforward concatenation of all six types of features as baselines.

Table 1. Image feature descriptors of the image data sets used in the experiments ("—" marks dimensions not preserved in this transcription).

Type | NUS-WIDE-Object   | Dim. | Animal          | Dim. | MSRC-v1 / MSRC-v2 / TRECVID 2005 | Dim.
1    | Color moments     | 225  | Self-similarity | 2000 | Color moment                     | 48
2    | Color histogram   | 64   | Color histogram | 2688 | LBP                              | 256
3    | Color correlogram | 144  | Pyramid HOG     | 252  | HOG                              | —
4    | Wavelet texture   | 128  | SIFT            | 2000 | SIFT                             | —
5    | Edge distribution | 73   | colorSIFT       | 2000 | GIST                             | 512
6    | Visual words      | 500  | SURF            | 2000 | CENTRIST                         | 1302

We implement three versions of the proposed method. First, we set γ_2 in Eq. (3) to 0, which only uses the G1-norm as regularization and thereby only takes into account the structure over modalities. We denote it as "Our method (G1-norm only)". Second, we set γ_1 in Eq. (3) to 0 to only use the l2,1-norm regularization, which thereby selects features shared across tasks but does not consider the modality structure. We denote this degenerate version of the proposed method as "Our method (l2,1-norm only)". Finally, the full version of the proposed method by Eq. (3) is implemented and denoted as "Our method".

We conduct standard 5-fold cross-validation and report the average results. For each of the 5 trials, within the training data, an internal 5-fold cross-validation is performed to fine-tune the parameters. The parameters of our method (γ_1 and γ_2 in Eq.
(3)) are optimized in the range of {10^-5, 10^-4, ..., 10^4, 10^5}. For the SVM method and the MKL methods, one Gaussian kernel is constructed for each type of features, i.e., K(x_i, x_j) = exp(−γ ||x_i − x_j||^2), where the parameter γ is fine-tuned in the same range as used by our method. We implement the compared MKL methods using the codes published by [20]. Following [20], in the LSSVM l∞ and l2 methods, the regularization parameter λ is estimated jointly as the kernel coefficient of an identity matrix; in the LSSVM l1 method, λ is set to 1; in all other SVM approaches, the C parameter of the box constraint is fine-tuned in the same range as γ. For the LPBoost-β and LPBoost-B methods, we use the codes published by the authors (pgehler/projects/iccv09/). We use the LIBSVM software package (cjlin/libsvm/) to implement SVM in all our experiments.

Table 2. Classification accuracies (mean ± std) of the compared methods in the three single-label image classification tasks (NUS-WIDE, Animal, MSRC-v1; numerical values not preserved in this transcription). The compared rows are SVM on each of the six single feature types and on their concatenation, the SVM l∞/l1/l2 and LSSVM l∞/l1/l2 MKL methods, the GP method, LPBoost-β, LPBoost-B, and the three versions of our method; the full version of our method obtains the highest accuracy on all three data sets.

Experimental results. Because we are concerned with single-label image classification, we employ the most widely used metric, classification accuracy, to assess performance. The results of all compared methods in the three image classification tasks are reported in Table 2. A first glance at the results shows that our methods generally outperform all other compared methods, which demonstrates their effectiveness in single-label image classification.
In addition, the methods using multiple data sources are significantly better than SVM using a single type of data. This confirms the usefulness of data integration in image classification. Moreover, the result that our methods are always better than the MKL methods and the boosting-enhanced MKL methods is consistent with the previous theoretical analysis: although both take advantage of the information from multiple different sources, our method not only assigns a proper weight to each type of features, but also rewards the relevance of the individual features inside a given feature type. In contrast, the MKL methods and the boosting-enhanced MKL methods only address the former while not being able to take into account the latter.
Finally, the performance of the full version of the proposed method is consistently better than those of its two degenerate versions, which demonstrates that both task-specific modality selection and across-task feature selection are necessary in image categorization, neither one less.

3.2. Evaluation in Multi-Label Image Classification

Now we evaluate the proposed method in multi-label image classification, in which each image can be associated with more than one class label. We evaluate the proposed approach on the following two benchmark multi-label image data sets for image annotation tasks.

The TRECVID 2005 data set contains images labeled with 39 concepts (labels). As in most previous works [15], we randomly sample the data such that each concept has at least 100 images. The MSRC-v2 data set is an extension of the MSRC-v1 data set, with 591 images annotated with 22 classes. As for the MSRC-v1 data set, we extract six types of image features for these two data sets, as detailed in the last column of Table 1, following [2].

Experimental setups. We again implement the three versions of our method and compare them against the MKL methods used in the above experiments with the same settings. For our method and the MKL methods, we conduct classification for every class individually; each class is treated as a binary classification task using the one-vs.-others strategy. For the classification on each individual data type and on the simple concatenation of all types of features, instead of using SVM as in the previous subsection for single-label data, we use the multi-label k-nearest neighbor (Mk-NN) [21] method to classify the images, which is a broadly used multi-label classification method. In addition, we also implement two recent multi-label classification methods, the multi-label correlated Green's function (MCGF) [15] method and the Multi-Label Least Square (MLLS) [6] method. However, these two methods are designed for data with a single type of features, therefore we use the concatenation of all types of features as their input.
We implement the two multi-label classification methods using the codes published by the authors. The conventional classification performance metrics in statistical learning, precision and F1 score, are used to evaluate the compared methods. For every class, the precision and F1 score are computed following the standard definitions for a binary classification problem. To address the multi-label scenario, following [10], the macro average and micro average of precision and F1 score are computed to assess the overall performance across multiple labels.

Experimental results. The classification results by standard 5-fold cross-validation on the TRECVID 2005 data set and the MSRC-v2 data set are reported in Table 3.

Figure 1. Sample images from the TRECVID 2005 data set, whose labels can only be correctly and completely predicted by the proposed method and not by the other compared methods: (a) face, meeting, person, studio; (b) crowd, face, person; (c) bus, car, person; (d) outdoor, face, person, vegetation; (e) building, crowd, person; (f) outdoor, person, sports, vegetation. The bolded and underlined labels can only be correctly predicted by our method.

From the results, we can see that our method still performs the best on both data sets. Besides, the MKL methods using multiple types of features are generally better than the methods using a single type of features. Although the Mk-NN, MCGF, and MLLS methods are designed for multi-label data, they can only work with one type of features. When using the feature concatenation as input, they treat all types of features as homogeneous with no distinction, which is not true in our experiments or in most real-world applications. Although our method and the MKL methods do not purposely address the multi-label setting, we still achieve better performance by properly making use of the available information from the various types of features. By further checking the detailed labeling results, we notice that many images can only be correctly annotated by our method, some of which are shown in Figure 1.
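The macro- and micro-averaged precision and F1 scores used above follow the standard binary definitions per class. A short sketch with hypothetical variable names (the per-class true/false positive and false negative counts are assumed to be given):

```python
import numpy as np

def macro_micro_precision_f1(tp, fp, fn):
    """Macro and micro averages of precision and F1 over classes.
    tp, fp, fn: per-class true positive, false positive, false negative counts
    (assumed nonzero denominators for this sketch)."""
    tp, fp, fn = (np.asarray(a, dtype=float) for a in (tp, fp, fn))
    prec = tp / (tp + fp)                       # per-class precision
    rec = tp / (tp + fn)                        # per-class recall
    f1 = 2 * prec * rec / (prec + rec)          # per-class F1
    macro_p, macro_f1 = prec.mean(), f1.mean()  # macro: average per-class scores
    TP, FP, FN = tp.sum(), fp.sum(), fn.sum()   # micro: pool counts, then score
    micro_p = TP / (TP + FP)
    micro_r = TP / (TP + FN)
    micro_f1 = 2 * micro_p * micro_r / (micro_p + micro_r)
    return macro_p, macro_f1, micro_p, micro_f1
```

Macro averaging weights every class equally, while micro averaging weights every prediction equally, so rare labels influence the two numbers very differently.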
The bolded and underlined labels can only be correctly predicted by our method, and not by the other compared methods. All these observations again confirm the effectiveness of the proposed approach in feature integration for multi-label image classification tasks.

Table 3. Multi-label classification performances (mean ± std) of the compared methods on the TRECVID 2005 and MSRC-v2 data sets, reporting macro- and micro-averaged precision and F1 (numerical values not preserved in this transcription). The compared rows are Mk-NN on each single feature type (color moment, DoG-SIFT, LBP, HOG, GIST, CENTRIST) and on the concatenation of all types, MCGF and MLLS on the concatenation, the six SVM/LSSVM MKL variants, the GP method, LPBoost-β, LPBoost-B, and the three versions of our method; our full method performs best on both data sets.

4. Conclusions

We proposed a novel Sparse Multimodal Learning method to integrate different types of visual features for scene and object classification. Instead of learning one parameter for all features from one modality, as in multiple kernel learning, our method learns the parameters for each feature on different classes via the joint structured sparsity regularizations. Our new combined convex regularizations consider the importance of both the feature modality and the individual feature. The natural property of sparse regularization automatically identifies the important visual features for different visual recognition tasks. We derived an efficient optimization algorithm to solve our non-smooth objective and provided a rigorous proof of its global convergence. Extensive experiments have been performed on both single-label and multi-label image categorization tasks; our approach outperforms the other related methods on all benchmark data sets.

References

[1] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In NIPS, 2007.
[2] X. Cai, F. Nie, H. Huang, and F. Kamangar. Heterogeneous image feature integration via multi-modal spectral clustering. In CVPR, 2011.
[3] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[4] S. Gao, L. Chia, and I. Tsang. Multi-layer group sparse coding for concurrent image classification and annotation. In CVPR, 2011.
[5] P. Gehler and S. Nowozin. On feature combination for multiclass object classification. In ICCV, 2009.
[6] S. Ji, L. Tang, S. Yu, and J. Ye. A shared-subspace learning framework for multi-label classification. ACM Transactions on Knowledge Discovery from Data (TKDD), 4(2), 2010.
[7] A. Kapoor, K. Grauman, R. Urtasun, and T. Darrell. Gaussian processes for object categorization. International Journal of Computer Vision, 88(2), 2010.
[8] M. Kloft, U. Brefeld, P. Laskov, and S. Sonnenburg. Non-sparse multiple kernel learning. In NIPS.
[9] G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. Jordan. Learning the kernel matrix with semidefinite programming. JMLR, 5:27-72, 2004.
[10] D. Lewis, Y. Yang, T. Rose, and F. Li. RCV1: A new benchmark collection for text categorization research. JMLR, 5, 2004.
[11] F. Nie, H. Wang, H. Huang, and C. Ding. Unsupervised and semi-supervised learning via l1-norm graph. In ICCV, pages 2268-2273, 2011.
[12] A. Oliva and A. B. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42(3):145-175, 2001.
[13] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf. Large scale multiple kernel learning. JMLR, 7, 2006.
[14] J. Suykens, T. Van Gestel, and J. De Brabanter. Least Squares Support Vector Machines. World Scientific Pub Co Inc, 2002.
[15] H. Wang, H. Huang, and C. Ding. Image annotation using multi-label correlated Green's function. In ICCV, 2009.
[16] H. Wang, F. Nie, H. Huang, S. L. Risacher, C. Ding, A. J. Saykin, L. Shen, and ADNI. A new sparse multi-task regression and feature selection method to identify brain imaging predictors for memory performance. In ICCV, 2011.
[17] H. Wang, F. Nie, H. Huang, S. L. Risacher, A. J. Saykin, L. Shen, et al. Identifying disease sensitive and quantitative trait-relevant biomarkers from multidimensional heterogeneous imaging genetics data via sparse multimodal multitask learning. Bioinformatics, 28(12):i127-i136, 2012.
[18] J. Wu and J. M. Rehg. Where am I: Place instance and category recognition using spatial PACT. In CVPR, 2008.
[19] J. Ye, S. Ji, and J. Chen. Multi-class discriminant kernel learning via convex programming. JMLR, 9, 2008.
[20] S. Yu, T. Falck, A. Daemen, L. Tranchevent, J. Suykens, B. De Moor, and Y. Moreau. L2-norm multiple kernel learning and its application to biomedical data fusion. BMC Bioinformatics, 11(1):309, 2010.
[21] M. Zhang and Z. Zhou. ML-KNN: A lazy learning approach to multi-label learning. Pattern Recognition, 40(7), 2007.
More informationLobachevsky State University of Nizhni Novgorod. Polyhedron. Quick Start Guide
Lobachevsky State Unversty of Nzhn Novgorod Polyhedron Quck Start Gude Nzhn Novgorod 2016 Contents Specfcaton of Polyhedron software... 3 Theoretcal background... 4 1. Interface of Polyhedron... 6 1.1.
More informationSHAPE RECOGNITION METHOD BASED ON THE k-nearest NEIGHBOR RULE
SHAPE RECOGNITION METHOD BASED ON THE k-nearest NEIGHBOR RULE Dorna Purcaru Faculty of Automaton, Computers and Electroncs Unersty of Craoa 13 Al. I. Cuza Street, Craoa RO-1100 ROMANIA E-mal: dpurcaru@electroncs.uc.ro
More informationClassification / Regression Support Vector Machines
Classfcaton / Regresson Support Vector Machnes Jeff Howbert Introducton to Machne Learnng Wnter 04 Topcs SVM classfers for lnearly separable classes SVM classfers for non-lnearly separable classes SVM
More informationSteps for Computing the Dissimilarity, Entropy, Herfindahl-Hirschman and. Accessibility (Gravity with Competition) Indices
Steps for Computng the Dssmlarty, Entropy, Herfndahl-Hrschman and Accessblty (Gravty wth Competton) Indces I. Dssmlarty Index Measurement: The followng formula can be used to measure the evenness between
More informationAn Image Fusion Approach Based on Segmentation Region
Rong Wang, L-Qun Gao, Shu Yang, Yu-Hua Cha, and Yan-Chun Lu An Image Fuson Approach Based On Segmentaton Regon An Image Fuson Approach Based on Segmentaton Regon Rong Wang, L-Qun Gao, Shu Yang 3, Yu-Hua
More informationKeywords - Wep page classification; bag of words model; topic model; hierarchical classification; Support Vector Machines
(IJCSIS) Internatonal Journal of Computer Scence and Informaton Securty, Herarchcal Web Page Classfcaton Based on a Topc Model and Neghborng Pages Integraton Wongkot Srura Phayung Meesad Choochart Haruechayasak
More informationLocal Quaternary Patterns and Feature Local Quaternary Patterns
Local Quaternary Patterns and Feature Local Quaternary Patterns Jayu Gu and Chengjun Lu The Department of Computer Scence, New Jersey Insttute of Technology, Newark, NJ 0102, USA Abstract - Ths paper presents
More informationReal-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input
Real-tme Jont Tracng of a Hand Manpulatng an Object from RGB-D Input Srnath Srdhar 1 Franzsa Mueller 1 Mchael Zollhöfer 1 Dan Casas 1 Antt Oulasvrta 2 Chrstan Theobalt 1 1 Max Planc Insttute for Informatcs
More informationWhat is Object Detection? Face Detection using AdaBoost. Detection as Classification. Principle of Boosting (Schapire 90)
CIS 5543 Coputer Vson Object Detecton What s Object Detecton? Locate an object n an nput age Habn Lng Extensons Vola & Jones, 2004 Dalal & Trggs, 2005 one or ultple objects Object segentaton Object detecton
More informationClassifying Acoustic Transient Signals Using Artificial Intelligence
Classfyng Acoustc Transent Sgnals Usng Artfcal Intellgence Steve Sutton, Unversty of North Carolna At Wlmngton (suttons@charter.net) Greg Huff, Unversty of North Carolna At Wlmngton (jgh7476@uncwl.edu)
More informationFeature Reduction and Selection
Feature Reducton and Selecton Dr. Shuang LIANG School of Software Engneerng TongJ Unversty Fall, 2012 Today s Topcs Introducton Problems of Dmensonalty Feature Reducton Statstc methods Prncpal Components
More informationCS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15
CS434a/541a: Pattern Recognton Prof. Olga Veksler Lecture 15 Today New Topc: Unsupervsed Learnng Supervsed vs. unsupervsed learnng Unsupervsed learnng Net Tme: parametrc unsupervsed learnng Today: nonparametrc
More informationNetwork Intrusion Detection Based on PSO-SVM
TELKOMNIKA Indonesan Journal of Electrcal Engneerng Vol.1, No., February 014, pp. 150 ~ 1508 DOI: http://dx.do.org/10.11591/telkomnka.v1.386 150 Network Intruson Detecton Based on PSO-SVM Changsheng Xang*
More informationAction Recognition Using Completed Local Binary Patterns and Multiple-class Boosting Classifier
Acton Recognton Usng ompleted Local Bnary Patterns and Multple-class Boostng lassfer Yun Yang, Baochang Zhang, Lnln Yang School of Automaton Scence and Electrcal Engneerng Behang Unversty Beng, hna {yangyun,bczhang,yangln}@buaa.edu.cn
More informationAdaptive Transfer Learning
Adaptve Transfer Learnng Bn Cao, Snno Jaln Pan, Yu Zhang, Dt-Yan Yeung, Qang Yang Hong Kong Unversty of Scence and Technology Clear Water Bay, Kowloon, Hong Kong {caobn,snnopan,zhangyu,dyyeung,qyang}@cse.ust.hk
More informationAn Evaluation of Divide-and-Combine Strategies for Image Categorization by Multi-Class Support Vector Machines
An Evaluaton of Dvde-and-Combne Strateges for Image Categorzaton by Mult-Class Support Vector Machnes C. Demrkesen¹ and H. Cherf¹, ² 1: Insttue of Scence and Engneerng 2: Faculté des Scences Mrande Galatasaray
More informationMachine Learning: Algorithms and Applications
14/05/1 Machne Learnng: Algorthms and Applcatons Florano Zn Free Unversty of Bozen-Bolzano Faculty of Computer Scence Academc Year 011-01 Lecture 10: 14 May 01 Unsupervsed Learnng cont Sldes courtesy of
More informationUB at GeoCLEF Department of Geography Abstract
UB at GeoCLEF 2006 Mguel E. Ruz (1), Stuart Shapro (2), June Abbas (1), Slva B. Southwck (1) and Davd Mark (3) State Unversty of New York at Buffalo (1) Department of Lbrary and Informaton Studes (2) Department
More informationHierarchical Image Retrieval by Multi-Feature Fusion
Preprnts (www.preprnts.org) NOT PEER-REVIEWED Posted: 26 Aprl 207 do:0.20944/preprnts20704.074.v Artcle Herarchcal Image Retreval by Mult- Fuson Xaojun Lu, Jaojuan Wang,Yngq Hou, Me Yang, Q Wang* and Xangde
More informationIN recent years, recommender systems, which help users discover
Heterogeneous Informaton Network Embeddng for Recommendaton Chuan Sh, Member, IEEE, Bnbn Hu, Wayne Xn Zhao Member, IEEE and Phlp S. Yu, Fellow, IEEE 1 arxv:1711.10730v1 [cs.si] 29 Nov 2017 Abstract Due
More informationProblem Definitions and Evaluation Criteria for Computational Expensive Optimization
Problem efntons and Evaluaton Crtera for Computatonal Expensve Optmzaton B. Lu 1, Q. Chen and Q. Zhang 3, J. J. Lang 4, P. N. Suganthan, B. Y. Qu 6 1 epartment of Computng, Glyndwr Unversty, UK Faclty
More informationLearning to Name Faces: A Multimodal Learning Scheme for Search-Based Face Annotation
Learnng to Name Faces: A Multmodal Learnng Scheme for Search-Based Face Annotaton Dayong Wang, Steven C.H. Ho, Pengcheng Wu, Janke Zhu, Yng He, Chunyan Mao School of Computer Engneerng, Nanyang Technologcal
More informationHermite Splines in Lie Groups as Products of Geodesics
Hermte Splnes n Le Groups as Products of Geodescs Ethan Eade Updated May 28, 2017 1 Introducton 1.1 Goal Ths document defnes a curve n the Le group G parametrzed by tme and by structural parameters n the
More informationFace Recognition using 3D Directional Corner Points
2014 22nd Internatonal Conference on Pattern Recognton Face Recognton usng 3D Drectonal Corner Ponts Xun Yu, Yongsheng Gao School of Engneerng Grffth Unversty Nathan, QLD, Australa xun.yu@grffthun.edu.au,
More informationOutline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like:
Self-Organzng Maps (SOM) Turgay İBRİKÇİ, PhD. Outlne Introducton Structures of SOM SOM Archtecture Neghborhoods SOM Algorthm Examples Summary 1 2 Unsupervsed Hebban Learnng US Hebban Learnng, Cntd 3 A
More informationBioTechnology. An Indian Journal FULL PAPER. Trade Science Inc.
[Type text] [Type text] [Type text] ISSN : 0974-74 Volume 0 Issue BoTechnology 04 An Indan Journal FULL PAPER BTAIJ 0() 04 [684-689] Revew on Chna s sports ndustry fnancng market based on market -orented
More informationMachine Learning. Support Vector Machines. (contains material adapted from talks by Constantin F. Aliferis & Ioannis Tsamardinos, and Martin Law)
Machne Learnng Support Vector Machnes (contans materal adapted from talks by Constantn F. Alfers & Ioanns Tsamardnos, and Martn Law) Bryan Pardo, Machne Learnng: EECS 349 Fall 2014 Support Vector Machnes
More informationSLAM Summer School 2006 Practical 2: SLAM using Monocular Vision
SLAM Summer School 2006 Practcal 2: SLAM usng Monocular Vson Javer Cvera, Unversty of Zaragoza Andrew J. Davson, Imperal College London J.M.M Montel, Unversty of Zaragoza. josemar@unzar.es, jcvera@unzar.es,
More informationDetection of an Object by using Principal Component Analysis
Detecton of an Object by usng Prncpal Component Analyss 1. G. Nagaven, 2. Dr. T. Sreenvasulu Reddy 1. M.Tech, Department of EEE, SVUCE, Trupath, Inda. 2. Assoc. Professor, Department of ECE, SVUCE, Trupath,
More informationDiscriminative Relational Representation Learning for RGB-D Action Recognition Yu Kong, Member, IEEE, and Yun Fu, Senior Member, IEEE
2856 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 25, NO. 6, JUNE 2016 Dscrmnatve Relatonal Representaton Learnng for RGB-D Acton Recognton Yu Kong, Member, IEEE, and Yun Fu, Senor Member, IEEE Abstract
More informationA Fast Content-Based Multimedia Retrieval Technique Using Compressed Data
A Fast Content-Based Multmeda Retreval Technque Usng Compressed Data Borko Furht and Pornvt Saksobhavvat NSF Multmeda Laboratory Florda Atlantc Unversty, Boca Raton, Florda 3343 ABSTRACT In ths paper,
More informationFace Detection with Deep Learning
Face Detecton wth Deep Learnng Yu Shen Yus122@ucsd.edu A13227146 Kuan-We Chen kuc010@ucsd.edu A99045121 Yzhou Hao y3hao@ucsd.edu A98017773 Mn Hsuan Wu mhwu@ucsd.edu A92424998 Abstract The project here
More information6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour
6.854 Advanced Algorthms Petar Maymounkov Problem Set 11 (November 23, 2005) Wth: Benjamn Rossman, Oren Wemann, and Pouya Kheradpour Problem 1. We reduce vertex cover to MAX-SAT wth weghts, such that the
More informationA Background Subtraction for a Vision-based User Interface *
A Background Subtracton for a Vson-based User Interface * Dongpyo Hong and Woontack Woo KJIST U-VR Lab. {dhon wwoo}@kjst.ac.kr Abstract In ths paper, we propose a robust and effcent background subtracton
More informationParallelism for Nested Loops with Non-uniform and Flow Dependences
Parallelsm for Nested Loops wth Non-unform and Flow Dependences Sam-Jn Jeong Dept. of Informaton & Communcaton Engneerng, Cheonan Unversty, 5, Anseo-dong, Cheonan, Chungnam, 330-80, Korea. seong@cheonan.ac.kr
More informationKernel Collaborative Representation Classification Based on Adaptive Dictionary Learning
Internatonal Journal of Intellgent Informaton Systems 2018; 7(2): 15-22 http://www.scencepublshnggroup.com/j/js do: 10.11648/j.js.20180702.11 ISSN: 2328-7675 (Prnt); ISSN: 2328-7683 (Onlne) Kernel Collaboratve
More informationRelevance Assignment and Fusion of Multiple Learning Methods Applied to Remote Sensing Image Analysis
Assgnment and Fuson of Multple Learnng Methods Appled to Remote Sensng Image Analyss Peter Bajcsy, We-Wen Feng and Praveen Kumar Natonal Center for Supercomputng Applcaton (NCSA), Unversty of Illnos at
More informationA Fast Visual Tracking Algorithm Based on Circle Pixels Matching
A Fast Vsual Trackng Algorthm Based on Crcle Pxels Matchng Zhqang Hou hou_zhq@sohu.com Chongzhao Han czhan@mal.xjtu.edu.cn Ln Zheng Abstract: A fast vsual trackng algorthm based on crcle pxels matchng
More informationGender Classification using Interlaced Derivative Patterns
Gender Classfcaton usng Interlaced Dervatve Patterns Author Shobernejad, Ameneh, Gao, Yongsheng Publshed 2 Conference Ttle Proceedngs of the 2th Internatonal Conference on Pattern Recognton (ICPR 2) DOI
More informationImproving Web Image Search using Meta Re-rankers
VOLUME-1, ISSUE-V (Aug-Sep 2013) IS NOW AVAILABLE AT: www.dcst.com Improvng Web Image Search usng Meta Re-rankers B.Kavtha 1, N. Suata 2 1 Department of Computer Scence and Engneerng, Chtanya Bharath Insttute
More informationContent Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers
IOSR Journal of Electroncs and Communcaton Engneerng (IOSR-JECE) e-issn: 78-834,p- ISSN: 78-8735.Volume 9, Issue, Ver. IV (Mar - Apr. 04), PP 0-07 Content Based Image Retreval Usng -D Dscrete Wavelet wth
More informationMULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION
MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION Paulo Quntlano 1 & Antono Santa-Rosa 1 Federal Polce Department, Brasla, Brazl. E-mals: quntlano.pqs@dpf.gov.br and
More informationMulti-View Surveillance Video Summarization via Joint Embedding and Sparse Optimization
IEEE TRANSACTIONS ON MULTIMEDIA, VOL. XX, NO. XX, DECEMBER 20XX 1 Mult-Vew Survellance Vdeo Summarzaton va Jont Embeddng and Sparse Optmzaton Rameswar Panda and Amt K. Roy-Chowdhury, Senor Member, IEEE
More informationTsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance
Tsnghua Unversty at TAC 2009: Summarzng Mult-documents by Informaton Dstance Chong Long, Mnle Huang, Xaoyan Zhu State Key Laboratory of Intellgent Technology and Systems, Tsnghua Natonal Laboratory for
More informationWIRELESS CAPSULE ENDOSCOPY IMAGE CLASSIFICATION BASED ON VECTOR SPARSE CODING.
WIRELESS CAPSULE ENDOSCOPY IMAGE CLASSIFICATION BASED ON VECTOR SPARSE CODING Tao Ma 1, Yuexan Zou 1 *, Zhqang Xang 1, Le L 1 and Y L 1 ADSPLAB/ELIP, School of ECE, Pekng Unversty, Shenzhen 518055, Chna
More informationSVM-based Learning for Multiple Model Estimation
SVM-based Learnng for Multple Model Estmaton Vladmr Cherkassky and Yunqan Ma Department of Electrcal and Computer Engneerng Unversty of Mnnesota Mnneapols, MN 55455 {cherkass,myq}@ece.umn.edu Abstract:
More informationLearning Ensemble of Local PDM-based Regressions. Yen Le Computational Biomedicine Lab Advisor: Prof. Ioannis A. Kakadiaris
Learnng Ensemble of Local PDM-based Regressons Yen Le Computatonal Bomedcne Lab Advsor: Prof. Ioanns A. Kakadars 1 Problem statement Fttng a statstcal shape model (PDM) for mage segmentaton Callosum segmentaton
More informationManifold-Ranking Based Keyword Propagation for Image Retrieval *
Manfold-Rankng Based Keyword Propagaton for Image Retreval * Hanghang Tong,, Jngru He,, Mngjng L 2, We-Yng Ma 2, Hong-Jang Zhang 2 and Changshu Zhang 3,3 Department of Automaton, Tsnghua Unversty, Bejng
More informationLarge-scale Web Video Event Classification by use of Fisher Vectors
Large-scale Web Vdeo Event Classfcaton by use of Fsher Vectors Chen Sun and Ram Nevata Unversty of Southern Calforna, Insttute for Robotcs and Intellgent Systems Los Angeles, CA 90089, USA {chensun nevata}@usc.org
More informationCHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION
48 CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION 3.1 INTRODUCTION The raw mcroarray data s bascally an mage wth dfferent colors ndcatng hybrdzaton (Xue
More informationMercer Kernels for Object Recognition with Local Features
TR004-50, October 004, Department of Computer Scence, Dartmouth College Mercer Kernels for Object Recognton wth Local Features Swe Lyu Department of Computer Scence Dartmouth College Hanover NH 03755 A
More information430 IEEE TRANSACTIONS ON CYBERNETICS, VOL. 45, NO. 3, MARCH Boosting for Multi-Graph Classification
430 IEEE TRANSACTIONS ON CYBERNETICS, VOL. 45, NO. 3, MARCH 2015 Boostng for Mult-Graph Classfcaton Ja Wu, Student Member, IEEE, Shru Pan, Xngquan Zhu, Senor Member, IEEE, and Zhhua Ca Abstract In ths
More informationEfficient Segmentation and Classification of Remote Sensing Image Using Local Self Similarity
ISSN(Onlne): 2320-9801 ISSN (Prnt): 2320-9798 Internatonal Journal of Innovatve Research n Computer and Communcaton Engneerng (An ISO 3297: 2007 Certfed Organzaton) Vol.2, Specal Issue 1, March 2014 Proceedngs
More informationNew l 1 -Norm Relaxations and Optimizations for Graph Clustering
Proceedngs of the Thrteth AAAI Conference on Artfcal Intellgence (AAAI-6) New l -Norm Relaxatons and Optmzatons for Graph Clusterng Fepng Ne, Hua Wang, Cheng Deng 3, Xnbo Gao 3, Xuelong L 4, Heng Huang
More informationExploiting the Entire Feature Space with Sparsity for Automatic Image Annotation
Explotng the Entre eature Space wth Sparsty for Automatc Image Annotaton Zhgang Ma DISI, Unversty of Trento, Italy ma@ds.untn.t Jasper Ujlngs DISI, Unversty of Trento, Italy ujlngs@ds.untn.t Y Yang SCS,
More informationCategorizing objects: of appearance
Categorzng objects: global and part-based models of appearance UT Austn Generc categorzaton problem 1 Challenges: robustness Realstc scenes are crowded, cluttered, have overlappng objects. Generc category
More informationS1 Note. Basis functions.
S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type
More informationBOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET
1 BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET TZU-CHENG CHUANG School of Electrcal and Computer Engneerng, Purdue Unversty, West Lafayette, Indana 47907 SAUL B. GELFAND School
More informationEfficient Text Classification by Weighted Proximal SVM *
Effcent ext Classfcaton by Weghted Proxmal SVM * Dong Zhuang 1, Benyu Zhang, Qang Yang 3, Jun Yan 4, Zheng Chen, Yng Chen 1 1 Computer Scence and Engneerng, Bejng Insttute of echnology, Bejng 100081, Chna
More informationIEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 1. SSDH: Semi-supervised Deep Hashing for Large Scale Image Retrieval
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY SSDH: Sem-supervsed Deep Hashng for Large Scale Image Retreval Jan Zhang, and Yuxn Peng arxv:607.08477v2 [cs.cv] 8 Jun 207 Abstract Hashng
More informationTECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS. Muradaliyev A.Z.
TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS Muradalyev AZ Azerbajan Scentfc-Research and Desgn-Prospectng Insttute of Energetc AZ1012, Ave HZardab-94 E-mal:aydn_murad@yahoocom Importance of
More informationSmoothing Spline ANOVA for variable screening
Smoothng Splne ANOVA for varable screenng a useful tool for metamodels tranng and mult-objectve optmzaton L. Rcco, E. Rgon, A. Turco Outlne RSM Introducton Possble couplng Test case MOO MOO wth Game Theory
More informationGeneral Regression and Representation Model for Face Recognition
013 IEEE Conference on Computer Vson and Pattern Recognton Workshops General Regresson and Representaton Model for Face Recognton Janjun Qan, Jan Yang School of Computer Scence and Engneerng Nanjng Unversty
More informationEfficient Sparsity Estimation via Marginal-Lasso Coding
Effcent Sparsty Estmaton va Margnal-Lasso Codng Tzu-Y Hung 1,JwenLu 2, Yap-Peng Tan 1, and Shenghua Gao 3 1 School of Electrcal and Electronc Engneerng, Nanyang Technologcal Unversty, Sngapore 2 Advanced
More informationA mathematical programming approach to the analysis, design and scheduling of offshore oilfields
17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 A mathematcal programmng approach to the analyss, desgn and
More informationData-dependent Hashing Based on p-stable Distribution
Data-depent Hashng Based on p-stable Dstrbuton Author Ba, Xao, Yang, Hachuan, Zhou, Jun, Ren, Peng, Cheng, Jan Publshed 24 Journal Ttle IEEE Transactons on Image Processng DOI https://do.org/.9/tip.24.2352458
More informationUnsupervised object segmentation in video by efficient selection of highly probable positive features
Unsupervsed object segmentaton n vdeo by effcent selecton of hghly probable postve features Emanuela Haller 1,2 and Marus Leordeanu 1,2 1 Unversty Poltehnca of Bucharest, Romana 2 Insttute of Mathematcs
More informationThe Codesign Challenge
ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.
More informationOutline. Type of Machine Learning. Examples of Application. Unsupervised Learning
Outlne Artfcal Intellgence and ts applcatons Lecture 8 Unsupervsed Learnng Professor Danel Yeung danyeung@eee.org Dr. Patrck Chan patrckchan@eee.org South Chna Unversty of Technology, Chna Introducton
More informationClassifier Swarms for Human Detection in Infrared Imagery
Classfer Swarms for Human Detecton n Infrared Imagery Yur Owechko, Swarup Medasan, and Narayan Srnvasa HRL Laboratores, LLC 3011 Malbu Canyon Road, Malbu, CA 90265 {owechko, smedasan, nsrnvasa}@hrl.com
More information