Learning Semantic Visual Dictionaries: A New Method for Local Feature Encoding
Bing Shuai, Zhen Zuo, Gang Wang
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore.
Email: {bshuai001, zzuo1, wanggang}@ntu.edu.sg

Abstract: In this paper, we develop a new method to learn semantic visual dictionaries for local image feature encoding. Conventional methods usually learn dictionaries from random local image patches. Different from them, we manually select a number of object classes whose visual patterns can be seen at the local image patch level in complex images (Figure 1), and learn dictionaries from their thumbnail images. The benefit is that these thumbnail images have class labels, so we can cluster semantically similar images together to generate meaningful cluster centers. Further contributions of this paper include an adaptation method that adapts the learned dictionaries to target datasets, and efficient algorithms for encoding local patches with our semantic visual dictionaries. Experimental results on three benchmark datasets demonstrate the effectiveness of the proposed methods.

Index Terms: Semantic Dictionary Learning, Greedy Group Sparse Coding, Scene Classification.

(Fig. 1 thumbnails: centipede, blackboard eraser, bathtub, cross, ball, light bulb, grass, pen.)

I. INTRODUCTION

Building visual dictionaries for local image feature encoding is an important component of modern image classification pipelines [1], [2], [3]. The dictionary atoms are expected to encode distinctive and representative visual patterns, which act as visual primitives to represent complicated image content. Previous methods learn dictionaries by clustering features from random image patches, which has several limitations: (1) No precise category labels can be assigned to local patches. Therefore, clustering methods may group patches that are not semantically similar, and the resulting cluster centers cannot represent meaningful visual patterns.
(2) Most cluster centers may correspond to high-frequency local patterns, owing to the lack of proper supervision in the clustering procedure [4]. In many cases, the visual patterns of low-frequency image patches play a significant role in discriminating between classes; for example, the classes bocce and croquet in the UIUC-Sports dataset [5] are almost identical except for several cross and line patterns that appear in croquet.

To overcome these limitations, we propose a novel idea to build semantic visual dictionaries. We observe that many simple object categories exhibit basic visual patterns that can be seen at the local patch level in complex-content images; one example is shown in Figure 1. We manually select a number of object categories, which are expected to cover diverse visual patterns useful for the classification task. We resize the object-centric images to produce thumbnails, which are used to learn dictionaries. An interesting property of this method is that the class labels of these thumbnails are known.

Fig. 1. Image patches in complex images can be represented by simple object categories, which exhibit basic yet important elementary visual patterns. For example, the centipede category can approximate the curve pattern, pen can substitute for the line pattern, and ball carries the circle pattern, etc.

During the dictionary learning procedure, we only cluster thumbnails from the same object class together. As a result, the cluster centers correspond to visual patterns with semantic meanings. Besides, by carefully selecting object categories, we can also ensure that our dictionary encodes diverse patterns rather than only dominant ones. Each sub-dictionary is learned from an auxiliary dataset (ImageNet), and the concatenation of the sub-dictionaries constitutes our global dictionary. It therefore naturally has a group/block structure, in which each group corresponds to one object semantic. We exploit this group structure when encoding a local image patch by selecting only a small number of groups. However, it is inefficient to solve this problem with a Group Lasso solver [6].
Therefore, two efficient greedy coding algorithms are proposed to generate the group sparse codes. We further develop a domain adaptation approach that adapts the learned dictionaries to fit the target database (e.g., Indoor Scene 67) by learning a linear transformation matrix. On three benchmark datasets, we demonstrate that the state-of-the-art coding method [2] benefits from our adapted semantic dictionary. The experiments also show that our coding methods achieve promising results with our semantic visual dictionary.
II. RELATED WORK

Dictionary Learning: Dictionary learning has received increasing attention recently, as image classification accuracy depends heavily on the quality of the dictionaries. A number of papers have been published in the past several years [7], [8], [9], [10], [11], [12], [13], [14], [15]. All of these papers learn dictionaries from random image patches. Our work is novel in that we use thumbnails of object images to learn visual dictionaries, which can represent visual patterns with semantic meanings. The work closest to ours learns a discriminative dictionary for classification [7], [10], [11], [12]. Those methods treat the labels of patches as identical to the labels of the corresponding images; however, labels assigned to patches in this way are quite noisy. In our work, the labels of the thumbnail images are accurate and semantically meaningful, so a more discriminative dictionary can be expected from them.

Using auxiliary object data for classification: The idea of using auxiliary object data to help classification or retrieval can be seen in [16], [17], [18], [19]. Different from our work, these papers build a global representation from the responses of pre-trained object classifiers/detectors. We leverage auxiliary object data to learn a semantic dictionary that improves the quality of local feature encoding, based on the assumption that local patches in images can be approximated by object-centric image thumbnails. Moreover, we empirically demonstrate that our approach works much better than [17] on several scene classification benchmarks.

III. LEARNING SEMANTIC VISUAL DICTIONARIES

We observe that many object categories exhibit basic yet important visual patterns that can be seen at the local patch level in complex-content images (Figure 1). This motivates us to build semantic visual dictionaries by clustering object-centric image thumbnails. Compared with clustering random local patches, a dictionary generated from labeled thumbnails carries more discriminative and semantic information.
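The clustering scheme just described can be sketched in a few lines of numpy: cluster each labeled class on its own, then concatenate the per-class centers into a group-structured dictionary. This is a minimal illustrative sketch, not the authors' implementation; the tiny Lloyd's k-means, the function names, and the descriptor arrays are assumptions (the thumbnail descriptors are taken as precomputed).

```python
import numpy as np

def kmeans_centers(X, K, iters=20, seed=0):
    """Plain Lloyd's k-means; returns K cluster centers (as rows)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=K, replace=False)].copy()
    for _ in range(iters):
        # assign every descriptor to its nearest center
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dist.argmin(axis=1)
        for k in range(K):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    return centers

def build_semantic_dictionary(class_descriptors, K=20):
    """Cluster each labeled object class separately and concatenate the
    per-class centers into one group-structured dictionary D.
    Returns (D, groups): D holds unit-norm atoms as columns, and
    groups[i] lists the column indices of sub-dictionary D^(i)."""
    subdicts, groups, start = [], [], 0
    for X in class_descriptors:           # one descriptor array per class
        C = kmeans_centers(np.asarray(X, dtype=float), K)
        C /= np.linalg.norm(C, axis=1, keepdims=True) + 1e-12
        subdicts.append(C.T)              # atoms become columns
        groups.append(np.arange(start, start + K))
        start += K
    return np.hstack(subdicts), groups
```

In the paper's setting, `class_descriptors` would hold the descriptors of the ImageNet thumbnails, one array per selected object class, so every group of atoms inherits one class label.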
Specifically, their class labels are known, which allows the clustering algorithm to group only images from the same category, thus avoiding clusters whose members are not really semantically similar, as happens in traditional dictionary learning methods. We manually select 55 object categories based on the following criteria: (1) the object categories exhibit simple and elementary visual patterns; (2) the visual appearance of different object classes does not overlap; and (3) some textureless background classes are also included. The selected objects include pen, box, cross, grass, etc., which are supposed to carry important and basic visual patterns. We download their images from the ImageNet dataset and use the bounding boxes to generate clean foreground object regions. Next, each object region is resized to an I × I thumbnail (I = 40 in our case), so it can be treated as a local image patch. For each object class i, we learn a sub-dictionary D^(i), and the concatenation of sub-dictionaries yields our final group-structured semantic dictionary D.

IV. FEATURE ENCODING BASED ON SEMANTIC VISUAL DICTIONARY

Our semantic dictionary is organized in a group structure D = [D^(1), ..., D^(i), ..., D^(M)], where D^(i) contains K cluster centers [D^(i)(1), ..., D^(i)(K)]. Previous works [6], [20], [21] have shown that, in some cases, it is more robust to reconstruct the input signal by imposing a group/block structure on its reconstruction coefficients. Here we develop a local feature encoding approach that accounts for the group-structured dictionary: only a few semantic concepts (groups) are activated to reconstruct the image patch [21]. The group sparse code β for an input y is therefore generated by optimizing:

    z = argmin_β ||y − Dβ||²_2
        s.t.  ||β||_0 ≤ L,   #{i : β^(i) ≠ 0} ≤ T,   1 ≤ i ≤ M,        (1)

where β is the reconstruction coefficient of the input y; β^(i) denotes the code for the i-th group of the dictionary D; T and L are the maximum numbers of selected groups and selected atoms respectively; M is the number of groups in the group-structured dictionary D; and ||β||_0 is the pseudo-norm that counts the number of non-zero elements of β. This optimization problem is NP-hard, requiring on the order of Σ_{j ≤ T} C(M, j) · C(N_j, L) evaluations, in which N_j is the total number of atoms in the selected j (j ≤ T) groups. We have to relax it to obtain a good suboptimal solution. The first relaxation is group Lasso [6], which only considers group-level sparsity and minimizes an l1/l2-norm regularized problem [6]. A better relaxation is the sparse group Lasso [20], which enforces sparsity both over the groups and over the within-group reconstruction coefficients. Both relaxed problems are convex in β, so they can be solved with convex optimization solvers; however, these two approximations are computationally expensive [6], [20].

To speed up the encoding module, we approach the original NP-hard problem (Equation 1) from a different perspective. Our relaxed problem is cast as a Lasso problem inside the few selected groups. It has the l0/l1 form:

    z = argmin_β ||y − Dβ||²_2 + λ ||β||_1
        s.t.  #{i : β^(i) ≠ 0} ≤ T,   1 ≤ i ≤ M,                        (2)

where λ controls the tradeoff between the reconstruction error and the within-group sparsity. The group-structured reconstruction code β is generated as follows: the coefficients of the unselected groups are set to zero, and the codes β_T for the selected T groups are obtained by solving a small-scale Lasso problem:

    z = argmin_{β_T} ||y − D_T β_T||²_2 + λ ||β_T||_1,                  (3)

where D_T is a smaller dictionary made up of the atoms from the T selected groups. This relaxed optimization problem is still combinatorial.
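The greedy relaxation just described, which picks a few groups and then solves a small Lasso restricted to their atoms, can be sketched as follows. This is an illustrative sketch, not the paper's code: the ISTA inner solver, the iteration counts, and the parameter defaults are stand-ins for whatever small-scale Lasso solver is actually used.

```python
import numpy as np

def ista_lasso(D, y, lam, iters=300):
    """Small-scale Lasso (cf. Equation 3) solved by ISTA soft-thresholding."""
    step = np.linalg.norm(D, 2) ** 2 + 1e-12   # Lipschitz constant of the gradient
    beta = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ beta - y)            # gradient of the quadratic term
        z = beta - grad / step
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / step, 0.0)
    return beta

def greedy_group_sparse_code(D, groups, y, T=5, lam=0.01, eps=1e-6):
    """Greedily pick the group most correlated with the current residual,
    then re-solve a Lasso restricted to all groups selected so far."""
    beta = np.zeros(D.shape[1])
    selected = []
    for _ in range(T):
        residual = y - D @ beta
        # correlation energy of each still-unselected group with the residual
        scores = [np.linalg.norm(D[:, g].T @ residual) if i not in selected
                  else -np.inf for i, g in enumerate(groups)]
        selected.append(int(np.argmax(scores)))
        idx = np.concatenate([groups[i] for i in selected])
        beta = np.zeros(D.shape[1])
        beta[idx] = ista_lasso(D[:, idx], y, lam)
        if np.linalg.norm(y - D @ beta) <= eps:   # residual small enough: stop
            break
    return beta, selected
```

By construction the code is zero outside the selected groups, which is exactly the group-sparse structure the constraint in Equation 2 asks for.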
An exact solution would still require C(M, T) · C evaluations, in which M is the number of groups and C is the cost of solving Equation 3. This is far less than solving Equation 1 exhaustively, which requires on the order of C(N_j, L) evaluations. Furthermore, two efficient encoding methods are proposed to select the T groups:

Greedy Group Sparse Coding (GGSC): This works like Orthogonal Matching Pursuit (OMP) [22], which greedily selects dictionary atoms; at each iteration it chooses the group that best correlates with the signal residue. Algorithm 1 presents the details.

Algorithm 1: Greedy Group Sparse Coding
    Input:  D: dictionary with group structure [G_1, G_2, ..., G_M]
            y: the input signal
            T: maximum number of selected groups
            λ: sparse regularization coefficient
            ε: tolerance on the reconstruction error
    Output: β^(k): the group sparse coefficients
    Initialization: normalize each atom in D to a unit vector;
            β^(0) = 0, g^(0) = ∅, D_O^(0) = ∅
    for k = 1 ... T do
        j(k) = argmax_j ||D_{G_j}^T (y − D β^(k−1))||_2
        g^(k) = g^(k−1) ∪ G_{j(k)}
        D_O^(k) = D_O^(k−1) ∪ D(:, G_{j(k)})
        α = argmin_α ||y − D_O^(k) α||²_2 + λ ||α||_1
        β^(k) = 0;  β^(k)(g^(k)) = α
        if ||y − D β^(k)||_2 ≤ ε then break
    end

Locality-constrained Linear Group Coding (LLGC): The T groups that correlate most with the input signal y are chosen as the visual bases for y. The correlation between the input signal (residue) y and a dictionary group (sub-dictionary) D^(i) is defined as the squared l2 norm of the cosine similarities between y and every atom D^(i)(j) in D^(i). Our encoding methods are efficient because the solution of the small-scale Lasso problem is very fast.

V. DICTIONARY ADAPTATION

The semantic dictionary is learned in a different domain, without awareness of the characteristics of the image patches in the target classification dataset. We therefore introduce a simple domain adaptation method that adapts the learned semantic dictionary to our classification task by learning a transformation matrix W. The adaptation matrix W is learned by minimizing the overall reconstruction error on image patches from the target dataset:

    z = argmin_W (1/2) Σ_{i=1}^{N} ||y_i − W D β_i||²_2 + λ Ω(β_i),      (4)

where N is the number of patches, D is the previously learned semantic dictionary, and Ω(β) denotes the regularization term of the corresponding encoding method (LLC, GGSC). Since the objective function is not jointly convex in W and β, we learn W with an iterative method: first, W is initialized to the identity matrix; next, β is obtained by applying the corresponding encoding solver; finally, given the new β, we update the adaptation matrix W along the gradient ΔW of the objective function (Equation 4) with respect to W:

    ΔW = Σ_{i=1}^{N} (y_i − W D β_i) β_i^T D^T.                          (5)

We iterate this update procedure until it converges.

Fig. 2. Comparison of dictionary atoms. The left column presents 6 group atoms of our semantic dictionary, which exhibit semantically similar visual patterns. The right column shows 30 randomly sampled atoms from a dictionary generated by clustering local image patches.

VI. EXPERIMENTS AND RESULTS

Datasets: We evaluate our semantic dictionary and encoding methods on three scene classification datasets: generic natural scene images (15-Scene), cluttered indoor images (MIT 67 Indoor Scenes), and complex event and activity images (UIUC-Sports). The classification performance is the mean of the multi-way classification accuracy scores over all classes in the dataset.

Experiment setup: We densely sample image patches following [23], with a sliding stride of 8 pixels. Only the SIFT [24] feature is used to represent the image patches.¹ All images are resized so that the smaller axis is at most 300 pixels. The image-level representation is generated by max pooling [2], [1], [17] over all the sparse codes of the image patches. Besides, Spatial Pyramid Matching (SPM) [23] is applied to enforce the spatial layout constraint.

A. Superiority of Semantic Dictionary

We verify the effectiveness of our semantic dictionary on the three datasets, and the results are inspiring: the state-of-the-art coding method (LLC [2]) benefits from our semantic visual dictionary.
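Returning to the adaptation procedure of Section V, the alternating update (initialize W to the identity, re-encode the target patches, then step W along the gradient of Equation 4) can be sketched as below. This is a schematic sketch under assumptions: the least-squares encoder stands in for LLC/GGSC, and the step size `eta` and the iteration counts are illustrative.

```python
import numpy as np

def lstsq_encode(D, y):
    """Stand-in encoder: ordinary least squares instead of LLC/GGSC."""
    return np.linalg.lstsq(D, y, rcond=None)[0]

def adapt_dictionary(D, Y, encode, eta=1e-3, outer_iters=50):
    """Alternating adaptation (cf. Equations 4-5): W starts as the identity;
    repeatedly (a) encode every target patch y_i against the adapted
    dictionary W @ D, then (b) take a gradient step on W that lowers the
    total reconstruction error sum_i ||y_i - W D beta_i||^2."""
    W = np.eye(D.shape[0])
    for _ in range(outer_iters):
        B = np.stack([encode(W @ D, y) for y in Y])  # one code beta_i per row
        residual = Y - B @ (W @ D).T                 # rows: y_i - W D beta_i
        # negative of the update direction Delta W in Equation 5
        grad = -residual.T @ (B @ D.T)
        W -= eta * grad
    return W
```

Because the encoding step exactly minimizes over the codes and the gradient step (with a small enough `eta`) lowers the error for fixed codes, each outer iteration can only decrease the objective of Equation 4.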
¹ Building richer features for the image patches and using multi-scale patches can significantly improve the performance [25], but that is not our main focus.

Table I demonstrates that LLC achieves better results on our semantic dictionary than on the dictionary generated by clustering random image patches. The largest performance gain is observed when these representation codes are fused, demonstrating that our semantic and random-patch (conventional) dictionaries are complementary.

TABLE I
CLASSIFICATION PERFORMANCE ON THREE SCENE CLASSIFICATION DATASETS. RP AND SEMANTIC DENOTE DICTIONARIES GENERATED FROM RANDOM PATCHES AND FROM SEMANTIC THUMBNAIL IMAGES RESPECTIVELY; ASEMANTIC DENOTES THE ADAPTED SEMANTIC DICTIONARY.

    Methods               15-Scene         UIUC-Sports      Indoor
    GIST [27]             -                -                …
    DPM [28]              -                -                …
    Patches [29]          -                -                …
    Object Bank [17]      80.9%            76.3%            37.6%
    Topic Model [30]      78%              82.5%            -
    Kwitt [31]            82.3%            83%              -
    LLC (RP+KSVD) [32]    74.9% ± 0.48%    77.5% ± 1.73%    29.2%
    LLC (RP+Kmeans) [2]   80.6% ± 0.72%    85.2% ± 1.53%    41.2%
    LLC (Semantic)        81.0% ± 0.37%    85.1% ± 1.42%    42.0%
    LLC (ASemantic)       81.1% ± 0.52%    85.8% ± 1.57%    43.2%
    LLC (RP+ASemantic)    82.0% ± 0.49%    86.5% ± 1.26%    45.8%
    GGSC (Semantic)       82.0% ± 0.66%    85.6% ± 2.07%    43.2%
    LLGC (Semantic)       81.2% ± 0.42%    84.4% ± 2.50%    42.0%

We further investigate the performance of the different encoding methods. The SLEP toolbox [26] is used to solve the group Lasso (GLasso) and sparse group Lasso (Sparse GLasso) problems. For GGSC and LLGC, at most T = 5 groups are activated to generate the sparse codes. The classification accuracy and the average elapsed time per image are reported in Table II. Both of our encoding alternatives achieve significantly better performance than GLasso and Sparse GLasso, while also being much more efficient. Table I shows that GGSC also outperforms LLC, which elucidates the advantage of considering the group structure of our semantic dictionary. However, we see no obvious improvement from GGSC on the adapted semantic dictionary, as the adaptation may have broken the group structure of the semantic dictionary.

B. Discriminative power over the size of the sub-dictionary

We believe that each object group has a different optimal sub-dictionary size, as the intra-class semantic variance and the number of object examples vary dramatically across object classes.
For example, the object semantic ball includes basketball, soccer, golf ball, bowling ball, tennis ball and pool ball in our case, which exhibit a large intra-class variance, while the object class hook has a much smaller intra-class semantic variance and very few training instances. We simply fix the size of all the sub-dictionaries to the same value (K = |D^(i)|) in our implementation, and then analyze how K affects the discriminative power of our semantic dictionary. On the one hand, a larger sub-dictionary encodes finer-grained visual patterns; on the other hand, the dimensionality of the sparse codes grows linearly with the sub-dictionary size. As demonstrated in Figure 3, K = 20 is a good compromise between dictionary size and discriminative power, so K is fixed to 20 in our experiments.

TABLE II
CLASSIFICATION ACCURACY AND TIME CONSUMPTION COMPARISON ON UIUC-SPORTS.

    Methods          accuracy           time (seconds/image)
    GLasso           82.60% ± 1.33%     …
    Sparse GLasso    83.62% ± 1.13%     …
    GGSC             85.60% ± 2.07%     1.99
    LLGC             84.35% ± 2.50%     0.73

Fig. 3. (a) Classification accuracy w.r.t. the size of the sub-dictionary; the x-axis denotes the sub-dictionary size. (b) Classification performance w.r.t. the number of object groups that constitute the semantic dictionary (each group is composed of 20 atoms); the x-axis is the number of groups. Five rounds of randomized sampling are performed to choose the groups from the 55 object groups. Statistics are based on the experiments on the 8-class UIUC-Sports dataset.

C. Profitability over a growing number of object groups

We envisage that a larger dictionary with more object semantics preserves richer information and may therefore serve as a better dictionary. In this part, we verify this assumption. To simulate what happens with a growing number of groups, we randomly sample P = 11 (20%), 22 (40%), 33 (60%), 44 (80%) and 55 (100%) out of the M = 55 object classes multiple times. The subset dictionary is generated by concatenating the corresponding P sub-dictionaries, D_sub = [D^(1), ..., D^(P)]; note that D_sub ⊆ D.
As shown in Figure 3, the classification performance of the larger dictionary continuously increases as more object semantics are encoded in the dictionary. We therefore conclude that the discriminative power of our semantic dictionary could be further boosted by adding more complementary object semantics.

VII. CONCLUSION

In this paper, we propose a new perspective on learning a semantic dictionary from thumbnail images. Next, we exploit the group structure of the dictionary and develop a greedy group sparse coding algorithm to efficiently solve the original NP-hard l0/l1 optimization problem. Finally, to make our semantic dictionary better fit the target classification task, a domain-aware adaptation algorithm is developed. The promising results on three benchmarks demonstrate the effectiveness of our proposed methods.

REFERENCES

[1] J. Yang, K. Yu, Y. Gong, and T. Huang, "Linear spatial pyramid matching using sparse coding for image classification," in CVPR, 2009.
[2] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong, "Locality-constrained linear coding for image classification," in CVPR, 2010.
[3] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman, "Discriminative learned dictionaries for local image analysis," in CVPR, 2008.
[4] O. Boiman, E. Shechtman, and M. Irani, "In defense of nearest-neighbor based image classification," in CVPR, 2008.
[5] L.-J. Li and L. Fei-Fei, "What, where and who? Classifying events by scene and object recognition," in ICCV, 2007.
[6] M. Yuan and Y. Lin, "Model selection and estimation in regression with grouped variables," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 68, no. 1, 2006.
[7] S. Kong and D. Wang, "A dictionary learning approach for classification: Separating the particularity and the commonality," in ECCV, 2012.
[8] J. Mairal, F. Bach, J. Ponce, and G. Sapiro, "Online dictionary learning for sparse coding," in ICML, 2009.
[9] F. Bach, J. Mairal, J. Ponce, and G. Sapiro, "Sparse coding and dictionary learning for image analysis," CVPR tutorial, San Francisco, 2010.
[10] Z. Jiang, Z. Lin, and L. S. Davis, "Learning a discriminative dictionary for sparse coding via label consistent K-SVD," in CVPR, 2011.
[11] Q. Zhang and B. Li, "Discriminative K-SVD for dictionary learning in face recognition," in CVPR, 2010.
[12] M. Yang, L. Zhang, X. Feng, and D. Zhang, "Fisher discrimination dictionary learning for sparse representation," in ICCV, 2011.
[13] H. V. Nguyen, V. M. Patel, N. M. Nasrabadi, and R. Chellappa, "Kernel dictionary learning," in ICASSP, 2012.
[14] Z. Szabó, B. Póczos, and A. Lőrincz, "Online group-structured dictionary learning," in CVPR, 2011.
[15] H. Wang, C. Yuan, W. Hu, and C. Sun, "Supervised class-specific dictionary learning for sparse modeling in action recognition," Pattern Recognition, vol. 45, no. 11, 2012.
[16] G. Wang, D. Hoiem, and D. Forsyth, "Learning image similarity from Flickr groups using fast kernel machines."
[17] L.-J. Li, H. Su, L. Fei-Fei, and E. P. Xing, "Object bank: A high-level image representation for scene classification & semantic feature sparsification," in Advances in Neural Information Processing Systems, 2010.
[18] L. Torresani, M. Szummer, and A. Fitzgibbon, "Efficient object category recognition using classemes," in ECCV, 2010.
[19] J. Vogel and B. Schiele, "Semantic modeling of natural scenes for content-based image retrieval," International Journal of Computer Vision, vol. 72, no. 2, 2007.
[20] J. Friedman, T. Hastie, and R. Tibshirani, "A note on the group lasso and a sparse group lasso," arXiv preprint, 2010.
[21] E. Elhamifar and R. Vidal, "Robust classification using structured sparse representation," in CVPR, 2011.
[22] G. Swirszcz, N. Abe, and A. C. Lozano, "Grouped orthogonal matching pursuit for variable selection and prediction," in Advances in Neural Information Processing Systems, 2009.
[23] S. Lazebnik, C. Schmid, and J. Ponce, "Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories," in CVPR, 2006.
[24] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, 2004.
[25] M. Juneja, A. Vedaldi, C. V. Jawahar, and A. Zisserman, "Blocks that shout: Distinctive parts for scene classification," in CVPR, 2013.
[26] J. Liu, S. Ji, and J. Ye, SLEP: Sparse Learning with Efficient Projections, Arizona State University. [Online]. Available: jye02/software/slep
[27] A. Quattoni and A. Torralba, "Recognizing indoor scenes," in CVPR, 2009.
[28] M. Pandey and S. Lazebnik, "Scene recognition and weakly supervised object localization with deformable part-based models," in ICCV, 2011.
[29] S. Singh, A. Gupta, and A. A. Efros, "Unsupervised discovery of mid-level discriminative patches," in ECCV, 2012.
[30] Z. Niu, G. Hua, X. Gao, and Q. Tian, "Context aware topic model for scene recognition," in CVPR, 2012.
[31] R. Kwitt, N. Vasconcelos, and N. Rasiwasia, "Scene recognition on the semantic manifold," in ECCV, 2012.
[32] M. Aharon, M. Elad, and A. Bruckstein, "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, vol. 54, no. 11, 2006.
More informationA Bilinear Model for Sparse Coding
A Blnear Model for Sparse Codng Davd B. Grmes and Rajesh P. N. Rao Department of Computer Scence and Engneerng Unversty of Washngton Seattle, WA 98195-2350, U.S.A. grmes,rao @cs.washngton.edu Abstract
More informationBOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET
1 BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET TZU-CHENG CHUANG School of Electrcal and Computer Engneerng, Purdue Unversty, West Lafayette, Indana 47907 SAUL B. GELFAND School
More informationCS 534: Computer Vision Model Fitting
CS 534: Computer Vson Model Fttng Sprng 004 Ahmed Elgammal Dept of Computer Scence CS 534 Model Fttng - 1 Outlnes Model fttng s mportant Least-squares fttng Maxmum lkelhood estmaton MAP estmaton Robust
More informationLECTURE : MANIFOLD LEARNING
LECTURE : MANIFOLD LEARNING Rta Osadchy Some sldes are due to L.Saul, V. C. Raykar, N. Verma Topcs PCA MDS IsoMap LLE EgenMaps Done! Dmensonalty Reducton Data representaton Inputs are real-valued vectors
More informationA Fast Content-Based Multimedia Retrieval Technique Using Compressed Data
A Fast Content-Based Multmeda Retreval Technque Usng Compressed Data Borko Furht and Pornvt Saksobhavvat NSF Multmeda Laboratory Florda Atlantc Unversty, Boca Raton, Florda 3343 ABSTRACT In ths paper,
More informationComparison Study of Textural Descriptors for Training Neural Network Classifiers
Comparson Study of Textural Descrptors for Tranng Neural Network Classfers G.D. MAGOULAS (1) S.A. KARKANIS (1) D.A. KARRAS () and M.N. VRAHATIS (3) (1) Department of Informatcs Unversty of Athens GR-157.84
More informationLecture 4: Principal components
/3/6 Lecture 4: Prncpal components 3..6 Multvarate lnear regresson MLR s optmal for the estmaton data...but poor for handlng collnear data Covarance matrx s not nvertble (large condton number) Robustness
More informationCS246: Mining Massive Datasets Jure Leskovec, Stanford University
CS46: Mnng Massve Datasets Jure Leskovec, Stanford Unversty http://cs46.stanford.edu /19/013 Jure Leskovec, Stanford CS46: Mnng Massve Datasets, http://cs46.stanford.edu Perceptron: y = sgn( x Ho to fnd
More informationImage Deblurring and Super-resolution by Adaptive Sparse Domain Selection and Adaptive Regularization
Image Deblurrng and Super-resoluton by Adaptve Sparse Doman Selecton and Adaptve Regularzaton Wesheng Dong a,b, Le Zhang b,1, Member, IEEE, Guangmng Sh a, Senor Member, IEEE, and Xaoln Wu c, Senor Member,
More information12/2/2009. Announcements. Parametric / Non-parametric. Case-Based Reasoning. Nearest-Neighbor on Images. Nearest-Neighbor Classification
Introducton to Artfcal Intellgence V22.0472-001 Fall 2009 Lecture 24: Nearest-Neghbors & Support Vector Machnes Rob Fergus Dept of Computer Scence, Courant Insttute, NYU Sldes from Danel Yeung, John DeNero
More informationEYE CENTER LOCALIZATION ON A FACIAL IMAGE BASED ON MULTI-BLOCK LOCAL BINARY PATTERNS
P.G. Demdov Yaroslavl State Unversty Anatoly Ntn, Vladmr Khryashchev, Olga Stepanova, Igor Kostern EYE CENTER LOCALIZATION ON A FACIAL IMAGE BASED ON MULTI-BLOCK LOCAL BINARY PATTERNS Yaroslavl, 2015 Eye
More informationMachine Learning: Algorithms and Applications
14/05/1 Machne Learnng: Algorthms and Applcatons Florano Zn Free Unversty of Bozen-Bolzano Faculty of Computer Scence Academc Year 011-01 Lecture 10: 14 May 01 Unsupervsed Learnng cont Sldes courtesy of
More informationWeighted Sparse Image Classification Based on Low Rank Representation
Copyrght 08 Tech Scence Press CMC, vol.56, no., pp.9-05, 08 Weghted Sparse Image Classfcaton Based on Low Rank Representaton Qd Wu, Ybng L, Yun Ln, * and Ruoln Zhou Abstract: The conventonal sparse representaton-based
More informationEfficient Segmentation and Classification of Remote Sensing Image Using Local Self Similarity
ISSN(Onlne): 2320-9801 ISSN (Prnt): 2320-9798 Internatonal Journal of Innovatve Research n Computer and Communcaton Engneerng (An ISO 3297: 2007 Certfed Organzaton) Vol.2, Specal Issue 1, March 2014 Proceedngs
More informationSimulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010
Smulaton: Solvng Dynamc Models ABE 5646 Week Chapter 2, Sprng 200 Week Descrpton Readng Materal Mar 5- Mar 9 Evaluatng [Crop] Models Comparng a model wth data - Graphcal, errors - Measures of agreement
More informationEdge Detection in Noisy Images Using the Support Vector Machines
Edge Detecton n Nosy Images Usng the Support Vector Machnes Hlaro Gómez-Moreno, Saturnno Maldonado-Bascón, Francsco López-Ferreras Sgnal Theory and Communcatons Department. Unversty of Alcalá Crta. Madrd-Barcelona
More informationProgramming in Fortran 90 : 2017/2018
Programmng n Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Exercse 1 : Evaluaton of functon dependng on nput Wrte a program who evaluate the functon f (x,y) for any two user specfed values
More informationMULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION
MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION Paulo Quntlano 1 & Antono Santa-Rosa 1 Federal Polce Department, Brasla, Brazl. E-mals: quntlano.pqs@dpf.gov.br and
More informationAn Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices
Internatonal Mathematcal Forum, Vol 7, 2012, no 52, 2549-2554 An Applcaton of the Dulmage-Mendelsohn Decomposton to Sparse Null Space Bases of Full Row Rank Matrces Mostafa Khorramzadeh Department of Mathematcal
More informationSVM-based Learning for Multiple Model Estimation
SVM-based Learnng for Multple Model Estmaton Vladmr Cherkassky and Yunqan Ma Department of Electrcal and Computer Engneerng Unversty of Mnnesota Mnneapols, MN 55455 {cherkass,myq}@ece.umn.edu Abstract:
More informationSingle Sample Face Recognition via Learning Deep Supervised Auto-Encoders Shenghua Gao, Yuting Zhang, Kui Jia, Jiwen Lu, Yingying Zhang
1 Sngle Sample Face Recognton va Learnng Deep Supervsed Auto-Encoders Shenghua Gao, Yutng Zhang, Ku Ja, Jwen Lu, Yngyng Zhang Abstract Ths paper targets learnng robust mage representaton for sngle tranng
More informationOutline. Type of Machine Learning. Examples of Application. Unsupervised Learning
Outlne Artfcal Intellgence and ts applcatons Lecture 8 Unsupervsed Learnng Professor Danel Yeung danyeung@eee.org Dr. Patrck Chan patrckchan@eee.org South Chna Unversty of Technology, Chna Introducton
More informationFace Detection with Deep Learning
Face Detecton wth Deep Learnng Yu Shen Yus122@ucsd.edu A13227146 Kuan-We Chen kuc010@ucsd.edu A99045121 Yzhou Hao y3hao@ucsd.edu A98017773 Mn Hsuan Wu mhwu@ucsd.edu A92424998 Abstract The project here
More informationTsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance
Tsnghua Unversty at TAC 2009: Summarzng Mult-documents by Informaton Dstance Chong Long, Mnle Huang, Xaoyan Zhu State Key Laboratory of Intellgent Technology and Systems, Tsnghua Natonal Laboratory for
More informationTone-Aware Sparse Representation for Face Recognition
Tone-Aware Sparse Representaton for Face Recognton Lngfeng Wang, Huayu Wu and Chunhong Pan Abstract It s stll a very challengng task to recognze a face n a real world scenaro, snce the face may be corrupted
More informationDetermining the Optimal Bandwidth Based on Multi-criterion Fusion
Proceedngs of 01 4th Internatonal Conference on Machne Learnng and Computng IPCSIT vol. 5 (01) (01) IACSIT Press, Sngapore Determnng the Optmal Bandwdth Based on Mult-crteron Fuson Ha-L Lang 1+, Xan-Mn
More informationMULTI-VIEW ANCHOR GRAPH HASHING
MULTI-VIEW ANCHOR GRAPH HASHING Saehoon Km 1 and Seungjn Cho 1,2 1 Department of Computer Scence and Engneerng, POSTECH, Korea 2 Dvson of IT Convergence Engneerng, POSTECH, Korea {kshkawa, seungjn}@postech.ac.kr
More informationMaximum Variance Combined with Adaptive Genetic Algorithm for Infrared Image Segmentation
Internatonal Conference on Logstcs Engneerng, Management and Computer Scence (LEMCS 5) Maxmum Varance Combned wth Adaptve Genetc Algorthm for Infrared Image Segmentaton Huxuan Fu College of Automaton Harbn
More informationJoint Example-based Depth Map Super-Resolution
Jont Example-based Depth Map Super-Resoluton Yanje L 1, Tanfan Xue,3, Lfeng Sun 1, Janzhuang Lu,3,4 1 Informaton Scence and Technology Department, Tsnghua Unversty, Bejng, Chna Department of Informaton
More informationRecognizing Faces. Outline
Recognzng Faces Drk Colbry Outlne Introducton and Motvaton Defnng a feature vector Prncpal Component Analyss Lnear Dscrmnate Analyss !"" #$""% http://www.nfotech.oulu.f/annual/2004 + &'()*) '+)* 2 ! &
More informationLarge-scale Web Video Event Classification by use of Fisher Vectors
Large-scale Web Vdeo Event Classfcaton by use of Fsher Vectors Chen Sun and Ram Nevata Unversty of Southern Calforna, Insttute for Robotcs and Intellgent Systems Los Angeles, CA 90089, USA {chensun nevata}@usc.org
More informationRobust Mean Shift Tracking with Corrected Background-Weighted Histogram
Robust Mean Shft Trackng wth Corrected Background-Weghted Hstogram Jfeng Nng, Le Zhang, Davd Zhang and Chengke Wu Abstract: The background-weghted hstogram (BWH) algorthm proposed n [] attempts to reduce
More informationIEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. *, NO. *, Dictionary Pair Learning on Grassmann Manifolds for Image Denoising
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. *, NO. *, 2015 1 Dctonary Par Learnng on Grassmann Manfolds for Image Denosng Xanhua Zeng, We Ban, We Lu, Jale Shen, Dacheng Tao, Fellow, IEEE Abstract Image
More informationOutline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like:
Self-Organzng Maps (SOM) Turgay İBRİKÇİ, PhD. Outlne Introducton Structures of SOM SOM Archtecture Neghborhoods SOM Algorthm Examples Summary 1 2 Unsupervsed Hebban Learnng US Hebban Learnng, Cntd 3 A
More informationPruning Training Corpus to Speedup Text Classification 1
Prunng Tranng Corpus to Speedup Text Classfcaton Jhong Guan and Shugeng Zhou School of Computer Scence, Wuhan Unversty, Wuhan, 430079, Chna hguan@wtusm.edu.cn State Key Lab of Software Engneerng, Wuhan
More informationVol. 5, No. 3 March 2014 ISSN Journal of Emerging Trends in Computing and Information Sciences CIS Journal. All rights reserved.
Journal of Emergng Trends n Computng and Informaton Scences 009-03 CIS Journal. All rghts reserved. http://www.csjournal.org Unhealthy Detecton n Lvestock Texture Images usng Subsampled Contourlet Transform
More informationClassifier Selection Based on Data Complexity Measures *
Classfer Selecton Based on Data Complexty Measures * Edth Hernández-Reyes, J.A. Carrasco-Ochoa, and J.Fco. Martínez-Trndad Natonal Insttute for Astrophyscs, Optcs and Electroncs, Lus Enrque Erro No.1 Sta.
More informationAlternating Direction Method of Multipliers Implementation Using Apache Spark
Alternatng Drecton Method of Multplers Implementaton Usng Apache Spark Deterch Lawson June 4, 2014 1 Introducton Many applcaton areas n optmzaton have benefted from recent trends towards massve datasets.
More informationConformal and Low-Rank Sparse Representation for Image Restoration
Conformal and Low-Rank Sparse Representaton for Image Restoraton Janwe L, Xaowu Chen, Dongqng Zou, Bo Gao, We Teng State Key Laboratory of Vrtual Realty Technology and Systems School of Computer Scence
More informationDetection of an Object by using Principal Component Analysis
Detecton of an Object by usng Prncpal Component Analyss 1. G. Nagaven, 2. Dr. T. Sreenvasulu Reddy 1. M.Tech, Department of EEE, SVUCE, Trupath, Inda. 2. Assoc. Professor, Department of ECE, SVUCE, Trupath,
More informationRecommended Items Rating Prediction based on RBF Neural Network Optimized by PSO Algorithm
Recommended Items Ratng Predcton based on RBF Neural Network Optmzed by PSO Algorthm Chengfang Tan, Cayn Wang, Yuln L and Xx Q Abstract In order to mtgate the data sparsty and cold-start problems of recommendaton
More informationAn Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation
17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 An Iteratve Soluton Approach to Process Plant Layout usng Mxed
More informationCompetitive Sparse Representation Classification for Face Recognition
Vol. 6, No. 8, 05 Compettve Sparse Representaton Classfcaton for Face Recognton Yng Lu Chongqng Key Laboratory of Computatonal Intellgence Chongqng Unversty of Posts and elecommuncatons Chongqng, Chna
More informationSubspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points;
Subspace clusterng Clusterng Fundamental to all clusterng technques s the choce of dstance measure between data ponts; D q ( ) ( ) 2 x x = x x, j k = 1 k jk Squared Eucldean dstance Assumpton: All features
More informationChinese Word Segmentation based on the Improved Particle Swarm Optimization Neural Networks
Chnese Word Segmentaton based on the Improved Partcle Swarm Optmzaton Neural Networks Ja He Computatonal Intellgence Laboratory School of Computer Scence and Engneerng, UESTC Chengdu, Chna Department of
More information3D vector computer graphics
3D vector computer graphcs Paolo Varagnolo: freelance engneer Padova Aprl 2016 Prvate Practce ----------------------------------- 1. Introducton Vector 3D model representaton n computer graphcs requres
More informationSPARSE-CODED NET MODEL AND APPLICATIONS
TO APPEAR IN 2016 IEEE INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING SPARSE-CODED NET MODEL AND APPLICATIONS Youngjune Gwon 1, Mram Cha 2, Wllam Campbell 1, H. T. Kung 2, Cagr Dagl 1
More informationWavefront Reconstructor
A Dstrbuted Smplex B-Splne Based Wavefront Reconstructor Coen de Vsser and Mchel Verhaegen 14-12-201212 2012 Delft Unversty of Technology Contents Introducton Wavefront reconstructon usng Smplex B-Splnes
More informationFusion of Deep Features and Weighted VLAD Vectors based on Multiple Features for Image Retrieval
MATEC Web of Conferences, 0500 (07) DTS-07 DO: 005/matecconf/070500 Fuson of Deep Features and Weghted VLAD Vectors based on Multple Features for mage Retreval Yanhong Wang,, Ygang Cen,, Lequan Lang,*,
More informationPERFORMANCE EVALUATION FOR SCENE MATCHING ALGORITHMS BY SVM
PERFORMACE EVALUAIO FOR SCEE MACHIG ALGORIHMS BY SVM Zhaohu Yang a, b, *, Yngyng Chen a, Shaomng Zhang a a he Research Center of Remote Sensng and Geomatc, ongj Unversty, Shangha 200092, Chna - yzhac@63.com
More informationAn optimized workflow for coherent noise attenuation in time-lapse processing
An optmzed workflow for coherent nose attenuaton n tme-lapse processng Adel Khall 1, Hennng Hoeber 1, Steve Campbell 2, Mark Ibram 2 and Dan Daves 2. 1 CGGVertas, 2 BP 75 th EAGE Conference & Exhbton ncorporatng
More informationParallelism for Nested Loops with Non-uniform and Flow Dependences
Parallelsm for Nested Loops wth Non-unform and Flow Dependences Sam-Jn Jeong Dept. of Informaton & Communcaton Engneerng, Cheonan Unversty, 5, Anseo-dong, Cheonan, Chungnam, 330-80, Korea. seong@cheonan.ac.kr
More informationNUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS
ARPN Journal of Engneerng and Appled Scences 006-017 Asan Research Publshng Network (ARPN). All rghts reserved. NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS Igor Grgoryev, Svetlana
More informationCategorizing objects: of appearance
Categorzng objects: global and part-based models of appearance UT Austn Generc categorzaton problem 1 Challenges: robustness Realstc scenes are crowded, cluttered, have overlappng objects. Generc category
More informationReducing Frame Rate for Object Tracking
Reducng Frame Rate for Object Trackng Pavel Korshunov 1 and We Tsang Oo 2 1 Natonal Unversty of Sngapore, Sngapore 11977, pavelkor@comp.nus.edu.sg 2 Natonal Unversty of Sngapore, Sngapore 11977, oowt@comp.nus.edu.sg
More informationHermite Splines in Lie Groups as Products of Geodesics
Hermte Splnes n Le Groups as Products of Geodescs Ethan Eade Updated May 28, 2017 1 Introducton 1.1 Goal Ths document defnes a curve n the Le group G parametrzed by tme and by structural parameters n the
More informationGeneral Regression and Representation Model for Face Recognition
013 IEEE Conference on Computer Vson and Pattern Recognton Workshops General Regresson and Representaton Model for Face Recognton Janjun Qan, Jan Yang School of Computer Scence and Engneerng Nanjng Unversty
More informationAS a classical problem in low level vision, image denoising. Group Sparsity Residual Constraint for Image Denoising
1 Group Sparsty Resdual Constrant for Image Denosng Zhyuan Zha, Xnggan Zhang, Qong Wang, Lan Tang and Xn Lu arxv:1703.00297v6 [cs.cv] 31 Jul 2017 Abstract Group-based sparse representaton has shown great
More informationFace Recognition using 3D Directional Corner Points
2014 22nd Internatonal Conference on Pattern Recognton Face Recognton usng 3D Drectonal Corner Ponts Xun Yu, Yongsheng Gao School of Engneerng Grffth Unversty Nathan, QLD, Australa xun.yu@grffthun.edu.au,
More informationCS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15
CS434a/541a: Pattern Recognton Prof. Olga Veksler Lecture 15 Today New Topc: Unsupervsed Learnng Supervsed vs. unsupervsed learnng Unsupervsed learnng Net Tme: parametrc unsupervsed learnng Today: nonparametrc
More informationA fast algorithm for color image segmentation
Unersty of Wollongong Research Onlne Faculty of Informatcs - Papers (Arche) Faculty of Engneerng and Informaton Scences 006 A fast algorthm for color mage segmentaton L. Dong Unersty of Wollongong, lju@uow.edu.au
More informationSuper-resolution with Nonlocal Regularized Sparse Representation
Super-resoluton wth Nonlocal Regularzed Sparse Representaton Wesheng Dong a, Guangmng Sh a, Le Zhang b, and Xaoln Wu c a Key Laboratory of Intellgent Percepton and Image Understandng (Chnese Mnstry of
More informationCategories and Subject Descriptors B.7.2 [Integrated Circuits]: Design Aids Verification. General Terms Algorithms
3. Fndng Determnstc Soluton from Underdetermned Equaton: Large-Scale Performance Modelng by Least Angle Regresson Xn L ECE Department, Carnege Mellon Unversty Forbs Avenue, Pttsburgh, PA 3 xnl@ece.cmu.edu
More informationElectronic version of an article published as [International Journal of Neural Systems, Volume 24, Issue 3, 2014, Pages] [Article DOI doi:
Electronc verson of an artcle publshed as [Internatonal Journal of Neural Systems, Volume 24, Issue 3, 204, Pages] [Artcle DOI do: 0.42/S029065743000] [World Scentfc Publshng Company] [http://www.worldscentfc.com/worldscnet/ns]
More informationAn efficient method to build panoramic image mosaics
An effcent method to buld panoramc mage mosacs Pattern Recognton Letters vol. 4 003 Dae-Hyun Km Yong-In Yoon Jong-Soo Cho School of Electrcal Engneerng and Computer Scence Kyungpook Natonal Unv. Abstract
More information