CS246: Mining Massive Datasets Jure Leskovec, Stanford University
2 2/17/2015 Jure Leskovec, Stanford CS246: Mining Massive Datasets. Course map:
- High-dim. data: Locality-sensitive hashing, Clustering, Dimensionality reduction
- Graph data: PageRank/SimRank, Community detection, Spam detection
- Infinite data: Filtering data streams, Web advertising, Queries on streams
- Machine learning: SVM, Decision trees, Perceptron/kNN
- Apps: Recommender systems, Association rules, Duplicate document detection
3 Machine learning: the study of algorithms that improve their performance at some task with experience.
4 Given some data: learn a function to map from the input to the output. Given: training examples (x_i, y_i = f(x_i)) for some unknown function f. Find: a good approximation to f.
5 We would like to do prediction: estimate a function f(x) so that y = f(x). Here y can be a real number (regression), categorical (classification), or a complex object (a ranking of items, a parse tree, etc.). The data is labeled: we have many pairs {(x, y)}, where x is a vector of binary, categorical, or real-valued features and y is a class label ({+1, -1}) or a real number.
6 Task: given data (X, Y), build a model f() to predict Y' based on X'. Strategy: estimate y = f(x) on the training data (X, Y) and hope that the same f(x) also works to predict the unknown Y'. This hope is called generalization. Overfitting: f(x) predicts the training Y well but is unable to predict the unseen Y'. We want to build a model that generalizes well to unseen data. But Jure, how can we do well on data we have never seen before?!
7 1) Training points are drawn independently at random according to an unknown probability distribution P(x, y). 2) The learning algorithm analyzes the examples and produces a classifier f. Given new data (x, y) drawn from P, the classifier is given x and predicts ŷ = f(x). The loss L(ŷ, y) is then measured. Goal of the learning algorithm: find f that minimizes the expected loss E_P[L].
8 Setup: a training set S of pairs (x, y) is drawn from P(x, y); the learning algorithm produces f; on test data (x, y), the classifier outputs ŷ = f(x), which is scored by the loss function L(ŷ, y). Why is it hard? We estimate f on the training data but want f to work well on unseen future (i.e., test) data.
9 Goal: minimize the expected loss, min_f E_P[L]. But we don't have access to P, only to the training sample D: min_f E_D[L]. So we minimize the average loss on the training data: min_w J(w) = (1/N) Σ_{i=1}^N L(h(x_i), y_i). Problem: just memorizing the training data gives us a perfect model (with zero loss).
10 Given: a set of N training examples {(x^(1), y^(1)), (x^(2), y^(2)), ..., (x^(N), y^(N))} and a loss function L. Find: the weight vector w that minimizes the expected loss on the training data: J(w) = (1/N) Σ_{i=1}^N L(f_w(x^(i)), y^(i)).
11 Problem: the 0/1 loss is a step-wise constant function of w, so its derivative is either 0 or undefined; gradient-based optimization gets no useful signal.
12 Idea: approximate the expected loss by a smooth function. Replace the original objective by a surrogate loss function, e.g., the hinge loss: J(w) = (1/N) Σ_{i=1}^N max(0, 1 − y^(i) w·x^(i)). (Plot on the slide: the hinge loss as a function of w·x^(i) when y^(i) = 1.)
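The surrogate objective above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the lecture; the function name and the toy data are made up.

```python
# Average hinge loss (1/N) * sum_i max(0, 1 - y_i * (w . x_i)),
# the smooth surrogate for the 0/1 loss introduced on this slide.
def hinge_objective(w, X, Y):
    total = 0.0
    for x, y in zip(X, Y):
        score = sum(wj * xj for wj, xj in zip(w, x))  # w . x
        total += max(0.0, 1.0 - y * score)
    return total / len(X)
```

Unlike the 0/1 loss, this objective changes smoothly as w moves, so gradient methods (next slides) apply.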
14 Minimize J by gradient descent: start with a weight vector w^(0); compute the gradient ∇J(w^(0)) = (∂J(w^(0))/∂w_0, ..., ∂J(w^(0))/∂w_n); update w^(1) = w^(0) − η ∇J(w^(0)), where η is a step-size parameter; repeat until convergence.
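The loop on this slide can be sketched directly; the quadratic test objective below is an illustrative choice, not from the lecture.

```python
# Plain gradient descent, matching the slide's update
# w(t+1) = w(t) - eta * grad J(w(t)).
def gradient_descent(grad, w0, eta=0.1, steps=100):
    w = list(w0)
    for _ in range(steps):
        g = grad(w)
        w = [wj - eta * gj for wj, gj in zip(w, g)]
    return w

# Example: minimize J(w) = (w0 - 3)^2 + (w1 + 1)^2,
# whose gradient is (2*(w0 - 3), 2*(w1 + 1)).
w = gradient_descent(lambda w: [2 * (w[0] - 3), 2 * (w[1] + 1)], [0.0, 0.0])
```

A fixed number of steps stands in for a real convergence check, which would stop when the gradient norm or the change in J falls below a tolerance.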
15 Example: spam filtering. Instance space x ∈ X (|X| = n data points). Binary or real-valued feature vector x of word occurrences; d features (words + other things, d ≈ 100,000). Class y ∈ Y: spam (+1) or ham (−1).
16 P(x, y): the distribution of email messages x and their true labels y ("spam" or "ham"). Training sample: a set of email messages that have been labeled by the user. Learning algorithm: what we study! f: the classifier output by the learning algorithm. Test point: a new email x (with its true, but hidden, label y). Loss function L(ŷ, y):

  predicted ŷ \ true y |  spam  |  ham
  spam                 |   0    |  10
  not spam             |   1    |   0
17 Idea: pretend we do not know the data/labels we actually do know. Split the data into a training set, a validation set, and a test set. Build the model f(x) on the training data (minimize J); see how well f(x) does on the validation data; if it does well, apply it also to the test set. Estimate y = f(x) on X, Y and hope that the same f(x) also works on the unseen X', Y'. Refinement: cross validation. A single training/validation split is brutal, so let's split our data (X, Y) into 10 folds (buckets): take out 1 fold for validation, train on the remaining 9, repeat this 10 times, and report the average performance.
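The 10-fold scheme described above can be sketched as an index generator; the function name and the strided fold assignment are illustrative choices, not from the lecture.

```python
def k_fold_splits(n, k=10):
    """Yield (train_indices, validation_indices) for k-fold cross validation:
    take out 1 fold for validation, train on the remaining k-1 folds."""
    folds = [list(range(i, n, k)) for i in range(k)]  # strided assignment
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val
```

Each of the k iterations trains a model on `train` and evaluates it on `val`; averaging the k validation scores gives the reported performance.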
18 We'll talk about the following methods: Support Vector Machines and decision trees. Main question: how to efficiently train (build a model / find the model parameters)?
20 Want to separate "+" from "−" using a line. Data: training examples (x_1, y_1), ..., (x_n, y_n). Each example i: x_i = (x_i^(1), ..., x_i^(d)), where x_i^(j) is real-valued and y_i ∈ {−1, +1}. Inner product: w·x = Σ_{j=1}^d w^(j) x^(j). Which is the best linear separator (defined by w)?
21 The distance from the separating hyperplane corresponds to the confidence of the prediction. Example: for points A, B, C in the figure, we are more sure about the class of A and B than of C.
22 Margin γ: the distance of the closest example from the decision line/hyperplane. The reason we define the margin this way is theoretical convenience and the existence of generalization error bounds that depend on the value of the margin.
23 Remember the dot product: A·B = ‖A‖ ‖B‖ cos θ, where ‖A‖ cos θ is the length of the projection of A onto B and ‖A‖ = √(Σ_{j=1}^d (A^(j))²).
24 Dot product: A·B = ‖A‖ ‖B‖ cos θ. What are w·x_1 and w·x_2? In the figure, the projection w·x yields a small γ in one case and a larger γ in the other, so γ roughly corresponds to the margin. Bottom line: the bigger γ, the bigger the separation.
25 Distance from a point to a line. Let line L be w·x + b = w^(1) x^(1) + w^(2) x^(2) + b = 0, with w = (w^(1), w^(2)); let A = (x_A^(1), x_A^(2)) be a point, M = (x_M^(1), x_M^(2)) a point on the line, and H the foot of the perpendicular from A to L. Note: we assume ‖w‖_2 = 1. Then
d(A, L) = |AH| = |(A − M)·w|
        = |(x_A^(1) − x_M^(1)) w^(1) + (x_A^(2) − x_M^(2)) w^(2)|
        = |x_A^(1) w^(1) + x_A^(2) w^(2) + b| = |A·w + b|.
Remember that x_M^(1) w^(1) + x_M^(2) w^(2) = −b, since M belongs to line L.
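The derivation above gives d(A, L) = |w·A + b| (with ‖w‖ = 1). A small sketch, generalized with an explicit division by ‖w‖ for unnormalized w (the function name and sample numbers are illustrative):

```python
import math

def distance_to_hyperplane(w, b, a):
    """Distance from point a to the hyperplane w.x + b = 0:
    d(A, L) = |w.a + b| / ||w||  (the slide assumes ||w|| = 1)."""
    dot = sum(wj * aj for wj, aj in zip(w, a))
    norm = math.sqrt(sum(wj * wj for wj in w))
    return abs(dot + b) / norm
```

With w = (0.6, 0.8) (unit norm) and b = −1, the point (3, 4) lies at distance |0.6·3 + 0.8·4 − 1| = 4 from the line.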
26 Prediction = sign(w·x + b). Confidence = (w·x + b) y. For the i-th data point: γ_i = (w·x_i + b) y_i. Want to solve: max_w min_i γ_i. Can rewrite as: max_{w,γ} γ  s.t. ∀i, y_i (w·x_i + b) ≥ γ.
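The prediction rule and per-point confidence just defined are two one-liners; this sketch (names and numbers illustrative) mirrors them directly.

```python
# prediction = sign(w.x + b); confidence gamma_i = y_i * (w.x_i + b).
def predict(w, b, x):
    s = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if s >= 0 else -1

def confidence(w, b, x, y):
    return y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
```

A positive confidence means the point is classified correctly; its magnitude grows with the distance from the hyperplane.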
27 Maximize the margin: good according to intuition, theory (cf. "VC dimension"), and practice. max_{w,γ} γ  s.t. ∀i, y_i (w·x_i + b) ≥ γ. Here γ is the margin: the distance from the separating hyperplane w·x + b = 0. (Figure: the margin γ on either side of the separator.)
29 The separating hyperplane is defined by the support vectors: the points lying on the +1/−1 planes of the solution. If you knew these points, you could ignore the rest. Generally, there are d + 1 support vectors (for d-dimensional data).
30 Problem: let (w·x + b) y = γ; then (2w·x + 2b) y = 2γ. Scaling w increases the margin! Solution: work with a normalized w: γ = ((w/‖w‖)·x + b) y, where ‖w‖ = √(Σ_{j=1}^d (w^(j))²). Let's also require the support vectors x_j to lie on the planes defined by w·x_j + b = ±1.
31 We want to maximize the margin γ! What is the relation between x_1 and x_2 (support vectors on opposite margin planes)? x_1 = x_2 + 2γ (w/‖w‖). We also know: w·x_1 + b = +1 and w·x_2 + b = −1. So: w·x_1 + b = w·(x_2 + 2γ w/‖w‖) + b = (w·x_2 + b) + 2γ (w·w)/‖w‖ = −1 + 2γ ‖w‖ = +1, hence γ = 1/‖w‖. (Note: w·w = ‖w‖².)
32 We started with max_{w,γ} γ s.t. ∀i, y_i (w·x_i + b) ≥ γ, i.e., arg max_w γ = arg max_w min_i y_i (w·x_i + b). But ‖w‖ can be arbitrarily large! After normalizing (so that γ = 1/‖w‖): arg max γ = arg max 1/‖w‖ = arg min ‖w‖ = arg min ½‖w‖². This is called SVM with hard constraints: min_w ½‖w‖²  s.t. ∀i, y_i (w·x_i + b) ≥ 1.
33 If the data is not separable, introduce a penalty: min_w ½‖w‖² + C·(number of mistakes)  s.t. ∀i, y_i (w·x_i + b) ≥ 1. Minimize ‖w‖² plus the number of training mistakes; set C using cross validation. But how should we penalize mistakes? All mistakes are not equally bad!
34 Introduce slack variables ξ_i: min_{w,b,ξ_i≥0} ½‖w‖² + C Σ_{i=1}^n ξ_i  s.t. ∀i, y_i (w·x_i + b) ≥ 1 − ξ_i. If point x_i is on the wrong side of the margin, we pay the penalty ξ_i. For each data point: if margin ≥ 1, don't care; if margin < 1, pay a linear penalty.
35 min_w ½‖w‖² + C·(number of mistakes)  s.t. ∀i, y_i (w·x_i + b) ≥ 1. What is the role of the slack penalty C? C = ∞: we only want w, b that separate the data. C = 0: we can set ξ_i to anything, so w = 0 (basically ignores the data). (Figure: the separators obtained with a small C, a "good" C, and a big C.)
36 SVM in the "natural" form: arg min_{w,b} ½ w·w + C Σ_{i=1}^n max{0, 1 − y_i (w·x_i + b)}. The first term is the regularization penalty; the second is the empirical loss L (how well we fit the training data), weighted by the regularization parameter C. SVM uses the "hinge loss" max{0, 1 − z} with z = y_i (w·x_i + b), a convex upper bound on the 0/1 loss (margin = 1). Equivalently: min_{w,b} ½‖w‖² + C Σ_{i=1}^n ξ_i  s.t. ∀i, y_i (w·x_i + b) ≥ 1 − ξ_i.
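The unconstrained "natural form" above is easy to evaluate directly; this sketch (name and toy data illustrative) computes J(w, b) exactly as written on the slide.

```python
def svm_objective(w, b, X, Y, C=1.0):
    """J(w, b) = 1/2 ||w||^2 + C * sum_i max(0, 1 - y_i * (w.x_i + b))."""
    reg = 0.5 * sum(wj * wj for wj in w)
    loss = sum(max(0.0, 1.0 - y * (sum(wj * xj for wj, xj in zip(w, x)) + b))
               for x, y in zip(X, Y))
    return reg + C * loss
```

For a separating (w, b) with all margins at least 1, the hinge terms vanish and only the regularizer ½‖w‖² remains.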
38 min_{w,b} ½ w·w + C Σ_{i=1}^n ξ_i  s.t. ∀i, y_i (w·x_i + b) ≥ 1 − ξ_i. We want to estimate w and b! Standard way: use a solver! Solver: software for finding solutions to "common" optimization problems. Use a quadratic solver: minimize a quadratic function subject to linear constraints. Problem: solvers are inefficient for big data!
39 Want to estimate w, b! Alternative approach: minimize J(w, b) directly:
J(w, b) = ½ Σ_{j=1}^d (w^(j))² + C Σ_{i=1}^n max{0, 1 − y_i (Σ_{j=1}^d w^(j) x_i^(j) + b)}.
Side note: how do we minimize a convex function g(z)? Use gradient descent: min_z g(z); iterate z_{t+1} ← z_t − η ∇g(z_t).
40 Want to minimize J(w, b). Compute the gradient ∇J^(j) w.r.t. w^(j):
∇J^(j) = ∂J(w, b)/∂w^(j) = w^(j) + C Σ_{i=1}^n ∂L(x_i, y_i)/∂w^(j),
where the empirical-loss term is
∂L(x_i, y_i)/∂w^(j) = 0 if y_i (w·x_i + b) ≥ 1, and −y_i x_i^(j) otherwise.
41 Gradient descent: iterate until convergence: for j = 1…d, evaluate ∇J^(j) = ∂J(w, b)/∂w^(j) = w^(j) + C Σ_{i=1}^n ∂L(x_i, y_i)/∂w^(j), then update w^(j) ← w^(j) − η ∇J^(j). (n … size of the training dataset; η … learning rate parameter; C … regularization parameter.) Problem: computing ∇J^(j) takes O(n) time!
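The full-batch gradient on this slide can be sketched as follows (function name and toy data illustrative); each call touches all n examples, which is exactly the O(n) cost the slide points out.

```python
def svm_gradient(w, b, X, Y, C=1.0):
    """Batch (sub)gradient: grad_j = w_j + C * sum_i dL(x_i, y_i)/dw_j,
    where dL/dw_j = -y_i * x_i^(j) when y_i*(w.x_i + b) < 1, else 0."""
    d = len(w)
    grad = list(w)                          # regularizer part: w_j
    for x, y in zip(X, Y):
        if y * (sum(wj * xj for wj, xj in zip(w, x)) + b) < 1:
            for j in range(d):              # hinge is active for this example
                grad[j] -= C * y * x[j]
    return grad
```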
42 Stochastic Gradient Descent: instead of evaluating the gradient over all examples, evaluate it for each individual training example: ∇J^(j)(x_i) = w^(j) + C ∂L(x_i, y_i)/∂w^(j). We just had ∇J^(j) = w^(j) + C Σ_{i=1}^n ∂L(x_i, y_i)/∂w^(j); notice there is no summation over i anymore. Stochastic gradient descent: iterate until convergence: for i = 1…n, for j = 1…d: compute ∇J^(j)(x_i) and update w^(j) ← w^(j) − η ∇J^(j)(x_i).
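The SGD loop above can be sketched end to end. This is an illustrative sketch, not the lecture's code: η, C, the epoch count, and the bias handling (no regularization on b) are assumptions.

```python
def sgd_svm(X, Y, C=1.0, eta=0.01, epochs=200):
    """Per-example SGD on J(w, b) using the hinge-loss subgradient."""
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for x, y in zip(X, Y):
            margin = y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
            if margin < 1:   # hinge active: grad_j = w_j - C * y * x_j
                w = [wj - eta * (wj - C * y * xj) for wj, xj in zip(w, x)]
                b = b + eta * C * y
            else:            # only the regularizer contributes: grad_j = w_j
                w = [wj - eta * wj for wj in w]
    return w, b
```

In practice the examples would also be shuffled between epochs; this sketch keeps the slide's fixed i = 1…n order.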
44 Example by Leon Bottou: Reuters RCV1 document corpus. Predict the category of a document; one vs. the rest classification. n = 781,000 training examples (documents), 23,000 test examples, d = 50,000 features: one feature per word, with stop-words and low-frequency words removed.
45 Questions: (1) Is SGD successful at minimizing J(w, b)? (2) How quickly does SGD find the minimum of J(w, b)? (3) What is the error on a test set? (Table on the slide: a standard SVM, a "fast" SVM, and SGD-SVM compared on training time, value of J(w, b), and test error.) Findings: (1) SGD-SVM is successful at minimizing the value of J(w, b); (2) SGD-SVM is super fast; (3) the SGD-SVM test-set error is comparable.
46 SGD-SVM vs. a conventional SVM: optimization quality is measured as J(w, b) − J(w_opt, b_opt). For optimizing J(w, b) to within reasonable quality, SGD-SVM is super fast.
47 SGD on the full dataset vs. Conjugate Gradient (CG) on a sample of n training examples. Theory says: gradient descent converges in time linear in k, while conjugate gradient converges in √k (k … condition number). Bottom line: doing a simple (but fast) SGD update many times is better than doing a complicated (but slow) CG update a few times.
48 Sparse linear SVM: the feature vector x_i is sparse (contains many zeros). Do not store x_i = [0,0,0,1,0,0,0,0,5,0,0,0,0,0,0,…]; instead represent x_i as a sparse vector x_i = [(4,1), (9,5), …]. Can we do the SGD update w ← w − η(C ∂L(x_i, y_i)/∂w + w) more efficiently? Approximate it in 2 steps: (1) w ← w − ηC ∂L(x_i, y_i)/∂w (cheap: x_i is sparse, so only a few coordinates j of w will be updated); (2) w ← w(1 − η) (expensive: w is not sparse, so all coordinates need to be updated).
49 Solution 1: w = s·v. Represent the vector w as the product of a scalar s and a vector v. Then the two-step update procedure becomes: (1) v ← v − ηC ∂L(x_i, y_i)/∂w; (2) s ← s(1 − η). Solution 2: perform only step (1) for each training example, and perform step (2) with lower frequency and higher η.
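"Solution 1" can be sketched as a small wrapper class. This is an illustrative sketch, not from the slides: the class name is made up, the gradient step assumes the hinge is active (∂L/∂w = −y·x_i), and the division by the current scale s is a detail the slide leaves implicit (it keeps s·v equal to the intended dense update).

```python
# w = s * v: the shrink step w <- w*(1 - eta) becomes an O(1) scalar update,
# while the sparse gradient step touches only the nonzero coordinates of x_i.
class ScaledVector:
    def __init__(self, d):
        self.s = 1.0
        self.v = [0.0] * d

    def sparse_grad_step(self, sparse_x, y, eta, C):
        # step (1): w <- w - eta*C*dL/dw = w + eta*C*y*x (hinge assumed active);
        # only the listed (index, value) coordinates of x are touched.
        for j, xj in sparse_x:
            self.v[j] += eta * C * y * xj / self.s

    def shrink(self, eta):
        # step (2): w <- w * (1 - eta), done in O(1) on the scalar s.
        self.s *= 1.0 - eta

    def dense(self):
        return [self.s * vj for vj in self.v]
```

With d ≈ 50,000 features and a handful of nonzeros per document, this turns an O(d) update into one proportional to the number of nonzeros.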
50 Stopping criteria: how many iterations of SGD? Early stopping with cross validation: create a validation set, monitor the cost function on the validation set, and stop when the loss stops decreasing. Early stopping: extract two (very) small subsets of training data, A and B; train on A, stopping by validating on B; the number of training epochs on A is an estimate of k; then train for k epochs on the full dataset.
51 Idea 1: one against all. Learn 3 classifiers: + vs. {o, −}; − vs. {o, +}; o vs. {+, −}. Obtain: w_+ b_+, w_− b_−, w_o b_o. How to classify? Return the class c = arg max_c (w_c·x + b_c).
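The one-against-all prediction rule above is a single argmax over per-class scores; this sketch assumes the per-class (w_c, b_c) have already been trained, and the names and example classifiers are illustrative.

```python
def predict_one_vs_all(classifiers, x):
    """classifiers: dict mapping class -> (w, b).
    Returns arg max_c (w_c . x + b_c)."""
    def score(wb):
        w, b = wb
        return sum(wj * xj for wj, xj in zip(w, x)) + b
    return max(classifiers, key=lambda c: score(classifiers[c]))
```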
52 Idea 2: learn the 3 sets of weights simultaneously! For each class c, estimate w_c, b_c. Want the correct class y_i to have the highest margin: w_{y_i}·x_i + b_{y_i} ≥ 1 + w_c·x_i + b_c  ∀c ≠ y_i, ∀(x_i, y_i).
53 Optimization problem: min_{w,b} ½ Σ_c ‖w_c‖² + C Σ_{i=1}^n ξ_i  s.t. w_{y_i}·x_i + b_{y_i} ≥ w_c·x_i + b_c + 1 − ξ_i  ∀c ≠ y_i, with ξ_i ≥ 0 ∀i. To obtain the parameters w_c, b_c (for each class c) we can use similar techniques as for the 2-class SVM. SVM is widely perceived as a very powerful learning algorithm.
55 New setting: online learning. It allows for modeling problems where we have a continuous stream of data, and we want an algorithm to learn from it and slowly adapt to the changes in the data. Idea: do slow updates to the model. SGD-SVM makes updates only when it misclassifies a data point. So: first train the classifier on training data; then, for every example from the stream, if we misclassify it, update the model (using a small learning rate).
56 Protocol: a user comes and tells us the origin and destination; we offer to ship the package for some amount of money ($10–$50); based on the price we offer, sometimes the user uses our service (y = 1) and sometimes they don't (y = −1). Task: build an algorithm to optimize what price we offer to the users. The features x capture information about the user and the origin and destination. Problem: will the user accept the price?
57 Model whether the user will accept our price: y = f(x; w), with accept: y = +1 and not accept: y = −1. Build this model with, say, a Perceptron or an SVM. The website runs continuously, so an online learning algorithm would do something like this: a user comes and is represented as an (x, y) pair, where x is the feature vector (including the price we offer, origin, destination) and y records whether they chose to use our service or not. The algorithm updates w using just that (x, y) pair. Basically, we update the parameters w every time we get some new data.
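The per-example update described above can be sketched as a single function applied to each streamed (x, y) pair. This is an illustrative sketch: the function name, the small learning rate, and the choice to update whenever the example falls inside the margin (the SGD-SVM hinge condition) are assumptions.

```python
def online_update(w, b, x, y, eta=0.001, C=1.0):
    """One online SGD-SVM step for a streamed (x, y) pair."""
    margin = y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
    if margin < 1:  # misclassified or within the margin: take a small step
        w = [wj - eta * (wj - C * y * xj) for wj, xj in zip(w, x)]
        b += eta * C * y
    return w, b
```

Called once per arriving user, this lets the model drift slowly as the stream (and user behavior) changes.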
58 We discard the idea of a fixed data "set"; instead we have a continuous stream of data. Further comments: for a major website with a massive stream of data, this kind of algorithm is pretty reasonable, since we don't need to deal with all the training data at once. If you had a small number of users, you could save their data and then run a normal algorithm on the full dataset, doing multiple passes over the data.
59 An online algorithm can adapt to changing user preferences. For example, over time users may become more price sensitive; the algorithm adapts and learns this, so the system is dynamic.
More informationAn Anti-Noise Text Categorization Method based on Support Vector Machines *
An Ant-Nose Text ategorzaton Method based on Support Vector Machnes * hen Ln, Huang Je and Gong Zheng-Hu School of omputer Scence, Natonal Unversty of Defense Technology, hangsha, 410073, hna chenln@nudt.edu.cn,
More informationGreedy Technique - Definition
Greedy Technque Greedy Technque - Defnton The greedy method s a general algorthm desgn paradgm, bult on the follong elements: confguratons: dfferent choces, collectons, or values to fnd objectve functon:
More informationCost-efficient deployment of distributed software services
1/30 Cost-effcent deployment of dstrbuted software servces csorba@tem.ntnu.no 2/30 Short ntroducton & contents Cost-effcent deployment of dstrbuted software servces Cost functons Bo-nspred decentralzed
More informationSI485i : NLP. Set 5 Using Naïve Bayes
SI485 : NL Set 5 Usng Naïve Baes Motvaton We want to predct somethng. We have some text related to ths somethng. somethng = target label text = text features Gven, what s the most probable? Motvaton: Author
More informationSimplification of 3D Meshes
Smplfcaton of 3D Meshes Addy Ngan /4/00 Outlne Motvaton Taxonomy of smplfcaton methods Hoppe et al, Mesh optmzaton Hoppe, Progressve meshes Smplfcaton of 3D Meshes 1 Motvaton Hgh detaled meshes becomng
More informationOutline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like:
Self-Organzng Maps (SOM) Turgay İBRİKÇİ, PhD. Outlne Introducton Structures of SOM SOM Archtecture Neghborhoods SOM Algorthm Examples Summary 1 2 Unsupervsed Hebban Learnng US Hebban Learnng, Cntd 3 A
More informationFixing Max-Product: Convergent Message Passing Algorithms for MAP LP-Relaxations
Fxng Max-Product: Convergent Message Passng Algorthms for MAP LP-Relaxatons Amr Globerson Tomm Jaakkola Computer Scence and Artfcal Intellgence Laboratory Massachusetts Insttute of Technology Cambrdge,
More informationSpecialized Weighted Majority Statistical Techniques in Robotics (Fall 2009)
Statstcal Technques n Robotcs (Fall 09) Keywords: classfer ensemblng, onlne learnng, expert combnaton, machne learnng Javer Hernandez Alberto Rodrguez Tomas Smon javerhe@andrew.cmu.edu albertor@andrew.cmu.edu
More informationSUMMARY... I TABLE OF CONTENTS...II INTRODUCTION...
Summary A follow-the-leader robot system s mplemented usng Dscrete-Event Supervsory Control methods. The system conssts of three robots, a leader and two followers. The dea s to get the two followers to
More informationFast Feature Value Searching for Face Detection
Vol., No. 2 Computer and Informaton Scence Fast Feature Value Searchng for Face Detecton Yunyang Yan Department of Computer Engneerng Huayn Insttute of Technology Hua an 22300, Chna E-mal: areyyyke@63.com
More informationProblem Definitions and Evaluation Criteria for Computational Expensive Optimization
Problem efntons and Evaluaton Crtera for Computatonal Expensve Optmzaton B. Lu 1, Q. Chen and Q. Zhang 3, J. J. Lang 4, P. N. Suganthan, B. Y. Qu 6 1 epartment of Computng, Glyndwr Unversty, UK Faclty
More informationIntelligent Information Acquisition for Improved Clustering
Intellgent Informaton Acquston for Improved Clusterng Duy Vu Unversty of Texas at Austn duyvu@cs.utexas.edu Mkhal Blenko Mcrosoft Research mblenko@mcrosoft.com Prem Melvlle IBM T.J. Watson Research Center
More informationNetwork Intrusion Detection Based on PSO-SVM
TELKOMNIKA Indonesan Journal of Electrcal Engneerng Vol.1, No., February 014, pp. 150 ~ 1508 DOI: http://dx.do.org/10.11591/telkomnka.v1.386 150 Network Intruson Detecton Based on PSO-SVM Changsheng Xang*
More informationMathematics 256 a course in differential equations for engineering students
Mathematcs 56 a course n dfferental equatons for engneerng students Chapter 5. More effcent methods of numercal soluton Euler s method s qute neffcent. Because the error s essentally proportonal to the
More informationPerformance Evaluation of Information Retrieval Systems
Why System Evaluaton? Performance Evaluaton of Informaton Retreval Systems Many sldes n ths secton are adapted from Prof. Joydeep Ghosh (UT ECE) who n turn adapted them from Prof. Dk Lee (Unv. of Scence
More informationCollaboratively Regularized Nearest Points for Set Based Recognition
Academc Center for Computng and Meda Studes, Kyoto Unversty Collaboratvely Regularzed Nearest Ponts for Set Based Recognton Yang Wu, Mchhko Mnoh, Masayuk Mukunok Kyoto Unversty 9/1/013 BMVC 013 @ Brstol,
More informationCollaborative Filtering Ensemble for Ranking
Collaboratve Flterng Ensemble for Rankng ABSTRACT Mchael Jahrer commo research & consultng 8580 Köflach, Austra mchael.ahrer@commo.at Ths paper provdes the soluton of the team commo on the Track2 dataset
More informationSorting: The Big Picture. The steps of QuickSort. QuickSort Example. QuickSort Example. QuickSort Example. Recursive Quicksort
Sortng: The Bg Pcture Gven n comparable elements n an array, sort them n an ncreasng (or decreasng) order. Smple algorthms: O(n ) Inserton sort Selecton sort Bubble sort Shell sort Fancer algorthms: O(n
More informationWhat s Next for POS Tagging. Statistical NLP Spring Feature Templates. Maxent Taggers. HMM Trellis. Decoding. Lecture 8: Word Classes
Statstcal NLP Sprng 2008 Lecture 8: Word Classes Dan Klen UC Berkeley What s Next for POS Taggng Better features! RB PRP VBD IN RB IN PRP VBD. They left as soon as he arrved. We could fx ths wth a feature
More informationSequential search. Building Java Programs Chapter 13. Sequential search. Sequential search
Sequental search Buldng Java Programs Chapter 13 Searchng and Sortng sequental search: Locates a target value n an array/lst by examnng each element from start to fnsh. How many elements wll t need to
More informationLoop Transformations, Dependences, and Parallelization
Loop Transformatons, Dependences, and Parallelzaton Announcements Mdterm s Frday from 3-4:15 n ths room Today Semester long project Data dependence recap Parallelsm and storage tradeoff Scalar expanson
More informationCS246: Mining Massive Datasets Jure Leskovec, Stanford University
CS246: Mining Massive Datasets Jure Leskovec, Stanford University http://cs246.stanford.edu [Kumar et al. 99] 2/13/2013 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu
More informationAPPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT
3. - 5. 5., Brno, Czech Republc, EU APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT Abstract Josef TOŠENOVSKÝ ) Lenka MONSPORTOVÁ ) Flp TOŠENOVSKÝ
More informationJapanese Dependency Analysis Based on Improved SVM and KNN
Proceedngs of the 7th WSEAS Internatonal Conference on Smulaton, Modellng and Optmzaton, Bejng, Chna, September 15-17, 2007 140 Japanese Dependency Analyss Based on Improved SVM and KNN ZHOU HUIWEI and
More informationPolyhedral Compilation Foundations
Polyhedral Complaton Foundatons Lous-Noël Pouchet pouchet@cse.oho-state.edu Dept. of Computer Scence and Engneerng, the Oho State Unversty Feb 8, 200 888., Class # Introducton: Polyhedral Complaton Foundatons
More informationThree supervised learning methods on pen digits character recognition dataset
Three supervsed learnng methods on pen dgts character recognton dataset Chrs Flezach Department of Computer Scence and Engneerng Unversty of Calforna, San Dego San Dego, CA 92093 cflezac@cs.ucsd.edu Satoru
More informationA novel feature selection algorithm based on hypothesis-margin
JOURNAL OF COMPUTERS, VOL. 3, NO. 1, DECEMBER 008 7 A novel feature selecton algorthm based on hypothess-margn Mng Yang* Fe Wang and Png Yang Department of Computer Scence, Nanjng Normal Unversty, Nanjng,
More informationActive Contours/Snakes
Actve Contours/Snakes Erkut Erdem Acknowledgement: The sldes are adapted from the sldes prepared by K. Grauman of Unversty of Texas at Austn Fttng: Edges vs. boundares Edges useful sgnal to ndcate occludng
More informationA Selective Sampling Method for Imbalanced Data Learning on Support Vector Machines
Iowa State Unversty Dgtal Repostory @ Iowa State Unversty Graduate Theses and Dssertatons Graduate College 2010 A Selectve Samplng Method for Imbalanced Data Learnng on Support Vector Machnes Jong Myong
More informationComputer Vision. Pa0ern Recogni4on Concepts Part II. Luis F. Teixeira MAP- i 2012/13
Computer Vson Pa0ern Recogn4on Concepts Part II Lus F. Texera MAP- 2012/13 Last lecture The Bayes classfer yelds the op#mal decson rule f the pror and class- cond4onal dstrbu4ons are known. Ths s unlkely
More informationIEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 1. SSDH: Semi-supervised Deep Hashing for Large Scale Image Retrieval
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY SSDH: Sem-supervsed Deep Hashng for Large Scale Image Retreval Jan Zhang, and Yuxn Peng arxv:607.08477v2 [cs.cv] 8 Jun 207 Abstract Hashng
More informationRelevance Feedback Document Retrieval using Non-Relevant Documents
Relevance Feedback Document Retreval usng Non-Relevant Documents TAKASHI ONODA, HIROSHI MURATA and SEIJI YAMADA Ths paper reports a new document retreval method usng non-relevant documents. From a large
More informationEYE CENTER LOCALIZATION ON A FACIAL IMAGE BASED ON MULTI-BLOCK LOCAL BINARY PATTERNS
P.G. Demdov Yaroslavl State Unversty Anatoly Ntn, Vladmr Khryashchev, Olga Stepanova, Igor Kostern EYE CENTER LOCALIZATION ON A FACIAL IMAGE BASED ON MULTI-BLOCK LOCAL BINARY PATTERNS Yaroslavl, 2015 Eye
More informationA MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS
Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung
More informationSimulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010
Smulaton: Solvng Dynamc Models ABE 5646 Week Chapter 2, Sprng 200 Week Descrpton Readng Materal Mar 5- Mar 9 Evaluatng [Crop] Models Comparng a model wth data - Graphcal, errors - Measures of agreement
More informationCLASSIFICATION OF ULTRASONIC SIGNALS
The 8 th Internatonal Conference of the Slovenan Socety for Non-Destructve Testng»Applcaton of Contemporary Non-Destructve Testng n Engneerng«September -3, 5, Portorož, Slovena, pp. 7-33 CLASSIFICATION
More informationConcurrent Apriori Data Mining Algorithms
Concurrent Apror Data Mnng Algorthms Vassl Halatchev Department of Electrcal Engneerng and Computer Scence York Unversty, Toronto October 8, 2015 Outlne Why t s mportant Introducton to Assocaton Rule Mnng
More informationMULTI-VIEW ANCHOR GRAPH HASHING
MULTI-VIEW ANCHOR GRAPH HASHING Saehoon Km 1 and Seungjn Cho 1,2 1 Department of Computer Scence and Engneerng, POSTECH, Korea 2 Dvson of IT Convergence Engneerng, POSTECH, Korea {kshkawa, seungjn}@postech.ac.kr
More informationA Robust LS-SVM Regression
PROCEEDIGS OF WORLD ACADEMY OF SCIECE, EGIEERIG AD ECHOLOGY VOLUME 7 AUGUS 5 ISS 37- A Robust LS-SVM Regresson József Valyon, and Gábor Horváth Abstract In comparson to the orgnal SVM, whch nvolves a quadratc
More informationRadial Basis Functions
Radal Bass Functons Mesh Reconstructon Input: pont cloud Output: water-tght manfold mesh Explct Connectvty estmaton Implct Sgned dstance functon estmaton Image from: Reconstructon and Representaton of
More information