Why Neural Networks? An Enduring Synthesis. Neural Networks. After the 1940s: Hebb, McCulloch and Pitts


Neural Networks — Donald H. Cooley

An Enduring Synthesis
An enduring synthesis for how the brain works will enable us to explain how we rapidly and spontaneously adapt to noisy and complex environments whose rules may change. UNPREDICTABILITY: how do we cope with the "blooming, buzzing confusion" of every day? (Wm. James) The central question of biological intelligence: how do we achieve autonomous behavior in a changing and unpredictable world? What faculties are needed if an automaton is to achieve biological intelligence? Perception: see, hear, smell, touch. Cognition: recognize, recall, plan, hypothesize, test. Action: exploratory and goal-oriented movements. Cognitive/emotional: recognize an object... so what?

Why Neural Networks?
Ever since the 1940s it has been known that the functional units that govern actions are distributed patterns (electrical and chemical) across a network of cells, or neurons. Hence: to simulate human behavior, simulate neural networks.

Why are biological NNs difficult to study?
Animals are designed to hide their neural mechanisms from behavioral introspection. We want to analyze at the micro-level, whereas we can only observe at the macro-level.

After the 1940s: Hebb, McCulloch and Pitts
Hebb: a mechanism for learning in biological neurons. McCulloch and Pitts: neural-like networks can compute any arithmetic function.

McCulloch-Pitts Neuron
The McCulloch-Pitts neuron was developed in 1943. Binary inputs, binary outputs, and a threshold transfer function. Weights are either excitatory (+) or inhibitory (−). All excitatory connections to a particular neuron have the same weight; however, the values coming into one unit Y1 do not have to be the same as those coming into another unit Y2.

Each neuron has a fixed threshold θ such that if the net input is ≥ θ the neuron fires (outputs a 1). θ is set so that inhibition is absolute, i.e. any nonzero inhibitory input will prevent the neuron from firing. It takes one time step for a signal to pass over one connection link.

[Figure: a unit Y with excitatory inputs X1..Xn of weight w and inhibitory inputs Xn+1..Xn+m of weight −p, and threshold θ.] Inhibition is absolute, hence θ > nw − p, i.e. one active inhibitor shuts the unit off. Y fires (outputs a 1) if it receives k or more excitatory inputs and no inhibitory inputs, where
\( kw \ge \theta > (k-1)w \)

[Figure: two two-input units X1, X2 → Y with threshold θ = 2 and differing weights.] What logic functions are implemented by these networks?

[Figure: a two-layer network X1, X2 → Z1, Z2 → Y with mixed excitatory/inhibitory weights and threshold θ = 2.] Although simple, an MP neuron can implement any Boolean function — logical completeness (AND, OR, NOT). What logic function is implemented by this network?
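As a quick illustration (mine, not from the slides), a minimal MATLAB sketch of a McCulloch-Pitts unit with absolute inhibition; the input, weight, and threshold values are chosen so that the unit computes AND:

    % McCulloch-Pitts unit: binary inputs, common excitatory weight w,
    % absolute inhibition, fixed threshold theta (illustrative values).
    x_exc = [1 1];      % two excitatory binary inputs
    x_inh = 0;          % one inhibitory binary input
    w = 1;  theta = 2;  % with these values the unit computes AND(x1, x2)
    if any(x_inh)
        y = 0;          % any active inhibitory input blocks firing
    else
        y = double(w * sum(x_exc) >= theta);  % fire iff net input >= theta
    end

With x_exc = [1 1] the unit fires (y = 1); with [1 0], or with the inhibitory input set to 1, it does not.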

After the MP Neuron — Learning/Training/Teaching
The MP neuron seemed to have functionality, but there was nothing about how to update or automatically generate the weights. In 1949 Donald Hebb proposed a scheme for updating a neuron's weights. He stated that information (memories) could be stored in the connections (synaptic weights), and proposed a learning scheme in which synaptic weights change during learning.

Hebb's Rule: "When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." Is there biological support for Hebbian learning? Pavlov's dog.

Perceptrons & Linear Separability

Applications
Aerospace: high-performance aircraft autopilots, flight path simulations, aircraft control systems, autopilot enhancements, aircraft component simulations, aircraft component fault detectors. Automotive: automobile automatic guidance systems, warranty activity analyzers. Banking: check and other document readers, credit application evaluators.

Defense: weapon steering, target tracking, object discrimination, facial recognition, new kinds of sensors, sonar, radar and image signal processing including data compression, feature extraction and noise suppression, signal/image identification. Electronics: code sequence prediction, integrated circuit chip layout, process control, chip failure analysis, machine vision, voice synthesis, nonlinear modeling.

Financial: real estate appraisal, loan advisor, mortgage screening, corporate bond rating, credit line use analysis, portfolio trading programs, corporate financial analysis, currency price prediction. Manufacturing: manufacturing process control, product design and analysis, process and machine diagnosis, real-time particle identification, visual quality inspection systems, beer testing, welding quality analysis, paper quality prediction, computer chip quality analysis, analysis of grinding operations, chemical product design analysis, machine maintenance analysis, project bidding, planning and management, dynamic modeling of chemical process systems.

Applications (continued)
Medical: breast cancer cell analysis, EEG and ECG analysis, prosthesis design, optimization of transplant times, hospital expense reduction, hospital quality improvement, emergency room test advisement. Robotics: trajectory control, forklift robots, manipulator controllers, vision systems. Speech: speech recognition, speech compression, vowel classification, text-to-speech synthesis. Securities: market analysis, automatic bond rating, stock trading advisory systems. Telecommunications: image and data compression, automated information services, real-time translation of spoken language, customer payment processing systems. Transportation: truck brake diagnosis systems, vehicle scheduling, routing systems, vehicle recognition and counting.

The Brain as a Neural Network
The brain has on the order of 10^11 neurons. Neurons respond slowly (milliseconds). There are many different types of neurons. Neurons communicate with other neurons through synapses; each neuron may be connected to as many as 10,000 other neurons, and axons connect neurons to one another. [Figure: a biological neuron — nucleus, dendrites, axon, synapses (synaptic junctions).] Note: response time is on the order of milliseconds.

Neural Network Organization
Neurons are organized into local circuits; all of the neurons in a circuit/network need not be of the same type. Neural networks perform specific functions, e.g. vision.

Terminology
Neuron: artificial or biological. Neural network names: NN, ANN, parallel distributed processing (PDP) model, connectivist/connectionism model, adaptive system, self-organizing system, neurocomputing, neuromorphic system.

An Artificial Neuron
Inputs x1, ..., xn with weights w1, ..., wn feed a unit whose output is
\( y = f\Big(\sum_{i=1}^{n} w_i x_i\Big) \), for example the logistic \( f(v) = \frac{1}{1 + e^{-v}} \), giving \( y = \frac{1}{1 + \exp\!\big(-\sum_{i=1}^{n} w_i x_i\big)} \)
f is sometimes called the neuron transfer function; there are many such functions (a runnable sketch follows this page).

NN Learning
The most important feature of NNs is their ability to learn. Learning implies that they are able to improve their performance. The process of learning involves adapting or modifying the network's free parameters to improve performance.

NN Learning — Free Parameters
What are a NN's free parameters? Weights and biases.

Basic Learning Rules
There are 5 basic learning rules: error-correction (optimization), memory-based (memorization), Hebbian (biologically inspired), competitive (biologically inspired), Boltzmann (statistical).

Error-Correction Learning
A particular weight is adjusted according to the error of that neuron's output:
\( \xi(n) = \tfrac{1}{2} e_k^2(n) \) — the value of the error for training sample n
\( e_k(n) = d_k(n) - y_k(n) \) — the error signal, where \( d_k(n) \) is the desired output and \( y_k(n) \) the actual output.
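A runnable MATLAB sketch of the artificial neuron above with a logistic transfer function; the weights and inputs are made-up values:

    x = [0.5; -1.0; 2.0];      % inputs x_1..x_n
    w = [0.8;  0.3; -0.5];     % synaptic weights w_1..w_n
    v = w' * x;                % net input: sum_i w_i * x_i
    y = 1 / (1 + exp(-v));     % logistic transfer function f(v)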

Error-Correction Learning
The goal is to minimize ξ, the cost or error function. The learning rule is commonly referred to as the delta rule or the Widrow-Hoff learning rule:
\( \Delta w_{kj}(n) = \eta\, e_k(n)\, x_j(n) \)
where η is the learning rate. The weight adjustment is proportional to the product of the error signal and the input signal of the synapse in question; η is a positive constant that controls the rate of learning.

Delta Rule
The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input of the synapse in question. It assumes that the error is directly measurable; x is a presynaptic value and v is a postsynaptic value.

Error-Correction Learning
\( w_{kj}(n+1) = w_{kj}(n) + \Delta w_{kj}(n) \)
The preceding represents a closed-loop feedback system (CLFS). The stability of a CLFS is determined by what is fed back; the most important element of the feedback is the learning rate η.

Learning Rate
Error-correction learning occurs in a closed-loop feedback system, and stability is determined by the values of the parameters in the feedback loop(s). Here there is only one feedback loop with only one adjustable parameter, namely η; thus η is very important to how the network behaves as it learns.
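One delta-rule update, written out exactly as in the formulas above (illustrative numbers; a linear neuron is assumed so that the error is directly measurable):

    eta = 0.1;             % learning rate
    x = [1.0; 0.5];        % inputs x_j(n) to neuron k
    w = [0.2; -0.4];       % current weights w_kj(n)
    d = 1.0;               % desired output d_k(n)
    y = w' * x;            % actual output y_k(n) (linear neuron)
    e = d - y;             % error signal e_k(n)
    w = w + eta * e * x;   % w_kj(n+1) = w_kj(n) + eta*e_k(n)*x_j(n)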

Learning Tasks
Pattern association (associative memory), pattern recognition (classification), function approximation, control, filtering, optimization. In any optimization problem, the basic idea is to find a (cost/error/etc.) function to optimize, and then iteratively change the parameters of the cost function in such a way as to keep improving it until it can no longer be improved.

Optimization
The four most important questions are: What do we optimize? How (in what direction) do we change the parameter(s) to optimize? How much do we change the parameters? How do we know when we are done?

\( x_{k+1} = x_k + \Delta x_k = x_k + \alpha_k p_k \)
where \( p_k \) is the search direction and \( \alpha_k \) is the learning rate.

What to Optimize
Assume a single linear neuron with weights w. For each input x(i) there is an associated output y(i); there is also a desired output d(i), and thus the error is e(i) = d(i) − y(i).

[Figure: a gradient-descent convergence trajectory — the overdamped case.]

[Figures: further trajectories — the underdamped case, and several cases where η is too large.]

Can you give an explanation as to why the system would go unstable as shown in the preceding picture? Note that movement is perpendicular to the gradient line; the gradient lines are lines of constant slope. If the distance moved places the point at a point of greater slope, then the next move will do the same, and so on.

Least Mean Square (LMS) Algorithm
The LMS algorithm is based on instantaneous error values. What this means is that we don't look at the error over a sequence of inputs, but only at the current input. By looking at instantaneous error values, we can only get an estimate of the error gradient.

What the theory behind the LMS algorithm shows is that if we choose a small enough learning rate, then over time we will find the w that gives the minimum LMS error. Like linear least-squares, we are using a linear neuron.

Least Mean Square (LMS)
Because the update is only an estimate of the gradient, unlike the steepest-descent algorithm the LMS algorithm follows an unpredictable path to the minimum. Sometimes this is called stochastic gradient descent.

The LMS algorithm works as follows (a runnable sketch follows this page):
1. Set w(0) = 0.
2. Choose a value for η.
3. For each input x(n) with desired output d(n), compute
\( e(n) = d(n) - w^T(n)\,x(n), \qquad w(n+1) = w(n) + \eta\, x(n)\, e(n) \)

LMS Example [worked example/figure on the slide].

LMS Convergence Considerations
What changes the weight at each iteration is the learning rate η and the input vector. Stability of the LMS algorithm is a function of the statistical characteristics of the input and the size of the learning rate. Stated another way, we have to select η according to the environment in which the x's are given.

Perceptron
LMS is built on a linear neuron; the perceptron is built on the McCulloch-Pitts neuron, with ±1 for output. If we consider a single neuron as a classifier, then we have the following definitions:
\( v = \sum_{i=1}^{m} w_i x_i + b, \qquad y = \begin{cases} +1 & v > 0 \\ -1 & v \le 0 \end{cases} \)
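A minimal LMS loop following the steps above; the data stream here is synthetic (generated from hypothetical "true" weights) just to make the sketch self-contained:

    w_true = [1.5; -0.7];      % hypothetical weights generating d(n)
    w = zeros(2, 1);           % step 1: w(0) = 0
    eta = 0.05;                % step 2: a small learning rate
    for n = 1:200              % step 3: one update per sample
        x = randn(2, 1);       % current input x(n)
        d = w_true' * x;       % desired output d(n)
        e = d - w' * x;        % e(n) = d(n) - w'(n)*x(n)
        w = w + eta * x * e;   % w(n+1) = w(n) + eta*x(n)*e(n)
    end                        % w converges (noisily) toward w_true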

Perceptron
If we combine the bias with the weight vector w and include an additional input of 1 in the x vector (x0 = 1), then we have
\( v(n) = \sum_{i=0}^{m} w_i(n)\, x_i(n) = w^T(n)\, x(n) \)

What is the relationship between the hyperplane (decision surface) and the weight vector? The hyperplane defines a line (surface) for which v = 0; the hyperplane is perpendicular to the weight vector; and the bias defines the distance of the hyperplane from the origin.

Is a bias value important to a perceptron? Without the bias value, the hyperplane can only pass through the origin.

Example
Let's say we have a group of linearly separable values that we want a perceptron to recognize. Say the input pairs are the four Boolean points, e.g. (0,0) = class 0, (0,1) = class 0, (1,0) = class 0, (1,1) = class 1.

Thus we want w and b such that F(w·x + b) = 1 for class 1 and 0 for class 0. We want a decision surface defined as
\( [\,w_1\ \ w_2\ \ b\,] \begin{bmatrix} x_1 \\ x_2 \\ 1 \end{bmatrix} = w_1 x_1 + w_2 x_2 + b = 0 \)
There are an infinite number of equation coefficients that will satisfy this relation. Draw a line between the two regions, then choose w's for coefficients that are perpendicular to this line; finally, solve for b. One solution is \( [\,2\ \ 2\ \ -3\,] \).

Perceptron Learning Rule
\( x(n) \) = (m+1)-by-1 input vector \( = [1, x_1(n), \ldots, x_m(n)]^T \)
\( w(n) \) = (m+1)-by-1 weight vector \( = [b(n), w_1(n), \ldots, w_m(n)]^T \)
b(n) = bias; y(n) = output (quantized to ±1) = sgn(wᵀx); d(n) = desired response; η = learning rate, a positive constant ≤ 1.

1. Initialize w(0) = 0.
2. Input x(n) and compute y(n).
3. Update w as \( w(n+1) = w(n) + \eta\,[d(n) - y(n)]\,x(n) \).
Repeat steps 2 and 3 until no more weight changes occur (a runnable sketch follows this page).

Heuristic Improvements
Sequential vs. batch-mode update: use sequential — it is simpler, faster, and requires less temporary storage. Maximize information content: examples should contain maximum information, i.e. result in the largest training error or be radically different from previous examples.
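A sketch of the perceptron learning rule above on a small linearly separable set (my toy data, with the bias folded in as x0 = 1):

    X = [ 2  1 -1 -2;          % each column is an input vector
          1  2  1 -1 ];
    d = [ 1  1 -1 -1 ];        % desired responses, +/-1
    Xa = [ones(1, 4); X];      % augment with x_0 = 1 (bias input)
    w = zeros(3, 1);           % step 1: initialize w(0) = 0
    eta = 1;
    changed = true;
    while changed              % repeat until no more weight changes
        changed = false;
        for n = 1:4
            y = sign(w' * Xa(:, n));       % step 2: quantized output
            if y == 0, y = -1; end         % break ties off the boundary
            if y ~= d(n)
                w = w + eta * (d(n) - y) * Xa(:, n);   % step 3
                changed = true;
            end
        end
    end

Because the data are linearly separable, the loop terminates with a w whose hyperplane separates the two classes.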

Heuristic Improvements
Maximize information content (cont'd): always re-randomize the order of the training patterns, and for training patterns choose ones that are difficult to recognize. A potential problem exists if a difficult pattern is actually an outlier.

Activation function: a network generally learns faster if the activation function is antisymmetric,
\( \varphi(-v) = -\varphi(v) \)
This is not true of the log-sigmoid, but it is true of tanh.

[Figures: the log-sigmoid, which does not meet the criterion, followed by the tanh, which is antisymmetric.]

For the tanh function, empirical studies have shown the following values for a and b to be appropriate:
\( \varphi(v) = a \tanh(bv), \qquad a = 1.7159, \quad b = 2/3 \)

Heuristic Improvements
Activation function (cont'd): note that with these constants φ(1) = 1 and φ(−1) = −1, and the slope is maximum at v = 0 (where it is approximately unity).

Target values: choose targets within the range of output values — really they should be some value ε away from the maximum. For the tanh with a = 1.7159, choose ε = 0.7159, and then the targets can be ±1.

Normalizing the inputs: preprocess the inputs so that the average over the range is ~0, or is small compared to the standard deviation. Input values should be scaled so that their covariances are approximately equal; this ensures that the weights learn at about the same rate. If possible, input values should be uncorrelated.

Covariance & Correlation
\( \operatorname{cov}(X, Y) = E\big[(x - \mu_x)(y - \mu_y)\big] \)
\( \operatorname{corr}(X, Y) = \rho = \frac{\operatorname{cov}(X, Y)}{\sqrt{\operatorname{Var}(X)\,\operatorname{Var}(Y)}} \)
Note that a positive correlation means that as x increases, so does y, and vice versa. Of the following plots, which has the highest covariance? [Figure: scatter plots with different degrees of correlation.]
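A sketch of the input-normalization heuristic above: shift each input variable to zero mean and scale to unit variance so the covariances are comparable (synthetic data; the elementwise expansion assumes MATLAB R2016b or later):

    X = [randn(100,1)*5 + 3, randn(100,1)*0.1 - 2];  % 100 samples, 2 raw inputs
    mu = mean(X);            % per-input means
    sd = std(X);             % per-input standard deviations
    Xn = (X - mu) ./ sd;     % zero-mean, unit-variance inputs
    C = cov(Xn);             % check: diagonal entries are ~1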

Heuristic Improvements
Initialization: it can be shown (approximately, under certain conditions) that a good choice is to select the weights randomly from a distribution with μ = 0 and σ² = 1/m, where m is the total number of weights (see the sketch after this page).

Feature Detection
Hidden neurons play the role of feature detectors: they tend to transform the input vector space into a hidden or feature space. Each hidden neuron's output is then a measure of how well that feature is present in the current input.

Generalization
A network generalizes well when, for data it was not trained on, it produces correct (or near-correct) outputs. Networks can overfit or overtrain; in the absence of prior knowledge, one generally wants to select the smoothest/simplest mapping of the function.

Generalization is influenced by four factors: the size of the training set; how representative the training set is of the data; the neural network architecture; and the physical complexity of the problem at hand. Often the NN configuration or the training set is fixed, and so we have only the other to work with.

A commonly used value is \( N = O(W/\epsilon) \), where O is like Big-O, W is the total number of weights, and ε is the fraction of classification errors permitted on test data.

Approximations of Functions
A NN acts as a nonlinear mapping from input to output space; it is everywhere differentiable if all transfer functions are differentiable. What is the minimum number of hidden layers in a multilayer perceptron whose I/O mapping provides an approximation of any continuous mapping?
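Referring back to the initialization heuristic above, one line of MATLAB suffices (m is a made-up count):

    m = 25;                     % number of weights
    w = randn(m, 1) / sqrt(m);  % zero mean, variance 1/m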

Approximations of Functions
This is part of the universal approximation theorem. The theorem states (in essence) that a NN with bounded, nonconstant, monotone-increasing continuous transfer functions and one hidden layer can approximate any function. It says nothing about being optimum in terms of learning time, ease of implementation, or generalization.

In general, for good generalization, the number of training samples N should be larger than the ratio of the total number of free parameters (weights) in the network to the mean-square value of the estimation error.

Practical Considerations
For high-dimensional spaces, it is often better to have two-layer networks so that neurons in the layers do not interact so much.

Cross-Validation
Randomly divide the data into training and testing sets. Further randomly divide the training set into estimation and validation subsets. Use the validation set to test the accuracy of the model, and then the test set for the actual accuracy value. Leave-one-out: train on everything but one sample and then test on it; repeat for all partitions.

RBF
We have seen that a BPNN (backpropagation neural network) can be used for function approximation and classification. An RBFNN (radial basis function neural network) is another network that can be used for both such problems.

RBFNN vs. BPNN
An RBFNN has only two layers, whereas a BPNN can have any number. Normally all nodes of a BPNN have the same model, whereas an RBFNN has two different models: a nonlinear layer and a linear layer. The argument of an RBFNN unit computes a Euclidean norm, whereas a BPNN unit computes a dot product. An RBFNN trains faster than a BPNN.

An RBFNN often leads to better decision boundaries, and the hidden-layer units of an RBFNN have a much more natural interpretation. The RBFNN learning phase may be unsupervised and thus could lose information. A BPNN may give a more compact representation.

RBF
There are two general categories of RBFNNs: classification (separation of hyperspace — the first portion of the chapter), and function approximation (using an RBFNN to approximate some nonlinear function).

RBF — Classification
Cover's theorem on the separability of patterns: a complex pattern-classification problem re-cast nonlinearly in a high(er)-dimensional space is more likely to be linearly separable than in a low-dimensional space.

Cover's Theorem
Given an input vector X = [x1, ..., xk] (of dimension k), if we recast it using some set of nonlinear transfer functions on each of the input parameters (into dimension m ≥ k), then it is more likely that this new set will be linearly separable. In some cases, simply using a nonlinear mapping without changing (increasing) the dimensionality is sufficient.

Cover's Theorem — Corollary
The expected maximum number of randomly assigned patterns (vectors) that are linearly separable in a space of dimensionality m is 2m. Stated another way: 2m is a definition of the separating capacity of a family of decision surfaces having m degrees of freedom.

XOR Problem
In the XOR problem we want a 1 output when the inputs x1, x2 are not equal, else a 0 output. This is not a linearly separable problem. Observe what happens if we use two nonlinear functions applied to (x1, x2):
\( \varphi_1(X) = e^{-\|X - t_1\|^2}, \quad t_1 = [1, 1] \)
\( \varphi_2(X) = e^{-\|X - t_2\|^2}, \quad t_2 = [0, 0] \)
For example, \( \varphi_1((0,1)) = e^{-((0-1)^2 + (1-1)^2)} = e^{-1} = 0.3679 \).

Input X    φ1(X)     φ2(X)
(1,1)      1.0000    0.1353
(0,1)      0.3679    0.3679
(1,0)      0.3679    0.3679
(0,0)      0.1353    1.0000

If one plots these new points (next slide), one can see that they are now linearly separable.
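The two Gaussian hidden units of the XOR example, computed for all four Boolean inputs; this short sketch reproduces the table above:

    t1 = [1; 1];  t2 = [0; 0];      % the two RBF centers
    X = [0 0 1 1;                   % the four Boolean input pairs (columns)
         0 1 0 1];
    phi1 = exp(-sum((X - t1).^2));  % phi_1(x) = exp(-||x - t1||^2), per column
    phi2 = exp(-sum((X - t2).^2));  % phi_2(x) = exp(-||x - t2||^2)
    % phi1 = [0.1353 0.3679 0.3679 1.0000]
    % phi2 = [1.0000 0.3679 0.3679 0.1353]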

[Figure: the four transformed XOR points plotted in the (φ1, φ2) plane, with a separating line between the two classes.]

What the previous slides show is that by introducing a nonlinearity, even without increasing the dimensionality, the space becomes linearly separable.

Regularization
An important point to remember is that we always have noise in the data, and thus it is probably not a good idea to develop a function that exactly fits the data. Another way to think of it is that fitting the data too closely will likely give poor generalization. [Figures: an overfitted (bad) fit, followed by a smoother (good) fit.]

Interpolation
There is a sort of converse to Cover's theorem about separability: one can often use a nonlinear mapping to transform a difficult filtering (regression) problem into one that can be solved linearly.

More Regression/Interpolation
In a sense, the regression problem builds an equation that gives us the input/output relationship on a set of data. Given one of the inputs we trained on, we should be able to use this equation to get the output within some error. The interpolation problem addresses the issue of what value(s) we get for inputs that we have not trained on.

RBF Regression
The RBF approach to regression chooses F such that
\( F(X) = \sum_{i=1}^{N} w_i\, \varphi(\|X - X_i\|) \)
where \( \{\varphi(\|X - X_i\|)\} \) is a set of N arbitrary (generally nonlinear) functions known as radial basis functions, and \( \|\cdot\| \) denotes a norm, usually Euclidean. The \( X_i \) are the centers of the RBFs.

RBF
The radial basis functions most commonly used are Gaussians; with N = the number of data points,
\( F(x) = \sum_{i=1}^{N} w_i\, e^{-\frac{1}{2\sigma_i^2}\|x - x_i\|^2} \)
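A sketch of exact Gaussian-RBF interpolation per the formula above: one basis function per data point, with the weights solved from F(x_i) = d_i (the training data are made up):

    xi = [0; 1; 2; 3];                        % training inputs = the centers X_i
    d  = [0; 1; 0; -1];                       % training targets
    sigma = 1.0;
    Phi = exp(-(xi - xi').^2 / (2*sigma^2));  % Phi(j,i) = phi(||x_j - x_i||)
    w = Phi \ d;                              % solve Phi*w = d for the N weights
    F = @(x) exp(-(x - xi').^2 / (2*sigma^2)) * w;  % F(x) = sum_i w_i*phi(...)

F(1) returns 1 to numerical precision; F(1.5) interpolates between the training points.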

RBF
Thus, generally we accept a suboptimal solution in which the number of basis functions is < N, and thus the number of centers of basis functions is < N.

Classification
RBFs can also be used as classifiers. Remember, BPNNs tend to divide the input space (figure a), whereas RBFs tend to kernelize the space (figure b). [Figure: (a) a BPNN partitioning the input space with hyperplanes; (b) an RBFNN covering it with kernels.]

One can interpret the RBFs as posterior probabilities of the presence of a data point in the input space, and the weights can be viewed as posterior probabilities of class membership. We therefore have a two-layer organization for an RBFNN used as a classifier.

Learning Strategies
Typically, the weights of the two layers are determined separately, i.e. find the RBF weights, and then find the output-layer weights. There are several methods for parameterizing the RBFs and selecting the output-layer weights.

Fixed Centers
Randomly select from the training data some set of representative x's and use these as the centers (the t_i's):
\( \varphi(\|x - t_i\|) = \exp\!\Big(-\frac{m}{d_{\max}^2}\,\|x - t_i\|^2\Big), \quad i = 1, 2, \ldots, m \)
where m is the number of centers and \( d_{\max} \) is the maximum distance between centers. The effective standard deviation of these RBFs is
\( \sigma = \frac{d_{\max}}{\sqrt{2m}} \)

From this, what parameters are left to find? The w's of the output layer. For this, we need a weight vector w satisfying \( \varphi\, w = d \), where φ is the matrix of RBF outputs on the training data and d the vector of targets. The problem is that φ is probably not square, and hence can't be inverted. To do this we use the pseudo-inverse, and thus
\( w = \varphi^{+} d, \qquad \varphi^{+} = (\varphi^T \varphi)^{-1} \varphi^T, \qquad \varphi = \{\varphi_{ji}\} \)
\( \varphi_{ji} = \exp\!\Big(-\frac{m}{d_{\max}^2}\,\|x_j - t_i\|^2\Big), \quad j = 1, \ldots, N; \ \ i = 1, \ldots, m \)
where \( x_j \) is the jth input vector of the training sample. An example is given in the text.

Pseudo-Inverse
In order to invert a matrix, it must meet certain criteria; one of those criteria is that it be square. The pseudo-inverse algorithm allows us to obtain the equivalent of an inverse for a non-square matrix (a runnable sketch follows this page).

Remember that ultimately for a neural network we want W (a weight matrix) such that Target = WX; thus \( W = T X^{-1} \). If an inverse exists, then the error can be minimized; if no inverse exists, then use the pseudo-inverse to get minimum error.

The pseudo-inverse is defined as \( W = T X^{+} \), where
\( X^{+} = (X^T X)^{-1} X^T \)
[Worked numeric example on the slide: two training vectors X1, X2 with targets t1, t2, computing \( (X^T X)^{-1} X^T \).]

Self-Organized Center Selection
Fixed centers may require a relatively large number of randomly selected centers to work well. This method (self-organized) uses a two-stage, iterative process: a self-organized learning stage to estimate the RBF centers, and supervised learning to estimate the linear weights.

Self-Organized Learning Stage
For this stage we need a clustering algorithm that divides or partitions the data points into subgroups; commonly K-means clustering is used. Place RBF centers only in those regions of the data space where there are significant data.
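A sketch of the pseudo-inverse solution for the output-layer weights when Φ is not square (more training points than centers; the numbers are illustrative):

    Phi = [1.00 0.14;       % N = 3 training points, m = 2 centers
           0.37 0.37;
           0.14 1.00];
    d = [1; 0; 1];          % desired outputs
    w = pinv(Phi) * d;      % w = (Phi'*Phi)^(-1) * Phi' * d

Equivalent here: w = (Phi'*Phi) \ (Phi'*d); pinv also handles rank-deficient Φ.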

Self-Organized Learning Stage
Let \( \{t_k(n)\}_{k=1}^{m} \) denote the centers of the RBFs at iteration n.
Initialization: choose random values for the initial set of centers \( t_k(0) \), subject to the restriction that these initial values must all be different.
Sampling: randomly draw a sample vector x from the input space.
Similarity matching: let k(x) denote the best-matching (closest) center for input x. Find k(x) at iteration n using the minimum-distance Euclidean criterion:
\( k(x) = \arg\min_k \|x(n) - t_k(n)\|, \quad k = 1, 2, \ldots, m \)
where \( t_k(n) \) is the center of the kth radial basis function at iteration n.
Updating: adjust the center of the closest RBF as
\( t_k(n+1) = t_k(n) + \eta\,[x(n) - t_k(n)], \quad 0 < \eta < 1 \)
Continuation: repeat until no noticeable changes in the centers occur. Generally, reduce the learning rate over time.

K-Means Clustering
The previous procedure is dependent on the selection of the initial centers. Once the centers are found, we must still set the widths σ and the output-layer weights.

Adaptive K-means with dynamic initialization:
1. Randomly pick a set of centers c1, ..., ck; disable all cluster centers.
2. Read an input vector X.
3. If the closest enabled cluster center c_i is within distance r of X, or if all cluster centers are already enabled, update c_i as \( c_i = c_i + \eta\,(X - c_i) \).
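One self-organized center update, as in the similarity-matching and updating steps above (the centers and input are made-up values):

    C = [0 0; 4 4]';                % centers t_k as columns
    x = [1; 0.5];                   % current input x(n)
    eta = 0.1;
    [~, k] = min(sum((C - x).^2));  % similarity matching: closest center k(x)
    C(:, k) = C(:, k) + eta * (x - C(:, k));  % t_k(n+1) = t_k(n) + eta*(x - t_k(n))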

Adaptive K-means with dynamic initialization (continued):
4. Otherwise, enable a new cluster center c_k and set it equal to X.
5. Continue until a fixed number of iterations, or until the learning rate has decayed to 0.

Next Step
The next step is to set the widths \( \sigma_k \) of the centers. How? We can use a P-nearest-neighbor heuristic: given a cluster center c_k, select the P nearest neighboring clusters and set \( \sigma_k \) to the root-mean-square of the distances to these clusters:
\( \sigma_k = \sqrt{\frac{1}{P} \sum_{p=1}^{P} \|c_k - c_p\|^2} \)

Finally
The last step is to set the weights of the output layer. Since it is linear, just use the delta training rule. Note also that there is no real requirement that the output layer be linear; e.g. it could be a BPNN.

LVQ
Learning Vector Quantization network: the first layer is competitive, and the output layer selects the class (0/1) from the outputs of the neurons in the competitive layer. The output of a neuron is a 1 if it is closest to the input vector.
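The P-nearest-neighbor width heuristic above, sketched for one center (example center locations):

    C = [0 0; 1 0; 0 2; 3 3]';      % cluster centers as columns
    P = 2;  k = 1;                  % width of center 1 from its P = 2 neighbors
    dist2 = sum((C - C(:, k)).^2);  % squared distances to every center
    dist2(k) = inf;                 % exclude c_k itself
    s = sort(dist2);                % ascending squared distances
    sigma_k = sqrt(mean(s(1:P)));   % sigma_k = sqrt((1/P)*sum_p ||c_k - c_p||^2)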

LVQ
For closeness, can we just use the dot product? No — we need some other distance measure, because the dot product is influenced by vector magnitude as much as by angle (closeness); see the sketch after this page.

[Slide: an example input vector P with its target class.] For an input P, we want the outputs of the K competitive neurons to be all 0s except for one of them. In the output layer there will be one neuron with −1/1 as weights; this neuron is a hardlim neuron, so its output will be 0/1 for the two classes.

SVM
SVMs were introduced in 1992 by Boser, Guyon, and Vapnik. Kernel machines (KMs) are a more general class of learning machines; SVMs are a subclass of KMs. Kernel methods exploit information about the inner (dot, scalar) product between data items.

The kernel generally can be viewed as the set of features about a decision, i.e. it defines the vector space. Very complex decision problems can be defined in terms of dot products — we just have to recast the data (probably nonlinearly) into a probably higher-dimensional space. Remember Cover's theorem.
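Why the raw dot product is a poor closeness measure, per the point above: it grows with vector magnitude, while a Euclidean distance does not (made-up vectors):

    p = [3; 4];
    w1 = [0.6; 0.8];  w2 = [30; 40];       % same direction, different magnitudes
    dots  = [w1'*p, w2'*p];                % dot product favors the long w2
    dists = [norm(p - w1), norm(p - w2)];  % Euclidean distance picks w1 as closer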

SVM
Basically a linear machine with some nice properties. This means that for non-linearly separable applications we need to apply Cover's theorem: apply nonlinear transformations and possibly put the data in a higher-dimensional space. The SVM is more general than, say, the MLBP learning algorithm.

Dot Product
Remember, the function \( W^T X + b = 0 \) defining the decision surface is built from a dot product.

SVMs
The learning algorithm for SVMs is different from, say, BPNNs. BPNNs seek to minimize the error on the training data (empirical risk minimization, ERM). SVMs seek to minimize what is called the structural risk measure (SRM), i.e. the error (risk) on the testing data. SRM places an upper bound on the generalization error, i.e. SVMs generalize better.

SVMs — Classification
The training of a classification SVM involves the fitting of a hyperplane such that the largest margin is formed between the classes of vectors while minimizing the effects of classification errors. The vectors closest to the hyperplane are called support vectors.

The optimal hyperplane is orthogonal to the shortest line connecting the convex hulls of the two classes (dotted), and intersects it halfway. There is a weight vector w and a threshold b such that \( y_i (w \cdot x_i + b) > 0 \). Rescaling w and b such that the point(s) closest to the hyperplane satisfy \( |w \cdot x_i + b| = 1 \), we obtain a canonical form (w, b) of the hyperplane with \( y_i (w \cdot x_i + b) \ge 1 \). Note that the margin, measured perpendicularly to the hyperplane, equals \( 2/\|w\| \). To maximize the margin, we thus have to minimize \( \|w\| \) subject to \( y_i (w \cdot x_i + b) \ge 1 \).

SVM
The separation (distance) between the hyperplane and the closest data point(s) is called the margin of separation, ρ. The goal of the SVM is to find the hyperplane that maximizes ρ; when ρ is maximized, this decision surface is said to be the optimal hyperplane.

Hyperplanes
Remember that in an n-dimensional space, a decision hyperplane is defined as
\( [\,b\ \ w_1\ \ w_2\ \cdots\ w_n\,] \begin{bmatrix} 1 \\ x_1 \\ \vdots \\ x_n \end{bmatrix} = W^T X + b = w_1 x_1 + w_2 x_2 + \cdots + w_n x_n + b = 0 \)

Hyperplane
If \( d_i = \pm 1 \) (d = desired output), then we have
\( W^T X_i + b \ge 0 \) for \( d_i = +1 \), and \( W^T X_i + b < 0 \) for \( d_i = -1 \).
If \( W_o \) and \( b_o \) denote the optimum values of the weight and bias, then \( W_o^T X + b_o = 0 \) for X on the hyperplane.

The discriminant function then becomes \( g(X) = W^T X + b \). This gives a measure of the distance from X to the optimal hyperplane. Another interpretation of X in terms of these parameters is
\( X = X_p + r\,\frac{W}{\|W\|} \)
where \( X_p \) is the projection of X onto the hyperplane and r the signed distance. Then
\( g(X) = W^T X + b = W^T\!\Big(X_p + r\,\frac{W}{\|W\|}\Big) + b = W^T X_p + b + r\,\|W\| \)
Because \( g(X_p) = W^T X_p + b = 0 \), we have \( g(X) = r\,\|W\| \), or
\( r = \frac{g(X)}{\|W\|} \)

The distance from the origin to the hyperplane is \( b/\|W\| \); if b > 0, the origin is on the positive side of the hyperplane. The problem is to find the values of b and w from the training set \( \{X_i, d_i\} \).

By scaling b and w we can set the discriminants so that
\( W^T X_i + b \ge 1 \) for \( d_i = +1 \), and \( W^T X_i + b \le -1 \) for \( d_i = -1 \).
The data points \( \{X^{(s)}, d^{(s)}\} \) for which the equalities hold are called the support vectors, i.e.
\( g(X^{(s)}) = W^T X^{(s)} + b = \pm 1 \)
The support vectors lie closest to the decision surface, and thus they are the most difficult to classify. For these support vectors, the distance to the hyperplane is
\( r = \frac{g(X^{(s)})}{\|W\|} = \begin{cases} 1/\|W\| & \text{if } d^{(s)} = +1 \\ -1/\|W\| & \text{if } d^{(s)} = -1 \end{cases} \)
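A sketch of the distance and margin formulas above for a hypothetical canonical hyperplane (w, b) and sample point (all values are made up):

    w = [2; 1];  b = -3;    % a hypothetical separating hyperplane
    x = [2; 2];             % a sample point
    g = w' * x + b;         % discriminant g(x) = w'x + b
    r = g / norm(w);        % signed distance from x to the hyperplane
    margin = 2 / norm(w);   % separation between the support-vector planes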

Matlab
In Matlab, under the Help pull-down select Help -> Matlab Help. This gives a list of topics under which you can read tutorials. Click on the Neural Network Toolbox and expand it to see its topics.

Training — several different functions:
trainb trains a network with weight and bias learning rules with batch updates; the weights and biases are updated at the end of an entire pass through the input data.
trainr trains a network with weight and bias learning rules with incremental updates after each presentation of an input; inputs are presented in random order.
trains trains a network with weight and bias learning rules with incremental updates after each presentation of an input; inputs are presented in sequential order.

Perceptrons
Remember: linearly separable. The perceptron training rule: if the data are linearly separable and the learning rate is not too high, training will always find a solution. What does "find a solution" mean? Demos: demop1, demop4, demop5, demop6.

Four functions are important for perceptron networks:
newp creates a new perceptron network.
init initializes a perceptron network.
sim simulates (executes) a perceptron network with a set of values (input & target).
train trains a network; an update does one training iteration.

Perceptrons — newp
newp creates a perceptron-based neural network: net = newp(PR, S), where PR is an R-by-2 matrix of min and max values of the input elements and S is the number of neurons.

net = newp([0 2], 1); — one neuron with a single input; the range of values for the input is 0 to 2.
net = newp([0,2; -2,2], 2); — two neurons, each with two inputs; the ranges of values for the inputs are 0 to 2 and -2 to 2, respectively.
net.b{1} = [1; 1]; — sets the biases to 1 and 1; net.b{1} gives the values of the biases.

Perceptrons — newp, initialize
net = newp([-1 1; 3 5], 2); — weights and biases are initialized to 0.
net.IW{1,1} = [-1 1; 0 2]; net.b{1} = [1; 1]; — assign specific weight and bias values.

Perceptrons — sim
sim runs through the input data and outputs the value computed by the network for each input. The input can be a single input vector or multiple vectors, in which case it outputs a value for each:
p = {[0;1] [1;2] [-1;3]};
sim(net, p)

Perceptrons — init
init initializes or reinitializes a network; init(net) sets the parameters back to their original values. You can change the way that a perceptron is initialized with init:
net.biases{1}.initFcn = 'rands';
net = init(net);
wts = net.IW{1,1}
biases = net.b{1}

Perceptrons — train
net = newp([-2 2; -2 +2], 1);
p = [2; 2];
t = [1];
net.trainParam.epochs = 5;
net = train(net, p, t);
TRAINC, Epoch 0/5
TRAINC, Epoch 1/5
TRAINC, Performance goal met.

MatLab GUI
Matlab has a very powerful GUI for creating and manipulating networks; nntool brings it up. It is explained under Matlab Help -> Neural Network Toolbox -> Perceptrons -> Graphical User Interface.

BP — GUI
Create the input and the target data:
inputdata = [1 2 3; 0 1 -1]
This means there are 3 data values to train on, and they are (1,0), (2,1), and (3,-1).
target = [1 2 3]
Thus for an input of (1,0) we want an output of 1, etc.

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like:

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like: Self-Organzng Maps (SOM) Turgay İBRİKÇİ, PhD. Outlne Introducton Structures of SOM SOM Archtecture Neghborhoods SOM Algorthm Examples Summary 1 2 Unsupervsed Hebban Learnng US Hebban Learnng, Cntd 3 A

More information

Outline. Discriminative classifiers for image recognition. Where in the World? A nearest neighbor recognition example 4/14/2011. CS 376 Lecture 22 1

Outline. Discriminative classifiers for image recognition. Where in the World? A nearest neighbor recognition example 4/14/2011. CS 376 Lecture 22 1 4/14/011 Outlne Dscrmnatve classfers for mage recognton Wednesday, Aprl 13 Krsten Grauman UT-Austn Last tme: wndow-based generc obect detecton basc ppelne face detecton wth boostng as case study Today:

More information

Support Vector Machines

Support Vector Machines /9/207 MIST.6060 Busness Intellgence and Data Mnng What are Support Vector Machnes? Support Vector Machnes Support Vector Machnes (SVMs) are supervsed learnng technques that analyze data and recognze patterns.

More information

Machine Learning 9. week

Machine Learning 9. week Machne Learnng 9. week Mappng Concept Radal Bass Functons (RBF) RBF Networks 1 Mappng It s probably the best scenaro for the classfcaton of two dataset s to separate them lnearly. As you see n the below

More information

12/2/2009. Announcements. Parametric / Non-parametric. Case-Based Reasoning. Nearest-Neighbor on Images. Nearest-Neighbor Classification

12/2/2009. Announcements. Parametric / Non-parametric. Case-Based Reasoning. Nearest-Neighbor on Images. Nearest-Neighbor Classification Introducton to Artfcal Intellgence V22.0472-001 Fall 2009 Lecture 24: Nearest-Neghbors & Support Vector Machnes Rob Fergus Dept of Computer Scence, Courant Insttute, NYU Sldes from Danel Yeung, John DeNero

More information

Classification / Regression Support Vector Machines

Classification / Regression Support Vector Machines Classfcaton / Regresson Support Vector Machnes Jeff Howbert Introducton to Machne Learnng Wnter 04 Topcs SVM classfers for lnearly separable classes SVM classfers for non-lnearly separable classes SVM

More information

Machine Learning. Support Vector Machines. (contains material adapted from talks by Constantin F. Aliferis & Ioannis Tsamardinos, and Martin Law)

Machine Learning. Support Vector Machines. (contains material adapted from talks by Constantin F. Aliferis & Ioannis Tsamardinos, and Martin Law) Machne Learnng Support Vector Machnes (contans materal adapted from talks by Constantn F. Alfers & Ioanns Tsamardnos, and Martn Law) Bryan Pardo, Machne Learnng: EECS 349 Fall 2014 Support Vector Machnes

More information

CS 534: Computer Vision Model Fitting

CS 534: Computer Vision Model Fitting CS 534: Computer Vson Model Fttng Sprng 004 Ahmed Elgammal Dept of Computer Scence CS 534 Model Fttng - 1 Outlnes Model fttng s mportant Least-squares fttng Maxmum lkelhood estmaton MAP estmaton Robust

More information

Lecture 5: Multilayer Perceptrons

Lecture 5: Multilayer Perceptrons Lecture 5: Multlayer Perceptrons Roger Grosse 1 Introducton So far, we ve only talked about lnear models: lnear regresson and lnear bnary classfers. We noted that there are functons that can t be represented

More information

Feature Reduction and Selection

Feature Reduction and Selection Feature Reducton and Selecton Dr. Shuang LIANG School of Software Engneerng TongJ Unversty Fall, 2012 Today s Topcs Introducton Problems of Dmensonalty Feature Reducton Statstc methods Prncpal Components

More information

Learning the Kernel Parameters in Kernel Minimum Distance Classifier

Learning the Kernel Parameters in Kernel Minimum Distance Classifier Learnng the Kernel Parameters n Kernel Mnmum Dstance Classfer Daoqang Zhang 1,, Songcan Chen and Zh-Hua Zhou 1* 1 Natonal Laboratory for Novel Software Technology Nanjng Unversty, Nanjng 193, Chna Department

More information

Announcements. Supervised Learning

Announcements. Supervised Learning Announcements See Chapter 5 of Duda, Hart, and Stork. Tutoral by Burge lnked to on web page. Supervsed Learnng Classfcaton wth labeled eamples. Images vectors n hgh-d space. Supervsed Learnng Labeled eamples

More information

Smoothing Spline ANOVA for variable screening

Smoothing Spline ANOVA for variable screening Smoothng Splne ANOVA for varable screenng a useful tool for metamodels tranng and mult-objectve optmzaton L. Rcco, E. Rgon, A. Turco Outlne RSM Introducton Possble couplng Test case MOO MOO wth Game Theory

More information

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS246: Mining Massive Datasets Jure Leskovec, Stanford University CS46: Mnng Massve Datasets Jure Leskovec, Stanford Unversty http://cs46.stanford.edu /19/013 Jure Leskovec, Stanford CS46: Mnng Massve Datasets, http://cs46.stanford.edu Perceptron: y = sgn( x Ho to fnd

More information

Support Vector Machines

Support Vector Machines Support Vector Machnes Decson surface s a hyperplane (lne n 2D) n feature space (smlar to the Perceptron) Arguably, the most mportant recent dscovery n machne learnng In a nutshell: map the data to a predetermned

More information

Classifier Selection Based on Data Complexity Measures *

Classifier Selection Based on Data Complexity Measures * Classfer Selecton Based on Data Complexty Measures * Edth Hernández-Reyes, J.A. Carrasco-Ochoa, and J.Fco. Martínez-Trndad Natonal Insttute for Astrophyscs, Optcs and Electroncs, Lus Enrque Erro No.1 Sta.

More information

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision SLAM Summer School 2006 Practcal 2: SLAM usng Monocular Vson Javer Cvera, Unversty of Zaragoza Andrew J. Davson, Imperal College London J.M.M Montel, Unversty of Zaragoza. josemar@unzar.es, jcvera@unzar.es,

More information

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points;

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points; Subspace clusterng Clusterng Fundamental to all clusterng technques s the choce of dstance measure between data ponts; D q ( ) ( ) 2 x x = x x, j k = 1 k jk Squared Eucldean dstance Assumpton: All features

More information

Face Recognition University at Buffalo CSE666 Lecture Slides Resources:

Face Recognition University at Buffalo CSE666 Lecture Slides Resources: Face Recognton Unversty at Buffalo CSE666 Lecture Sldes Resources: http://www.face-rec.org/algorthms/ Overvew of face recognton algorthms Correlaton - Pxel based correspondence between two face mages Structural

More information

The Research of Support Vector Machine in Agricultural Data Classification

The Research of Support Vector Machine in Agricultural Data Classification The Research of Support Vector Machne n Agrcultural Data Classfcaton Le Sh, Qguo Duan, Xnmng Ma, Me Weng College of Informaton and Management Scence, HeNan Agrcultural Unversty, Zhengzhou 45000 Chna Zhengzhou

More information

Biostatistics 615/815

Biostatistics 615/815 The E-M Algorthm Bostatstcs 615/815 Lecture 17 Last Lecture: The Smplex Method General method for optmzaton Makes few assumptons about functon Crawls towards mnmum Some recommendatons Multple startng ponts

More information

Cluster Analysis of Electrical Behavior

Cluster Analysis of Electrical Behavior Journal of Computer and Communcatons, 205, 3, 88-93 Publshed Onlne May 205 n ScRes. http://www.scrp.org/ournal/cc http://dx.do.org/0.4236/cc.205.350 Cluster Analyss of Electrcal Behavor Ln Lu Ln Lu, School

More information

The Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique

The Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique //00 :0 AM Outlne and Readng The Greedy Method The Greedy Method Technque (secton.) Fractonal Knapsack Problem (secton..) Task Schedulng (secton..) Mnmum Spannng Trees (secton.) Change Money Problem Greedy

More information

Edge Detection in Noisy Images Using the Support Vector Machines

Edge Detection in Noisy Images Using the Support Vector Machines Edge Detecton n Nosy Images Usng the Support Vector Machnes Hlaro Gómez-Moreno, Saturnno Maldonado-Bascón, Francsco López-Ferreras Sgnal Theory and Communcatons Department. Unversty of Alcalá Crta. Madrd-Barcelona

More information

Fitting & Matching. Lecture 4 Prof. Bregler. Slides from: S. Lazebnik, S. Seitz, M. Pollefeys, A. Effros.

Fitting & Matching. Lecture 4 Prof. Bregler. Slides from: S. Lazebnik, S. Seitz, M. Pollefeys, A. Effros. Fttng & Matchng Lecture 4 Prof. Bregler Sldes from: S. Lazebnk, S. Setz, M. Pollefeys, A. Effros. How do we buld panorama? We need to match (algn) mages Matchng wth Features Detect feature ponts n both

More information

Lecture 4: Principal components

Lecture 4: Principal components /3/6 Lecture 4: Prncpal components 3..6 Multvarate lnear regresson MLR s optmal for the estmaton data...but poor for handlng collnear data Covarance matrx s not nvertble (large condton number) Robustness

More information

Classifying Acoustic Transient Signals Using Artificial Intelligence

Classifying Acoustic Transient Signals Using Artificial Intelligence Classfyng Acoustc Transent Sgnals Usng Artfcal Intellgence Steve Sutton, Unversty of North Carolna At Wlmngton (suttons@charter.net) Greg Huff, Unversty of North Carolna At Wlmngton (jgh7476@uncwl.edu)

More information

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning Outlne Artfcal Intellgence and ts applcatons Lecture 8 Unsupervsed Learnng Professor Danel Yeung danyeung@eee.org Dr. Patrck Chan patrckchan@eee.org South Chna Unversty of Technology, Chna Introducton

More information

Unsupervised Learning and Clustering

Unsupervised Learning and Clustering Unsupervsed Learnng and Clusterng Why consder unlabeled samples?. Collectng and labelng large set of samples s costly Gettng recorded speech s free, labelng s tme consumng 2. Classfer could be desgned

More information

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15 CS434a/541a: Pattern Recognton Prof. Olga Veksler Lecture 15 Today New Topc: Unsupervsed Learnng Supervsed vs. unsupervsed learnng Unsupervsed learnng Net Tme: parametrc unsupervsed learnng Today: nonparametrc

More information

Parallelism for Nested Loops with Non-uniform and Flow Dependences

Parallelism for Nested Loops with Non-uniform and Flow Dependences Parallelsm for Nested Loops wth Non-unform and Flow Dependences Sam-Jn Jeong Dept. of Informaton & Communcaton Engneerng, Cheonan Unversty, 5, Anseo-dong, Cheonan, Chungnam, 330-80, Korea. seong@cheonan.ac.kr

More information

Active Contours/Snakes

Active Contours/Snakes Actve Contours/Snakes Erkut Erdem Acknowledgement: The sldes are adapted from the sldes prepared by K. Grauman of Unversty of Texas at Austn Fttng: Edges vs. boundares Edges useful sgnal to ndcate occludng

More information

Fitting: Deformable contours April 26 th, 2018

Fitting: Deformable contours April 26 th, 2018 4/6/08 Fttng: Deformable contours Aprl 6 th, 08 Yong Jae Lee UC Davs Recap so far: Groupng and Fttng Goal: move from array of pxel values (or flter outputs) to a collecton of regons, objects, and shapes.

More information

BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET

BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET 1 BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET TZU-CHENG CHUANG School of Electrcal and Computer Engneerng, Purdue Unversty, West Lafayette, Indana 47907 SAUL B. GELFAND School

More information

SUMMARY... I TABLE OF CONTENTS...II INTRODUCTION...

SUMMARY... I TABLE OF CONTENTS...II INTRODUCTION... Summary A follow-the-leader robot system s mplemented usng Dscrete-Event Supervsory Control methods. The system conssts of three robots, a leader and two followers. The dea s to get the two followers to

More information

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour 6.854 Advanced Algorthms Petar Maymounkov Problem Set 11 (November 23, 2005) Wth: Benjamn Rossman, Oren Wemann, and Pouya Kheradpour Problem 1. We reduce vertex cover to MAX-SAT wth weghts, such that the

More information

LECTURE : MANIFOLD LEARNING

LECTURE : MANIFOLD LEARNING LECTURE : MANIFOLD LEARNING Rta Osadchy Some sldes are due to L.Saul, V. C. Raykar, N. Verma Topcs PCA MDS IsoMap LLE EgenMaps Done! Dmensonalty Reducton Data representaton Inputs are real-valued vectors

More information

Image Alignment CSC 767

Image Alignment CSC 767 Image Algnment CSC 767 Image algnment Image from http://graphcs.cs.cmu.edu/courses/15-463/2010_fall/ Image algnment: Applcatons Panorama sttchng Image algnment: Applcatons Recognton of object nstances

More information

Unsupervised Learning

Unsupervised Learning Pattern Recognton Lecture 8 Outlne Introducton Unsupervsed Learnng Parametrc VS Non-Parametrc Approach Mxture of Denstes Maxmum-Lkelhood Estmates Clusterng Prof. Danel Yeung School of Computer Scence and

More information

Backpropagation: In Search of Performance Parameters

Backpropagation: In Search of Performance Parameters Bacpropagaton: In Search of Performance Parameters ANIL KUMAR ENUMULAPALLY, LINGGUO BU, and KHOSROW KAIKHAH, Ph.D. Computer Scence Department Texas State Unversty-San Marcos San Marcos, TX-78666 USA ae049@txstate.edu,

More information

Artificial Intelligence (AI) methods are concerned with. Artificial Intelligence Techniques for Steam Generator Modelling

Artificial Intelligence (AI) methods are concerned with. Artificial Intelligence Techniques for Steam Generator Modelling Artfcal Intellgence Technques for Steam Generator Modellng Sarah Wrght and Tshldz Marwala Abstract Ths paper nvestgates the use of dfferent Artfcal Intellgence methods to predct the values of several contnuous

More information

Machine Learning: Algorithms and Applications

Machine Learning: Algorithms and Applications 14/05/1 Machne Learnng: Algorthms and Applcatons Florano Zn Free Unversty of Bozen-Bolzano Faculty of Computer Scence Academc Year 011-01 Lecture 10: 14 May 01 Unsupervsed Learnng cont Sldes courtesy of

More information

Multi-stable Perception. Necker Cube

Multi-stable Perception. Necker Cube Mult-stable Percepton Necker Cube Spnnng dancer lluson, Nobuuk Kaahara Fttng and Algnment Computer Vson Szelsk 6.1 James Has Acknowledgment: Man sldes from Derek Hoem, Lana Lazebnk, and Grauman&Lebe 2008

More information

Discriminative classifiers for object classification. Last time

Discriminative classifiers for object classification. Last time Dscrmnatve classfers for object classfcaton Thursday, Nov 12 Krsten Grauman UT Austn Last tme Supervsed classfcaton Loss and rsk, kbayes rule Skn color detecton example Sldng ndo detecton Classfers, boostng

More information

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces Range mages For many structured lght scanners, the range data forms a hghly regular pattern known as a range mage. he samplng pattern s determned by the specfc scanner. Range mage regstraton 1 Examples

More information

CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION

CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION 48 CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION 3.1 INTRODUCTION The raw mcroarray data s bascally an mage wth dfferent colors ndcatng hybrdzaton (Xue

More information

An Entropy-Based Approach to Integrated Information Needs Assessment

An Entropy-Based Approach to Integrated Information Needs Assessment Dstrbuton Statement A: Approved for publc release; dstrbuton s unlmted. An Entropy-Based Approach to ntegrated nformaton Needs Assessment June 8, 2004 Wllam J. Farrell Lockheed Martn Advanced Technology

More information

Simulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010

Simulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010 Smulaton: Solvng Dynamc Models ABE 5646 Week Chapter 2, Sprng 200 Week Descrpton Readng Materal Mar 5- Mar 9 Evaluatng [Crop] Models Comparng a model wth data - Graphcal, errors - Measures of agreement

More information

S1 Note. Basis functions.

S1 Note. Basis functions. S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type

More information

KOHONEN'S SELF ORGANIZING NETWORKS WITH "CONSCIENCE"

KOHONEN'S SELF ORGANIZING NETWORKS WITH CONSCIENCE Kohonen's Self Organzng Maps and ther use n Interpretaton, Dr. M. Turhan (Tury) Taner, Rock Sold Images Page: 1 KOHONEN'S SELF ORGANIZING NETWORKS WITH "CONSCIENCE" By: Dr. M. Turhan (Tury) Taner, Rock

More information

Skew Angle Estimation and Correction of Hand Written, Textual and Large areas of Non-Textual Document Images: A Novel Approach

Skew Angle Estimation and Correction of Hand Written, Textual and Large areas of Non-Textual Document Images: A Novel Approach Angle Estmaton and Correcton of Hand Wrtten, Textual and Large areas of Non-Textual Document Images: A Novel Approach D.R.Ramesh Babu Pyush M Kumat Mahesh D Dhannawat PES Insttute of Technology Research

More information

Mathematics 256 a course in differential equations for engineering students

Mathematics 256 a course in differential equations for engineering students Mathematcs 56 a course n dfferental equatons for engneerng students Chapter 5. More effcent methods of numercal soluton Euler s method s qute neffcent. Because the error s essentally proportonal to the

More information

Support Vector Machines. CS534 - Machine Learning

Support Vector Machines. CS534 - Machine Learning Support Vector Machnes CS534 - Machne Learnng Perceptron Revsted: Lnear Separators Bnar classfcaton can be veed as the task of separatng classes n feature space: b > 0 b 0 b < 0 f() sgn( b) Lnear Separators

More information

Wishing you all a Total Quality New Year!

Wishing you all a Total Quality New Year! Total Qualty Management and Sx Sgma Post Graduate Program 214-15 Sesson 4 Vnay Kumar Kalakband Assstant Professor Operatons & Systems Area 1 Wshng you all a Total Qualty New Year! Hope you acheve Sx sgma

More information

TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS. Muradaliyev A.Z.

TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS. Muradaliyev A.Z. TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS Muradalyev AZ Azerbajan Scentfc-Research and Desgn-Prospectng Insttute of Energetc AZ1012, Ave HZardab-94 E-mal:aydn_murad@yahoocom Importance of

More information

SHAPE RECOGNITION METHOD BASED ON THE k-nearest NEIGHBOR RULE

SHAPE RECOGNITION METHOD BASED ON THE k-nearest NEIGHBOR RULE SHAPE RECOGNITION METHOD BASED ON THE k-nearest NEIGHBOR RULE Dorna Purcaru Faculty of Automaton, Computers and Electroncs Unersty of Craoa 13 Al. I. Cuza Street, Craoa RO-1100 ROMANIA E-mal: dpurcaru@electroncs.uc.ro

More information

A Saturation Binary Neural Network for Crossbar Switching Problem

A Saturation Binary Neural Network for Crossbar Switching Problem A Saturaton Bnary Neural Network for Crossbar Swtchng Problem Cu Zhang 1, L-Qng Zhao 2, and Rong-Long Wang 2 1 Department of Autocontrol, Laonng Insttute of Scence and Technology, Benx, Chna bxlkyzhangcu@163.com

More information

Machine Learning. Topic 6: Clustering

Machine Learning. Topic 6: Clustering Machne Learnng Topc 6: lusterng lusterng Groupng data nto (hopefully useful) sets. Thngs on the left Thngs on the rght Applcatons of lusterng Hypothess Generaton lusters mght suggest natural groups. Hypothess

More information

The Codesign Challenge

The Codesign Challenge ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.

More information

K-means and Hierarchical Clustering
