Graph-Theoretic Methods


Motivation and Introduction

One is often faced with analyzing large spatial or spatiotemporal datasets, say involving N nodes or N time series. If one is only interested in the individual behavior of each node or time series, analysis remains tractable (or at least, of order N): a massively univariate approach (for spatial datasets). However, this approach (a) ignores any possible interactions between the nodes or time series, and (b) implicitly makes the assumption that the nodes or time series directly represent the underlying variables of interest -- the more likely alternative being that the observables each represent a mixture of the underlying variables.

A natural approach is therefore to look at covariances (in the case of spatial datasets) or, in the case of time series, cross-spectra: the quantities P_{X_i,X_j}(ω) defined earlier. While this is comprehensive (at least for pairwise interactions), there is a scaling problem: the number of covariances (or cross-spectra) is N(N-1)/2. So even if N is relatively small (say, 5 to 50), N(N-1)/2 can be sufficiently large so as to be difficult to visualize, and when N is large (e.g., imaging pixels), a comprehensive approach may be computationally impractical. So this motivates the development of methods that summarize the collective properties of a set of covariances, or a matrix of cross-spectra.

One kind of approach is to recognize that these covariances do form a matrix, and, as such, the standard ways of describing a matrix are applicable. A good example is the global coherence: this is the ratio of the largest eigenvalue of the matrix of cross-spectra to its trace, i.e., a measure of the extent to which the cross-spectral matrix can be approximated by a matrix of rank 1 (see Homework 3 for LSBB). A rank-1 cross-spectral matrix implies that the observed covariances can all be accounted for by a single common source of noise, so the global coherence has an immediate interpretation. Graph-theoretic methods represent another kind of approach.
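As a quick numerical illustration, the global coherence can be computed directly from the eigenvalues of the cross-spectral (or covariance) matrix. The following NumPy sketch uses made-up matrices, not data from these notes:

```python
import numpy as np

# Global coherence: ratio of the largest eigenvalue of a cross-spectral
# (or covariance) matrix to its trace. The matrices below are arbitrary
# illustrations.
def global_coherence(S):
    eigenvalues = np.linalg.eigvalsh(S)   # ascending, real for Hermitian S
    return eigenvalues[-1] / np.trace(S).real

v = np.array([1.0, 2.0, 3.0])
S_rank1 = np.outer(v, v)                  # rank-1: a single common source
print(global_coherence(S_rank1))          # -> 1.0 (up to rounding)

print(global_coherence(np.eye(4)))        # independent, equal-power channels
```

A rank-1 matrix gives global coherence 1; the identity (independent channels of equal power) gives 1/N, the minimum possible value.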
Here, the basic idea is also one of dimensional reduction, but the dimensional reduction occurs at an earlier stage: reducing the original pairwise measurements, which may be real or complex numbers, to something simpler -- typically a matrix A whose elements a_ij are just 0's and 1's. Then, one looks at the properties of this array. Overall, the advantage of this kind of approach is that it focuses on the qualitative nature of the interrelationships of the observed variables, and their network-like (or "topological") attributes. The main cautionary note is that the reduction from covariances or cross-spectra to binary quantities involves thresholding (and, in the case of cross-spectra, the choice of analysis frequency). So to check on the robustness of the results, one might want to investigate how they depend on this choice, even if the choice is objective (as in, a level of statistical significance). One should also note that graph-theoretic methods have in general been developed for application to domains in which this thresholding is not an issue, i.e., domains in which the

fundamental measurements are binary. A good example is the case of social networks: a matrix element a_ij = 0 means that individuals i and j are strangers, while a_ij = 1 means that they know each other. Note also that in this case, the matrix A is necessarily symmetric. This symmetry typically is taken to be part of the definition of a graph. But many graph-theoretic measures are applicable far beyond the restricted domain of symmetric, binary graphs. Asymmetric but binary arrays correspond to directed graphs (for example, "i has contacted j", or "website i links to website j"). Arrays that are not binary correspond to weighted graphs (some links are stronger than others). Depending on the data type, these extensions may be particularly natural. For example, diffusion tractography data provides a measure of the strength of connectivity, but not its directionality.

In all of these cases (and in everything we consider), a_ij ≥ 0. Relaxing this condition (i.e., allowing negative "strengths") changes the character of the problem greatly -- for example, the graph Laplacian is no longer positive semidefinite, so that diffusion can diverge. Taking this even further -- allowing a_ij to be complex, with a_ij equal to the complex conjugate of a_ji -- essentially yields the picture of general linear dynamics at a single frequency, as the connectivities now can be considered to represent values of a transfer function.

There is also a connection to Markov chains. If every column of a graph matrix sums to 1 (the Markov condition), we can view a_ij as the probability that a particle at node j will, at the next time step, move to node i. Note also that we can take any graph matrix and column-normalize it (i.e., replace a_ij by a_ij / Σ_k a_kj) to ensure that its columns sum to 1, so that we can always turn a graph matrix into a Markov matrix.

Elements

Two web-based sources for this material may be found in notes by Radu Horaud and by Sukanta Pati. But there are notational inconsistencies and some annoying typos.

General definitions

A graph G consists of a set of vertices V (a.k.a. nodes) and an adjacency matrix A whose rows and columns correspond to the vertices.
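The column-normalization step above can be sketched in a few lines of NumPy (the example matrix is arbitrary, not from the notes):

```python
import numpy as np

# Column-normalizing a non-negative graph matrix to obtain a Markov
# matrix: a_ij -> a_ij / sum_k a_kj, so each column sums to 1, and a_ij
# is the probability that a particle at node j moves to node i.
def to_markov(A):
    A = np.asarray(A, dtype=float)
    return A / A.sum(axis=0)

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 2.0, 0.0]])
M = to_markov(A)
print(M.sum(axis=0))   # -> [1. 1. 1.]
```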
The vertices are typically labeled by the integers {1,...,N} -- but typically, these are abstract labels, and their numerical values are irrelevant. All elements of the adjacency matrix A are assumed to be non-negative. A subgraph of G is a subset of the vertices, along with the matrix formed from the corresponding rows and columns of the adjacency matrix A. If a_ij > 0, we say that there is an edge connecting vertex i to vertex j, and write i ~ j.

The degree of a vertex is the number of vertices that it connects with. In the case of a directed graph, one needs to distinguish between the outgoing degree and the incoming degree. The distance between two vertices is the minimum number of edges that must be traversed to pass between them. Each example of a minimum-length path is called a geodesic. The diameter p of a graph is the length of its largest geodesic. It is the lowest power p for which (I + A)^p has no zero off-diagonal elements. A clique is a set of vertices for which every pair is connected by an edge. Two vertices connected by an edge trivially form a clique. A three-clique is a set of vertices whose subgraph is a triangle.

On this general setup, we typically add one or more kinds of restrictions. If we allow nonzero a_ij's to be quantities other than 1, the graph is a weighted graph. If we allow the matrix A to be asymmetric, the graph is a directed graph; a_ij is the strength of the connection from j to i (this convention makes the connection with Markov matrices most straightforward). Typically, we will deal with simple graphs; three conditions apply: all a_ij are 0 or 1, they are symmetric (a_ij = a_ji), and the on-diagonal elements are 0. The last condition means that there are no self-loops.

Going from a simple graph to a weighted graph is often a straightforward generalization, in that a graph with integer weights can be thought of as a graph in which there are multiple connections between a single pair of nodes. So one can expect that this generalization will not affect approaches that depend on the algebraic structure of A (though it may affect approaches that depend on counting the number of paths). In contrast, going from a simple graph to a directed graph means allowing A to be asymmetric. This will change its algebraic properties markedly, since A goes from being self-adjoint to non-self-adjoint. However, combinatorial approaches (e.g., approaches that rely on counting paths between nodes) may generalize readily. For example, each element of A^k counts the number of k-step paths from one node to another.
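These definitions can be checked numerically. The sketch below (the path graph 1-2-3-4 is a made-up example) computes degrees, counts k-step paths via powers of A, and finds the diameter as the lowest power p for which (I + A)^p has no zero off-diagonal elements:

```python
import numpy as np

# Degrees, k-step path counts, and diameter for a simple graph.
# Hypothetical example: the path graph 1-2-3-4.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

degrees = A.sum(axis=1)                  # [1, 2, 2, 1]
A2 = np.linalg.matrix_power(A, 2)        # (A^2)_ij = number of 2-step paths
print(A2[0, 2])                          # one 2-step path between vertices 1 and 3

def diameter(A):
    n = len(A)
    off_diag = ~np.eye(n, dtype=bool)
    M = np.eye(n, dtype=int) + A
    P = M.copy()
    for p in range(1, n):
        if np.all(P[off_diag] > 0):      # all pairs reachable in <= p steps
            return p
        P = P @ M
    return np.inf                        # disconnected graph

print(diameter(A))                       # -> 3
```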
A simple graph is just a set of points and lines that connect them:

For this graph, the adjacency matrix is given by

        ( 0 1 1 1 0 1 )
        ( 1 0 1 0 0 0 )
   A =  ( 1 1 0 0 1 0 )
        ( 1 0 0 0 1 0 )
        ( 0 0 1 1 0 0 )
        ( 1 0 0 0 0 0 )

The cliques consist of the seven pairs of vertices connected by edges, and also the 3-clique {1,2,3}.

Some special kinds of graphs

The following are fairly common, and often useful:

A connected graph is a graph for which, for every pair of vertices, there is a sequence of edges that connects them. Equivalently, it is a graph for which a sufficiently high power of I + A contains no zero elements. Since the elements of A and its powers are non-negative, this is equivalent to the condition that 1 + zA + z²A² + ... = (1 - zA)^{-1} (the series converges for sufficiently small z) has no zero elements. Equivalently, a connected graph is a graph whose diameter is finite.

A cyclic graph is a connected graph for which every vertex has degree 2. A forest is a graph that has no subgraphs that are cycles. A tree is a forest that is connected. A triangle-free graph is a graph that has no subgraphs that are 3-cliques. All forests are triangle-free. A regular graph is a graph for which all vertices have the same degree. A complete graph is a graph for which all edges are present. (Equivalently, the entire graph is a clique.) Equivalently, it is a graph of diameter 1.
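The series criterion for connectivity can be checked numerically for the example graph above; in the sketch below, z = 0.1 is an arbitrary value small enough for the geometric series to converge:

```python
import numpy as np

# Adjacency matrix of the 6-vertex example graph (edges {1,2}, {1,3},
# {1,4}, {1,6}, {2,3}, {3,5}, {4,5}); connectivity is tested via
# (1 - zA)^{-1} = 1 + zA + z^2 A^2 + ... having no zero elements.
A = np.array([[0, 1, 1, 1, 0, 1],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 1, 0],
              [1, 0, 0, 0, 1, 0],
              [0, 0, 1, 1, 0, 0],
              [1, 0, 0, 0, 0, 0]], dtype=float)

z = 0.1                                   # small enough for convergence
R = np.linalg.inv(np.eye(6) - z * A)
print(np.all(R > 0))                      # -> True: the graph is connected
```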

A star graph is a graph in which one vertex is connected to all of the others, and there are no cycles. It has diameter 2. A bipartite graph is a graph whose vertices can be partitioned into two disjoint subsets, such that the vertices within each subset are not connected with each other. (Similarly for a k-partite graph.)

Some special kinds of graph architectures

The above terms apply to specific graphs, and they are fragile: the properties can be gained or lost by insertion or deletion of a single vertex or edge. The terms below are different: they are intended to apply to graph architectures (strategies for building graphs with an arbitrarily large number of nodes).

A scale-free graph is a graph for which the frequency of nodes with a given degree is a power-law function of the degree. That is, the probability that a node has degree between d and d + Δd is given by p(d) = d^{-γ} Δd; typically γ is in the range of 2 to 3. Obviously this can only hold over a range of degrees, and must be approximate. But the basic idea is that there are a small number of vertices that have a large number of connections ("hubs"), a larger number of nodes that have somewhat fewer connections, etc.

A small-world graph (or network) is a graph in which the average distance between pairs of vertices is small in comparison to the total number of vertices. A complete graph is small-world (with N vertices, the ratio is 1/N); so is a star graph (ratio approximately 2/N). A cyclic graph is not small-world (with N vertices, the ratio approaches 1/4). Typically, the term is reserved for the situation in which the average distance grows no faster than the logarithm of the number of vertices. Scale-free graphs, with exponents γ in the range of 2 to 3, are small-world. But not all small-world graphs are scale-free. A classic example of this is the small-world architecture of Watts and Strogatz (Nature 1998). These graphs are made by (i) starting with a cyclic graph, and (ii) adding a small number of long-range cross-connections.
An Erdős–Rényi graph is a graph in which the probability of a connection between any two nodes is set to some value p. For p = 1, this yields a complete graph. For p < 1, the graph may not be connected. There are numerous results on the behavior of these graphs as the number of vertices N grows -- for example, the probability that the graph is connected, the probability that it has a single large component, etc. For large N, (ln N)/N determines connectedness: for p > (ln N)/N, the graph is almost always connected; for p < (ln N)/N, the graph is almost always not connected.

The graph Laplacian

The graph Laplacian is a matrix that captures many aspects of a graph, including how dynamical processes on the graph evolve, and how the graph can best be represented by a one-dimensional structure. It is also central to many algorithms ("spectral partitioning", "spectral clustering") that use graphs for many other purposes (image segmentation, VLSI layout).
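The (ln N)/N threshold can be explored empirically. The sketch below (N and the trial count are arbitrary choices) estimates the fraction of connected graphs at probabilities below and above the threshold:

```python
import numpy as np

# Rough empirical check of the ln(N)/N connectivity threshold for
# Erdos-Renyi graphs, using the (I + A)^(N-1) connectivity test.
rng = np.random.default_rng(0)

def connected_fraction(N, p, trials=100):
    count = 0
    off_diag = ~np.eye(N, dtype=bool)
    for _ in range(trials):
        U = np.triu(rng.random((N, N)) < p, 1)   # random upper-triangular edges
        A = (U | U.T).astype(float)              # symmetrize
        P = np.linalg.matrix_power(np.eye(N) + A, N - 1)
        count += np.all(P[off_diag] > 0)
    return count / trials

N = 100
p_star = np.log(N) / N
frac_below = connected_fraction(N, 0.5 * p_star)
frac_above = connected_fraction(N, 2.0 * p_star)
print(frac_below, frac_above)   # rarely connected vs. almost always connected
```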

We will define the Laplacian for a simple (unweighted, undirected) graph. The extension to a weighted graph is straightforward: instead of a 1 or a 0 to indicate whether an edge is present, one can insert a positive real number to indicate its strength. (One can consider the intermediate case -- the possibility of a multiplicity of equally-weighted edges between each pair of points -- and this corresponds exactly to the case of integer weights.) However, the extension of the graph Laplacian to a directed graph is not, as the symmetry of the in-connections and out-connections from a node is an essential feature.

The starting point is to consider a function on the graph, i.e., an assignment x_i of values to each vertex i on the graph. We can think of each value x_i as representing an amount of material (or heat) at the vertex. Now allow this material to diffuse, according to the graph's connectivity. The amount at each vertex i at time t is given by x_i(t). At each time step, some of the material goes to each of the d_i vertices that is connected to i, but also, some of the material at these neighboring vertices flows back to i. So,

dx_i/dt = K ( -d_i x_i + Σ_{j: j~i} x_j ),

where the first term on the right represents the loss to neighboring vertices, and the second term represents the influx. The constant K is irrelevant (it provides an overall temporal scale, and can be set to 1), so we can write

dx/dt = -Lx,

where x is the column matrix of the x_i's and L is the graph Laplacian. The graph Laplacian is given by L = D - A, the difference of the diagonal matrix of the degrees (D) and the adjacency matrix. A justification for calling L a Laplacian is that the above resembles the standard heat (or diffusion) equation: for x(s,t) the temperature of a conducting bar at position s and time t, the evolution is ∂x/∂t = K ∂²x/∂s². The graph Laplacian, like the ordinary Laplacian, is proportional to the difference between the value at a point, and its local average.
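The construction L = D - A and the diffusion dx/dt = -Lx can be sketched numerically. Below, Euler time steps are used on a small made-up graph (the path 1-2-3-4); material placed at one vertex spreads until the function is constant, and the total amount is conserved because the columns of L sum to zero:

```python
import numpy as np

# Euler-step sketch of dx/dt = -Lx, with L = D - A, on the path graph
# 1-2-3-4 (a hypothetical example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A

x = np.array([1.0, 0.0, 0.0, 0.0])   # all material starts at vertex 1
dt = 0.01
for _ in range(5000):
    x = x - dt * (L @ x)

print(x.round(6))   # -> [0.25 0.25 0.25 0.25]
```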
But there is a (conventional) sign difference: the graph Laplacian approximates the negative of the second derivative, while the continuous-system Laplacian, ∂²/∂s², is the (positive) second derivative.

We now would like to solve this equation, given some set of initial conditions x(0). If we have a complete set of eigenvectors φ_m (i.e., a set that spans the vector space of functions on the graph), along with their associated eigenvalues λ_m (with Lφ_m = λ_m φ_m), the solution is immediate. Each φ_m corresponds to a solution that dies down exponentially, with rate λ_m:

(d/dt)(φ_m e^{-λ_m t}) = -λ_m φ_m e^{-λ_m t} = -L(φ_m e^{-λ_m t}).

So we write x(0) in terms of the basis set, x(0) = Σ_m c_m φ_m, and have a solution:

x(t) = Σ_m c_m φ_m e^{-λ_m t}.

The bottom line is that the eigenfunctions of the Laplacian indicate the patterns that die out exponentially, and the eigenfunctions with the lowest nonzero eigenvalues die out the slowest -- and thus characterize the dynamics of functions that evolve on G.

There are some useful variants of the Laplacian that differ in row- or column-normalization. The normalized Laplacian is defined as L_norm = D^{-1/2} L D^{-1/2}. Since L_norm = D^{-1/2}(D - A)D^{-1/2} = I - D^{-1/2} A D^{-1/2}, it is equal to the identity on its diagonal. For a weighted graph with weak connections, L_norm = I - ε D^{-1/2} A D^{-1/2}. Since this is close to the identity, one can approximate repeated applications of L_norm by

(L_norm)^r = (I - ε D^{-1/2} A D^{-1/2})^r ≈ exp(-ε r D^{-1/2} A D^{-1/2}).

L_norm is symmetric, and its eigenvalues can also be shown to be non-negative (see below).

The transition-matrix Laplacian allows for a connection with Markov chains: L_transition = D^{-1} L = I - D^{-1} A. L_transition differs from the standard graph Laplacian in that its rows are divided by the degree. This means that the matrix now represents particle diffusion in the following sense: a particle can choose to leave a node, and if it does so, it chooses any of the edges randomly to go to the next node. To see the difference between this and the graph Laplacian considered above, consider a star graph with N+1 vertices. For the standard graph Laplacian, the function that is constant on all nodes is an eigenvector of eigenvalue 0. For the transition-matrix variant, the distribution that has value N on the central vertex and value 1 on the peripheral ones is preserved by the random walk (it is a left eigenvector of L_transition with eigenvalue 0): on every step, a particle at the central vertex moves to any one of the peripheral ones, but a particle at any peripheral node must move centrally.
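The star-graph contrast can be verified numerically. The sketch below uses the column-normalized random-walk matrix M = A D^{-1} (the Markov-matrix form of the same dynamics; N = 5 is an arbitrary choice):

```python
import numpy as np

# Star graph with N+1 vertices (vertex 0 is the center). The constant
# vector is a null vector of L = D - A; the distribution with weight N
# at the center and 1 at each peripheral vertex is preserved by the
# random walk M = A D^{-1}.
N = 5
A = np.zeros((N + 1, N + 1))
A[0, 1:] = 1
A[1:, 0] = 1
D = np.diag(A.sum(axis=1))
L = D - A

ones = np.ones(N + 1)
print(np.allclose(L @ ones, 0))      # constant vector: eigenvalue 0 of L

M = A @ np.linalg.inv(D)             # particle moves along a random edge
pi = np.concatenate(([float(N)], np.ones(N)))
print(np.allclose(M @ pi, pi))       # stationary distribution of the walk
```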
Eigenstructure of the Laplacian: null space, positive-definiteness

We first determine the eigenvectors corresponding to a zero eigenvalue via elementary means, then show that the other eigenvalues are non-negative, and then use some group theory to determine them in some interesting special cases. We focus on the standard graph Laplacian, but all the arguments work for the other variants.

The null space

A function on the graph is mapped to zero by the Laplacian if its value at every vertex is equal to the average of its neighbors. So a function that is constant on the graph is mapped to zero. Conversely, if a function is not equal to the average value of its nearest neighbors, then it is not mapped to zero.

If a graph is not connected, the Laplacian consists of blocks, one for each connected component. This is because D is diagonal, and A is in blocks. A function that is constant on each component separately (but perhaps has a different value on each component) is therefore mapped to zero by the Laplacian. So the null space of the Laplacian is the set of functions that are constant on each component separately.

Positive-definiteness, the incidence matrix, and representing the graph on a line

The above comments determine the eigenvectors of eigenvalue 0 for the graph Laplacian. With a bit more work, we can show that all of the eigenvalues of the graph Laplacian are non-negative. We do this by representing it as a sum of outer products, and this leads to some other applications and interpretations of the graph Laplacian. But before demonstrating positive-definiteness in this fashion, it is worthwhile noting that this property is to be expected from the diffusion interpretation. Using the negative of the Laplacian as the right side of a differential equation moves the value at a node towards the local average. Since this cannot lead to divergence, no time-dependence can grow without bound, so the negative of the Laplacian cannot have any positive eigenvalues.

To represent the Laplacian as a sum of outer products, we define the incidence matrix. The incidence matrix (or vertex incidence matrix) of a graph is a matrix Q whose rows correspond to the edges of the graph, and whose columns correspond to its vertices. It is defined as follows: for each row (edge) r, which consists of a connection between vertex b and vertex c, set q_rb = 1 and q_rc = -1, and q_rj = 0 for j not in {b, c}. Note that the assignment of +1 or -1 to b or c is arbitrary, so the incidence matrix is defined only up to sign-flips of each row. However, we will only be concerned with properties of Q that are independent of such sign-flips, especially properties of Q^T Q.

In the above case, ordering the edges in lexicographic order along the rows ({1,2}, {1,3}, {1,4}, {1,6}, {2,3}, {3,5}, {4,5}, with the signs in each row arbitrary), the incidence matrix is

        ( 1 -1  0  0  0  0 )
        ( 1  0 -1  0  0  0 )
        ( 1  0  0 -1  0  0 )
   Q =  ( 1  0  0  0  0 -1 )
        ( 0  1 -1  0  0  0 )
        ( 0  0  1  0 -1  0 )
        ( 0  0  0  1 -1  0 )

The basic observation that we need to demonstrate is that L = Q^T Q. To see this, i.e., to see that Q^T Q = D - A: on the diagonal (say in position (i,i)), Q^T Q has a contribution of (±1)² = 1 for

every edge that begins or ends at vertex i, which is the degree of that vertex. Off the diagonal, (Q^T Q)_ij is nonzero only if column i and column j have a nonzero entry in the same position (e.g., row r). This means that there is an edge (the edge corresponding to row r) that starts at i and ends at j, or vice-versa. So these entries in Q must be opposite in sign, and (Q^T Q)_ij = -1, as required by Q^T Q = D - A. Conversely, (Q^T Q)_ij = 0 means that no edge connects i to j, so the adjacency matrix is zero at that location, also as required by Q^T Q = D - A.

Since each row of Q sums to 0, the vector consisting of a column of 1's is in the null space of Q, and hence an eigenvector of Q^T Q with eigenvalue 0. (We already knew this, from the above comment about what is in the null space of the Laplacian.)

The relationship of L to Q now shows that all eigenvalues of L are non-negative. For say λ is an eigenvalue, with eigenvector x. Then x^T L x = x^T Q^T Q x = (Qx)^T (Qx) = |Qx|² ≥ 0, but also x^T L x = x^T λ x = λ x^T x = λ|x|², so λ ≥ 0. The same argument shows that the normalized Laplacian is also positive semidefinite: L_norm = D^{-1/2} L D^{-1/2} = D^{-1/2} Q^T Q D^{-1/2} = (Q D^{-1/2})^T (Q D^{-1/2}). From this, it follows that the eigenvalues of the transition-matrix Laplacian (an asymmetric matrix) are also non-negative, since L_transition = D^{-1} L = D^{-1/2} (D^{-1/2} L D^{-1/2}) D^{1/2} = D^{-1/2} L_norm D^{1/2}. So the transition-matrix Laplacian and the normalized Laplacian are similar matrices, and their eigenvalues are the same.

The relationship L = Q^T Q provides another application of the Laplacian. Writing out x^T L x = x^T Q^T Q x in coordinates yields x^T L x = Σ_{edges (i,j)} (x_i - x_j)². The eigenvector of lowest positive eigenvalue minimizes this quantity, subject to the constraint |x| = 1 (and orthogonality to the null space). That is, the map from vertex i to x_i is the map from the graph to the line that best approximates the graph structure: it minimizes the total squared distance between the vertices that are connected, given a constraint on the total extent of the map.
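Both facts -- that Q^T Q = D - A, and that the eigenvector of lowest positive eigenvalue gives the best map to a line -- can be checked for the 6-vertex example graph (vertices are 0-indexed in the code):

```python
import numpy as np

# Incidence matrix Q of the 6-vertex example graph (edges in
# lexicographic order, row signs arbitrary), the check L = Q^T Q, and
# the spectral map of the graph to a line.
edges = [(0, 1), (0, 2), (0, 3), (0, 5), (1, 2), (2, 4), (3, 4)]
n = 6
Q = np.zeros((len(edges), n))
A = np.zeros((n, n))
for r, (b, c) in enumerate(edges):
    Q[r, b], Q[r, c] = 1.0, -1.0
    A[b, c] = A[c, b] = 1.0
L = np.diag(A.sum(axis=1)) - A
print(np.allclose(Q.T @ Q, L))       # -> True

w, V = np.linalg.eigh(L)             # eigenvalues in ascending order
x = V[:, 1]                          # eigenvector of lowest positive eigenvalue
print(np.allclose(x.sum(), 0.0))     # orthogonal to the constant mode
```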
The idea extends: consider the map M from each vertex j to the row vector v_j = (φ_p(j)/√λ_p, ..., φ_q(j)/√λ_q) (where φ_p, ..., φ_q are the eigenvectors of nonzero eigenvalue); this is an embedding of the graph in a space of dimension q - p + 1. The first coordinate is proportional to the above mapping to the line; each subsequent coordinate is the best possible map to a line that is orthogonal to all previous mappings. A very hand-wavy argument as to why this set of coordinates is a good way to represent the graph: let C be the matrix of coordinates of these vectors, i.e., C_{j,r} = (v_j)_r = φ_r(j)/√λ_r, which consists of the eigenvectors φ_r in columns,

with each column normalized by 1/√λ_r. Then C^T L C = C^T Q^T Q C = C^T (Σ_m λ_m φ_m φ_m^T) C = I. So in some sense, C C^T is like the inverse of the Laplacian, which in turn indicates how easy it is to diffuse between two points. So when two vertices have coordinates (rows of C) with high covariance, they are "close" in the sense of diffusion on the graph.

The nonzero eigenvalues

A simple calculation

One might think that for a graph whose topology is that of a line, the dominant eigenvector of the graph Laplacian is just a linear function on the line. But this can't actually be the case: where the value assigned to a node is equal to the average of its neighbors, the Laplacian goes to zero. So the Laplacian maps a linear function on the graph to a function that is zero everywhere, except at the endpoints. So unless N = 2 or N = 3, this cannot be an eigenvector.

So let's calculate the lowest eigenvector and eigenvalue of the Laplacian for a graph consisting of N points in a line. For N = 2, the eigenvector (trivially) is [-1 1]; for N = 3, almost as trivially, it is [-1 0 1]. But for N ≥ 4, it is not a linear function of position on the graph, and, as the calculations show, it approaches a half cycle of a sinusoid, with a peak at one end of the graph, a zero-crossing in the middle, and a trough at the other end.

We can understand this asymptotic behavior as follows, via the "reflection trick". For this graph, at every point except the endpoints, the graph Laplacian is a discrete approximation to the (negative of the) second derivative: the value at position i is replaced by 2x_i - (x_{i-1} + x_{i+1}). At the endpoints, this is not the case -- BUT if we imagine adjoining reflections of the graph at each endpoint, the second-derivative behavior is recovered. More formally, we guess that the

solution {x_1, ..., x_N} is one section of an infinite line (black segment in figure below), where x_{1-i} = x_i (the mirror) and x_{i+2N} = x_i (periodicity). So now a solution to the graph-Laplacian heat equation is a segment of a solution to the standard Laplacian heat equation on the line, provided that this solution is mirror-symmetric at the endpoints, and periodic with period 2N. Since the heat equation on the line is translation-invariant, its solutions are the eigenfunctions of the translation operator -- the sinusoids. For x(s) = exp(iωs), Lx(s) = -(∂²/∂s²) exp(iωs) = ω² x(s). So the lowest nonzero eigenfunction is the one with the lowest frequency consistent with the mirror-symmetry condition, i.e., a sinusoid with period 2N.

This analysis properly suggests that group-theoretic considerations will help understand the solution of the heat equation on other graphs. If there is a permutation of the vertices that leaves the graph's connectivity unchanged, then it necessarily leaves the Laplacian unchanged as well. So we can immediately calculate the eigenfunctions of the Laplacian on a cyclic graph with N elements -- they are x_j^[r] = exp(2πijr/N). To find the corresponding eigenvalues:

L x_j^[r] = 2 x_j^[r] - x_{j-1}^[r] - x_{j+1}^[r] = (2 - e^{-2πir/N} - e^{+2πir/N}) x_j^[r],

so λ_r = 2 - (e^{-2πir/N} + e^{+2πir/N}) = 2(1 - cos(2πr/N)) = 4 sin²(πr/N). The smallest nonzero eigenvalue corresponds to r = ±1.

Further implications of symmetry

Since we're dealing with graphs, there is the possibility for more interesting (i.e., larger, non-commutative) symmetry groups to be relevant. For the complete graph with N vertices, the full permutation group leaves the graph invariant. So now our task is to find out how the full permutation group acts in the space of functions on N objects. We recall that every finite group G has a specific, finite set of irreducible representations, i.e., the distinct ways in which it can operate in a vector space V. These actions are defined by a set of linear operators U_g in Hom(V, V), which respect the group operation: U_{gh} = U_g U_h.
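The cyclic-graph eigenvalues λ_r = 4 sin²(πr/N) can be checked directly (N = 8 is an arbitrary choice):

```python
import numpy as np

# Cyclic graph with N vertices: the Laplacian eigenvalues should equal
# 4 sin^2(pi r / N), for r = 0, ..., N-1.
N = 8
A = np.zeros((N, N))
for j in range(N):
    A[j, (j + 1) % N] = A[(j + 1) % N, j] = 1.0
L = np.diag(A.sum(axis=1)) - A

computed = np.sort(np.linalg.eigvalsh(L))
predicted = np.sort(4 * np.sin(np.pi * np.arange(N) / N) ** 2)
print(np.allclose(computed, predicted))   # -> True
```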
For any nonzero vector v in that space, the set of images U_g v must span the space. (If they did not span the space, then they would span a subspace of V preserved by G, and U would not be irreducible.) A main difference between commutative groups (for example, the cyclic group) and non-commutative groups (for example, the full permutation group on 3 or more elements) is that non-commutative groups contain irreducible representations of dimension 2 or more.

What are the consequences of this in the current set-up, when a group acts on a graph in a structure-preserving way? First, the action of the group on the graph induces an action of the group on functions on the graph. This means that there is a representation of the group (say, W) in the vector space X of functions on the graph. But also, since the group operation preserves the graph structure, it necessarily preserves the Laplacian. So for any group element g, W_g L = L W_g. Let's say we have an eigenvector φ of L, with Lφ = λφ. Then for any group element g, L(W_g φ) = W_g Lφ = W_g λφ = λ(W_g φ), so W_g φ is also an eigenvector of L, and has the same eigenvalue. Since there may be some irreducible subspaces of X that have dimension 2 or more, this means that two or more eigenvectors of L will be related by the group symmetry, and have the same eigenvalue. So this puts a premium on finding the irreducible components of how G acts in the space X of functions on the graph: for every irreducible component of dimension k, we will find k eigenvectors sharing the same eigenvalue.

Now consider the specific case of the complete graph of N points. The space of functions on the graph also has dimension N. The elements of the permutation group act on these functions via permutation matrices. We can tell that this representation is not irreducible by a simple character calculation. Specifically, the number of copies of an irreducible representation M in a given representation W is (1/#G) Σ_g χ_M(g) χ_W(g), where χ_W(g) = tr(W_g). For permutation matrices, the trace is the number of items that are not relabeled. So for the representation W, χ_W(g) ≥ 0 everywhere, and, for some group elements, the inequality is strict. Now taking M to be the trivial representation E (that maps every group element to the number 1), it follows that there is at least one copy of E inside of W. So W = E ⊕ X. For representations of the full permutation group, there is a classic and general (but very involved) way to show that X is irreducible.
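The character calculation can be carried out by brute force for a small symmetric group (N = 4 here, small enough to enumerate all N! elements):

```python
import itertools
import numpy as np

# Character bookkeeping for the permutation representation W of the
# symmetric group on N points. chi_W(g) is the number of fixed points
# of g; <chi_W, chi_E> = 1 gives one copy of the trivial representation,
# and <chi_W, chi_W> = 2 is consistent with W = E + X, X irreducible.
N = 4
chis = np.array([sum(1 for i in range(N) if perm[i] == i)
                 for perm in itertools.permutations(range(N))], dtype=float)
order = len(chis)                          # N! group elements

copies_of_trivial = chis.sum() / order     # <chi_W, chi_E>
norm_squared = (chis ** 2).sum() / order   # <chi_W, chi_W>
print(copies_of_trivial, norm_squared)     # -> 1.0 2.0
```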
Here we will use an elementary argument, by characterizing the matrices that commute with all elements of W. First, consider W_σ for a particular pair-swap permutation σ = (p q). The corresponding W_σ is a matrix for which (W_σ)_{p,q} = 1, (W_σ)_{q,p} = 1, (W_σ)_{j,j} = 1 for j not in {p, q}, and (W_σ)_{j,k} = 0 otherwise. If a matrix B commutes with W_σ, then it follows from B W_σ = W_σ B that B_{q,p} = B_{p,q}. Therefore, a matrix that commutes with W_σ for all pair-swap permutations σ must have all of its off-diagonal elements equal to the same value (illustrated here for N = 3):

        ( b c c )
   B =  ( c b c )
        ( c c b )

Finally, since the number of irreducible components of W is the number of linearly independent operators that commute with all W_g, there are two such components: one of dimension 1 (that corresponds to the trivial representation, which maps all group elements to the identity) and one of dimension N - 1, which consists of the natural representation in permutation matrices, but with the subspace corresponding to the trivial representation removed. Since the

Laplacian commutes with all of the matrices in this irreducible representation, it must act as a multiple of the identity -- so the Laplacian has N - 1 eigenvectors in this space, and they all have the same eigenvalue. More explicitly: the eigenvector corresponding to the trivial representation is the vector of all 1's, and it has eigenvalue 0. The other eigenvectors consist of any vector whose mean is zero, as this guarantees that they are orthogonal to the constant vector. In this subspace, the Laplacian acts by replacing each element x_i by (N-1)x_i - Σ_{j≠i} x_j = N x_i - Σ_j x_j = N x_i (where we've used the orthogonality, Σ_j x_j = 0). So the eigenvalues in this space are all N.

We can use a similar approach for many other simple graphs. For example, in the star topology, all of the peripheral vertices are equivalent under the action of the full symmetric group; the wagon-wheel topology, the cube topology, and many others can be treated in a similar way. Symmetry arguments are also useful when there is an approximate symmetry, as in this case, the exact eigenvalues and eigenvectors can be considered to be perturbations of the corresponding quantities of the fully symmetric graph. So one can group the eigenvalues and eigenvectors in a meaningful way.

Embedding in a simpler context

It is useful to compare the calculation of the eigenvectors of the Laplacian with basic procedures for exploratory analysis of multidimensional data that arise in a simpler context: principal components analysis and multidimensional scaling. In principal components analysis (PCA), the data already have coordinates, and we look for a dimensionally-reduced set of coordinates that still represents the same data well. Multidimensional scaling is a bridge from PCA to Laplacian embedding: in MDS, one knows the distances between the datapoints but not their coordinates, and seeks a dimensionally-reduced representation. (For the graph Laplacian, one doesn't know distances, one just knows which vertices are connected.)
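The complete-graph spectrum predicted by the symmetry argument -- eigenvalue 0 once, eigenvalue N with multiplicity N - 1 -- can be confirmed numerically (N = 6 is an arbitrary choice):

```python
import numpy as np

# Complete graph on N vertices: the Laplacian has eigenvalue 0 on the
# constant vector, and eigenvalue N with multiplicity N - 1.
N = 6
A = np.ones((N, N)) - np.eye(N)
L = np.diag(A.sum(axis=1)) - A

w = np.sort(np.linalg.eigvalsh(L))
print(w.round(6))    # -> [0. 6. 6. 6. 6. 6.]
```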
This material is adapted from MVAR0.pdf.

Principal Components Analysis

Principal components analysis can be thought of as finding a set of coordinates that do the best job of representing a high-dimensional dataset in a lower dimension. The setup is that we have k observations, each of which yields a data point (or dataset) in an n-dimensional space. Each dataset is an n-element column vector y_j, consisting of the observations y_{1,j}, ..., y_{n,j}; we write a set of these column vectors together as an n × k matrix Y. We seek a representation in a space of p dimensions, where p is much smaller than n and k. That is, we seek a set of p column vectors x_1, ..., x_p, aiming to choose them so that linear

combinations of these p column vectors are good approximations to the original datasets y_j. To formalize this: the set of p column vectors x_1, ..., x_p constitute an unknown n × p matrix, X. We want to choose them so that they explain as much of the data as possible, i.e., that we can find an associated set of coefficients B for which Y - XB is as small as possible, for some p × k matrix B. If p is much smaller than both n and k, then we have found a concise representation of the dataset, since Y is n × k but X and B are n × p and p × k. Geometrically, the columns of X define a p-dimensional subspace that accounts for the data. We can also think of the columns of X as regressors, and that we seek regressors that simultaneously account for the multivariate data. Typically, one carries out PCA for a range of values of p, and then chooses a useful one based on considerations such as the fraction of the data explained, and the intended use (visualization, noise removal, etc.).

We note at the outset that we can't hope to determine X uniquely: alternative X's whose columns span the same space will give an equivalent solution. Put another way, for any invertible p × p matrix T, XB = (XT)(T^{-1}B), so X' = XT and B' = T^{-1}B can replace X and B. One consequence is that we can always assume that the columns of X are orthonormal. The solution is still ambiguous (since in the above, T can still be taken to be a unitary matrix), but this turns out to be very helpful in finding the solutions.

Solution, several ways

Our goal is to minimize R = tr((Y - Y_fit)^T (Y - Y_fit)), where Y_fit = XB and X and B are both unknown. If we knew X, then B could be found immediately: it is the projection of the datasets Y into the subspace spanned by X. We have already seen that this is B = (X^T X)^{-1} X^T Y. Since we can assume that the columns of X are orthonormal, it follows that X^T X = I, and that B = X^T Y, and Y_fit = XX^T Y. Thus,

R = tr((Y - XX^T Y)^T (Y - XX^T Y)) = tr(Y^T Y - Y^T XX^T Y - Y^T XX^T Y + Y^T XX^T XX^T Y) = tr(Y^T Y - Y^T XX^T Y),

where we have again used X^T X = I in the second equality.
Thus, minimizing R is equivalent to maximizing tr(Y^T XX^T Y), subject to the constraint that X^T X = I. Note that because tr(AB) = tr(BA), this is equivalent to maximizing tr(YY^T XX^T), and also to maximizing tr(XX^T YY^T) and tr(X^T YY^T X).

At this point, we note that since YY^T is a symmetric matrix, it (typically) has a full set of eigenvectors and eigenvalues. These are the natural, data-driven coordinates for our problem. It therefore makes sense to write a potential solution for X in terms of these eigenvectors.

We will find that the columns of X must be eigenvectors of YY^T. There are two ways to see this. The first is to write out a possible solution for X in terms of these eigenvectors. This leads to a coordinate-based calculation, which eventually yields the desired result (see the MVAR10-MVAR18 notes for this strategy). Alternatively, we could use the method of Lagrange multipliers, which leads to the same conclusion in a much more systematic way.

In its simplest form, we can seek a solution for p = 1: that is, find the direction that accounts for the largest fraction of the variance. With this direction corresponding to the unknown vector x, this means maximizing tr(YY^T xx^T) = tr(x^T YY^T x) subject to |x|^2 = x^T x = 1. In the Lagrange formalism, this is equivalent to maximizing F = x^T YY^T x − λ x^T x. Setting the derivatives of F with respect to the components of x to zero (i.e., ∂F/∂x_i = 0) yields the eigenvalue equation YY^T x = λx. So the unknown vector x is an eigenvector. We could have anticipated this by expressing YY^T in its eigenbasis: YY^T = Σ_j λ_j v_j v_j^T.

If we try to find a solution for p > 1, the algebra is identical. Here, our constraints (X^T X = I) can be thought of as a matrix of constraints, one for each element of X^T X. Thus, the Lagrange term (a sum of unknown coefficients multiplied by each constraint) can be compactly written as tr(Λ X^T X), for some p×p matrix Λ. Thus, maximizing tr(YY^T XX^T) subject to X^T X = I is equivalent to maximizing F = tr(YY^T XX^T) − tr(Λ X^T X) without constraints on X, and then choosing Λ so that X^T X = I at the maximum. To do this, we calculate ∂F/∂x_{m,i}, and put the resulting n·p equations ∂F/∂x_{m,i} = 0 into matrix form. This yields (see the MVAR10-MVAR18 notes for details)

YY^T X = XΛ.

Now it is obvious that if we choose Λ to be a diagonal matrix consisting of eigenvalues λ_1, ..., λ_p of YY^T, then choosing the columns of X to be the associated eigenvectors satisfies both the minimization equation YY^T X = XΛ and the constraints, because the eigenvectors of a symmetric matrix are orthogonal.

Which eigenvectors and eigenvalues to choose?
Making use of the fact that X satisfies YY^T X = XΛ, it follows that

tr(YY^T XX^T) = tr(XΛX^T) = tr(X^T XΛ) = tr Λ = Σ_{j=1}^p λ_j.

So if one is to choose p eigenvalues, one should choose the p largest ones of YY^T.
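As an illustrative sketch of this recipe (our own toy example with invented data, not code from the notes), numpy can carry out the eigendecomposition of YY^T directly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, p = 20, 500, 2

# Toy data (our own): an approximately rank-2 signal plus a little noise.
Y = rng.normal(size=(n, p)) @ rng.normal(size=(p, k)) + 0.01 * rng.normal(size=(n, k))

# Eigendecomposition of Y Y^T, sorted into descending order of eigenvalue.
evals, evecs = np.linalg.eigh(Y @ Y.T)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

X = evecs[:, :p]    # n x p: eigenvectors of the p largest eigenvalues (orthonormal columns)
B = X.T @ Y         # p x k: coefficients, using B = X^T Y since X^T X = I
Y_fit = X @ B       # best rank-p approximation of Y in the least-squares sense

residual = np.sum((Y - Y_fit) ** 2)
print(residual, evals[p:].sum())   # the residual equals the sum of the discarded eigenvalues
```

The last line illustrates the conclusion above: the unexplained variance is exactly the sum of the eigenvalues left out of X.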

In sum, the best approximation (in the least-squares sense) of an n×k matrix Y by a product XB of an n×p matrix X and a p×k matrix B is to choose the columns of X to be the eigenvectors corresponding to the p largest eigenvalues of YY^T, and to choose B = X^T Y. The unexplained variance is the sum of the remaining eigenvalues of YY^T. An important computational note is that this problem is symmetric in n and k, and this symmetry reflects the fact that the nonzero eigenvalues of YY^T are the same as those of Y^T Y. But when n and k differ greatly in size, one problem may be computationally much easier than the other. This is implemented by Matlab's princomp.m.

Multidimensional scaling

In contrast to the above situation, the setup in multidimensional scaling (MDS) is that we are given a set of dissimilarities d_ij as distances between points whose coordinates are as yet unknown, and we are to determine the coordinates. For example, the d_ij could be the result of a survey of raters who are asked to compare stimuli i and j. But they also could be the vector-space distances between measurements y_i and y_j, i.e., d_ij = |y_i − y_j| (but we are not given the vectors y_i). Our problem is to find a representation of the d_ij as Euclidean distances. That is, we seek a set of vectors x_i = (x_{1,i}, ..., x_{R,i}) for which

d_ij^2 = |x_i − x_j|^2 = Σ_{r=1}^R (x_{r,i} − x_{r,j})^2.   (1)

The embedding dimension R is not known, and we may want to choose a value of R for which eq. (1) is only approximately true. The distances and the coordinates of the x_i are assumed to be real numbers. We are of course only interested in solutions in which the embedding dimension R is substantially less than the number of data points, N.

We use a trick (due, I think, to Kruskal) to turn this problem into an eigenvalue problem. The first observation is that the solution of (1) is non-unique in two ways. First, as with most of the above problems, it is ambiguous up to rotation: for any rotation matrix M, |Mx_i − Mx_j| = |x_i − x_j|. But also, we can add an arbitrary vector b to each of the x_i: |(x_i + b) − (x_j + b)| = |x_i − x_j|. Because of this, we can restrict our search to a set of vectors x_i whose mean is zero. We next note that if eq.
(1) holds and also Σ_i x_i = 0, we can write an equation for the inner products x_i·x_j in terms of the d_ij. Beginning with

d_ij^2 = |x_i − x_j|^2 = (x_i − x_j)·(x_i − x_j) = x_i·x_i + x_j·x_j − 2 x_i·x_j,

we note that

Σ_j d_ij^2 = Σ_j (x_i·x_i + x_j·x_j − 2 x_i·x_j) = N x_i·x_i + S, where S = Σ_k x_k·x_k (the cross term vanishes because Σ_j x_j = 0). Similarly, Σ_i d_ij^2 = N x_j·x_j + S, and Σ_i Σ_j d_ij^2 = 2NS. So, if there are vectors x_i for which eq. (1) holds, then

x_i·x_j = −(1/2) (d_ij^2 − (1/N) Σ_k d_ik^2 − (1/N) Σ_k d_kj^2 + (1/N^2) Σ_k Σ_l d_kl^2).   (2)

We therefore write

G_ij = −(1/2) (d_ij^2 − (1/N) Σ_k d_ik^2 − (1/N) Σ_k d_kj^2 + (1/N^2) Σ_k Σ_l d_kl^2),

which is entirely determined by the given distances, and seek a set of vectors x_i = (x_{1,i}, ..., x_{R,i}) for which x_i·x_j = G_ij, i.e., G_ij = Σ_{r=1}^R x_{r,i} x_{r,j}. This is equivalent to the matrix equation G = X^T X, where each vector x_i forms a column of X. This yields an immediate formal solution: we write G in terms of its normalized eigenvectors and eigenvalues, G = Σ_r λ_r v_r v_r^T, and then take the r-th row of X (the r-th coordinate across points) to be √λ_r v_r^T. The λ_r's, which can be taken in descending order, indicate the importance of each coordinate in the representation (1).

The above allows for a parallel with the use of graph Laplacians to characterize or embed a graph. The elements of G play the same role as the elements of the Laplacian: the rows and columns sum to zero, and the diagonal entries must be positive. The off-diagonal values express distances, but here the correspondence is not so close: in MDS, a zero value of G_ij indicates that the distance between i and j is typical, and positive values connect points that are closer than typical. For the graph Laplacian, zero values indicate points that are disconnected, and negative values indicate nodes that are linked.

Multidimensional scaling as described above works fine provided that all the eigenvalues are non-negative (since we want to find real coordinates). But there is no guarantee that this is the case for multidimensional scaling. This is one reason for using a Laplacian-like approach to deal with similarity data: in the graph-theoretic scenario, all eigenvalues are guaranteed to be non-negative. But the tradeoff is that one is no longer representing distances, just something like distances, as implied by the diffusion idea.
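Here is an illustrative numerical sketch of the construction (our own toy example, not from the notes): starting from squared distances alone, double centering yields G, and the top eigenvectors of G yield coordinates:

```python
import numpy as np

rng = np.random.default_rng(1)
N, R_true = 8, 2

# Hidden coordinates (our own toy data); only squared distances are handed to MDS.
x_true = rng.normal(size=(R_true, N))
D2 = np.sum((x_true[:, :, None] - x_true[:, None, :]) ** 2, axis=0)   # d_ij^2

# Double centering: G_ij = -1/2 (d_ij^2 - row mean - column mean + grand mean).
J = np.eye(N) - np.ones((N, N)) / N
G = -0.5 * J @ D2 @ J

evals, evecs = np.linalg.eigh(G)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

# Coordinates: the r-th coordinate across points is sqrt(lambda_r) v_r;
# here only R_true eigenvalues are (numerically) nonzero.
x_rec = np.sqrt(np.maximum(evals[:R_true], 0))[:, None] * evecs[:, :R_true].T

D2_rec = np.sum((x_rec[:, :, None] - x_rec[:, None, :]) ** 2, axis=0)
print(np.allclose(D2, D2_rec))   # distances are reproduced; points are recovered up to rotation/reflection
```

For distances that truly come from a Euclidean configuration, as here, all eigenvalues of G are non-negative; when some are negative, no Euclidean embedding exists, which is the situation the generalization below addresses.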
In standard multidimensional scaling, the presence of negative eigenvalues indicates that no Euclidean representation is possible (i.e., no distance-preserving embedding in a Euclidean space is possible). Instead, the representation (1) must be generalized to

d_ij^2 = Σ_{r=1}^R ε_r (x_{r,i} − x_{r,j})^2,   (3)

where ε_r = +1 along the Euclidean dimensions (λ_r > 0, with coordinates √λ_r v_r), and ε_r = −1 along the non-Euclidean dimensions (λ_r < 0, with coordinates √(−λ_r) v_r). The non-Euclidean dimensions can be considered to describe an intrinsic aspect of the geometry of the original data. Alternatively, if all that is desired is a representation of the rank order of the distances, it is always possible to cure the non-Euclidean-ness by replacing the original distances d_ij by some power of them, (d_ij)^a. For a power a that is sufficiently close to 0, the non-Euclidean-ness goes away.

Combinatorial measures

We now return to graphs, and consider some examples of combinatorial measures: measures that emphasize counting connections, rather than the algebraic structure of A.

Global clustering coefficient

The global clustering coefficient C is a way of measuring whether the connectivity is mostly local ("highly clustered") or not. It is defined as three times the ratio of the number of triangles to the number of connected triples, where a connected triple is a subset of nodes i, j, and k for which i is connected to j and j is connected to k. The factor of 3 is because every triangle necessarily contains three connected triples (one centered on each of its vertices). A complete graph has C = 1; a triangle-free graph has C = 0. For a random graph (an Erdős–Rényi graph) of N nodes in which nodes are connected with probability p, we can calculate that the expected number of connected triples is N(N−1)(N−2)p^2/2 and the expected number of triangles is N(N−1)(N−2)p^3/6, so the global clustering coefficient is p. More generally, the global clustering coefficient is not normalized for the overall connection density of the graph. It is also not straightforward to extend the idea of a global clustering coefficient to a graph with weighted edges.

Community structure

Community structure is an approach, advanced over the last decade, to analyze graphs that arise in many different contexts. The idea is to attempt to partition the vertices into communities, in such a way that most adjacency relationships are within communities rather than between them. See Newman (PNAS 2006) for an application to social networks.
For an application to fMRI, see Bassett, Porter, and Grafton. We use Porter's normalization conventions but the algebraic approach of Newman.

A community structure consists of an assignment of each vertex i to a community c_i. The extent to which this assignment captures the graph structure is determined by a quality function, typically

Q = Σ_{i,j} (a_ij − p_ij) δ(c_i, c_j),

where δ is the Kronecker delta. The quantities a_ij are the entries in the adjacency matrix. The quantities p_ij correspond to the expected probability of a connection between i and j, subject to a null hypothesis. The natural null hypothesis is p_ij = d_i d_j / (2m), where d_i is the degree of vertex i and m is the total number of edges. This is the expected probability of a connection, given the total number of edges in the graph, the number of edges present at i and at j, and no further organization. Put another way, if you create an adjacency matrix that is as random as possible given a specified set of degrees at each vertex, then its off-diagonal entries would be p_ij = d_i d_j/(2m). Note that 2m = Σ_{i,j} a_ij = Σ_q d_q, since counting the degree of each vertex counts each connection twice.

Q is large if edges occur within a community in a way that is more than chance (a_ij > p_ij when δ(c_i, c_j) = 1). In the trivial case that all vertices are assigned to the same community,

Q = Σ_{i,j} (a_ij − p_ij) = Σ_{i,j} a_ij − (1/2m) Σ_{i,j} d_i d_j = 2m − (2m)^2/(2m) = 0.

Note that any permutation that respects the symmetry of the graph results in a reassignment of communities that leaves Q invariant.

One difficulty in applying this approach is that the general problem of determining the community structure that maximizes Q is intractable; specifically, it is an NP-hard problem, equivalent in difficulty to the traveling-salesman problem. However, finding approximately optimal community structures (an informal concept) is much easier. Here we outline the Newman (2006) approach. The approach is greedy: the first step is to find the best partitioning into two communities; at each subsequent step, the existing community structure is the starting point, and the algorithm seeks to divide one of the existing communities into two. There is no guarantee that the optimal community structure can be identified this way, so this is the first sense in which the algorithm is inexact. ("Greedy" algorithms typically have this property.)
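As an illustrative sketch (our own toy graph, not an example from the notes), both the quality function and the leading-eigenvector heuristic described in what follows can be computed directly:

```python
import numpy as np

# Toy graph (our own example): two triangles joined by a single edge.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
N = 6
A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

d = A.sum(axis=1)            # degrees
two_m = d.sum()              # 2m: each edge counted twice
P = np.outer(d, d) / two_m   # null-model probabilities p_ij = d_i d_j / (2m)
B = A - P                    # difference from the null hypothesis

def Q(c):
    """Quality function Q = sum_ij (a_ij - p_ij) * delta(c_i, c_j)."""
    same = c[:, None] == c[None, :]
    return (B * same).sum()

print(Q(np.zeros(N)))        # single community: Q = 0 (up to rounding)

# Newman-style bisection heuristic: threshold the leading eigenvector of B at zero.
evals, evecs = np.linalg.eigh(B)
s = np.where(evecs[:, np.argmax(evals)] >= 0, 1, -1)
print(s, Q(s))               # recovers the two triangles; Q = 5 up to rounding
```

For this graph the leading eigenvalue of B is √3 ≈ 1.73, and the sign pattern of its eigenvector separates the two triangles, as one would hope.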
At each stage, the algorithm can terminate by indicating that no further subdivision improves Q. Newman's algorithm consists of an explicit eigenvector computation at each stage, but he also suggested ways to refine each stage by a local search, in which individual vertices are reassigned to the alternative community.

The key observation is that, for the special case of dividing a graph into two communities, we can rewrite Q as a quadratic form. Let the community structure be determined by s_i = ±1, where the two signs correspond to the two communities. Then δ(c_i, c_j) = (s_i s_j + 1)/2. As

noted above, Σ_{i,j} (a_ij − p_ij) = 0 (i.e., for the assignment of all vertices to the same community, Q = 0), so

Q = (1/2) Σ_{i,j} (a_ij − p_ij) s_i s_j.

Now, taking the s_i to constitute a column vector s, and B to be the matrix whose entries b_ij = a_ij − p_ij express the difference between the true adjacency matrix and the null-hypothesis matrix, we have Q = (1/2) s^T B s. The problem of finding the optimal community structure is now the problem of maximizing this quadratic form, subject to the constraint that all of the components of s are ±1. If, instead, the constraint were that |s| = 1, we would simply find the eigenvector of B that corresponds to its largest eigenvalue. So here is the second place at which the algorithm is heuristic rather than exact: the Newman procedure identifies this eigenvector s_max, and then sets each s_i to ±1 according to the sign of (s_max)_i. This yields the vector s that is most closely aligned with s_max (in the dot-product sense), but not necessarily the one that yields the largest Q. Of course it makes sense that these are similar (and in specific examples, as checked by an exhaustive search, this is the case), but it is not guaranteed. The procedure to be followed when (s_max)_i = 0 is unspecified, but there are some obvious choices: choose randomly, or do an exhaustive search. The numerical values of (s_max)_i have a meaning: they are the extent to which each vertex contributes to the community structure.

As is the case with the graph Laplacian, the matrix B is left invariant by any relabeling of the graph vertices that leaves the connectivity matrix invariant. So we can use group-theoretic tools to find B's eigenvalues in many simple cases (complete graph, cyclic graph, star graph, etc.).

Importantly, the matrix B could be negative semidefinite: there may be no eigenvectors with positive eigenvalue. In this case, the Newman procedure simply terminates, as it can find no community structure that increases Q above its value for a single-community assignment. As an example, for the complete graph of size N, a_ij = 1 and p_ij = (N−1)(N−1)/(N(N−1)) = (N−1)/N, so B = (1/N)J − I (where J is the all-ones matrix); its eigenvalues are 0 (for the uniform vector) and −1, and B has no positive eigenvalue.
In this case, the algorithm stops, as no eigenvector will increase the quality Q of the community structure. But the situation is more complex: because a true eigenvector of B is replaced by one whose coordinates are ±1, it is possible that B has a positive eigenvalue, but none of the community assignments increase Q. In this case, the algorithm again terminates. An example of this is a graph with a wagon-wheel connectivity and 6 vertices (5 peripheral): B has an

eigenvalue of (√5 − 1)/2 ≈ 0.618; this corresponds to an assignment of 3 vertices to each community, and to a decrease, rather than an increase, in Q.

What may be even more of a concern (though this relates more to the definition of Q than to the algorithm) is that Q may be large even for graphs in which the community structure does not seem relevant. For example, partitioning a cyclic graph of N vertices into two connected halves increases Q by N − 4. But there is no particular reason to choose any of the many ways that one can hemisect the graph; all lead to an equal increase in Q. Put another way, the matrix B has several eigenvectors with identical eigenvalues; they are guaranteed to be identical because of the symmetry of the graph. So this suggests a measure of robustness of the community assignment: if the next-largest eigenvalue of B is close to the largest one, the partitioning is not robust. This also suggests an algorithmic tweak: when there is no gap, or only a small gap, between the largest eigenvalue of B and the next-largest eigenvalue, these top eigenvectors should be considered simultaneously. There are at least two ways of doing this: first, a binary community assignment could be carried out by finding the vector of ±1's that is closest to the subspace that they span; second, one could look for a non-binary partition in that subspace (as in the way that the Fisher discriminant is used for more than binary partitioning).

It is interesting to note that the community approach becomes tractable (i.e., reduces to a set of eigenvalue problems) only if one ignores the combinatorial aspects of the problem, or at least uses the heuristic that they can be approximated by an algebraic approach. It is also interesting to note that the above eigenvalue problem is very similar to the one that arises in the Fisher discriminant, where the problem is to find an axis that maximally separates two clusters. The difference is that in the Fisher problem, one is given the coordinates of the points (and from these, one can compute their distances).
Here, one is only given the distances between the points (effectively 1 if connected, large if not connected), and one needs to put them into a space first, so that one can find directions. Analogous to the Fisher problem, one wonders whether a less greedy approach, in which one allows for a multipart subdivision at a single stage if B has more than one positive eigenvalue, would be a useful generalization.

Linear discriminant analysis, a.k.a. Fisher discriminant

In discriminant analysis, rather than try to find the best coordinates to represent a dataset (as in MDS), we seek the best coordinates to distinguish one subset from another; hence the analogy with graph partitions. The idea extends readily to more than two subsets. Let's say there are a total of n samples of multivariate data X, with the samples tagged as belonging to two subsets: n_1 in the first subset and n_2 in the second. Say the two subsets have means m^[1] = (1/n_1) Σ_i x_i^[1] and m^[2] = (1/n_2) Σ_i x_i^[2], and the global mean is m = (n_1 m^[1] + n_2 m^[2])/(n_1 + n_2), all row vectors. We want to find a linear function of the coordinates that does the best job of separating these two clouds of data. That is, we want to discriminate these subsets by their projections onto a (row) vector v. That means we want to maximize the difference of the

projections of the means while simultaneously minimizing the scatter within the groups, as projected onto v. The setup extends to C classes. The variance between the group means, after projection onto v, is

V_between = Σ_{c=1}^C (n_c/n) (v·(m^[c] − m))^2.

The variance within group c is

V_c = (1/n_c) Σ_i (v·(x_i^[c] − m^[c]))^2,

so the total within-group variance is V_within = Σ_{c=1}^C V_c. We want to find directions that maximize the ratio of V_between to V_within. It doesn't make sense simply to maximize V_between; we could do this in an empty way just by magnifying v. Simultaneously controlling V_within takes care of this, and ensures that we focus on the direction of v, not its size.

To solve the problem, we could try a brute-force method of finding the v that maximizes the ratio V_between/V_within. Or, we could attempt to maximize V_between subject to the constraint that V_within is constant. (The specific constant doesn't matter, since it just multiplies v by a constant.) The latter is more practical. Setting it up as a Lagrange-multiplier problem, our job is to find the stationary points of V_between − λ V_within. Each of the terms is quadratic in v, so the derivatives are linear. This leads to an equation of the form

A z = λ B z   (for z = v^T),

where A = Σ_{c=1}^C (n_c/n) (m^[c] − m)^T (m^[c] − m) is the between-group covariance, and B = Σ_{c=1}^C (1/n_c) Σ_i (x_i^[c] − m^[c])^T (x_i^[c] − m^[c]) is the within-group covariance. A has rank C − 1, since Σ_c n_c (m^[c] − m) = 0 (i.e., the weighted mean of the within-group means is the global mean).

For the two-group case, it is easy to solve. In this case, A has rank 1, so Bz must lie within the one-dimensional range of A, which is spanned by (m^[1] − m)^T; this is necessarily a scalar multiple of (m^[1] − m^[2])^T, and similarly for (m^[2] − m)^T. Since Bz must be proportional to (m^[1] − m^[2])^T, it follows that v^T = z = B^{-1}(m^[1] − m^[2])^T. More generally (C > 2), we seek eigenvectors of B^{-1}A, which are guaranteed to be in the span of the columns of A. These solutions are known as "canonical variates"; they express the variables in which the classes are most cleanly discriminated.

Comparing community structures


Image Representation & Visualization Basic Imaging Algorithms Shape Representation and Analysis. outline mage Vsualzaton mage Vsualzaton mage Representaton & Vsualzaton Basc magng Algorthms Shape Representaton and Analyss outlne mage Representaton & Vsualzaton Basc magng Algorthms Shape Representaton and

More information

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices Internatonal Mathematcal Forum, Vol 7, 2012, no 52, 2549-2554 An Applcaton of the Dulmage-Mendelsohn Decomposton to Sparse Null Space Bases of Full Row Rank Matrces Mostafa Khorramzadeh Department of Mathematcal

More information

Data Representation in Digital Design, a Single Conversion Equation and a Formal Languages Approach

Data Representation in Digital Design, a Single Conversion Equation and a Formal Languages Approach Data Representaton n Dgtal Desgn, a Sngle Converson Equaton and a Formal Languages Approach Hassan Farhat Unversty of Nebraska at Omaha Abstract- In the study of data representaton n dgtal desgn and computer

More information

Brave New World Pseudocode Reference

Brave New World Pseudocode Reference Brave New World Pseudocode Reference Pseudocode s a way to descrbe how to accomplsh tasks usng basc steps lke those a computer mght perform. In ths week s lab, you'll see how a form of pseudocode can be

More information

Analysis of Continuous Beams in General

Analysis of Continuous Beams in General Analyss of Contnuous Beams n General Contnuous beams consdered here are prsmatc, rgdly connected to each beam segment and supported at varous ponts along the beam. onts are selected at ponts of support,

More information

U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 5 Luca Trevisan September 7, 2017

U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 5 Luca Trevisan September 7, 2017 U.C. Bereley CS294: Beyond Worst-Case Analyss Handout 5 Luca Trevsan September 7, 207 Scrbed by Haars Khan Last modfed 0/3/207 Lecture 5 In whch we study the SDP relaxaton of Max Cut n random graphs. Quc

More information

Structure from Motion

Structure from Motion Structure from Moton Structure from Moton For now, statc scene and movng camera Equvalentl, rgdl movng scene and statc camera Lmtng case of stereo wth man cameras Lmtng case of multvew camera calbraton

More information

Unsupervised Learning

Unsupervised Learning Pattern Recognton Lecture 8 Outlne Introducton Unsupervsed Learnng Parametrc VS Non-Parametrc Approach Mxture of Denstes Maxmum-Lkelhood Estmates Clusterng Prof. Danel Yeung School of Computer Scence and

More information

Unsupervised Learning and Clustering

Unsupervised Learning and Clustering Unsupervsed Learnng and Clusterng Why consder unlabeled samples?. Collectng and labelng large set of samples s costly Gettng recorded speech s free, labelng s tme consumng 2. Classfer could be desgned

More information

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009.

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009. Farrukh Jabeen Algorthms 51 Assgnment #2 Due Date: June 15, 29. Assgnment # 2 Chapter 3 Dscrete Fourer Transforms Implement the FFT for the DFT. Descrbed n sectons 3.1 and 3.2. Delverables: 1. Concse descrpton

More information

Recognizing Faces. Outline

Recognizing Faces. Outline Recognzng Faces Drk Colbry Outlne Introducton and Motvaton Defnng a feature vector Prncpal Component Analyss Lnear Dscrmnate Analyss !"" #$""% http://www.nfotech.oulu.f/annual/2004 + &'()*) '+)* 2 ! &

More information

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15 CS434a/541a: Pattern Recognton Prof. Olga Veksler Lecture 15 Today New Topc: Unsupervsed Learnng Supervsed vs. unsupervsed learnng Unsupervsed learnng Net Tme: parametrc unsupervsed learnng Today: nonparametrc

More information

Smoothing Spline ANOVA for variable screening

Smoothing Spline ANOVA for variable screening Smoothng Splne ANOVA for varable screenng a useful tool for metamodels tranng and mult-objectve optmzaton L. Rcco, E. Rgon, A. Turco Outlne RSM Introducton Possble couplng Test case MOO MOO wth Game Theory

More information

CSE 326: Data Structures Quicksort Comparison Sorting Bound

CSE 326: Data Structures Quicksort Comparison Sorting Bound CSE 326: Data Structures Qucksort Comparson Sortng Bound Steve Setz Wnter 2009 Qucksort Qucksort uses a dvde and conquer strategy, but does not requre the O(N) extra space that MergeSort does. Here s the

More information

Sorting Review. Sorting. Comparison Sorting. CSE 680 Prof. Roger Crawfis. Assumptions

Sorting Review. Sorting. Comparison Sorting. CSE 680 Prof. Roger Crawfis. Assumptions Sortng Revew Introducton to Algorthms Qucksort CSE 680 Prof. Roger Crawfs Inserton Sort T(n) = Θ(n 2 ) In-place Merge Sort T(n) = Θ(n lg(n)) Not n-place Selecton Sort (from homework) T(n) = Θ(n 2 ) In-place

More information

Intro. Iterators. 1. Access

Intro. Iterators. 1. Access Intro Ths mornng I d lke to talk a lttle bt about s and s. We wll start out wth smlartes and dfferences, then we wll see how to draw them n envronment dagrams, and we wll fnsh wth some examples. Happy

More information

Polyhedral Compilation Foundations

Polyhedral Compilation Foundations Polyhedral Complaton Foundatons Lous-Noël Pouchet pouchet@cse.oho-state.edu Dept. of Computer Scence and Engneerng, the Oho State Unversty Feb 8, 200 888., Class # Introducton: Polyhedral Complaton Foundatons

More information

Biostatistics 615/815

Biostatistics 615/815 The E-M Algorthm Bostatstcs 615/815 Lecture 17 Last Lecture: The Smplex Method General method for optmzaton Makes few assumptons about functon Crawls towards mnmum Some recommendatons Multple startng ponts

More information

CSE 326: Data Structures Quicksort Comparison Sorting Bound

CSE 326: Data Structures Quicksort Comparison Sorting Bound CSE 326: Data Structures Qucksort Comparson Sortng Bound Bran Curless Sprng 2008 Announcements (5/14/08) Homework due at begnnng of class on Frday. Secton tomorrow: Graded homeworks returned More dscusson

More information

Content Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers

Content Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers IOSR Journal of Electroncs and Communcaton Engneerng (IOSR-JECE) e-issn: 78-834,p- ISSN: 78-8735.Volume 9, Issue, Ver. IV (Mar - Apr. 04), PP 0-07 Content Based Image Retreval Usng -D Dscrete Wavelet wth

More information

K-means and Hierarchical Clustering

K-means and Hierarchical Clustering Note to other teachers and users of these sldes. Andrew would be delghted f you found ths source materal useful n gvng your own lectures. Feel free to use these sldes verbatm, or to modfy them to ft your

More information

The Codesign Challenge

The Codesign Challenge ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.

More information

An Entropy-Based Approach to Integrated Information Needs Assessment

An Entropy-Based Approach to Integrated Information Needs Assessment Dstrbuton Statement A: Approved for publc release; dstrbuton s unlmted. An Entropy-Based Approach to ntegrated nformaton Needs Assessment June 8, 2004 Wllam J. Farrell Lockheed Martn Advanced Technology

More information

Face Recognition University at Buffalo CSE666 Lecture Slides Resources:

Face Recognition University at Buffalo CSE666 Lecture Slides Resources: Face Recognton Unversty at Buffalo CSE666 Lecture Sldes Resources: http://www.face-rec.org/algorthms/ Overvew of face recognton algorthms Correlaton - Pxel based correspondence between two face mages Structural

More information

Edge Detection in Noisy Images Using the Support Vector Machines

Edge Detection in Noisy Images Using the Support Vector Machines Edge Detecton n Nosy Images Usng the Support Vector Machnes Hlaro Gómez-Moreno, Saturnno Maldonado-Bascón, Francsco López-Ferreras Sgnal Theory and Communcatons Department. Unversty of Alcalá Crta. Madrd-Barcelona

More information

An Optimal Algorithm for Prufer Codes *

An Optimal Algorithm for Prufer Codes * J. Software Engneerng & Applcatons, 2009, 2: 111-115 do:10.4236/jsea.2009.22016 Publshed Onlne July 2009 (www.scrp.org/journal/jsea) An Optmal Algorthm for Prufer Codes * Xaodong Wang 1, 2, Le Wang 3,

More information

VISUAL SELECTION OF SURFACE FEATURES DURING THEIR GEOMETRIC SIMULATION WITH THE HELP OF COMPUTER TECHNOLOGIES

VISUAL SELECTION OF SURFACE FEATURES DURING THEIR GEOMETRIC SIMULATION WITH THE HELP OF COMPUTER TECHNOLOGIES UbCC 2011, Volume 6, 5002981-x manuscrpts OPEN ACCES UbCC Journal ISSN 1992-8424 www.ubcc.org VISUAL SELECTION OF SURFACE FEATURES DURING THEIR GEOMETRIC SIMULATION WITH THE HELP OF COMPUTER TECHNOLOGIES

More information

3D vector computer graphics

3D vector computer graphics 3D vector computer graphcs Paolo Varagnolo: freelance engneer Padova Aprl 2016 Prvate Practce ----------------------------------- 1. Introducton Vector 3D model representaton n computer graphcs requres

More information

Active Contours/Snakes

Active Contours/Snakes Actve Contours/Snakes Erkut Erdem Acknowledgement: The sldes are adapted from the sldes prepared by K. Grauman of Unversty of Texas at Austn Fttng: Edges vs. boundares Edges useful sgnal to ndcate occludng

More information

Electrical analysis of light-weight, triangular weave reflector antennas

Electrical analysis of light-weight, triangular weave reflector antennas Electrcal analyss of lght-weght, trangular weave reflector antennas Knud Pontoppdan TICRA Laederstraede 34 DK-121 Copenhagen K Denmark Emal: kp@tcra.com INTRODUCTION The new lght-weght reflector antenna

More information

CSCI 104 Sorting Algorithms. Mark Redekopp David Kempe

CSCI 104 Sorting Algorithms. Mark Redekopp David Kempe CSCI 104 Sortng Algorthms Mark Redekopp Davd Kempe Algorthm Effcency SORTING 2 Sortng If we have an unordered lst, sequental search becomes our only choce If we wll perform a lot of searches t may be benefcal

More information

TN348: Openlab Module - Colocalization

TN348: Openlab Module - Colocalization TN348: Openlab Module - Colocalzaton Topc The Colocalzaton module provdes the faclty to vsualze and quantfy colocalzaton between pars of mages. The Colocalzaton wndow contans a prevew of the two mages

More information

APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT

APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT 3. - 5. 5., Brno, Czech Republc, EU APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT Abstract Josef TOŠENOVSKÝ ) Lenka MONSPORTOVÁ ) Flp TOŠENOVSKÝ

More information

Parallel Numerics. 1 Preconditioning & Iterative Solvers (From 2016)

Parallel Numerics. 1 Preconditioning & Iterative Solvers (From 2016) Technsche Unverstät München WSe 6/7 Insttut für Informatk Prof. Dr. Thomas Huckle Dpl.-Math. Benjamn Uekermann Parallel Numercs Exercse : Prevous Exam Questons Precondtonng & Iteratve Solvers (From 6)

More information

Math Homotopy Theory Additional notes

Math Homotopy Theory Additional notes Math 527 - Homotopy Theory Addtonal notes Martn Frankland February 4, 2013 The category Top s not Cartesan closed. problem. In these notes, we explan how to remedy that 1 Compactly generated spaces Ths

More information

Outline. Discriminative classifiers for image recognition. Where in the World? A nearest neighbor recognition example 4/14/2011. CS 376 Lecture 22 1

Outline. Discriminative classifiers for image recognition. Where in the World? A nearest neighbor recognition example 4/14/2011. CS 376 Lecture 22 1 4/14/011 Outlne Dscrmnatve classfers for mage recognton Wednesday, Aprl 13 Krsten Grauman UT-Austn Last tme: wndow-based generc obect detecton basc ppelne face detecton wth boostng as case study Today:

More information

Computer Animation and Visualisation. Lecture 4. Rigging / Skinning

Computer Animation and Visualisation. Lecture 4. Rigging / Skinning Computer Anmaton and Vsualsaton Lecture 4. Rggng / Sknnng Taku Komura Overvew Sknnng / Rggng Background knowledge Lnear Blendng How to decde weghts? Example-based Method Anatomcal models Sknnng Assume

More information

EECS 730 Introduction to Bioinformatics Sequence Alignment. Luke Huan Electrical Engineering and Computer Science

EECS 730 Introduction to Bioinformatics Sequence Alignment. Luke Huan Electrical Engineering and Computer Science EECS 730 Introducton to Bonformatcs Sequence Algnment Luke Huan Electrcal Engneerng and Computer Scence http://people.eecs.ku.edu/~huan/ HMM Π s a set of states Transton Probabltes a kl Pr( l 1 k Probablty

More information

Loop Transformations for Parallelism & Locality. Review. Scalar Expansion. Scalar Expansion: Motivation

Loop Transformations for Parallelism & Locality. Review. Scalar Expansion. Scalar Expansion: Motivation Loop Transformatons for Parallelsm & Localty Last week Data dependences and loops Loop transformatons Parallelzaton Loop nterchange Today Scalar expanson for removng false dependences Loop nterchange Loop

More information

Ramsey numbers of cubes versus cliques

Ramsey numbers of cubes versus cliques Ramsey numbers of cubes versus clques Davd Conlon Jacob Fox Choongbum Lee Benny Sudakov Abstract The cube graph Q n s the skeleton of the n-dmensonal cube. It s an n-regular graph on 2 n vertces. The Ramsey

More information

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like:

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like: Self-Organzng Maps (SOM) Turgay İBRİKÇİ, PhD. Outlne Introducton Structures of SOM SOM Archtecture Neghborhoods SOM Algorthm Examples Summary 1 2 Unsupervsed Hebban Learnng US Hebban Learnng, Cntd 3 A

More information

Laplacian Eigenmap for Image Retrieval

Laplacian Eigenmap for Image Retrieval Laplacan Egenmap for Image Retreval Xaofe He Partha Nyog Department of Computer Scence The Unversty of Chcago, 1100 E 58 th Street, Chcago, IL 60637 ABSTRACT Dmensonalty reducton has been receved much

More information

Fitting & Matching. Lecture 4 Prof. Bregler. Slides from: S. Lazebnik, S. Seitz, M. Pollefeys, A. Effros.

Fitting & Matching. Lecture 4 Prof. Bregler. Slides from: S. Lazebnik, S. Seitz, M. Pollefeys, A. Effros. Fttng & Matchng Lecture 4 Prof. Bregler Sldes from: S. Lazebnk, S. Setz, M. Pollefeys, A. Effros. How do we buld panorama? We need to match (algn) mages Matchng wth Features Detect feature ponts n both

More information

Report on On-line Graph Coloring

Report on On-line Graph Coloring 2003 Fall Semester Comp 670K Onlne Algorthm Report on LO Yuet Me (00086365) cndylo@ust.hk Abstract Onlne algorthm deals wth data that has no future nformaton. Lots of examples demonstrate that onlne algorthm

More information

Wishing you all a Total Quality New Year!

Wishing you all a Total Quality New Year! Total Qualty Management and Sx Sgma Post Graduate Program 214-15 Sesson 4 Vnay Kumar Kalakband Assstant Professor Operatons & Systems Area 1 Wshng you all a Total Qualty New Year! Hope you acheve Sx sgma

More information

AP PHYSICS B 2008 SCORING GUIDELINES

AP PHYSICS B 2008 SCORING GUIDELINES AP PHYSICS B 2008 SCORING GUIDELINES General Notes About 2008 AP Physcs Scorng Gudelnes 1. The solutons contan the most common method of solvng the free-response questons and the allocaton of ponts for

More information

Sequential search. Building Java Programs Chapter 13. Sequential search. Sequential search

Sequential search. Building Java Programs Chapter 13. Sequential search. Sequential search Sequental search Buldng Java Programs Chapter 13 Searchng and Sortng sequental search: Locates a target value n an array/lst by examnng each element from start to fnsh. How many elements wll t need to

More information

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr)

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr) Helsnk Unversty Of Technology, Systems Analyss Laboratory Mat-2.08 Independent research projects n appled mathematcs (3 cr) "! #$&% Antt Laukkanen 506 R ajlaukka@cc.hut.f 2 Introducton...3 2 Multattrbute

More information

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics Introducton G10 NAG Fortran Lbrary Chapter Introducton G10 Smoothng n Statstcs Contents 1 Scope of the Chapter... 2 2 Background to the Problems... 2 2.1 Smoothng Methods... 2 2.2 Smoothng Splnes and Regresson

More information

Detection of an Object by using Principal Component Analysis

Detection of an Object by using Principal Component Analysis Detecton of an Object by usng Prncpal Component Analyss 1. G. Nagaven, 2. Dr. T. Sreenvasulu Reddy 1. M.Tech, Department of EEE, SVUCE, Trupath, Inda. 2. Assoc. Professor, Department of ECE, SVUCE, Trupath,

More information

A SYSTOLIC APPROACH TO LOOP PARTITIONING AND MAPPING INTO FIXED SIZE DISTRIBUTED MEMORY ARCHITECTURES

A SYSTOLIC APPROACH TO LOOP PARTITIONING AND MAPPING INTO FIXED SIZE DISTRIBUTED MEMORY ARCHITECTURES A SYSOLIC APPROACH O LOOP PARIIONING AND MAPPING INO FIXED SIZE DISRIBUED MEMORY ARCHIECURES Ioanns Drosts, Nektaros Kozrs, George Papakonstantnou and Panayots sanakas Natonal echncal Unversty of Athens

More information

Support Vector Machines. CS534 - Machine Learning

Support Vector Machines. CS534 - Machine Learning Support Vector Machnes CS534 - Machne Learnng Perceptron Revsted: Lnear Separators Bnar classfcaton can be veed as the task of separatng classes n feature space: b > 0 b 0 b < 0 f() sgn( b) Lnear Separators

More information

Exercises (Part 4) Introduction to R UCLA/CCPR. John Fox, February 2005

Exercises (Part 4) Introduction to R UCLA/CCPR. John Fox, February 2005 Exercses (Part 4) Introducton to R UCLA/CCPR John Fox, February 2005 1. A challengng problem: Iterated weghted least squares (IWLS) s a standard method of fttng generalzed lnear models to data. As descrbed

More information

Lobachevsky State University of Nizhni Novgorod. Polyhedron. Quick Start Guide

Lobachevsky State University of Nizhni Novgorod. Polyhedron. Quick Start Guide Lobachevsky State Unversty of Nzhn Novgorod Polyhedron Quck Start Gude Nzhn Novgorod 2016 Contents Specfcaton of Polyhedron software... 3 Theoretcal background... 4 1. Interface of Polyhedron... 6 1.1.

More information

Any Pair of 2D Curves Is Consistent with a 3D Symmetric Interpretation

Any Pair of 2D Curves Is Consistent with a 3D Symmetric Interpretation Symmetry 2011, 3, 365-388; do:10.3390/sym3020365 OPEN ACCESS symmetry ISSN 2073-8994 www.mdp.com/journal/symmetry Artcle Any Par of 2D Curves Is Consstent wth a 3D Symmetrc Interpretaton Tadamasa Sawada

More information

Circuit Analysis I (ENGR 2405) Chapter 3 Method of Analysis Nodal(KCL) and Mesh(KVL)

Circuit Analysis I (ENGR 2405) Chapter 3 Method of Analysis Nodal(KCL) and Mesh(KVL) Crcut Analyss I (ENG 405) Chapter Method of Analyss Nodal(KCL) and Mesh(KVL) Nodal Analyss If nstead of focusng on the oltages of the crcut elements, one looks at the oltages at the nodes of the crcut,

More information

Network Topologies: Analysis And Simulations

Network Topologies: Analysis And Simulations Networ Topologes: Analyss And Smulatons MARJAN STERJEV and LJUPCO KOCAREV Insttute for Nonlnear Scence Unversty of Calforna San Dego, 95 Glman Drve, La Jolla, CA 993-4 USA Abstract:-In ths paper we present

More information

CHARUTAR VIDYA MANDAL S SEMCOM Vallabh Vidyanagar

CHARUTAR VIDYA MANDAL S SEMCOM Vallabh Vidyanagar CHARUTAR VIDYA MANDAL S SEMCOM Vallabh Vdyanagar Faculty Name: Am D. Trved Class: SYBCA Subject: US03CBCA03 (Advanced Data & Fle Structure) *UNIT 1 (ARRAYS AND TREES) **INTRODUCTION TO ARRAYS If we want

More information