The Random Subgraph Model for the Analysis of an Ecclesiastical Network in Merovingian Gaul
Charles Bouveyron
Laboratoire MAP5, UMR CNRS 8145, Université Paris Descartes

Joint work with Y. Jernite, P. Latouche, P. Rivera, L. Jegou & S. Lamassé
Outline

- Introduction
- The random subgraph model (RSM)
- Model inference
- Numerical experiments
- Analysis of an ecclesiastical network
- Conclusion
Introduction

The analysis of networks is a recent but increasingly important field in statistical learning, with applications in domains ranging from biology to history:

- biology: analysis of gene regulation processes,
- social sciences: analysis of political blogs,
- history: visualization of medieval social networks.

Two main problems are currently well addressed: visualization of the networks and clustering of the network nodes.

Network comparison is a still emerging problem in statistical learning. It is mainly addressed through graph structure comparison, and is limited to binary networks.
Figure: Clustering of network nodes: communities (left) vs. structures with hubs (right).
Introduction

Key works in probabilistic models:

- stochastic block model (SBM) by Nowicki and Snijders (2001),
- latent space model by Hoff, Handcock and Raftery (2002),
- latent cluster model by Handcock, Raftery and Tantrum (2007),
- mixed membership SBM (MMSBM) by Airoldi et al. (2008),
- mixture of experts for LCM by Gormley and Murphy (2010),
- MMSBM for dynamic networks by Xing et al. (2010),
- overlapping SBM (OSBM) by Latouche et al. (2011).

A good overview is given in: M. Salter-Townshend, A. White, I. Gollini and T. B. Murphy, "Review of Statistical Network Analysis: Models, Algorithms, and Software", Statistical Analysis and Data Mining, Vol. 5(4), 2012.
Introduction: the historical problem

Our colleagues from the LAMOP team were interested in the following question: was the Church organized in the same way within the different kingdoms of Merovingian Gaul?

To this end, they built a relational database:

- from the written acts of ecclesiastical councils that took place in Gaul during the 6th century (480-614),
- those acts report who attended (bishops, kings, dukes, priests, monks, ...) and what questions (regarding Church, faith, ...) were discussed,
- they also allow characterizing the type of relationship between individuals,
- it took 18 months to build the database.
Introduction: the historical problem

The database contains:

- 1331 individuals (mostly clergymen) who participated in ecclesiastical councils in Gaul between 480 and 614,
- 4 types of relationships between individuals (positive, negative, variable or neutral),
- each individual belongs to one of the 5 regions of Gaul: 3 kingdoms (Austrasia, Burgundy and Neustria) and 2 provinces (Aquitaine and Provence),
- additional information: social positions, family relationships, birth and death dates, offices held, council dates, ...
Figure: Adjacency matrix of the ecclesiastical network, sorted by regions (Neustria, Provence, Unknown, Aquitaine, Austrasia, Burgundy).
Introduction

Expected difficulties:

- existing approaches cannot analyze networks with categorical edges and a partition into subgraphs,
- comparison of subgraphs has, to our knowledge, not been addressed in this context,
- a source effect is expected due to the overrepresentation of some places (Neustria, through the Ten Books of History of Gregory of Tours) or individuals (hagiographies).

Our approach:

- we consider directed networks with typed (categorical) edges for which a partition into subgraphs is known,
- we base the comparison on the cluster organization of the subgraphs,
- we propose an extension of SBM which takes typed edges and subgraphs into account,
- subgraph comparison is then possible afterward using the model parameters.
The random subgraph model (RSM)

Before the maths, an example of an RSM network. We observe:

- the partition of the network into S = 2 subgraphs (node shape),
- the presence A_ij of directed edges between the N nodes,
- the type X_ij ∈ {1, ..., C} of the edges (C = 3, edge color).

Figure: Example of an RSM network.

We search for a partition of the nodes into K = 3 groups (node color), which overlap with the partition into subgraphs.
The random subgraph model (RSM)

The network (represented by its adjacency matrix X) is assumed to be generated as follows:

- the presence of an edge between nodes i and j is such that A_ij ~ B(γ_{s_i s_j}), where s_i ∈ {1, ..., S} indicates the (observed) subgraph of node i,
- each node i is also associated with an (unobserved) group among K according to Z_i ~ M(α_{s_i}), where α_s ∈ [0,1]^K and ∑_{k=1}^K α_sk = 1,
- each edge X_ij can finally be of C different (observed) types, such that X_ij | A_ij Z_ik Z_jl = 1 ~ M(Π_kl), where Π_kl ∈ [0,1]^C and ∑_{c=1}^C Π_klc = 1.
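The generative process above can be sketched in a few lines of Python. All names, the round-robin assignment of nodes to subgraphs, and the 0-codes-no-edge convention are illustrative choices, not part of the talk:

```python
import random

def simulate_rsm(N, S, K, C, gamma, alpha, Pi, seed=0):
    """Simulate a directed network from the RSM generative process.

    gamma[r][s] : edge probability between subgraphs r and s
    alpha[s]    : cluster proportions within subgraph s
    Pi[k][l]    : edge-type probabilities between clusters k and l
    """
    rng = random.Random(seed)
    s_ = [i % S for i in range(N)]            # observed subgraph of each node
    # latent cluster of each node, drawn from the multinomial M(alpha_{s_i})
    Z = [rng.choices(range(K), weights=alpha[s_[i]])[0] for i in range(N)]
    X = [[0] * N for _ in range(N)]           # 0 encodes "no edge"
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            # presence of an edge: A_ij ~ B(gamma_{s_i s_j})
            if rng.random() < gamma[s_[i]][s_[j]]:
                # edge type: X_ij | A_ij = 1 ~ M(Pi_{Z_i Z_j}), types coded 1..C
                X[i][j] = 1 + rng.choices(range(C), weights=Pi[Z[i]][Z[j]])[0]
    return s_, Z, X
```

With degenerate parameters (all edges present, one edge type with probability one) the simulation is deterministic, which makes the sketch easy to sanity-check.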
The random subgraph model (RSM)

Table: Summary of the notations.

- X: adjacency matrix; X_ij ∈ {0, ..., C} indicates the edge type,
- A: binary matrix; A_ij = 1 indicates the presence of an edge,
- Z: binary matrix; Z_ik = 1 indicates that i belongs to cluster k,
- N: number of vertices in the network,
- K: number of latent clusters,
- S: number of subgraphs,
- C: number of edge types,
- α: α_sk is the proportion of cluster k in subgraph s,
- Π: Π_klc is the probability of having an edge of type c between vertices of clusters k and l,
- γ: γ_rs is the probability of having an edge between vertices of subgraphs r and s.

The binary matrix A has entries A_ij = 1 ⟺ X_ij ≠ 0, and the observed partition P induces a decomposition of the graph into subgraphs.
The random subgraph model (RSM)

Remark 1:

- the RSM model separates the roles of the known partition and the latent clusters,
- this was motivated by historical assumptions on the creation of relationships during the 6th century,
- indeed, the possibilities of connection were preponderant over the type of connection, and mainly dependent on geography.

Remark 2:

- an alternative approach would consist in allowing X_ij to depend directly on both the latent clusters and the partition,
- however, this would dramatically increase the number of model parameters (K² S² (C+1) + SK instead of S² + K² C + SK),
- if S = 6, K = 6 and C = 4, then the alternative approach has 6516 parameters while RSM has only 216.
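The parameter counts in Remark 2 follow directly from the two formulas given there; a quick check (the function names are ours):

```python
def n_params_alternative(S, K, C):
    # X_ij depends on both the latent clusters and the partition:
    # K^2 * S^2 * (C + 1) + S * K parameters
    return K**2 * S**2 * (C + 1) + S * K

def n_params_rsm(S, K, C):
    # RSM: gamma (S^2 entries) + Pi (K^2 * C entries) + alpha (S * K entries)
    return S**2 + K**2 * C + S * K

print(n_params_alternative(6, 6, 4))  # 6516
print(n_params_rsm(6, 6, 4))          # 216
```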
The random subgraph model (RSM)

We consider a Bayesian framework. The model is fully defined by its joint distribution:

p(X, A, Z | α, γ, Π) = p(X | A, Z, Π) p(A | γ) p(Z | α),

which we complete with conjugate prior distributions for the model parameters:

- the prior distribution for α is p(α_s) = Dir(χ_s),
- the prior distribution for γ is p(γ_rs) = Beta(a_rs, b_rs),
- the prior distribution for Π is p(Π_kl) = Dir(Ξ_kl).
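As a small illustration, these conjugate priors can be sampled with the standard library only; the Dirichlet draw via normalized Gamma variates is a standard construction, and all hyper-parameter values below are placeholders:

```python
import random

rng = random.Random(0)

def sample_dirichlet(chi):
    """Draw from Dir(chi) by normalizing independent Gamma(chi_k, 1) draws."""
    g = [rng.gammavariate(c, 1.0) for c in chi]
    total = sum(g)
    return [x / total for x in g]

# gamma_rs ~ Beta(a_rs, b_rs); alpha_s ~ Dir(chi_s); Pi_kl ~ Dir(Xi_kl)
gamma_rs = rng.betavariate(1.0, 1.0)      # flat Beta prior on an edge probability
alpha_s = sample_dirichlet([1.0] * 3)     # flat Dirichlet over K = 3 clusters
Pi_kl = sample_dirichlet([1.0] * 4)       # flat Dirichlet over C = 4 edge types
```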
The random subgraph model (RSM)

Figure: Graphical representation of the RSM model (hyper-parameters χ, (a, b), Ξ; parameters α, γ, Π; latent variables Z_i, Z_j; observed partition P, presence A_ij and type X_ij).
Model inference

Due to the Bayesian framework introduced above:

- we aim at estimating the posterior distribution p(Z, α, γ, Π | X, A), which in turn will allow us to compute MAP estimates of Z and (α, γ, Π),
- as expected, this distribution is not tractable and approximate inference procedures are required,
- MCMC methods are obviously an option, but they scale poorly with sample size.

We chose variational approaches:

- because they can deal with large networks (N > 1000),
- recent theoretical results (Celisse et al., 2012; Mariadassou and Matias, 2013) gave new insights into the convergence properties of variational approaches in this context.
The VBEM algorithm for RSM

Variational Bayesian inference in our case: we aim at approximating the posterior distribution p(Z, α, γ, Π | X, A). We therefore search for the approximation q(Z, α, γ, Π) which maximizes the lower bound L(q), where

log p(X, A) = L(q) + KL(q || p(· | X, A)),

and q is assumed to factorize as

q(Z, α, γ, Π) = ∏_i q(Z_i) ∏_s q(α_s) ∏_{r,s} q(γ_rs) ∏_{k,l} q(Π_kl).

The VBEM algorithm for RSM alternates:

- E step: compute the update parameters τ_i of q(Z_i),
- M step: compute the update parameters χ, (a, b), Ξ of q(α_s), q(γ_rs) and q(Π_kl) respectively.
Initialization and choice of K

Initialization of the VBEM algorithm:

- VBEM is known to be sensitive to its initialization,
- we propose a strategy based on several runs of the k-means algorithm with a specific distance:

d(i, j) = ∑_{h=1}^N δ(X_ih ≠ X_jh) A_ih A_jh + ∑_{h=1}^N δ(X_hi ≠ X_hj) A_hi A_hj.

Choice of the number K of groups:

- once the VBEM algorithm has converged, the lower bound L(q) is a good approximation of the integrated log-likelihood log p(X, A),
- we can thus use L(q) as a model selection criterion for choosing K,
- if computed right after the M step, the lower bound reduces to:

L(q) = ∑_{r,s} log( B(a_rs, b_rs) / B(a⁰_rs, b⁰_rs) ) + ∑_{s=1}^S log( C(χ_s) / C(χ⁰_s) ) + ∑_{k,l} log( C(Ξ_kl) / C(Ξ⁰_kl) ) − ∑_{i=1}^N ∑_{k=1}^K τ_ik log(τ_ik).
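The initialization distance d(i, j) counts the common neighbours that i and j reach (or are reached by) through edges of different types. A direct sketch, assuming the combined encoding X_ij ∈ {0, ..., C} with 0 meaning no edge:

```python
def rsm_distance(X, i, j):
    """Distance between nodes i and j used for the k-means initialization:
    number of third nodes h such that both i and j have an outgoing (resp.
    incoming) edge to (from) h, but of different types."""
    N = len(X)
    d = 0
    for h in range(N):
        # outgoing edges: both present (A_ih = A_jh = 1) but of different types
        if X[i][h] > 0 and X[j][h] > 0 and X[i][h] != X[j][h]:
            d += 1
        # incoming edges, symmetrically
        if X[h][i] > 0 and X[h][j] > 0 and X[h][i] != X[h][j]:
            d += 1
    return d
```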
30 Outline Introduction The random subgraph model (RSM) Model inference Numerical experiments Analysis of an ecclesiastical network Conclusion 21
Experimental setup

We considered 3 different situations:

- S1: network without subgraphs and with a preponderant proportion of edges of type 1,
- S2: network without subgraphs and with balanced proportions of the three edge types,
- S3: network with 3 subgraphs and with balanced proportions of the three edge types.

Global setup: in all cases the number of (unobserved) groups is K = 3 and the network size is N = 100. We use the adjusted Rand index (ARI) to evaluate the clustering quality (and thus the model fit).
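The ARI can be computed from the pair-counting (Hubert-Arabie) formula; a self-contained sketch:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand index between two partitions of the same n items."""
    n = len(labels_a)
    contingency = Counter(zip(labels_a, labels_b))
    sum_cells = sum(comb(c, 2) for c in contingency.values())
    sum_rows = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_cols = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_rows * sum_cols / comb(n, 2)
    max_index = (sum_rows + sum_cols) / 2
    if max_index == expected:   # degenerate partitions (e.g. all singletons)
        return 1.0 if sum_cells == max_index else 0.0
    return (sum_cells - expected) / (max_index - expected)

print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0, identical up to relabelling
```

The index is 1 for identical partitions (up to label permutation) and has expected value 0 for independent random partitions.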
Choice of the number K of groups

First, a model selection study:

- we aim at validating the use of L(q) as a model selection criterion,
- we simulated 50 RSM networks according to scenario 1 with N = 100, and applied our VBEM algorithm for different values of K (K = 2, ..., 5),
- the actual value is K = 3.
Figure: Lower bound L (left panel) and ARI (right panel) averaged over 50 networks generated with the parameters of the first scenario.
Comparison with other SBM-based approaches

Second, a comparison with other SBM-based methods:

- binary SBM: the original SBM algorithm applied to a collapsed version of the data (only the presence of edges); the mixer package was used,
- binary SBM (type 1, 2 or 3): the original SBM algorithm applied to a collapsed version of the data (only edges of type 1, 2 or 3); the mixer package was used,
- typed SBM: we had to implement the categorical version of SBM since it is not available in existing software; this version will be available in mixer soon.

The studied methods were applied to the three scenarios, and results are averaged over 50 networks.
Comparison with other SBM-based approaches

Method                 | Scenario 1 | Scenario 2 | Scenario 3
binary SBM (presence)  |     ±      |     ±      |     ±
binary SBM (type 1)    |     ±      |     ±      |     ±
binary SBM (type 2)    |     ±      |     ±      |     ±
binary SBM (type 3)    |     ±      |     ±      |     ±
typed SBM              |     ±      |     ±      |     ±
RSM                    |     ±      |     ±      |     ±

Table: ARI averaged over 50 networks simulated according to the three considered situations.
The ecclesiastical network

The data:

- 1331 individuals (mostly clergymen) who participated in ecclesiastical councils in Gaul between 480 and 614,
- 4 types of relationships between individuals (positive, negative, variable or neutral),
- each individual belongs to one of the 5 regions (3 kingdoms and 2 provinces).

Our modeling allows a multi-level analysis:

- Z characterizes the clusters found, through the social positions of the individuals,
- parameter Π describes the relations between the clusters found,
- parameter γ describes the connections between the subgraphs,
- parameter α describes the cluster repartition within the subgraphs.
RSM results: the latent clusters

Figure: Characterization of the K = 6 clusters found by RSM (distribution of social positions in each cluster: bishop, priest, abbot, earl, duke, monk, deacon, king, queen, archdeacon).
RSM results: the latent clusters

The latent clusters from the historical point of view:

- clusters 1 and 3 correspond to local, provincial or diocesan councils, mostly interested in local issues (e.g. the council of Arles, 554),
- clusters 2 and 6 correspond to councils dedicated to political questions, usually convened by a king (e.g. Orleans, 511),
- clusters 4 and 5 correspond to aristocratic assemblies, where queens, dukes and earls are present (e.g. Orleans, 529).
RSM results: the relationships between clusters

Figure: Positive (left) and negative (right) relationships between the clusters (parameter Π).
Figure: Variable (left) and neutral (right) relationships between the clusters (parameter Π).
RSM results: the relationships between clusters

The cluster relationships from the historical point of view:

- positive relations between clusters 3, 5 and 6 mainly correspond to personal friendships between bishops (source effect),
- negative and variable relations between clusters 4, 5 and 6 reflect conflicts within the hierarchy of power,
- neutral relations between clusters 1, 3 and 6 were expected, because these clusters deal with different issues (local / political).
RSM results: the relationships between regions

Figure: Characterization of the relationships between the regions (parameter γ, in log scale; regions: Neustria, Provence, Unknown, Aquitaine, Austrasia, Burgundy).
RSM results: comparison of the regions

Figure: Characterization of the regions through their cluster repartition (parameter α): proportions of clusters 1 to 6 for Neustria, Provence, Unknown, Aquitaine, Austrasia and Burgundy.
Figure: PCA for compositional data on the parameter α (regions: Aquitaine, Neustria, Burgundy, Unknown, Austrasia, Provence).
Conclusion

Our contribution:

- a model for network clustering which takes into account an existing partition of the network into subgraphs,
- this modeling afterward allows a comparison of the subgraphs,
- inference is done in a Bayesian framework using a VBEM algorithm,
- the approach has been applied to a complex historical network.

Interesting problems to address:

- temporality of the network (evolution of relations, social positions, ...),
- visualization of this kind of network,
- procedures to test the similarity of subgraphs.

Software: the Rambo package for R is available on CRAN.

Reference: The Random Subgraph Model for the Analysis of an Ecclesiastical Network in Merovingian Gaul, The Annals of Applied Statistics, vol. 8(1), 2014.
The EM, VEM and VBEM algorithms

First, it is necessary to write the log-likelihood as:

log p(X | θ) = L(q(Z); θ) + KL(q(Z) || p(Z | X, θ)),

where:

- L(q(Z); θ) = ∑_Z q(Z) log( p(X, Z | θ) / q(Z) ) is a lower bound on the log-likelihood,
- KL(q(Z) || p(Z | X, θ)) = −∑_Z q(Z) log( p(Z | X, θ) / q(Z) ) is the KL divergence between q(Z) and p(Z | X, θ).

The EM algorithm:

- E step: θ is fixed and L is maximized over q, giving q*(Z) = p(Z | X, θ^old),
- M step: L(q*(Z); θ) is now maximized over θ:

L(q*(Z); θ) = ∑_Z p(Z | X, θ^old) log( p(X, Z | θ) / p(Z | X, θ^old) ) = E[log p(X, Z | θ) | θ^old] + c.
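The decomposition log p(X | θ) = L(q) + KL(q || p(Z | X, θ)) can be checked numerically on a toy model with a single binary latent variable; all numbers below are illustrative:

```python
from math import log

# Toy latent-variable model: Z in {0, 1}, one observation X = 1.
prior = [0.4, 0.6]                             # p(Z)
lik = [0.9, 0.2]                               # p(X = 1 | Z)
joint = [prior[z] * lik[z] for z in (0, 1)]    # p(X = 1, Z)
evidence = sum(joint)                          # p(X = 1)
posterior = [j / evidence for j in joint]      # p(Z | X = 1)

q = [0.5, 0.5]                                 # an arbitrary distribution over Z
lower_bound = sum(q[z] * log(joint[z] / q[z]) for z in (0, 1))   # L(q)
kl = sum(q[z] * log(q[z] / posterior[z]) for z in (0, 1))        # KL(q || p(Z|X))

# The decomposition holds exactly for any q:
assert abs(log(evidence) - (lower_bound + kl)) < 1e-12
```

The E step closes the gap: with q equal to the posterior, KL = 0 and L(q) = log p(X | θ).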
The EM, VEM and VBEM algorithms

The variational approach:

- suppose now that the posterior p(Z | X, θ) is, for some reason, intractable,
- the variational approach restricts the range of functions q so that the problem becomes tractable,
- a popular variational approximation is to assume that q factorizes: q(Z) = ∏_i q_i(Z_i).

The VEM algorithm:

- V-E step: θ is fixed and L is maximized over q: log q*_j(Z_j) = E_{i ≠ j}[log p(X, Z | θ)] + c,
- V-M step: L(q*(Z); θ) is now maximized over θ.
The EM, VEM and VBEM algorithms

We now consider the Bayesian framework:

- we aim at estimating the posterior distribution p(Z, θ | X),
- we have the relation log p(X) = L(q(Z, θ)) + KL(q(Z, θ) || p(Z, θ | X)),
- we also assume that q factorizes over Z and θ: q(Z, θ) = ∏_i q_i(Z_i) q_θ(θ).

The VBEM algorithm:

- VB-E step: q_θ(θ) is fixed and L is maximized over the q_i: log q*_j(Z_j) = E_{i ≠ j, θ}[log p(X, Z, θ)] + c,
- VB-M step: all q_i(Z_i) are now fixed and L is maximized over q_θ: log q*_θ(θ) = E_Z[log p(X, Z, θ)] + c.
The VBEM algorithm for RSM: the M step

The VBEM update for the distribution q(α_s) is:

log q*(α_s) = E_{Z, α\s, γ, Π}[log p(X, A, Z, α, γ, Π)] + c
            = ∑_{k=1}^K log(α_sk) { χ⁰_sk + ∑_{i=1}^N δ(r_i = s) τ_ik − 1 } + c,

which is the functional form of a Dirichlet distribution:

q(α_s) = Dir(α_s; χ_s), s ∈ {1, ..., S}, where χ_sk = χ⁰_sk + ∑_{i=1}^N δ(r_i = s) τ_ik, k ∈ {1, ..., K}.
The VBEM algorithm for RSM: the M step

The VBEM updates for the distributions q(α_s), q(γ_rs) and q(Π_kl) are:

- q(α_s) = Dir(α_s; χ_s), s ∈ {1, ..., S},
- q(γ_rs) = Beta(γ_rs; a_rs, b_rs), (r, s) ∈ {1, ..., S}²,
- q(Π_kl) = Dir(Π_kl; Ξ_kl), (k, l) ∈ {1, ..., K}²,

where:

- χ_sk = χ⁰_sk + ∑_{i=1}^N δ(r_i = s) τ_ik, k ∈ {1, ..., K},
- a_rs = a⁰_rs + ∑_{r_i = r, r_j = s} A_ij,
- b_rs = b⁰_rs + ∑_{r_i = r, r_j = s} (1 − A_ij),
- Ξ_klc = Ξ⁰_klc + ∑_{i ≠ j} δ(X_ij = c) τ_ik τ_jl, c ∈ {1, ..., C}.
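These closed-form updates translate directly into code. A sketch, with flat (all-ones) hyper-priors as an illustrative choice and the combined encoding X_ij ∈ {0, ..., C} (0 = no edge):

```python
def vbem_m_step(X, r, tau, S, K, C, chi0=1.0, a0=1.0, b0=1.0, xi0=1.0):
    """Variational M step of RSM: closed-form updates of the parameters of
    q(alpha_s), q(gamma_rs) and q(Pi_kl) given the responsibilities tau.

    X[i][j] in {0, ..., C} (0 = no edge), r[i] = observed subgraph of node i.
    """
    N = len(X)
    chi = [[chi0 for _ in range(K)] for _ in range(S)]
    a = [[a0 for _ in range(S)] for _ in range(S)]
    b = [[b0 for _ in range(S)] for _ in range(S)]
    Xi = [[[xi0 for _ in range(C)] for _ in range(K)] for _ in range(K)]
    for i in range(N):
        for k in range(K):
            # chi_sk = chi0_sk + sum_i delta(r_i = s) tau_ik
            chi[r[i]][k] += tau[i][k]
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            if X[i][j] > 0:
                a[r[i]][r[j]] += 1            # a_rs accumulates A_ij
                c = X[i][j] - 1
                for k in range(K):
                    for l in range(K):
                        # Xi_klc += delta(X_ij = c) tau_ik tau_jl
                        Xi[k][l][c] += tau[i][k] * tau[j][l]
            else:
                b[r[i]][r[j]] += 1            # b_rs accumulates 1 - A_ij
    return chi, a, b, Xi
```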
The VBEM algorithm for RSM: the E step

The VBEM update for the distribution q(Z_i) is given by:

log q*(Z_i) = E_{Z\i, α, γ, Π}[log p(X, A, Z, α, γ, Π)] + c,

which implies that q(Z_i) = M(Z_i; 1, τ_i), i = 1, ..., N, where

τ_ik ∝ exp( ψ(χ_{r_i,k}) − ψ(∑_{l=1}^K χ_{r_i,l})
          + ∑_{j ≠ i} ∑_{c=1}^C ∑_{l=1}^K δ(X_ij = c) τ_jl ( ψ(Ξ_klc) − ψ(∑_{u=1}^C Ξ_klu) )
          + ∑_{j ≠ i} ∑_{c=1}^C ∑_{l=1}^K δ(X_ji = c) τ_jl ( ψ(Ξ_lkc) − ψ(∑_{u=1}^C Ξ_lku) ) ).
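A sketch of this update for one node. Here ψ is the digamma function, implemented with the standard recurrence-plus-asymptotic-series approximation since the Python standard library does not provide it; the function names are ours:

```python
from math import exp, log

def digamma(x):
    """Digamma psi(x) via the recurrence psi(x) = psi(x+1) - 1/x and an
    asymptotic expansion for x >= 6 (accurate enough for this sketch)."""
    acc = 0.0
    while x < 6.0:
        acc -= 1.0 / x
        x += 1.0
    inv = 1.0 / x
    return acc + log(x) - 0.5 * inv - inv**2 * (1/12 - inv**2 * (1/120 - inv**2 / 252))

def vbem_e_step_node(i, X, r, tau, chi, Xi):
    """Update of tau_i given the current q(alpha) and q(Pi) parameters.
    X[i][j] in {0, ..., C} (0 = no edge), r[i] = subgraph of node i."""
    K = len(Xi)
    N = len(X)
    s = r[i]
    log_t = []
    for k in range(K):
        v = digamma(chi[s][k]) - digamma(sum(chi[s]))
        for j in range(N):
            if j == i:
                continue
            for l in range(K):
                if X[i][j] > 0:            # outgoing edge of type c = X[i][j]
                    c = X[i][j] - 1
                    v += tau[j][l] * (digamma(Xi[k][l][c]) - digamma(sum(Xi[k][l])))
                if X[j][i] > 0:            # incoming edge of type c = X[j][i]
                    c = X[j][i] - 1
                    v += tau[j][l] * (digamma(Xi[l][k][c]) - digamma(sum(Xi[l][k])))
        log_t.append(v)
    m = max(log_t)                         # normalize in log space for stability
    w = [exp(v - m) for v in log_t]
    z = sum(w)
    return [x / z for x in w]
```

In a fully symmetric configuration (no edges, uniform q parameters) the update returns a uniform τ_i, as expected.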
Weakly Supervised with Latent PhD advisor: Dr. Ambedkar Dukkipati Department of Computer Science and Automation gaurav.pandey@csa.iisc.ernet.in Objective Given a training set that comprises image and image-level
More informationGBAGC: A General Bayesian Framework for Attributed Graph Clustering
GBAGC: A General Bayesian Framework for Attributed Graph Clustering ZHIQIANG XU, Nanyang Technological University YIPING KE and YI WANG, Institute of High Performance Computing HONG CHENG and JAMES CHENG,
More informationStochastic Blockmodels as an unsupervised approach to detect botnet infected clusters in networked data
Stochastic Blockmodels as an unsupervised approach to detect botnet infected clusters in networked data Mark Patrick Roeling & Geoff Nicholls Department of Statistics University of Oxford Data Science
More informationWarped Mixture Models
Warped Mixture Models Tomoharu Iwata, David Duvenaud, Zoubin Ghahramani Cambridge University Computational and Biological Learning Lab March 11, 2013 OUTLINE Motivation Gaussian Process Latent Variable
More informationClustering Lecture 5: Mixture Model
Clustering Lecture 5: Mixture Model Jing Gao SUNY Buffalo 1 Outline Basics Motivation, definition, evaluation Methods Partitional Hierarchical Density-based Mixture model Spectral methods Advanced topics
More informationAn Efficient Model Selection for Gaussian Mixture Model in a Bayesian Framework
IEEE SIGNAL PROCESSING LETTERS, VOL. XX, NO. XX, XXX 23 An Efficient Model Selection for Gaussian Mixture Model in a Bayesian Framework Ji Won Yoon arxiv:37.99v [cs.lg] 3 Jul 23 Abstract In order to cluster
More informationk-means demo Administrative Machine learning: Unsupervised learning" Assignment 5 out
Machine learning: Unsupervised learning" David Kauchak cs Spring 0 adapted from: http://www.stanford.edu/class/cs76/handouts/lecture7-clustering.ppt http://www.youtube.com/watch?v=or_-y-eilqo Administrative
More informationClustering algorithms
Clustering algorithms Machine Learning Hamid Beigy Sharif University of Technology Fall 1393 Hamid Beigy (Sharif University of Technology) Clustering algorithms Fall 1393 1 / 22 Table of contents 1 Supervised
More informationDeep Generative Models Variational Autoencoders
Deep Generative Models Variational Autoencoders Sudeshna Sarkar 5 April 2017 Generative Nets Generative models that represent probability distributions over multiple variables in some way. Directed Generative
More information9.1. K-means Clustering
424 9. MIXTURE MODELS AND EM Section 9.2 Section 9.3 Section 9.4 view of mixture distributions in which the discrete latent variables can be interpreted as defining assignments of data points to specific
More information19: Inference and learning in Deep Learning
10-708: Probabilistic Graphical Models 10-708, Spring 2017 19: Inference and learning in Deep Learning Lecturer: Zhiting Hu Scribes: Akash Umakantha, Ryan Williamson 1 Classes of Deep Generative Models
More informationFitting Social Network Models Using the Varying Truncation S. Truncation Stochastic Approximation MCMC Algorithm
Fitting Social Network Models Using the Varying Truncation Stochastic Approximation MCMC Algorithm May. 17, 2012 1 This talk is based on a joint work with Dr. Ick Hoon Jin Abstract The exponential random
More informationLearning Finite Beta-Liouville Mixture Models via Variational Bayes for Proportional Data Clustering
Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence Learning Finite Beta-Liouville Mixture Models via Variational Bayes for Proportional Data Clustering Wentao Fan
More informationMachine Learning (BSMC-GA 4439) Wenke Liu
Machine Learning (BSMC-GA 4439) Wenke Liu 01-31-017 Outline Background Defining proximity Clustering methods Determining number of clusters Comparing two solutions Cluster analysis as unsupervised Learning
More informationContent-based image and video analysis. Machine learning
Content-based image and video analysis Machine learning for multimedia retrieval 04.05.2009 What is machine learning? Some problems are very hard to solve by writing a computer program by hand Almost all
More informationA Deterministic Global Optimization Method for Variational Inference
A Deterministic Global Optimization Method for Variational Inference Hachem Saddiki Mathematics and Statistics University of Massachusetts, Amherst saddiki@math.umass.edu Andrew C. Trapp Operations and
More informationMachine Learning. B. Unsupervised Learning B.1 Cluster Analysis. Lars Schmidt-Thieme, Nicolas Schilling
Machine Learning B. Unsupervised Learning B.1 Cluster Analysis Lars Schmidt-Thieme, Nicolas Schilling Information Systems and Machine Learning Lab (ISMLL) Institute for Computer Science University of Hildesheim,
More information1 More stochastic block model. Pr(G θ) G=(V, E) 1.1 Model definition. 1.2 Fitting the model to data. Prof. Aaron Clauset 7 November 2013
1 More stochastic block model Recall that the stochastic block model (SBM is a generative model for network structure and thus defines a probability distribution over networks Pr(G θ, where θ represents
More informationMultiplicative Mixture Models for Overlapping Clustering
Multiplicative Mixture Models for Overlapping Clustering Qiang Fu Dept of Computer Science & Engineering University of Minnesota, Twin Cities qifu@cs.umn.edu Arindam Banerjee Dept of Computer Science &
More informationDeep Boltzmann Machines
Deep Boltzmann Machines Sargur N. Srihari srihari@cedar.buffalo.edu Topics 1. Boltzmann machines 2. Restricted Boltzmann machines 3. Deep Belief Networks 4. Deep Boltzmann machines 5. Boltzmann machines
More informationGenerative and discriminative classification techniques
Generative and discriminative classification techniques Machine Learning and Category Representation 2014-2015 Jakob Verbeek, November 28, 2014 Course website: http://lear.inrialpes.fr/~verbeek/mlcr.14.15
More informationSegmentation: Clustering, Graph Cut and EM
Segmentation: Clustering, Graph Cut and EM Ying Wu Electrical Engineering and Computer Science Northwestern University, Evanston, IL 60208 yingwu@northwestern.edu http://www.eecs.northwestern.edu/~yingwu
More informationSocial Network Community Detection
University of Connecticut DigitalCommons@UConn Doctoral Dissertations University of Connecticut Graduate School 8-6-2015 Social Network Community Detection Guang Ouyang guang.ouyang@uconn.edu Follow this
More informationExpectation Maximization. Machine Learning 10701/15781 Carlos Guestrin Carnegie Mellon University
Expectation Maximization Machine Learning 10701/15781 Carlos Guestrin Carnegie Mellon University April 10 th, 2006 1 Announcements Reminder: Project milestone due Wednesday beginning of class 2 Coordinate
More informationGenerative models for social network data
Generative models for social network data Kevin S. Xu (University of Toledo) James R. Foulds (University of California-San Diego) SBP-BRiMS 2016 Tutorial About Us Kevin S. Xu Assistant professor at University
More informationComputer vision: models, learning and inference. Chapter 10 Graphical Models
Computer vision: models, learning and inference Chapter 10 Graphical Models Independence Two variables x 1 and x 2 are independent if their joint probability distribution factorizes as Pr(x 1, x 2 )=Pr(x
More informationImplicit Mixtures of Restricted Boltzmann Machines
Implicit Mixtures of Restricted Boltzmann Machines Vinod Nair and Geoffrey Hinton Department of Computer Science, University of Toronto 10 King s College Road, Toronto, M5S 3G5 Canada {vnair,hinton}@cs.toronto.edu
More informationExtracting Communities from Networks
Extracting Communities from Networks Ji Zhu Department of Statistics, University of Michigan Joint work with Yunpeng Zhao and Elizaveta Levina Outline Review of community detection Community extraction
More informationMachine Learning
Machine Learning 10-601 Tom M. Mitchell Machine Learning Department Carnegie Mellon University April 1, 2019 Today: Inference in graphical models Learning graphical models Readings: Bishop chapter 8 Bayesian
More informationStatistical Analysis of List Experiments
Statistical Analysis of List Experiments Kosuke Imai Princeton University Joint work with Graeme Blair October 29, 2010 Blair and Imai (Princeton) List Experiments NJIT (Mathematics) 1 / 26 Motivation
More informationA class of Multidimensional Latent Class IRT models for ordinal polytomous item responses
A class of Multidimensional Latent Class IRT models for ordinal polytomous item responses Silvia Bacci 1, Francesco Bartolucci, Michela Gnaldi Dipartimento di Economia, Finanza e Statistica - Università
More informationLatent Variable Models and Expectation Maximization
Latent Variable Models and Expectation Maximization Oliver Schulte - CMPT 726 Bishop PRML Ch. 9 2 4 6 8 1 12 14 16 18 2 4 6 8 1 12 14 16 18 5 1 15 2 25 5 1 15 2 25 2 4 6 8 1 12 14 2 4 6 8 1 12 14 5 1 15
More informationSemantic Segmentation. Zhongang Qi
Semantic Segmentation Zhongang Qi qiz@oregonstate.edu Semantic Segmentation "Two men riding on a bike in front of a building on the road. And there is a car." Idea: recognizing, understanding what's in
More informationMultiple cosegmentation
Armand Joulin, Francis Bach and Jean Ponce. INRIA -Ecole Normale Supérieure April 25, 2012 Segmentation Introduction Segmentation Supervised and weakly-supervised segmentation Cosegmentation Segmentation
More informationThe Multi Stage Gibbs Sampling: Data Augmentation Dutch Example
The Multi Stage Gibbs Sampling: Data Augmentation Dutch Example Rebecca C. Steorts Bayesian Methods and Modern Statistics: STA 360/601 Module 8 1 Example: Data augmentation / Auxiliary variables A commonly-used
More informationMixture Models and EM
Table of Content Chapter 9 Mixture Models and EM -means Clustering Gaussian Mixture Models (GMM) Expectation Maximiation (EM) for Mixture Parameter Estimation Introduction Mixture models allows Complex
More informationProbabilistic Graphical Models Part III: Example Applications
Probabilistic Graphical Models Part III: Example Applications Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr CS 551, Fall 2014 CS 551, Fall 2014 c 2014, Selim
More informationDependency detection with Bayesian Networks
Dependency detection with Bayesian Networks M V Vikhreva Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, Leninskie Gory, Moscow, 119991 Supervisor: A G Dyakonov
More informationPackage lvm4net. R topics documented: August 29, Title Latent Variable Models for Networks
Title Latent Variable Models for Networks Package lvm4net August 29, 2016 Latent variable models for network data using fast inferential procedures. Version 0.2 Depends R (>= 3.1.1), MASS, ergm, network,
More informationBayesian K-Means as a Maximization-Expectation Algorithm
Bayesian K-Means as a Maximization-Expectation Algorithm Max Welling Kenichi Kurihara Abstract We introduce a new class of maximization expectation (ME) algorithms where we maximize over hidden variables
More informationStatistical Modeling of Complex Networks Within the ERGM family
Statistical Modeling of Complex Networks Within the ERGM family Mark S. Handcock Department of Statistics University of Washington U. Washington network modeling group Research supported by NIDA Grant
More informationMachine Learning. Unsupervised Learning. Manfred Huber
Machine Learning Unsupervised Learning Manfred Huber 2015 1 Unsupervised Learning In supervised learning the training data provides desired target output for learning In unsupervised learning the training
More informationGenerative Models for Complex Network Structure
Generative Models for Complex Network Structure Aaron Clauset @aaronclauset Computer Science Dept. & BioFrontiers Institute University of Colorado, Boulder External Faculty, Santa Fe Institute 4 June 2013
More informationLearning Objects and Parts in Images
Learning Objects and Parts in Images Chris Williams E H U N I V E R S I T T Y O H F R G E D I N B U School of Informatics, University of Edinburgh, UK Learning multiple objects and parts from images (joint
More informationModel-Based Clustering for Online Crisis Identification in Distributed Computing
Model-Based Clustering for Crisis Identification in Distributed Computing Dawn Woodard Operations Research and Information Engineering Cornell University with Moises Goldszmidt Microsoft Research 1 Outline
More informationChapter 11. Network Community Detection
Chapter 11. Network Community Detection Wei Pan Division of Biostatistics, School of Public Health, University of Minnesota, Minneapolis, MN 55455 Email: weip@biostat.umn.edu PubH 7475/8475 c Wei Pan Outline
More informationIntroduction to Machine Learning
Department of Computer Science, University of Helsinki Autumn 2009, second term Session 8, November 27 th, 2009 1 2 3 Multiplicative Updates for L1-Regularized Linear and Logistic Last time I gave you
More information3 : Representation of Undirected GMs
0-708: Probabilistic Graphical Models 0-708, Spring 202 3 : Representation of Undirected GMs Lecturer: Eric P. Xing Scribes: Nicole Rafidi, Kirstin Early Last Time In the last lecture, we discussed directed
More informationConvexization in Markov Chain Monte Carlo
in Markov Chain Monte Carlo 1 IBM T. J. Watson Yorktown Heights, NY 2 Department of Aerospace Engineering Technion, Israel August 23, 2011 Problem Statement MCMC processes in general are governed by non
More informationMachine Learning
Machine Learning 10-601 Tom M. Mitchell Machine Learning Department Carnegie Mellon University February 25, 2015 Today: Graphical models Bayes Nets: Inference Learning EM Readings: Bishop chapter 8 Mitchell
More informationSGN (4 cr) Chapter 11
SGN-41006 (4 cr) Chapter 11 Clustering Jussi Tohka & Jari Niemi Department of Signal Processing Tampere University of Technology February 25, 2014 J. Tohka & J. Niemi (TUT-SGN) SGN-41006 (4 cr) Chapter
More informationHierarchical Mixture Models for Nested Data Structures
Hierarchical Mixture Models for Nested Data Structures Jeroen K. Vermunt 1 and Jay Magidson 2 1 Department of Methodology and Statistics, Tilburg University, PO Box 90153, 5000 LE Tilburg, Netherlands
More informationDATA MINING LECTURE 7. Hierarchical Clustering, DBSCAN The EM Algorithm
DATA MINING LECTURE 7 Hierarchical Clustering, DBSCAN The EM Algorithm CLUSTERING What is a Clustering? In general a grouping of objects such that the objects in a group (cluster) are similar (or related)
More informationCPSC 340: Machine Learning and Data Mining. Principal Component Analysis Fall 2016
CPSC 340: Machine Learning and Data Mining Principal Component Analysis Fall 2016 A2/Midterm: Admin Grades/solutions will be posted after class. Assignment 4: Posted, due November 14. Extra office hours:
More informationData Science and Service Research Discussion Paper
Discussion Paper No. 97 Identifying Topic-based Communities by Combining Social Network Data and User Generated Content Mirai Igarashi Nobuhiko Terui April, 219 Data Science and Service Research Discussion
More informationVariational Autoencoders. Sargur N. Srihari
Variational Autoencoders Sargur N. srihari@cedar.buffalo.edu Topics 1. Generative Model 2. Standard Autoencoder 3. Variational autoencoders (VAE) 2 Generative Model A variational autoencoder (VAE) is a
More informationComparison of Recommender System Algorithms focusing on the New-Item and User-Bias Problem
Comparison of Recommender System Algorithms focusing on the New-Item and User-Bias Problem Stefan Hauger 1, Karen H. L. Tso 2, and Lars Schmidt-Thieme 2 1 Department of Computer Science, University of
More informationCOMP 551 Applied Machine Learning Lecture 13: Unsupervised learning
COMP 551 Applied Machine Learning Lecture 13: Unsupervised learning Associate Instructor: Herke van Hoof (herke.vanhoof@mail.mcgill.ca) Slides mostly by: (jpineau@cs.mcgill.ca) Class web page: www.cs.mcgill.ca/~jpineau/comp551
More informationMachine Learning Department School of Computer Science Carnegie Mellon University. K- Means + GMMs
10-601 Introduction to Machine Learning Machine Learning Department School of Computer Science Carnegie Mellon University K- Means + GMMs Clustering Readings: Murphy 25.5 Bishop 12.1, 12.3 HTF 14.3.0 Mitchell
More informationInformation Filtering for arxiv.org
Information Filtering for arxiv.org Bandits, Exploration vs. Exploitation, & the Cold Start Problem Peter Frazier School of Operations Research & Information Engineering Cornell University with Xiaoting
More informationECE 5424: Introduction to Machine Learning
ECE 5424: Introduction to Machine Learning Topics: Unsupervised Learning: Kmeans, GMM, EM Readings: Barber 20.1-20.3 Stefan Lee Virginia Tech Tasks Supervised Learning x Classification y Discrete x Regression
More informationMachine Learning
Machine Learning 10-601 Tom M. Mitchell Machine Learning Department Carnegie Mellon University March 4, 2015 Today: Graphical models Bayes Nets: EM Mixture of Gaussian clustering Learning Bayes Net structure
More informationEnhancing Forecasting Performance of Naïve-Bayes Classifiers with Discretization Techniques
24 Enhancing Forecasting Performance of Naïve-Bayes Classifiers with Discretization Techniques Enhancing Forecasting Performance of Naïve-Bayes Classifiers with Discretization Techniques Ruxandra PETRE
More information