Structure Estimation in Graphical Models

Wald Lecture, World Meeting on Probability and Statistics Istanbul 2012

Structure estimation

Advances in computing have put the focus on estimation of structure:
Model selection (e.g. subset selection in regression)
System identification (engineering)
Structural learning (AI or machine learning)

Graphical models describe conditional independence structures, so they are a good case for formal analysis. Methods must scale well with data size, as many structures and huge collections of data are to be considered.

Why estimation of structure?

Parallel to e.g. density estimation
Obtain a quick overview of relations between variables in complex systems
Data mining
Gene regulatory networks
Reconstructing family trees from DNA information
General interest in sparsity.

Markov mesh model [figure]

PC algorithm: crudest algorithm (HUGIN), 10,000 simulated cases [figure]

Bayesian GES: crudest algorithm (WinMine), 10,000 simulated cases [figure]

Tree model: PC algorithm, 10,000 cases, correct reconstruction [figure]

Bayesian GES on tree [figure]

Chest clinic [figure]

PC algorithm: 10,000 simulated cases [figure]

Bayesian GES [figure]

SNPs and gene expressions: minimal BIC forest over 401 variables [figure]

Methods for structure identification in graphical models can be classified into three types:

score-based methods: for example optimizing a penalized likelihood by convex programming, e.g. glasso;
Bayesian methods: identifying posterior distributions over graphs; the posterior probability can also be used as a score;
constraint-based methods: querying conditional independences and identifying compatible independence structures, for example PC, PC*, NPC, IC, CI, FCI, SIN, QP, ...

Estimating trees and forests

Assume P factorizes w.r.t. an unknown tree $\tau$. Based on a sample $x^{(n)} = (x_1, \dots, x_n)$ the log-likelihood function becomes
\[
\ell(\tau, p) = \log L(\tau) = \log p(x^{(n)} \mid \tau)
 = \sum_{e \in E(\tau)} \log p(x_e^{(n)}) - \sum_{v \in V} \{\deg(v) - 1\} \log p(x_v^{(n)}).
\]
Maximizing this over $p$ for a fixed tree $\tau$ yields the profile likelihood
\[
\hat\ell(\tau) = \ell(\tau, \hat p) = \sum_{e \in E(\tau)} H_n(e) + \sum_{v \in V} H_n(v).
\]

Here $H_n(e)$ is the empirical cross-entropy or mutual information between the endpoint variables of the edge $e = \{u, v\}$:
\[
H_n(e) = \sum_{x_u, x_v} \frac{n(x_u, x_v)}{n} \log \frac{n(x_u, x_v)/n}{n(x_u)\,n(x_v)/n^2},
\]
and similarly $H_n(v)$ is the empirical entropy of $X_v$. Based on this fact, Chow and Liu (1968) showed that the MLE $\hat\tau$ of $\tau$ has maximal weight, where the weight of $\tau$ is
\[
w(\tau) = \sum_{e \in E(\tau)} H_n(e) = \hat\ell(\tau) - \hat\ell(\phi_0).
\]
Here $\phi_0$ is the graph with all vertices isolated.

Fast algorithms (Kruskal Jr., 1956) compute a maximal weight spanning tree from the weights $W = (w_{uv},\, u, v \in V)$.

Chow and Wagner (1973) show a.s. consistency of $\hat P$ in total variation: if P factorizes w.r.t. $\tau$, then
\[
\sup_x |p(x) - \hat p(x)| \to 0 \quad \text{as } n \to \infty,
\]
so if $\tau$ is unique for P, $\hat\tau = \tau$ for all $n > N$ for some $N$. If P does not factorize w.r.t. a tree, $\hat P$ converges to the closest tree approximation to P (in Kullback–Leibler distance).

Note that if P is Markov w.r.t. an undirected graph G and we define the edge weights as $w(e) = H(e)$ by cross-entropy, P is Markov w.r.t. any maximal weight spanning tree of G.
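To make the construction concrete, here is a minimal Python sketch of the Chow–Liu step: empirical mutual informations $H_n(e)$ as edge weights, followed by a maximum weight spanning tree computed with Kruskal's algorithm via networkx. The function names and the small synthetic example are illustrative, not taken from the lecture.

```python
# Sketch of the Chow-Liu construction: empirical mutual information as edge
# weights, then a maximum weight spanning tree (Kruskal). Assumes discrete
# data in an (n x |V|) integer array; names are illustrative.
import numpy as np
import networkx as nx


def empirical_mutual_information(x_u, x_v):
    """H_n(e): empirical mutual information between two discrete columns."""
    n = len(x_u)
    joint = {}
    for a, b in zip(x_u, x_v):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    marg_u = {a: np.sum(x_u == a) for a in set(x_u)}
    marg_v = {b: np.sum(x_v == b) for b in set(x_v)}
    mi = 0.0
    for (a, b), n_ab in joint.items():
        mi += (n_ab / n) * np.log((n_ab / n) / ((marg_u[a] / n) * (marg_v[b] / n)))
    return mi


def chow_liu_tree(data):
    """Maximum weight spanning tree over all variables (columns of data)."""
    _, p = data.shape
    g = nx.Graph()
    g.add_nodes_from(range(p))
    for u in range(p):
        for v in range(u + 1, p):
            g.add_edge(u, v,
                       weight=empirical_mutual_information(data[:, u], data[:, v]))
    return nx.maximum_spanning_tree(g)  # Kruskal by default


# Tiny example: a chain 0 - 1 - 2 of noisy copies should be recovered.
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 500)
x1 = x0 ^ (rng.random(500) < 0.1)   # noisy copy of x0
x2 = x1 ^ (rng.random(500) < 0.1)   # noisy copy of x1
tree = chow_liu_tree(np.column_stack([x0, x1, x2]))
print(sorted(tree.edges()))          # expected: [(0, 1), (1, 2)]
```

The same skeleton works for any edge-weight function, which is what the forest and Gaussian variants below exploit.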

Forests

Note that if we consider forests instead of trees, i.e. allow missing branches, it is still true that
\[
\hat\ell(\phi) = \ell(\phi, \hat p) = \sum_{e \in E(\phi)} H_n(e) + \sum_{v \in V} H_n(v).
\]
Thus if we add a penalty for each edge, e.g. proportional to the number $q_e$ of additional parameters introduced by the edge $e$, we can find the maximum penalized forest by Kruskal's algorithm using the weights
\[
w(\phi) = \sum_{e \in E(\phi)} \{H_n(e) - \lambda q_e\}.
\]
This has been exploited in the package gRapHD (Edwards et al., 2010); see also Panayidou (2011) and Højsgaard et al. (2012).
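A possible sketch of the penalized-forest idea (in the spirit of what gRapHD implements, though not that package's code): edges whose penalized weight $H_n(e) - \lambda q_e$ is not positive can never improve the score, so they are dropped before running Kruskal's algorithm, which then returns a maximum weight spanning forest. The inputs mi, q and lam are assumed to be supplied by the caller.

```python
# Sketch of a penalized (e.g. BIC) forest: keep only edges with positive
# penalized weight, then take a maximum weight spanning forest with Kruskal.
import networkx as nx


def max_penalized_forest(p, mi, q, lam):
    """mi[(u, v)]: empirical mutual information for edge (u, v);
    q[(u, v)]: number of extra parameters the edge introduces;
    lam: penalty per parameter (e.g. log(n)/2 for BIC when mi is on the
    log-likelihood scale) -- all assumed given by the caller."""
    g = nx.Graph()
    g.add_nodes_from(range(p))
    for (u, v), h in mi.items():
        w = h - lam * q[(u, v)]
        if w > 0:                      # an edge with w <= 0 never helps
            g.add_edge(u, v, weight=w)
    # Kruskal on the remaining edges; a disconnected graph yields a forest.
    return nx.maximum_spanning_tree(g)
```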

SNPs and gene expressions: minimal BIC forest over 401 variables [figure]

Gaussian trees

If $X = (X_v, v \in V)$ is regular multivariate Gaussian, it factorizes w.r.t. an undirected graph if and only if its concentration matrix $K = \Sigma^{-1}$ satisfies $k_{uv} = 0$ whenever $u \not\sim v$.

The results of Chow et al. extend easily to Gaussian trees, with the weight of a tree determined as
\[
w(\tau) = -\sum_{e \in E(\tau)} \log(1 - r_e^2),
\]
where $r_e^2$ is the squared empirical correlation coefficient between $X_u$ and $X_v$ for $e = \{u, v\}$.
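In the Gaussian case only the edge weights change; a small illustrative sketch (gaussian_tree_weights is a made-up helper) whose output can be fed into the same maximum weight spanning tree step as above:

```python
# Gaussian tree weights w(u, v) = -log(1 - r_uv^2) from sample correlations,
# to be used as edge weights in the MWST step (illustrative sketch).
import numpy as np


def gaussian_tree_weights(data):
    r = np.corrcoef(data, rowvar=False)        # sample correlation matrix
    p = r.shape[0]
    return {(u, v): -np.log(1.0 - r[u, v] ** 2)
            for u in range(p) for v in range(u + 1, p)}
```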

Summary of the tree/forest approach:

Direct likelihood methods (ignoring penalty terms) lead to sensible results.
The (bootstrap) sampling distribution of the tree MLE can be simulated.
Penalty terms are additive along branches, so the highest AIC- or BIC-scoring tree (forest) is also available using an MWST algorithm.
Tree methods scale extremely well with both sample size and number of variables.
Pairwise marginal counts are sufficient statistics for the tree problem (the empirical covariance matrix in the Gaussian case).
Note that sufficiency holds despite the parameter space being very different from an open subset of $\mathbb{R}^k$.

Graphical lasso

Consider an undirected Gaussian graphical model and the $\ell_1$-penalized log-likelihood function
\[
2\,\ell_{\mathrm{pen}}(K) = \log\det K - \operatorname{tr}(KW) - \lambda \|K\|_1,
\]
where $W$ denotes the empirical covariance matrix. The penalty $\|K\|_1 = \sum_{i \neq j} |k_{ij}|$ is essentially a convex relaxation of the number of edges in the graph, and optimization of the penalized likelihood will typically set several $k_{ij} = 0$, thus in effect estimating a particular graph. This penalized likelihood can be maximized efficiently (Banerjee et al., 2008), as implemented in the graphical lasso (Friedman et al., 2008). Beware: it is not scale-invariant!
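As an illustration (not the lecture's own code), scikit-learn's GraphicalLasso is one implementation of the Friedman et al. (2008) algorithm; alpha plays the role of $\lambda$, and standardizing the variables first is one common response to the lack of scale invariance noted above.

```python
# Sketch of graphical-lasso structure estimation with scikit-learn.
# The data, alpha value and threshold below are illustrative choices.
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.multivariate_normal(
    mean=np.zeros(4),
    cov=[[1.0, 0.6, 0.0, 0.0],
         [0.6, 1.0, 0.5, 0.0],
         [0.0, 0.5, 1.0, 0.0],
         [0.0, 0.0, 0.0, 1.0]],
    size=500)

model = GraphicalLasso(alpha=0.1).fit(StandardScaler().fit_transform(X))
K_hat = model.precision_                      # estimated concentration matrix
edges = [(i, j) for i in range(4) for j in range(i + 1, 4)
         if abs(K_hat[i, j]) > 1e-8]          # nonzero entries define the graph
print(edges)
```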

glasso for bodyfat [figure: estimated graph over Weight, Knee, Thigh, Hip, Ankle, Biceps, Forearm, Abdomen, Height, Wrist, Neck, Chest, Age, BodyFat]

Markov equivalence

Two DAGs D and D' are Markov equivalent if and only if:
1. D and D' have the same skeleton (edges, ignoring directions);
2. D and D' have the same unmarried parents (immoralities).

[Figure: examples of equivalent and non-equivalent DAGs.]
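These two conditions can be checked directly; a small sketch using networkx DiGraphs (helper names are illustrative):

```python
# Sketch of the equivalence criterion stated above: two DAGs are Markov
# equivalent iff they share the skeleton and the immoralities
# ("unmarried parents", i.e. a -> c <- b with a, b non-adjacent).
import networkx as nx


def skeleton(d):
    return {frozenset(e) for e in d.edges()}


def immoralities(d):
    vs = set()
    for c in d.nodes():
        parents = list(d.predecessors(c))
        for i in range(len(parents)):
            for j in range(i + 1, len(parents)):
                a, b = parents[i], parents[j]
                if not d.has_edge(a, b) and not d.has_edge(b, a):
                    vs.add((frozenset({a, b}), c))
    return vs


def markov_equivalent(d1, d2):
    return skeleton(d1) == skeleton(d2) and immoralities(d1) == immoralities(d2)


# a -> b -> c is equivalent to a <- b <- c, but not to a -> b <- c.
chain1 = nx.DiGraph([("a", "b"), ("b", "c")])
chain2 = nx.DiGraph([("b", "a"), ("c", "b")])
collider = nx.DiGraph([("a", "b"), ("c", "b")])
print(markov_equivalent(chain1, chain2))    # True
print(markov_equivalent(chain1, collider))  # False
```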

Equivalence class searches

Search directly in equivalence classes of DAGs. Define a score function $\sigma(D)$ with the property that
\[
D \sim D' \implies \sigma(D) = \sigma(D').
\]
This holds e.g. if the score function is AIC or BIC, or a full Bayesian posterior with a strong hyper Markov prior (based on Dirichlet or inverse Wishart distributions). The equivalence class with maximal score is sought.

dlasso? Problems with invariance over equivalence classes!

Greedy equivalence class search

1. Initialize with the empty DAG.
2. Repeatedly search among equivalence classes with a single additional edge and move to the class with highest score, until no improvement.
3. Repeatedly search among equivalence classes with a single edge less and move to the one with highest score, until no improvement.

For BIC, for example, this algorithm yields a consistent estimate of the equivalence class of P (Chickering, 2002).
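For illustration only: true GES searches over equivalence classes, whereas the following much-simplified sketch does a greedy forward/backward hill climb over individual DAGs scored by a Gaussian BIC. It shows the two-phase add-then-delete pattern but is not Chickering's algorithm.

```python
# Simplified add-then-delete greedy search over DAGs with Gaussian BIC.
# Not GES proper (no equivalence-class moves); illustrative only.
import numpy as np
import networkx as nx


def local_bic(data, v, parents):
    """BIC contribution of node v given its parent set (linear-Gaussian)."""
    n = data.shape[0]
    y = data[:, v]
    X = np.column_stack([np.ones(n)] + [data[:, p] for p in parents])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = max(resid @ resid / n, 1e-12)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = X.shape[1] + 1                       # coefficients + residual variance
    return loglik - 0.5 * k * np.log(n)


def bic_score(data, dag):
    return sum(local_bic(data, v, list(dag.predecessors(v))) for v in dag.nodes())


def greedy_dag_search(data):
    p = data.shape[1]
    dag = nx.DiGraph()
    dag.add_nodes_from(range(p))
    for adding in (True, False):             # forward phase, then backward phase
        improved = True
        while improved:
            improved = False
            best, best_score = None, bic_score(data, dag)
            candidates = ([(u, v) for u in range(p) for v in range(p)
                           if u != v and not dag.has_edge(u, v)] if adding
                          else list(dag.edges()))
            for (u, v) in candidates:
                if adding:
                    dag.add_edge(u, v)
                else:
                    dag.remove_edge(u, v)
                if nx.is_directed_acyclic_graph(dag):
                    s = bic_score(data, dag)
                    if s > best_score:
                        best, best_score = (u, v), s
                if adding:
                    dag.remove_edge(u, v)
                else:
                    dag.add_edge(u, v)
            if best is not None:
                (dag.add_edge if adding else dag.remove_edge)(*best)
                improved = True
    return dag
```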

Basic Bayesian model estimation

For g in a specified set of graphs, $\Theta_g$ is the associated parameter space, so that P factorizes w.r.t. g if and only if $P = P_\theta$ for some $\theta \in \Theta_g$; $\pi_g$ is a prior on $\Theta_g$. The prior $p(g)$ over graphs is taken uniform for simplicity. Based on $x^{(n)}$, the posterior distribution over graphs is
\[
p^*(g) = p(g \mid x^{(n)}) \propto p(x^{(n)} \mid g) = \int_{\Theta_g} p(x^{(n)} \mid g, \theta)\, \pi_g(d\theta).
\]
One then looks for the MAP estimate $g^*$ maximizing $p^*(g)$, or attempts to sample from the posterior, e.g. using MCMC.

For connected decomposable graphs and strong hyper Markov priors, Dawid and Lauritzen (1993) show that
\[
p(x^{(n)} \mid g) = \frac{\prod_{C \in \mathcal{C}} p(x_C^{(n)})}{\prod_{S \in \mathcal{S}} p(x_S^{(n)})},
\]
where each factor has an explicit form; $\mathcal{C}$ are the cliques of g and $\mathcal{S}$ the separators (minimal cutsets). Hence, if the prior distribution over a class of graphs is uniform, the posterior distribution has the form
\[
p(g \mid x^{(n)}) \propto \frac{\prod_{C \in \mathcal{C}} p(x_C^{(n)})}{\prod_{S \in \mathcal{S}} p(x_S^{(n)})} = \frac{\prod_{C \in \mathcal{C}} w(C)}{\prod_{S \in \mathcal{S}} w(S)}.
\]

Byrne (2011) shows that a distribution over decomposable graphs of the form
\[
p(g) \propto \frac{\prod_{C \in \mathcal{C}} w(C)}{\prod_{S \in \mathcal{S}} w(S)}
\]
satisfies a structural Markov property: for subsets A and B with $V = A \cup B$, it holds that
\[
\mathcal{G}_A \perp\!\!\!\perp \mathcal{G}_B \mid \{(A, B) \text{ is a decomposition of } \mathcal{G}\}.
\]
In words, the finer structures of the graph within the two components of any decomposition are independent.

Trees are decomposable, so for trees we get
\[
p(\tau \mid x^{(n)}) \propto \frac{\prod_{e \in E(\tau)} p(x_e^{(n)})}{\prod_{v \in V} p(x_v^{(n)})^{\deg(v) - 1}},
\]
which can more illuminatingly be expressed as
\[
p(\tau \mid x^{(n)}) \propto \prod_{e \in E(\tau)} B_e, \qquad
B_{uv} = \frac{p(x_{uv}^{(n)})}{p(x_u^{(n)})\, p(x_v^{(n)})},
\]
where $B_{uv}$ is the Bayes factor for independence along the edge $uv$.

MAP estimates of trees can be computed (also in the Gaussian case).
Good direct algorithms exist for generating random spanning trees (Guénoche, 1983; Broder, 1989; Aldous, 1990; Propp and Wilson, 1998), so full posterior analysis is possible for trees.
MCMC methods for exploring posteriors of undirected graphs have been developed.
Even for forests, it seems complicated to sample from the posterior distribution
\[
p(\phi \mid x^{(n)}) \propto \prod_{e \in E(\phi)} B_e
\]
(Dai, 2008).
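For the random spanning tree step, the Aldous–Broder construction cited above is simple enough to sketch: a plain random walk on a connected undirected graph, keeping each first-entrance edge, yields a uniformly distributed spanning tree. Weighted (posterior) tree sampling would require a weighted walk or a Wilson-type algorithm, which this sketch does not attempt.

```python
# Minimal Aldous-Broder sampler (Aldous, 1990; Broder, 1989): the edges by
# which a simple random walk first enters each vertex form a uniform
# spanning tree of a connected undirected graph.
import random
import networkx as nx


def random_spanning_tree(g, rng=random):
    nodes = list(g.nodes())
    current = rng.choice(nodes)
    visited = {current}
    tree = nx.Graph()
    tree.add_node(current)
    while len(visited) < len(nodes):
        nxt = rng.choice(list(g.neighbors(current)))
        if nxt not in visited:               # first entrance: keep this edge
            visited.add(nxt)
            tree.add_edge(current, nxt)
        current = nxt
    return tree


# Example: a uniform spanning tree of the 4 x 4 grid graph.
tree = random_spanning_tree(nx.grid_2d_graph(4, 4))
print(tree.number_of_edges())                # 15 = 16 nodes - 1
```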

Some challenges for undirected graphs

Find a feasible algorithm for (perfect) simulation from a distribution over decomposable graphs of the form
\[
p(g) \propto \frac{\prod_{C \in \mathcal{C}} w(C)}{\prod_{S \in \mathcal{S}} w(S)},
\]
where the $w(A)$, $A \subseteq V$, are a prescribed set of positive weights.

Find a feasible algorithm for obtaining the MAP estimate in the decomposable case. This may not be universally possible, as the problem is NP-complete, even for bounded maximum clique size.

Posterior distribution for DAGs

For strong directed hyper Markov priors it holds that
\[
p(x^{(n)} \mid d) = \prod_{v \in V} p\bigl(x_v^{(n)} \mid x_{\mathrm{pa}(v)}^{(n)}\bigr),
\]
so
\[
p(d \mid x^{(n)}) \propto \prod_{v \in V} p\bigl(x_v^{(n)} \mid x_{\mathrm{pa}(v)}^{(n)}\bigr);
\]
see e.g. Spiegelhalter and Lauritzen (1990); Cooper and Herskovits (1992); Heckerman et al. (1995).

Challenge: find a good algorithm for sampling from this full posterior.

PC algorithm

The first step of constraint-based methods (e.g. the PC algorithm) is to identify the skeleton of G, which is the undirected graph with $\alpha \not\sim \beta$ if and only if there exists $S \subseteq V \setminus \{\alpha, \beta\}$ with $\alpha \perp\!\!\!\perp_{\mathcal{G}} \beta \mid S$. The skeleton stage of any such algorithm is thus correct for any type of graph that represents the independence structure! In particular, the PC algorithm can easily be adapted to UGs and CHGs (Meek, 1996).

Challenge: the properties of these algorithms when translated to sampling situations are not sufficiently well understood.

Step 1: Identify the skeleton using that, for P faithful,
\[
u \not\sim v \iff \exists\, S \subseteq V \setminus \{u, v\}:\ X_u \perp\!\!\!\perp X_v \mid X_S.
\]
Begin with the complete graph, check for $S = \emptyset$ and remove edges where independence holds; then continue for increasing $|S|$. The PC algorithm (Spirtes et al., 1993) exploits that only $S$ with $S \subseteq \mathrm{ne}(u)$ or $S \subseteq \mathrm{ne}(v)$ need checking, where $\mathrm{ne}$ refers to the current skeleton.

Step 2: Identify directions so as to be consistent with the independence relations found in Step 1.
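A minimal sketch of Step 1 for Gaussian data, using the Fisher-z test of zero partial correlation as the conditional independence query; Step 2 (orientation) and the many refinements in implementations such as pcalg are not shown.

```python
# Sketch of the PC skeleton phase for Gaussian data, with Fisher-z tests of
# zero partial correlation as the independence oracle. Illustrative only.
from itertools import combinations
import numpy as np
from scipy.stats import norm


def partial_correlation(corr, u, v, S):
    idx = [u, v] + list(S)
    K = np.linalg.inv(corr[np.ix_(idx, idx)])    # concentration of the margin
    return -K[0, 1] / np.sqrt(K[0, 0] * K[1, 1])


def ci_test(corr, n, u, v, S, alpha=0.01):
    r = np.clip(partial_correlation(corr, u, v, S), -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r))          # Fisher z-transform
    stat = np.sqrt(n - len(S) - 3) * abs(z)
    return 2 * (1 - norm.cdf(stat)) > alpha      # True if independence accepted


def pc_skeleton(data, alpha=0.01):
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    adj = {v: set(range(p)) - {v} for v in range(p)}
    sepset = {}
    size = 0
    # Increase |S| while some adjacent pair still has enough neighbours.
    while any(len(adj[u] - {v}) >= size for u in adj for v in adj[u]):
        for u in range(p):
            for v in list(adj[u]):
                if v not in adj[u]:
                    continue
                for S in combinations(adj[u] - {v}, size):
                    if ci_test(corr, n, u, v, S, alpha):
                        adj[u].discard(v)
                        adj[v].discard(u)
                        sepset[frozenset({u, v})] = set(S)
                        break
        size += 1
    return adj, sepset
```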

pcalg for bodyfat [figure: estimated graph over Chest, BodyFat, Hip, Knee, Biceps, Age, Abdomen, Thigh, Ankle, Forearm, Height, Weight, Wrist, Neck]

pcalg for carcass [figure: estimated graph over Meat11, Meat12, Meat13, LeanMeat, Fat11, Fat12, Fat13]

Faithfulness

A given distribution P is in general compatible with a variety of structures, e.g. if P corresponds to complete independence. To identify a structure G, something like the following must hold. P is said to be faithful to G if
\[
A \perp\!\!\!\perp B \mid X_S \iff A \perp_{\mathcal{G}} B \mid S.
\]
Most distributions are faithful. More precisely, for DAGs the non-faithful distributions form a Lebesgue null set in the parameter space associated with the DAG.

Exact properties of the PC algorithm

If P is faithful to a DAG D, the PC algorithm finds D' equivalent to D. It uses N independence checks, where
\[
N \le 2\binom{|V|}{2} \sum_{i=0}^{d} \binom{|V|-1}{i} \le \frac{|V|^{d+1}}{(d-1)!},
\]
with d the maximal degree of any vertex in D. So the worst-case complexity is exponential, but the algorithm is fast for sparse graphs. Sampling properties are less well understood, although consistency results exist.

Constraint-based methods fundamentally establish two lists:
1. An independence list I of triplets $(\alpha, \beta, S)$ with $\alpha \perp\!\!\!\perp \beta \mid S$; this identifies the skeleton.
2. A dependence list D of triplets $(\alpha, \beta, S)$ with $\neg(\alpha \perp\!\!\!\perp \beta \mid S)$.

They are established recursively, for increasing size of S, and the list I is the more likely to contain errors. The lists may or may not be consistent with a DAG model, but methods exist for checking this.

Question: Assume a current pair of input lists is consistent with the compositional graphoid axioms and a new triplet $(\alpha, \beta, S)$ is considered. Can it be verified whether it can be consistently added to either of the two lists? Graph representable? Compositional?

It seems out of hand to extend Bayesian and score-based methods to more general graphical models.
Fully Bayesian methods typically do not scale well with the number of variables.
Constraint-based methods have a less clear formal inference basis; it is a challenge to improve this.
Constraint-based methods have been developed to work in cases where P is faithful and conditional independence queries can be resolved without error.

Factorisation properties for Markov distributions
Local computation algorithms for speeding up any relevant computation
Causality and causal interpretations
Non-parametric graphical models for large-scale data analysis
Probabilistic expert systems, for example in forensic identification problems
Markov theory for infinite graphs (huge graphs)

THANK YOU FOR LISTENING!

References

Aldous, D. (1990). A random walk construction of uniform spanning trees and uniform labelled trees. SIAM Journal on Discrete Mathematics, 3(4):450–465.

Banerjee, O., El Ghaoui, L., and d'Aspremont, A. (2008). Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research, 9:485–516.

Broder, A. (1989). Generating random spanning trees. In Thirtieth Annual Symposium on Foundations of Computer Science, pages 442–447.

Byrne, S. (2011). Hyper and Structural Markov Laws for Graphical Models. PhD thesis, Statistical Laboratory, University of Cambridge.

Chickering, D. M. (2002). Optimal structure identification with greedy search. Journal of Machine Learning Research, 3:507–554.

Chow, C. K. and Liu, C. N. (1968). Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14:462–467.

Chow, C. K. and Wagner, T. J. (1973). Consistency of an estimate of tree-dependent probability distributions. IEEE Transactions on Information Theory, 19:369–371.

Cooper, G. F. and Herskovits, E. (1992). A Bayesian method for the induction of probabilistic networks from data. Machine Learning, 9:309–347.

Dai, H. (2008). Perfect sampling methods for random forests. Advances in Applied Probability, 40:897–917.

Dawid, A. P. and Lauritzen, S. L. (1993). Hyper Markov laws in the statistical analysis of decomposable graphical models. The Annals of Statistics, 21:1272–1317.

Edwards, D., de Abreu, G., and Labouriau, R. (2010). Selecting high-dimensional mixed graphical models using minimal AIC or BIC forests. BMC Bioinformatics, 11(1):18.

Friedman, J., Hastie, T., and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441.

Guénoche, A. (1983). Random spanning tree. Journal of Algorithms, 4(3):214–220.

Heckerman, D., Geiger, D., and Chickering, D. M. (1995). Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20:197–243.

Højsgaard, S., Edwards, D., and Lauritzen, S. (2012). Graphical Models with R. Springer-Verlag, New York.

Kruskal Jr., J. B. (1956). On the shortest spanning subtree of a graph and the travelling salesman problem. Proceedings of the American Mathematical Society, 7:48–50.

Meek, C. (1996). Selecting Graphical Models: Causal and Statistical Reasoning. PhD thesis, Carnegie Mellon University.

Panayidou, K. (2011). Estimation of Tree Structure for Variable Selection. PhD thesis, Department of Statistics, University of Oxford.

Propp, J. G. and Wilson, D. B. (1998). How to get a perfectly random sample from a generic Markov chain and generate a random spanning tree of a directed graph. Journal of Algorithms, 27(2):170–217.

Spiegelhalter, D. J. and Lauritzen, S. L. (1990). Sequential updating of conditional probabilities on directed graphical structures. Networks, 20:579–605.

Spirtes, P., Glymour, C., and Scheines, R. (1993). Causation, Prediction and Search. Springer-Verlag, New York. Reprinted by MIT Press.