# TELCOM2125: Network Science and Analysis

School of Information Sciences, University of Pittsburgh. Konstantinos Pelechrinis, Spring 2015.

## Part 4: Dividing Networks into Clusters

## The problem

- Graph partitioning and community detection both refer to dividing the vertices of a graph into groups based on the connection patterns of the edges. The most common approach is to divide the vertices into groups such that the majority of edges connect members of the same group.
- Discovering communities can reveal structure and organization in a network beyond the scale of a single vertex.

*(Slide: a visual example of a network divided into groups.)*

## Number of groups

- Depending on whether we specify in advance the number of groups into which we want to divide the network, we get two different problems: graph partitioning and community detection.
- Even though the general goal is the same, different algorithms deal with each of the two problems.

## Graph partitioning

- In graph partitioning the number of groups k is pre-defined.
- We want to divide the network vertices into k non-overlapping groups of given sizes such that the number of edges across groups is minimized.
  - The sizes may also be given only approximately, e.g., within a specific range.
  - Example: divide the network nodes into two groups of equal size such that the number of edges between them is minimized.
- Graph partitioning is useful, for instance, in parallel processing of numerical solutions of network processes.

## Community detection

- In community detection neither the number of groups nor their sizes are pre-defined.
- The goal is to find the natural fault lines along which the network separates: few edges between groups and many edges within groups.
- Community detection is not as well-posed a problem as graph partitioning: what do "many" and "few" mean? Many different objectives lead to many different algorithms.

## Graph partitioning vs. community detection

- Graph partitioning is typically performed to divide a large network into smaller parts for faster, more manageable processing.
- Community detection aims at understanding the large-scale structure of a network by identifying its connection patterns.
- Hence, graph partitioning must always output a partition (even if it is not a good one), while community detection is not required to provide an answer if no good division exists.

## Why is partitioning hard?

- Graph partitioning is an easy problem to state and understand, but it is not easy to solve.
- Let's start with the simplest version: dividing a network into two equally sized parts, known as graph bisection.
  - Partitioning into an arbitrary number of parts can be realized by repeated bisection.
  - The number of edges between the two groups is called the cut size; bisection can be thought of as a generalized minimum-cut problem.

- An optimal solution would have to examine all possible partitions of the network into two groups of sizes $n_1$ and $n_2$. There are $$\frac{n!}{n_1!\,n_2!}$$ of them.
- Such an exhaustive algorithm requires exponential time and is therefore impractical for all but very small networks. We have to rely on approximation/heuristic algorithms: they require less time, but the solution is not optimal (though usually acceptable).
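
The count above is just the binomial coefficient $\binom{n}{n_1}$. A quick standard-library check (the helper name is illustrative):

```python
from math import comb

# Number of ways to split n labeled vertices into two groups of sizes
# n1 and n2 = n - n1:  n! / (n1! n2!) = C(n, n1).
def num_bisections(n, n1):
    return comb(n, n1)

print(num_bisections(10, 5))    # 252 splits for a 10-vertex bisection
print(num_bisections(100, 50))  # ~1e29: exhaustive search is hopeless
```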

## The Kernighan-Lin algorithm

- The Kernighan-Lin algorithm is a greedy heuristic for graph partitioning.
- We begin with an initial (possibly random) partition of the vertices.
  - At each step we examine every pair of vertices (i, j) where i and j belong to different groups, and swap the pair whose exchange has the best effect on the cut size, i.e., the maximum decrease or the least increase.
  - We repeat the above, considering all pairs across groups except those involving vertices that have already been swapped.
  - Once all swaps have been completed, we go back through all the states the network passed through and choose the one in which the cut size is smallest.

- We can repeat this algorithm for many rounds: each round starts from the best division found in the previous round, and the algorithm terminates when a round yields no improvement in the cut size.
- Different initial assignments can give different final partitions, so we can run the algorithm several times and pick the best solution.

- Despite being straightforward, the Kernighan-Lin algorithm is slow.
- Every round performs a number of swaps equal to the size of the smaller group: O(n).
- At every step of every round we must check all pairs of not-yet-swapped vertices belonging to different groups: O(n²) pairs. For each pair we need to determine the change in the cut size if the two vertices are swapped.

- If $k_i^{\text{same}}$ is the number of edges vertex $i$ has to vertices in its own group and $k_i^{\text{other}}$ the number it has to the other group, then the reduction in the cut size achieved by swapping $i$ and $j$ is: $$\Delta = k_i^{\text{other}} - k_i^{\text{same}} + k_j^{\text{other}} - k_j^{\text{same}} - 2A_{ij}$$
- The cost of evaluating this expression depends on how we store the network: with an adjacency list it takes O(m/n) on average; with some tricks and an adjacency matrix, O(1).
- Nevertheless, the overall running time is O(n³).
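
As a concrete illustration of the Δ computation, here is a minimal sketch (assuming NumPy and a toy graph of two triangles joined by one edge; `cut_size` and `swap_gain` are hypothetical helper names, not part of any library):

```python
import numpy as np

def cut_size(A, s):
    """Cut size for a split encoded as s[i] = +1 or -1.
    Cross-group pairs contribute (1 - s_i s_j) = 2, and each edge
    appears twice in the symmetric sum, hence the division by 4."""
    return np.sum(A * (1 - np.outer(s, s))) / 4

def swap_gain(A, s, i, j):
    """Reduction in cut size if i and j (in different groups) swap:
    Delta = k_i_other - k_i_same + k_j_other - k_j_same - 2*A[i,j]."""
    same_i, other_i = A[i] @ (s == s[i]), A[i] @ (s != s[i])
    same_j, other_j = A[j] @ (s == s[j]), A[j] @ (s != s[j])
    return other_i - same_i + other_j - same_j - 2 * A[i, j]

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by edge (2,3).
A = np.zeros((6, 6), int)
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
s = np.array([1, 1, 1, -1, -1, -1])   # the natural bisection
print(cut_size(A, s))         # only edge (2,3) is cut
print(swap_gain(A, s, 2, 3))  # negative: swapping 2 and 3 widens the cut
```

A full Kernighan-Lin pass would evaluate `swap_gain` for every cross-group pair, perform the best swap, lock the two swapped vertices, and keep the best intermediate cut seen.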

## Spectral partitioning

- Suppose the vertices are partitioned into groups 1 and 2. The total number of edges between the two groups is: $$R = \frac{1}{2}\sum_{\substack{i,j \text{ in different}\\ \text{groups}}} A_{ij}$$
- Let's define: $$s_i = \begin{cases} +1, & \text{if vertex } i \text{ belongs to group 1} \\ -1, & \text{if vertex } i \text{ belongs to group 2} \end{cases}$$
- Then: $$\frac{1}{2}(1 - s_i s_j) = \begin{cases} 1, & \text{if } i \text{ and } j \text{ are in different groups} \\ 0, & \text{if } i \text{ and } j \text{ are in the same group} \end{cases}$$

- Hence, we can now write R as: $$R = \frac{1}{4}\sum_{ij} A_{ij}(1 - s_i s_j)$$ where the summation now runs over all $i$ and $j$.
- We also have $\sum_j A_{ij} = k_i$ and $s_i^2 = 1$, so that: $$\sum_{ij} A_{ij} = \sum_i k_i = \sum_i k_i s_i^2 = \sum_{ij} k_i \delta_{ij} s_i s_j$$
- Hence, we can write: $$R = \frac{1}{4}\sum_{ij} (k_i \delta_{ij} - A_{ij})\, s_i s_j = \frac{1}{4}\sum_{ij} L_{ij}\, s_i s_j = \frac{1}{4}\,\mathbf{s}^T \mathbf{L}\, \mathbf{s}$$ where $\mathbf{L}$ is the graph Laplacian.

- Minimizing the quantity R above is the formal definition of graph partitioning (bisection).
- The reason it is hard to solve is that the elements $s_i$ cannot take arbitrary real values, only the specific integer values ±1.
- Such integer optimization problems can be solved approximately using the relaxation method.

- In the original integer problem, the vector $\mathbf{s}$ points to one of the $2^n$ corners of an n-dimensional hypercube centered at the origin. Since $\sum_i s_i^2 = n$, the length of this vector is $\sqrt{n}$.
- We relax this constraint by allowing $\mathbf{s}$ to point in any direction, as long as its length is still $\sqrt{n}$.

- A second constraint in the integer optimization problem is that the numbers of +1 and −1 elements of $\mathbf{s}$ must match the desired sizes of the two groups: $$\sum_i s_i = n_1 - n_2 \quad\Longleftrightarrow\quad \mathbf{1}^T\mathbf{s} = n_1 - n_2$$ We keep this constraint unchanged.
- So we have a minimization problem subject to two equality constraints, which we handle with Lagrange multipliers.

- Differentiating the Lagrangian with respect to $s_i$: $$\frac{\partial}{\partial s_i}\left[\sum_{jk} L_{jk} s_j s_k + \lambda\Big(n - \sum_j s_j^2\Big) + 2\mu\Big((n_1 - n_2) - \sum_j s_j\Big)\right] = 0 \;\Rightarrow\; \sum_j L_{ij} s_j = \lambda s_i + \mu \;\Rightarrow\; \mathbf{L}\mathbf{s} = \lambda\mathbf{s} + \mu\mathbf{1}$$
- The vector $(1,1,\dots,1)^T$ is an eigenvector of $\mathbf{L}$ with eigenvalue 0. Hence, multiplying the above equation on the left by $\mathbf{1}^T$ gives $0 = \lambda(n_1 - n_2) + \mu n$, so: $$\mu = -\frac{n_1 - n_2}{n}\,\lambda$$
- If we define: $$\mathbf{x} = \mathbf{s} + \frac{\mu}{\lambda}\mathbf{1} = \mathbf{s} - \frac{n_1 - n_2}{n}\mathbf{1}$$ we finally get: $$\mathbf{L}\mathbf{x} = \mathbf{L}\Big(\mathbf{s} + \frac{\mu}{\lambda}\mathbf{1}\Big) = \mathbf{L}\mathbf{s} = \lambda\mathbf{s} + \mu\mathbf{1} = \lambda\mathbf{x}$$

- The last equation tells us that the vector $\mathbf{x}$ is an eigenvector of the Laplacian with eigenvalue $\lambda$.
- Given that any eigenvector satisfies this equation, we pick the one that minimizes R. However, it cannot be $(1,1,\dots,1)^T$, since $\mathbf{x}$ is orthogonal to that vector: $$\mathbf{1}^T\mathbf{x} = \mathbf{1}^T\mathbf{s} + \frac{\mu}{\lambda}\mathbf{1}^T\mathbf{1} = (n_1 - n_2) - \frac{n_1 - n_2}{n}\,n = 0$$
- So which eigenvector should we pick?

- We rewrite R as: $$R = \frac{1}{4}\mathbf{s}^T\mathbf{L}\mathbf{s} = \frac{1}{4}\mathbf{x}^T\mathbf{L}\mathbf{x} = \frac{\lambda}{4}\,\mathbf{x}^T\mathbf{x} = \frac{\lambda}{4}\left(n - \frac{(n_1 - n_2)^2}{n}\right) = \frac{n_1 n_2}{n}\,\lambda$$
- Hence, the cut size is proportional to the eigenvalue $\lambda$. We pick the second-smallest eigenvalue $\lambda_2$ (since $\lambda = 0$ cannot be a solution, as we saw), and then the vector $\mathbf{x}$ is proportional to the eigenvector $\mathbf{v}_2$.
- Finally, vector $\mathbf{s}$ is given by: $$\mathbf{s} = \mathbf{x} + \frac{n_1 - n_2}{n}\mathbf{1}$$ This vector is still relaxed (real-valued).

- How do we transform the relaxed vector $\mathbf{s}$ into one with values ±1 only? We simply set the $n_1$ most positive elements of the relaxed vector to +1 and the rest to −1.
- Since $\mathbf{x}$, and hence $\mathbf{s}$ up to a uniform shift, is proportional to the eigenvector $\mathbf{v}_2$ of the Laplacian, we arrive at a very simple result: compute $\mathbf{v}_2$ and place the $n_1$ vertices with the most positive elements in group 1 and the rest in group 2.
- When $n_1$ and $n_2$ are not equal, there are two ways of making the split. Which do we pick?

- The final steps are as follows:
  1. Compute the eigenvector $\mathbf{v}_2$ corresponding to the second-smallest eigenvalue $\lambda_2$ of the graph Laplacian.
  2. Sort the elements of the eigenvector from largest to smallest.
  3. Put the vertices corresponding to the $n_1$ largest elements in group 1 and the rest in group 2, and calculate the cut size.
  4. Put the vertices corresponding to the $n_1$ smallest elements in group 1 and the rest in group 2, and calculate the cut size.
  5. Between these two divisions of the network, choose the one that gives the smaller cut size.
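
The five steps above can be sketched as follows (a minimal dense-matrix version assuming NumPy; `spectral_bisect` is a hypothetical name, and a real implementation would use sparse eigensolvers):

```python
import numpy as np

def spectral_bisect(A, n1):
    """Bisect into groups of sizes n1 and n - n1 using the eigenvector
    v2 of the Laplacian; try both ends of v2, keep the smaller cut."""
    L = np.diag(A.sum(axis=1)) - A
    v2 = np.linalg.eigh(L)[1][:, 1]        # second-smallest eigenpair
    order = np.argsort(-v2)                # most positive elements first
    best = None
    for group1 in (order[:n1], order[-n1:]):
        s = -np.ones(len(A))
        s[group1] = 1
        cut = np.sum(A[s == 1][:, s == -1])  # edges crossing the split
        if best is None or cut < best[0]:
            best = (cut, s)
    return best

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by edge (2,3).
A = np.zeros((6, 6), int)
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
cut, s = spectral_bisect(A, 3)
print(cut, s)   # a cut of size 1, separating the two triangles
```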

- In general, spectral partitioning finds divisions of a network that have roughly the right shape, but are not quite as good as those obtained by other algorithms.
- However, its main advantage is speed, and hence scalability: the running time is O(mn), i.e., O(n²) on a sparse network.

## Algebraic connectivity

- We have seen before that the second-smallest eigenvalue $\lambda_2$ of the graph Laplacian tells us whether the network is connected or not.
- When the network is connected, its value tells us how easy the network is to divide: we saw that the cut size is proportional to $\lambda_2$, so a small $\lambda_2$ means the network has good cuts and can be disconnected by removing only a few edges.
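
This is easy to check numerically (a sketch assuming NumPy; `algebraic_connectivity` is an illustrative helper name):

```python
import numpy as np

def algebraic_connectivity(A):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]

# Path 0-1-2-3 (connected) vs. two disjoint edges (disconnected).
path = np.array([[0,1,0,0], [1,0,1,0], [0,1,0,1], [0,0,1,0]])
split = np.array([[0,1,0,0], [1,0,0,0], [0,0,0,1], [0,0,1,0]])
print(algebraic_connectivity(path))   # 2 - sqrt(2) ~ 0.586: connected
print(algebraic_connectivity(split))  # 0: the graph is disconnected
```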

## Community detection methods

- Community detection is primarily used as a tool for understanding the large-scale structure of networks.
- The number and sizes of the groups are not specified: community detection aims at finding the natural division of the network into groups. The fault lines along which the network divides are not well defined, and so neither is our problem.
- Let's start with the simplest instance of community detection: we again want to divide the network into two groups, but now we do not know the sizes of these groups.

- One solution would be to perform graph bisection, with one of the algorithms presented before, for all combinations of $n_1$ and $n_2$ and pick the combination with the smallest cut size. This will not work: the cut size is trivially minimized (it becomes zero) by placing every vertex in a single group.
- Another approach is ratio cut partitioning: instead of minimizing R, we minimize the ratio $$\frac{R}{n_1 n_2}$$
- The denominator takes its maximum value when $n_1 = n_2 = n/2$, and the ratio diverges if either of the sizes is zero. Hence the trivial solutions with $n_1 = 0$ or $n_2 = 0$ are eliminated, but the objective is still biased, now towards equally sized groups.

- It has been argued that cut size is not a good metric for evaluating community divisions.
- A good division is one where the edges across groups are fewer than we would expect if edges were placed at random: it is not the total cut size that matters, but how it compares with what we expect to see.
- Conventionally, community detection algorithms are developed by considering the edges within groups instead.

- This approach follows the same thinking as the assortativity coefficient and modularity: modularity takes high values when the connections between vertices of the same type are more numerous than would be expected at random.
- In the context of community detection, the vertices of the same group can be considered to be of the same type; good divisions are the ones with high values of the corresponding modularity.
- Maximizing modularity exactly is an NP-hard problem, so in practice we use heuristics.

## Simple modularity maximization

- One straightforward approach is an algorithm analogous to the Kernighan-Lin graph bisection.
- We start with a random division of the network into two equally sized groups.
  - At each step we go through all vertices and compute the change in modularity if that vertex were moved to the other group.
  - Among all vertices examined, we move the one whose change has the best effect on modularity, i.e., the largest increase or smallest decrease.
  - We repeat the above process, considering only the vertices that have not already been moved.

- As with the Kernighan-Lin algorithm, after we go through all vertices we go back through all the states of the network and keep the one with the highest modularity.
- We repeat the whole process starting with the best division identified in the previous round, until the modularity no longer improves.
- The time complexity is significantly lower than that of the Kernighan-Lin algorithm: here we go through single vertices rather than all possible pairs, giving complexity O(mn).

## Spectral modularity maximization

- The modularity matrix $\mathbf{B}$ is defined as: $$B_{ij} = A_{ij} - \frac{k_i k_j}{2m}$$
- The modularity matrix has the property that its rows (and columns) sum to zero: $$\sum_j B_{ij} = \sum_j A_{ij} - \frac{k_i}{2m}\sum_j k_j = k_i - k_i = 0$$
- Let's consider the simplest case again, dividing the vertices into two groups. We define $s_i$ as in spectral partitioning ($s_i = +1$ if $i$ is in group 1, $-1$ if in group 2), so that: $$\delta(c_i, c_j) = \frac{1}{2}(s_i s_j + 1)$$

- Using the above, we can write the modularity of the network as: $$Q = \frac{1}{4m}\sum_{ij} B_{ij}(s_i s_j + 1) = \frac{1}{4m}\sum_{ij} B_{ij}\, s_i s_j = \frac{1}{4m}\,\mathbf{s}^T\mathbf{B}\,\mathbf{s}$$ (the +1 term vanishes because the rows of $\mathbf{B}$ sum to zero).
- This objective function has the same form as the one for spectral bisection. We still have the constraint $\sum_i s_i^2 = n$; the only difference is that we no longer have the constraint on the group sizes.
- As before, we solve the problem using the relaxation method, now with a single Lagrange multiplier.

- Differentiating: $$\frac{\partial}{\partial s_i}\left[\sum_{jk} B_{jk} s_j s_k + \beta\Big(n - \sum_j s_j^2\Big)\right] = 0 \;\Rightarrow\; \sum_j B_{ij} s_j = \beta s_i \;\Rightarrow\; \mathbf{B}\mathbf{s} = \beta\mathbf{s}$$
- In other words, the relaxed vector $\mathbf{s}$ is an eigenvector of the modularity matrix, and: $$Q = \frac{1}{4m}\,\beta\,\mathbf{s}^T\mathbf{s} = \frac{n}{4m}\,\beta$$
- For maximum modularity, we pick $\mathbf{s}$ to be the eigenvector $\mathbf{u}_1$ corresponding to the maximum eigenvalue of the modularity matrix.

- To remove the relaxation, we want the ±1 vector that maximizes $\mathbf{s}^T\mathbf{u}_1$ (why? because $\mathbf{s}$ cannot in general point exactly along $\mathbf{u}_1$, so we make it as parallel to $\mathbf{u}_1$ as possible).
- This gives the approximation: $$s_i = \begin{cases} +1, & \text{if } [\mathbf{u}_1]_i > 0 \\ -1, & \text{if } [\mathbf{u}_1]_i < 0 \end{cases}$$
- The modularity matrix is not sparse, so the running time can be high; by exploiting the structure of the modularity matrix we can still compute the leading eigenvector in O(n²) time for a sparse network.
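
Putting the last few slides together, a minimal dense-matrix sketch (assuming NumPy; `spectral_modularity_bisect` is a hypothetical name):

```python
import numpy as np

def spectral_modularity_bisect(A):
    """Split into two groups by the signs of the leading eigenvector
    of the modularity matrix B = A - k k^T / 2m; return (s, Q)."""
    k = A.sum(axis=1)
    m = k.sum() / 2
    B = A - np.outer(k, k) / (2 * m)
    u1 = np.linalg.eigh(B)[1][:, -1]   # eigenvector of largest eigenvalue
    s = np.where(u1 > 0, 1, -1)        # match the signs of u1
    Q = s @ B @ s / (4 * m)            # modularity of the resulting split
    return s, Q

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by edge (2,3).
A = np.zeros((6, 6), int)
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
s, Q = spectral_modularity_bisect(A)
print(s, Q)   # the triangles land in opposite groups, Q = 5/14
```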

## Division into more than two groups

- For community detection we cannot simply keep subdividing the pairs of communities output by the two-community algorithms: the modularity of the complete network does not break up into independent contributions from the separate communities.
- We must instead explicitly consider the change $\Delta Q$ in the modularity of the entire network upon further bisecting a community $c$ of size $n_c$. One can show this takes the form $\Delta Q = \frac{1}{4m}\,\mathbf{s}^T\mathbf{B}^{(c)}\mathbf{s}$ for a generalized modularity matrix $\mathbf{B}^{(c)}$ restricted to the community.

- This problem has the same form as spectral modularity maximization: we find the leading eigenvector of $\mathbf{B}^{(c)}$ and divide the community according to the signs of its elements.
- When do we stop this iterative process? If we are unable to find any division of a community that results in a positive change $\Delta Q$ in the modularity, we leave that community undivided; in that case the leading eigenvector has all elements of the same sign.
- When we have subdivided the network to the point where all communities are in this indivisible state, the algorithm terminates.

- The repeated bisection is by no means perfect.
- As we have mentioned, we can instead attempt to maximize modularity directly over divisions into any number of groups. This can, in principle, find better divisions, but it is more complicated and slower.

## Other modularity maximization methods

- Simulated annealing
- Genetic algorithms
- Greedy algorithms:
  - We begin with every vertex in its own community.
  - At every step we merge the pair of groups whose amalgamation gives the biggest increase in modularity, or the smallest decrease if no merge gives an increase.
  - Eventually all vertices are combined into a single group; we then trace back through the states the network passed and select the one with the highest value of modularity.
  - Time complexity: O(n log² n). The modularity achieved by greedy algorithms is slightly lower, but they are much faster than the other two approaches.
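
A naive version of the greedy scheme above can be sketched as follows (assuming NumPy; fast implementations such as the Clauset-Newman-Moore algorithm use clever data structures to reach O(n log² n), which this brute-force sketch does not attempt):

```python
import numpy as np

def modularity(A, labels):
    """Q for a division given as a community label per vertex."""
    k = A.sum(axis=1)
    m = k.sum() / 2
    same = labels[:, None] == labels[None, :]   # same-community mask
    return np.sum((A - np.outer(k, k) / (2 * m)) * same) / (2 * m)

def greedy_modularity(A):
    """Start with one community per vertex; repeatedly perform the
    merge with the best modularity; return the best state seen."""
    labels = np.arange(len(A))
    best = (modularity(A, labels), labels.copy())
    while len(set(labels)) > 1:
        cs = sorted(set(labels))
        trials = [np.where(labels == b, a, labels)
                  for i, a in enumerate(cs) for b in cs[i + 1:]]
        labels = max(trials, key=lambda t: modularity(A, t))
        Q = modularity(A, labels)
        if Q > best[0]:
            best = (Q, labels.copy())
    return best

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by edge (2,3).
A = np.zeros((6, 6), int)
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
Q, labels = greedy_modularity(A)
print(Q, labels)   # Q = 5/14, the two triangles as communities
```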

## Betweenness-based methods

- While graph partitioning is a problem with a well-defined objective function (minimize the cut size), the same does not hold for community detection: modularity maximization is only one possible way to formalize the problem, and different objective functions can lead to different algorithms.
- One alternative to modularity maximization is to find the edges that lie between communities: once we remove them, we will be left with just the isolated communities.

- Edge betweenness is the number of geodesic (shortest) paths that pass through an edge.
- Edges that lie between communities are expected to have high betweenness, because essentially all shortest paths between vertices of different communities must pass through them.
- The time to calculate the betweenness of all edges is O(n(m+n)).

- The algorithm (due to Girvan and Newman) proceeds as follows:
  1. Calculate the betweenness of each edge.
  2. Find the edge with the highest score and remove it.
  3. Recalculate the betweenness of the remaining edges, since the geodesic paths will have changed.
  4. Repeat until we eventually have a completely disconnected network of singleton vertices.
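
The expensive ingredient is step 1. Here is a compact sketch of edge betweenness for an unweighted graph, using a BFS-based accumulation in the style of Brandes' algorithm (standard library only; the adjacency-dict format and function name are illustrative):

```python
from collections import deque

def edge_betweenness(adj):
    """Edge betweenness for an undirected, unweighted graph given as
    {vertex: set(neighbors)}; each vertex pair is counted once."""
    eb = {(u, v): 0.0 for u in adj for v in adj[u] if u < v}
    for s in adj:
        # BFS from s: distances, shortest-path counts, predecessors.
        dist, sigma, preds = {s: 0}, {s: 1.0}, {v: [] for v in adj}
        order, q = [], deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w], sigma[w] = dist[v] + 1, 0.0
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate path dependencies from the leaves back towards s.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1 + delta[w])
                eb[min(v, w), max(v, w)] += c
                delta[v] += c
    # Every source-target pair was counted from both ends: halve.
    return {e: b / 2 for e, b in eb.items()}

# Toy graph: two triangles joined by the bridge (2,3).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
eb = edge_betweenness(adj)
print(max(eb, key=eb.get))   # (2, 3): the bridge scores highest
```

The Girvan-Newman loop would then remove that edge and recompute the scores on the remaining graph.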

- The states the network passes through during this procedure can be represented as a dendrogram.
- Leaves represent the final state (singleton vertices), while the top of the tree represents the starting state (a single community); horizontal cuts through the dendrogram represent intermediate configurations.
- The algorithm thus offers a selection of different possible decompositions of the network, from coarse divisions (top of the tree) to fine ones (bottom of the tree).

- The algorithm above gives very good results; however, it is one of the slowest algorithms for community detection: O(n³) on a sparse network.
- The ability to obtain an entire dendrogram rather than a single decomposition can be useful in many cases: it provides a hierarchical decomposition of the network.

- A variation of the algorithm is based on the observation that an edge connecting two components that are poorly connected to each other is unlikely to participate in many short loops.
- Thus, one can identify the edges between communities by examining the number of short loops each edge participates in, rather than its betweenness. Radicchi et al. found that examining loops of lengths 3 and 4 gave the best results.

- This approach is an order of magnitude faster than the betweenness-based one, since counting the short loops to which an edge belongs is a local operation.
- However, it works well only when the network has a significant number of short loops to begin with. Social networks do; other types of networks (e.g., technological or biological) might not have this property.

## Hierarchical clustering

- The betweenness-based method provides a dendrogram as its output; the dendrogram decomposes the network into nested communities in a hierarchical fashion.
- There is an entire class of algorithms that do the same thing but operate using an agglomerative (bottom-up) strategy rather than a divisive (top-down) one: hierarchical clustering algorithms.

- The basic idea behind hierarchical clustering is to initially consider every vertex as its own community.
- We then define a metric of similarity between vertices, based on the network structure, and repeatedly join the most similar vertices into groups.
- Any measure of structural equivalence can serve as the similarity metric: cosine similarity, the correlation coefficient between rows of the adjacency matrix, the (inverse) Euclidean distance between rows of the adjacency matrix, etc.

- Once we have calculated the pairwise similarities, the goal is to group together the vertices with the highest similarities; this can lead to conflicts.
- The first step joins the singleton pair with the highest similarity into a group of size two. After that we need to agglomerate groups possibly containing more than one vertex, yet our metrics measure pairwise vertex similarity, not group similarity.
- How can we combine these vertex similarities into similarities between groups?

- Consider two groups of $n_1$ and $n_2$ vertices respectively: there are $n_1 n_2$ pairs of vertices between them (one vertex from each group).
- Single-linkage clustering: the similarity between the groups is defined as the maximum of the pairwise similarities.
- Complete-linkage clustering: the similarity between the groups is defined as the minimum of the pairwise similarities.

- Average-linkage clustering: the similarity between the groups is defined as the mean of the pairwise similarities.
- Single-linkage clustering is very lenient (a single similar pair suffices), while complete-linkage clustering is much more stringent.
- Average-linkage clustering, in between the two extremes, is arguably the more satisfactory definition, but it is rarely used in practice.

- The full hierarchical clustering method is:
  1. Choose a similarity metric and evaluate it for all vertex pairs.
  2. Assign each vertex to a group of its own, consisting of just that one vertex; the initial group similarities are simply the vertex similarities.
  3. Find the pair of groups with the highest similarity and join them together into a single group.
  4. Calculate the similarity between the new composite group and all others using one of the three methods above (single-, complete-, or average-linkage clustering).
  5. Repeat from step 3 until all vertices have been joined into a single group.
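
A minimal single-linkage sketch of these steps (assuming NumPy; the cosine similarity here is computed on A + I so that adjacent vertices count as similar, a common trick, and the function names are illustrative):

```python
import numpy as np

def cosine_similarity(A):
    """Cosine similarity between rows of A + I (each vertex counts as
    its own neighbor, so adjacent vertices register as similar)."""
    M = A + np.eye(len(A))
    norms = np.linalg.norm(M, axis=1)
    return M @ M.T / np.outer(norms, norms)

def single_linkage(A, k):
    """Agglomerate vertices until k groups remain, always merging the
    pair of groups with the highest maximum pairwise similarity."""
    S = cosine_similarity(A)
    clusters = [{i} for i in range(len(A))]
    while len(clusters) > k:
        best, pair = -1.0, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                sim = max(S[i, j] for i in clusters[a] for j in clusters[b])
                if sim > best:
                    best, pair = sim, (a, b)
        a, b = pair
        clusters[a] |= clusters.pop(b)   # merge group b into group a
    return clusters

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by edge (2,3).
A = np.zeros((6, 6), int)
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
print(single_linkage(A, 2))   # the two triangles
```

Complete- or average-linkage would replace the inner `max` with `min` or the mean of the pairwise similarities.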

- Time complexity: O(n² log n) using a heap; with a union/find data structure and single-linkage clustering this can be sped up to O(n²).
- Hierarchical clustering does not always work well: it is able to find the cores of the groups, but tends to be less good at assigning peripheral vertices to appropriate groups.
  - Peripheral vertices are often left out of the agglomeration until the very end, so we get a set of tightly knit cores surrounded by a loose collection of single vertices or smaller groups.
  - Even this, however, can be valuable information about the network's structure.

*Figures are taken from: M.E.J. Newman, Networks: An Introduction.*

### Cluster Analysis: Agglomerate Hierarchical Clustering

Cluster Analysis: Agglomerate Hierarchical Clustering Yonghee Lee Department of Statistics, The University of Seoul Oct 29, 2015 Contents 1 Cluster Analysis Introduction Distance matrix Agglomerative Hierarchical

### Approximation Algorithms

18.433 Combinatorial Optimization Approximation Algorithms November 20,25 Lecturer: Santosh Vempala 1 Approximation Algorithms Any known algorithm that finds the solution to an NP-hard optimization problem

### Machine learning - HT Clustering

Machine learning - HT 2016 10. Clustering Varun Kanade University of Oxford March 4, 2016 Announcements Practical Next Week - No submission Final Exam: Pick up on Monday Material covered next week is not

### Feature Selection for fmri Classification

Feature Selection for fmri Classification Chuang Wu Program of Computational Biology Carnegie Mellon University Pittsburgh, PA 15213 chuangw@andrew.cmu.edu Abstract The functional Magnetic Resonance Imaging

### Lecture Notes for Chapter 7. Introduction to Data Mining, 2 nd Edition. by Tan, Steinbach, Karpatne, Kumar

Data Mining Cluster Analysis: Basic Concepts and Algorithms Lecture Notes for Chapter 7 Introduction to Data Mining, 2 nd Edition by Tan, Steinbach, Karpatne, Kumar Hierarchical Clustering Produces a set

### Clusters and Communities

Clusters and Communities Lecture 7 CSCI 4974/6971 22 Sep 2016 1 / 14 Today s Biz 1. Reminders 2. Review 3. Communities 4. Betweenness and Graph Partitioning 5. Label Propagation 2 / 14 Today s Biz 1. Reminders

### E-Companion: On Styles in Product Design: An Analysis of US. Design Patents

E-Companion: On Styles in Product Design: An Analysis of US Design Patents 1 PART A: FORMALIZING THE DEFINITION OF STYLES A.1 Styles as categories of designs of similar form Our task involves categorizing

### CS 664 Slides #11 Image Segmentation. Prof. Dan Huttenlocher Fall 2003

CS 664 Slides #11 Image Segmentation Prof. Dan Huttenlocher Fall 2003 Image Segmentation Find regions of image that are coherent Dual of edge detection Regions vs. boundaries Related to clustering problems

### Lecture 6: Unsupervised Machine Learning Dagmar Gromann International Center For Computational Logic

SEMANTIC COMPUTING Lecture 6: Unsupervised Machine Learning Dagmar Gromann International Center For Computational Logic TU Dresden, 23 November 2018 Overview Unsupervised Machine Learning overview Association

### CSE 258 Lecture 5. Web Mining and Recommender Systems. Dimensionality Reduction

CSE 258 Lecture 5 Web Mining and Recommender Systems Dimensionality Reduction This week How can we build low dimensional representations of high dimensional data? e.g. how might we (compactly!) represent

Figures are taken from: M.E.J. Newman, Networks: An Introduction.
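The graph-partitioning objective described in the slides (divide the vertices into two equal-size groups so that the number of edges between them is minimized) can be made concrete with a small brute-force sketch. This is illustrative only, not a practical partitioner: real tools use heuristics such as Kernighan–Lin or spectral bisection because the exact problem is NP-hard. The helper names and the toy graph below are invented for the example.

```python
from itertools import combinations

def cut_size(edges, group):
    """Count edges with exactly one endpoint in `group` (the cut)."""
    return sum(1 for u, v in edges if (u in group) != (v in group))

def min_bisection(nodes, edges):
    """Brute-force the equal-size two-way split with the fewest cut edges.

    Exponential in the number of vertices, so only usable on toy graphs;
    it exists purely to illustrate the objective function.
    """
    nodes = list(nodes)
    half = len(nodes) // 2
    best = None
    # Pin nodes[0] to the first group so each split is counted once.
    for rest in combinations(nodes[1:], half - 1):
        group = set(rest) | {nodes[0]}
        c = cut_size(edges, group)
        if best is None or c < best[0]:
            best = (c, group)
    return best

# Two triangles {0,1,2} and {3,4,5} joined by the single bridge edge (2, 3):
# the natural fault line cuts exactly one edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
cut, group = min_bisection(range(6), edges)
print(cut, sorted(group))  # → 1 [0, 1, 2]
```

For networks of realistic size, library implementations of the same idea (e.g. `kernighan_lin_bisection` and `greedy_modularity_communities` in NetworkX's `networkx.algorithms.community` module) replace this exhaustive search with the heuristic and community-detection methods the course covers next.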