Chapter 4: Mining Frequent Patterns, Associations and Correlations
4.1 Basic Concepts
4.2 Frequent Itemset Mining Methods
4.3 Which Patterns Are Interesting? Pattern Evaluation Methods
4.4 Summary
Frequent Pattern Analysis
- Frequent pattern: a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set
- Goal: find inherent regularities in data
  - What products were often purchased together? Beer and diapers?!
  - What are the subsequent purchases after buying a PC?
  - What kinds of DNA are sensitive to this new drug?
  - Can we automatically classify Web documents?
- Applications: basket data analysis, cross-marketing, catalog design, sales campaign analysis, Web log (click-stream) analysis, and DNA sequence analysis
Why Is Frequent Pattern Mining Important?
- Frequent patterns reveal an intrinsic and important property of data sets
- They form the foundation for many essential data mining tasks:
  - Association, correlation, and causality analysis
  - Sequential and structural (e.g., sub-graph) patterns
  - Pattern analysis in spatiotemporal, multimedia, time-series, and stream data
  - Classification: discriminative frequent-pattern analysis
  - Cluster analysis: frequent-pattern-based clustering
  - Data warehousing: iceberg cubes and cube gradients
  - Semantic data compression
- Broad applications
Frequent Patterns

Tid | Items bought
10  | Beer, Nuts, Diaper
20  | Beer, Coffee, Diaper
30  | Beer, Diaper, Eggs
40  | Nuts, Eggs, Milk
50  | Nuts, Coffee, Diaper, Eggs, Milk

- Itemset: a set of one or more items
- k-itemset: X = {x_1, ..., x_k}
- (Absolute) support, or support count, of X: the frequency, i.e., the number of occurrences, of itemset X
- (Relative) support s: the fraction of transactions that contain X (i.e., the probability that a transaction contains X)
- An itemset X is frequent if X's support is no less than a minsup threshold

[Figure: Venn diagram of customers buying beer, customers buying diapers, and customers buying both]
Association Rules
- Using the same transaction table as above, find all rules X => Y that satisfy a minimum support and a minimum confidence threshold:
  - Support s: the probability that a transaction contains X ∪ Y
  - Confidence c: the conditional probability that a transaction containing X also contains Y
- Let minsup = 50%, minconf = 50%
- Frequent patterns: Beer:3, Nuts:3, Diaper:4, Eggs:3, {Beer, Diaper}:3
- Association rules (among many more):
  - Beer => Diaper (support 60%, confidence 100%)
  - Diaper => Beer (support 60%, confidence 75%)
- Rules that satisfy both minsup and minconf are called strong rules
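These definitions translate directly into code. A minimal Python sketch (the helper names `support` and `confidence` are ours, not from a library) that reproduces the numbers above:

```python
# Toy transaction table from the slide (Tids 10-50).
db = [
    {"Beer", "Nuts", "Diaper"},                     # 10
    {"Beer", "Coffee", "Diaper"},                   # 20
    {"Beer", "Diaper", "Eggs"},                     # 30
    {"Nuts", "Eggs", "Milk"},                       # 40
    {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"},   # 50
]

def support(itemset):
    """Relative support: the fraction of transactions containing the itemset."""
    return sum(itemset <= t for t in db) / len(db)

def confidence(X, Y):
    """conf(X => Y) = support(X ∪ Y) / support(X)."""
    return support(X | Y) / support(X)

print(support({"Beer", "Diaper"}))       # 0.6
print(confidence({"Beer"}, {"Diaper"}))  # 1.0  -> Beer => Diaper (60%, 100%)
print(confidence({"Diaper"}, {"Beer"}))  # 0.75 -> Diaper => Beer (60%, 75%)
```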
Closed Patterns and Max-Patterns
- A long pattern contains a combinatorial number of sub-patterns; e.g., {a_1, ..., a_100} contains (100 choose 1) + (100 choose 2) + ... + (100 choose 100) = 2^100 − 1 ≈ 1.27 × 10^30 sub-patterns!
- Solution: mine closed patterns and max-patterns instead
- An itemset X is closed if X is frequent and there exists no super-pattern Y ⊃ X with the same support as X
- An itemset X is a max-pattern if X is frequent and there exists no frequent super-pattern Y ⊃ X
- Closed patterns are a lossless compression of frequent patterns
  - They reduce the number of patterns and rules
Closed Patterns and Max-Patterns: Example
- DB = {<a_1, ..., a_100>, <a_1, ..., a_50>}, with min_sup = 1
- What is the set of closed itemsets?
  - <a_1, ..., a_100>: 1
  - <a_1, ..., a_50>: 2
- What is the set of max-patterns?
  - <a_1, ..., a_100>: 1
- What is the set of all patterns? Far too many to enumerate: every non-empty subset of {a_1, ..., a_100} that appears in the DB!
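Because support is anti-monotone, an itemset is closed iff no one-item extension has the same support, and maximal iff no one-item extension is frequent. A brute-force sketch (all helper names are ours) verifying the example above:

```python
min_sup = 1
db = [frozenset(range(1, 101)), frozenset(range(1, 51))]   # <a1..a100>, <a1..a50>
items = frozenset(range(1, 101))

def support_count(X):
    return sum(X <= t for t in db)

def is_closed(X):
    # Closed: frequent, and no immediate superset has the same support.
    s = support_count(X)
    return s >= min_sup and all(support_count(X | {i}) < s for i in items - X)

def is_max(X):
    # Max-pattern: frequent, and no immediate superset is frequent.
    s = support_count(X)
    return s >= min_sup and all(support_count(X | {i}) < min_sup for i in items - X)

a100, a50 = frozenset(range(1, 101)), frozenset(range(1, 51))
print(is_closed(a50), is_max(a50))     # True False (a frequent superset exists)
print(is_closed(a100), is_max(a100))   # True True
```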
Computational Complexity
- How many itemsets may potentially be generated in the worst case?
  - The number of frequent itemsets to be generated is sensitive to the minsup threshold
  - When minsup is low, there can exist an exponential number of frequent itemsets
  - Worst case: M^N, where M is the number of distinct items and N is the maximum transaction length
Chapter 4: Mining Frequent Patterns, Associations and Correlations
4.1 Basic Concepts
4.2 Frequent Itemset Mining Methods
  4.2.1 Apriori: A Candidate Generation-and-Test Approach
  4.2.2 Improving the Efficiency of Apriori
  4.2.3 FP-Growth: A Frequent Pattern-Growth Approach
  4.2.4 ECLAT: Frequent Pattern Mining with Vertical Data Format
4.3 Which Patterns Are Interesting? Pattern Evaluation Methods
4.4 Summary
4.2.1 Apriori: Concepts and Principle
- The downward closure property of frequent patterns: any subset of a frequent itemset must also be frequent
  - If {beer, diaper, nuts} is frequent, so is {beer, diaper}: every transaction containing {beer, diaper, nuts} also contains {beer, diaper}
- Apriori pruning principle: if any itemset is infrequent, its supersets should not be generated or tested
4.2.1 Apriori: Method
- Initially, scan the DB once to find the frequent 1-itemsets
- Generate length-(k+1) candidate itemsets from the length-k frequent itemsets
- Test the candidates against the DB
- Terminate when no frequent or candidate set can be generated
Apriori: Example (min_sup = 2)

Database:
Tid | Items
10  | A, C, D
20  | B, C, E
30  | A, B, C, E
40  | B, E

1st scan: count the candidate 1-itemsets C1, then keep those meeting min_sup as L1:
C1: {A}:2, {B}:3, {C}:3, {D}:1, {E}:3
L1: {A}:2, {B}:3, {C}:3, {E}:3            ({D} is pruned)

Generate C2 from L1, then 2nd scan to count:
C2: {A,B}:1, {A,C}:2, {A,E}:1, {B,C}:2, {B,E}:3, {C,E}:2
L2: {A,C}:2, {B,C}:2, {B,E}:3, {C,E}:2    ({A,B} and {A,E} are pruned)

Generate C3 from L2, then 3rd scan to count:
C3: {B,C,E}
L3: {B,C,E}:2
Apriori Algorithm
- C_k: candidate itemsets of size k
- L_k: frequent itemsets of size k

L_1 = {frequent items};
for (k = 1; L_k != ∅; k++) do begin
    C_{k+1} = candidates generated from L_k;
    for each transaction t in the database do
        increment the count of all candidates in C_{k+1} that are contained in t;
    L_{k+1} = candidates in C_{k+1} with min_support;
end
return ∪_k L_k;
Candidate Generation
- How to generate candidates?
  - Step 1: self-join L_k with itself
  - Step 2: prune
- Example of candidate generation (see the Python sketch below):
  - L_3 = {abc, abd, acd, ace, bcd}
  - Self-join L_3 * L_3: abcd from abc and abd; acde from acd and ace
  - Pruning: acde is removed because its subset ade is not in L_3
  - C_4 = {abcd}
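The join and prune steps fit in a few lines of Python. Below is a sketch (function names are ours; itemsets are kept as sorted tuples so the self-join is a prefix test), which reproduces both this slide's C_4 example and the L1-L3 tables from the earlier example:

```python
from collections import Counter
from itertools import combinations

def apriori_gen(Lk):
    """Join step: merge pairs of k-itemsets sharing a (k-1)-prefix.
    Prune step: drop candidates that have an infrequent k-subset."""
    Lk = set(Lk)
    k = len(next(iter(Lk)))
    candidates = set()
    for a in Lk:
        for b in Lk:
            if a[:-1] == b[:-1] and a[-1] < b[-1]:            # self-join
                c = a + (b[-1],)
                if all(s in Lk for s in combinations(c, k)):  # prune
                    candidates.add(c)
    return candidates

def apriori(db, min_sup):
    db = [frozenset(t) for t in db]
    counts = Counter(i for t in db for i in t)
    Lk = {(i,) for i, c in counts.items() if c >= min_sup}
    frequent = set(Lk)
    while Lk:
        Ck = apriori_gen(Lk)
        counts = Counter()
        for t in db:                                          # one scan per level
            for c in Ck:
                if t.issuperset(c):
                    counts[c] += 1
        Lk = {c for c in Ck if counts[c] >= min_sup}
        frequent |= Lk
    return frequent

# Candidate generation example from this slide:
L3 = {("a","b","c"), ("a","b","d"), ("a","c","d"), ("a","c","e"), ("b","c","d")}
print(apriori_gen(L3))         # {('a','b','c','d')}: acde pruned (ade not in L3)

# Full run on the example database (min_sup = 2):
db = [{"A","C","D"}, {"B","C","E"}, {"A","B","C","E"}, {"B","E"}]
print(sorted(apriori(db, 2)))  # ..., ('B','C','E'), ... matching L1, L2, L3
```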
Generating Association Rules
- Once the frequent itemsets have been found, it is straightforward to generate strong association rules, i.e., rules that satisfy both:
  - minimum support
  - minimum confidence
- Relation between support and confidence:
  confidence(A => B) = P(B|A) = support_count(A ∪ B) / support_count(A)
  - support_count(A ∪ B) is the number of transactions containing the itemset A ∪ B
  - support_count(A) is the number of transactions containing the itemset A
Generating Association Rules
- For each frequent itemset L, generate all non-empty proper subsets of L
- For every non-empty proper subset S of L, output the rule
  S => (L − S)
  if support_count(L) / support_count(S) >= min_conf
Example

Transactional database:
TID  | List of item IDs
T100 | I1, I2, I5
T200 | I2, I4
T300 | I2, I3
T400 | I1, I2, I4
T500 | I1, I3
T600 | I2, I3
T700 | I1, I3
T800 | I1, I2, I3, I5
T900 | I1, I2, I3

Suppose the frequent itemset L = {I1, I2, I5}. The non-empty proper subsets of L are {I1,I2}, {I1,I5}, {I2,I5}, {I1}, {I2}, {I5}.

Association rules:
I1 ∧ I2 => I5    confidence = 2/4 = 50%
I1 ∧ I5 => I2    confidence = 2/2 = 100%
I2 ∧ I5 => I1    confidence = 2/2 = 100%
I1 => I2 ∧ I5    confidence = 2/6 = 33%
I2 => I1 ∧ I5    confidence = 2/7 = 29%
I5 => I1 ∧ I2    confidence = 2/2 = 100%

If the minimum confidence is 70%, only the three rules with 100% confidence are output as strong rules.

Question: what is the difference between association rules and decision tree rules?
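The example can be checked mechanically. A small sketch (helper names are ours) that enumerates the six rules from L = {I1, I2, I5} over the database above and flags the strong ones at min_conf = 70%:

```python
from itertools import combinations

db = [{"I1","I2","I5"}, {"I2","I4"}, {"I2","I3"}, {"I1","I2","I4"},
      {"I1","I3"}, {"I2","I3"}, {"I1","I3"}, {"I1","I2","I3","I5"},
      {"I1","I2","I3"}]

def support_count(X):
    return sum(X <= t for t in db)

L = frozenset({"I1", "I2", "I5"})
min_conf = 0.70
for k in range(1, len(L)):                 # all non-empty proper subsets S
    for S in map(frozenset, combinations(sorted(L), k)):
        conf = support_count(L) / support_count(S)
        flag = "  STRONG" if conf >= min_conf else ""
        print(f"{sorted(S)} => {sorted(L - S)}  conf = {conf:.0%}{flag}")
```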
4.2.2 Improving the Efficiency of Apriori
- Major computational challenges:
  - Huge number of candidates
  - Multiple scans of the transaction database
  - Tedious workload of support counting for candidates
- Improving Apriori: general ideas
  - Shrink the number of candidates
  - Reduce the number of passes over the transaction database
  - Facilitate support counting of candidates
(A) DHP: Hash-Based Technique

Database:
Tid | Items
10  | A, C, D
20  | B, C, E
30  | A, B, C, E
40  | B, E

1st scan (C1): {A}:2, {B}:3, {C}:3, {D}:1, {E}:3

While counting C1, hash every 2-itemset of each transaction into a hash table:
10: {A,C}, {A,D}, {C,D}
20: {B,C}, {B,E}, {C,E}
30: {A,B}, {A,C}, {A,E}, {B,C}, {B,E}, {C,E}
40: {B,E}

Hash table with min-support = 2 (the bucket assignment below is consistent with the hash function h({x,y}) = (10·order(x) + order(y)) mod 7, with A=1, ..., E=5):

Bucket | Contents            | Counter | Bit (counter >= 2)
0      | {A,D}, {C,E}, {C,E} | 3       | 1
1      | {A,E}               | 1       | 0
2      | {B,C}, {B,C}        | 2       | 1
3      |                     | 0       | 0
4      | {B,E}, {B,E}, {B,E} | 3       | 1
5      | {A,B}               | 1       | 0
6      | {A,C}, {C,D}, {A,C} | 3       | 1

This gives the binary vector 1 0 1 0 1 0 1. A 2-itemset whose bucket bit is 0 cannot be frequent, so {A,E} and {A,B} are pruned from C2 before the 2nd scan.

J. Park, M. Chen, and P. Yu. An effective hash-based algorithm for mining association rules. SIGMOD '95
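A sketch of the hashing pass (the hash function is our assumption, chosen to be consistent with the bucket counts shown above):

```python
from itertools import combinations

# Assumed item order A=1, ..., E=5 and hash h({x,y}) = (10*order(x) + order(y)) mod 7.
order = {x: i + 1 for i, x in enumerate("ABCDE")}

def h(pair):
    x, y = sorted(pair, key=order.get)
    return (10 * order[x] + order[y]) % 7

db = [{"A","C","D"}, {"B","C","E"}, {"A","B","C","E"}, {"B","E"}]
buckets = [0] * 7
for t in db:
    for pair in combinations(sorted(t), 2):   # every 2-itemset per transaction
        buckets[h(pair)] += 1

print(buckets)                         # [3, 1, 2, 0, 3, 1, 3]
print([int(c >= 2) for c in buckets])  # [1, 0, 1, 0, 1, 0, 1]  (min-support = 2)
```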
(B) Partition: Scan the Database Only Twice
- Subdivide the transactions of D into k non-overlapping partitions D_1, ..., D_k, so that |D_1| + |D_2| + ... + |D_k| = |D|
- If σ is the (relative) min_sup of D, the local minimum support count of partition D_i is σ_i = σ × |D_i|
- Any itemset that is potentially frequent in D must be frequent in at least one of the partitions D_i
- Each partition can fit into main memory, so it is read only once
- Steps (see the sketch below):
  - Scan 1: partition the database and find the local frequent patterns
  - Scan 2: consolidate the global frequent patterns

A. Savasere, E. Omiecinski, and S. Navathe. VLDB '95
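A minimal sketch of the two-scan scheme (names are ours; the local miner is deliberately brute-force, which is fine for toy data but not what a real implementation would use):

```python
from collections import Counter
from itertools import combinations

def local_frequent(partition, rel_minsup):
    """Count every itemset that occurs in some transaction of the partition."""
    local_min = rel_minsup * len(partition)        # sigma_i = sigma * |D_i|
    counts = Counter()
    for t in partition:
        for k in range(1, len(t) + 1):
            for iset in combinations(sorted(t), k):
                counts[iset] += 1
    return {iset for iset, c in counts.items() if c >= local_min}

def partition_mine(db, rel_minsup, k):
    parts = [db[i::k] for i in range(k)]           # k non-overlapping partitions
    # Scan 1: the union of locally frequent itemsets is the global candidate set
    candidates = set().union(*(local_frequent(p, rel_minsup) for p in parts))
    # Scan 2: count every candidate over the whole database
    counts = Counter()
    for t in db:
        for c in candidates:
            if set(c) <= t:
                counts[c] += 1
    return {c: n for c, n in counts.items() if n >= rel_minsup * len(db)}

db = [{"A","C","D"}, {"B","C","E"}, {"A","B","C","E"}, {"B","E"}]
print(partition_mine(db, rel_minsup=0.5, k=2))     # same result as plain Apriori
```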
(C) Sampling for Frequent Patterns
- Select a sample of the original database
- Mine frequent patterns within the sample using Apriori
  - Use a support threshold lower than the minimum support to find the local frequent itemsets
- Scan the database once to verify the frequent itemsets found in the sample
  - Only the borders of the closure of the frequent patterns are checked; e.g., check abcd instead of ab, ac, ..., etc.
- Scan the database again to find missed frequent patterns

H. Toivonen. Sampling large databases for association rules. VLDB '96
(D) DIC: Dynamic Itemset Counting (Reduce the Number of Scans)

[Figure: the itemset lattice over {A, B, C, D}, from {} through A, B, C, D; AB, AC, AD, BC, BD, CD; ABC, ABD, ACD, BCD; up to ABCD, with a timeline contrasting when Apriori and DIC start counting 1-, 2-, and 3-itemsets as transactions are read]

- Once both A and D are determined frequent, the counting of AD begins
- Once all length-2 subsets of BCD are determined frequent, the counting of BCD begins
- Unlike Apriori, which finishes counting all k-itemsets before starting the (k+1)-itemsets, DIC starts counting an itemset as soon as all of its subsets are known to be frequent, so fewer full scans are needed

S. Brin, R. Motwani, J. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket data. SIGMOD '97
4.2.3 FP-Growth: Frequent Pattern-Growth
- Adopts a divide-and-conquer strategy:
  - Compress the database, retaining the itemset association information of the frequent items, into a frequent-pattern tree (FP-tree)
  - Divide the compressed database into a set of conditional databases, each associated with one frequent item
  - Mine each such database separately
Example: FP-Growth
- The first scan of the data is the same as in Apriori: derive the set of frequent 1-itemsets (let min_sup = 2)
- Generate the list of items ordered by descending support count:

Item ID | Support count
I2      | 7
I1      | 6
I3      | 6
I4      | 2
I5      | 2

Transactional database:
TID  | List of item IDs
T100 | I1, I2, I5
T200 | I2, I4
T300 | I2, I3
T400 | I1, I2, I4
T500 | I1, I3
T600 | I2, I3
T700 | I1, I3
T800 | I1, I2, I3, I5
T900 | I1, I2, I3
Construct the FP-Tree
- Create a branch for each transaction; the items of each transaction are processed in the descending support order above
- T100: order the items as {I2, I1, I5}; construct the first branch <I2:1> → <I1:1> → <I5:1>
- T200: order the items as {I2, I4}; the prefix I2 already exists, so increment its count and add a new child: <I2:2> → <I4:1>
- T300: order the items as {I2, I3}; increment I2 and add <I3:1>: <I2:3> → <I3:1>
- T400: order the items as {I2, I1, I4}; construct the fourth branch <I2:4> → <I1:2> → <I4:1>
- T500: order the items as {I1, I3}; no prefix is shared with I2, so construct a fifth branch under the root: <I1:1> → <I3:1>
- T600 through T900 are inserted the same way; whenever a transaction's branch shares a common prefix with the tree, the count of each node along that prefix is incremented by 1

Final FP-tree:
null
├─ I2:7
│  ├─ I1:4
│  │  ├─ I5:1
│  │  ├─ I4:1
│  │  └─ I3:2
│  │     └─ I5:1
│  ├─ I4:1
│  └─ I3:2
└─ I1:2
   └─ I3:2

Header table: I2:7, I1:6, I3:6, I4:2, I5:2
Mining the FP-Tree
- The problem of mining frequent patterns in databases is transformed into that of mining the FP-tree
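The two-scan construction above is short to code. A sketch (class and function names are ours; ties in support are broken alphabetically, which reproduces the order I2, I1, I3, I4, I5):

```python
from collections import Counter

class Node:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count, self.children = 1, {}

def build_fp_tree(db, min_sup):
    # Scan 1: frequent items ranked by descending support (ties by name).
    counts = Counter(i for t in db for i in t)
    freq = {i: c for i, c in counts.items() if c >= min_sup}
    rank = {i: r for r, i in enumerate(sorted(freq, key=lambda i: (-freq[i], i)))}
    root, header = Node(None, None), {}
    # Scan 2: insert each transaction along a shared-prefix path.
    for t in db:
        node = root
        for item in sorted((i for i in t if i in rank), key=rank.get):
            if item in node.children:
                node.children[item].count += 1             # common prefix: bump count
            else:
                child = Node(item, node)
                node.children[item] = child
                header.setdefault(item, []).append(child)  # header node-links
            node = node.children[item]
    return root, header

db = [{"I1","I2","I5"}, {"I2","I4"}, {"I2","I3"}, {"I1","I2","I4"},
      {"I1","I3"}, {"I2","I3"}, {"I1","I3"}, {"I1","I2","I3","I5"},
      {"I1","I2","I3"}]
root, header = build_fp_tree(db, min_sup=2)
print(root.children["I2"].count)         # 7
print([n.count for n in header["I3"]])   # [2, 2, 2] -- the three I3 nodes
```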
Mining the FP-Tree: Patterns Containing I5
- Occurrences of I5: <I2, I1, I5> and <I2, I1, I3, I5>
- Two prefix paths: <I2, I1: 1> and <I2, I1, I3: 1>
- The conditional FP-tree contains only <I2:2, I1:2>; I3 is not included because its support count of 1 is less than the minimum support count
- Frequent patterns generated: {I2,I5:2}, {I1,I5:2}, {I2,I1,I5:2}
Mining the FP-Tree: Conditional Pattern Bases

Item | Conditional pattern base        | Conditional FP-tree
I5   | {{I2,I1:1}, {I2,I1,I3:1}}       | <I2:2, I1:2>
I4   | {{I2,I1:1}, {I2:1}}             | <I2:2>
I3   | {{I2,I1:2}, {I2:2}, {I1:2}}     | <I2:4, I1:2>, <I1:2>
I1   | {{I2:4}}                        | <I2:4>
Mining the FP-Tree: Frequent Patterns Generated

Item | Conditional FP-tree      | Frequent patterns generated
I5   | <I2:2, I1:2>             | {I2,I5:2}, {I1,I5:2}, {I2,I1,I5:2}
I4   | <I2:2>                   | {I2,I4:2}
I3   | <I2:4, I1:2>, <I1:2>     | {I2,I3:4}, {I1,I3:4}, {I2,I1,I3:2}
I1   | <I2:4>                   | {I2,I1:4}
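Continuing the `build_fp_tree` sketch above, each row's conditional pattern base falls out of the header table's node-links by walking parents back to the root:

```python
def conditional_pattern_base(header, item):
    """For every node labelled `item`, collect the prefix path back to the
    root, weighted by that node's count."""
    base = []
    for node in header[item]:
        path, p = [], node.parent
        while p.item is not None:          # stop at the null root
            path.append(p.item)
            p = p.parent
        if path:
            base.append((path[::-1], node.count))
    return base

print(conditional_pattern_base(header, "I5"))
# [(['I2', 'I1'], 1), (['I2', 'I1', 'I3'], 1)]
print(conditional_pattern_base(header, "I3"))
# [(['I2'], 2), (['I1'], 2), (['I2', 'I1'], 2)]
```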
FP-Growth Properties
- FP-growth transforms the problem of finding long frequent patterns into searching for shorter ones recursively and then concatenating the suffix
- It uses the least frequent items as suffixes, offering good selectivity
- It reduces the search cost
- If the tree does not fit into main memory, partition the database
- Efficient and scalable for mining both long and short frequent patterns