Association Rule Mining (ARM) Komate AMPHAWAN


1 Association Rule Mining (ARM) Komate AMPHAWAN 1

2 J-O-K-E???? 2

3 What can be inferred? I purchase diapers. I purchase a new car. I purchase OTC cough medicine. I purchase a prescription medication. I don't show up for class. 3

4 ARM Proposed by Agrawal et al. in 1993. Initially used for Market Basket Analysis to find how items purchased by customers are related. Bread → Milk [sup = 5%, conf = 100%] 4

5 Association Rule Learners To discover elements that co-occur frequently within a data set consisting of multiple independent selections of elements (such as purchasing transactions), and to discover rules, such as implications or correlations, that relate co-occurring elements. To answer questions such as "if a customer purchases product A, how likely is he to purchase product B?" and "what products will a customer buy if he buys products C and D?" To reduce a potentially huge amount of information to a small, understandable set of statistically supported statements. Also known as market basket analysis. 5

6 Market Basket Analysis A typical and widely-used example of association rule mining. Example: Data are collected using bar-code scanners in supermarkets. Each record will consist of all items in a single purchase transaction. Managers would be interested to know if certain groups of items are consistently purchased together. They could use this data for adjusting store layouts (placing items optimally with respect to each other), for cross-selling, for promotions, for catalog design and to identify customer segments based on buying patterns. 6

7 Famous & Interesting Finding Beer & Diaper A number of convenience store clerks noticed that men often bought beer at the same time they bought diapers. The store mined its receipts and proved the clerks' observations correct. So, the store began stocking diapers next to the beer coolers, and sales skyrocketed 7

8 Market Basket Analysis Retail: each customer purchases a different set of products, in different quantities, at different times. MBA uses this information to: Identify who customers are (not by name). Understand why they make certain purchases. Gain insight about its merchandise (products): fast and slow movers, products which are purchased together, products which might benefit from promotion. Take action: store layouts, which products to put on specials, promote, or offer coupons for. Combining all of this with a customer loyalty card makes it even more valuable. 8

9 Association Rules The DM technique most closely allied with Market Basket Analysis. ARs can be automatically generated. ARs represent patterns in the data without a specified target variable, a good example of undirected data mining. Whether patterns make sense is up to humanoids (us!) 9

10 Association Rules Rules expressing relationships between items. Example: cereal, milk → fruit. People who bought cereal and milk also bought fruit. Stores might want to offer specials on milk and cereal to get people to buy more fruit. 10

11 Association Rules Wal-Mart customers who purchase Barbie dolls have a 60% likelihood of also purchasing one of three types of candy bars [Forbes, Sept 8, 1997]. Customers who purchase maintenance agreements are very likely to purchase large appliances (author experience). When a new hardware store opens, one of the most commonly sold items is toilet bowl cleaners (author experience). So what? 11

12 Association Rules Apply Elsewhere Besides retail (supermarkets, etc.): purchases made using credit/debit cards, optional telco service purchases, banking services, unusual combinations of insurance claims (which can be a warning of fraud), medical patient histories. 12

13 Association Rules Association rule types: Actionable Rules: contain high-quality, actionable information. Trivial Rules: information already well-known by those familiar with the business. Inexplicable Rules: have no explanation and do not suggest action. Trivial and Inexplicable Rules occur most often. 13

14 Market Basket Analysis Analyze tables of transactions. Person A's basket: Chips, Salsa, Cookies, Crackers, Coke, Beer. Person B: Lettuce, Spinach, Oranges, Celery, Apples, Grapes. Person C: Chips, Salsa, Frozen Pizza, Frozen Cake. Person D: Lettuce, Spinach, Milk, Butter. Can we hypothesize? Chips => Salsa, Lettuce => Spinach 14

15 Market Baskets In general, data consists of a TID (transaction ID) and a basket (a subset of items). 15

16 The model: data (Input) I = {i_1, i_2, ..., i_m}: a set of items. Transaction database T: a set of transactions T = {t_1, t_2, ..., t_n}. Transaction t: a set of items, with t ⊆ I. 16

17 Transaction data: supermarket data Market basket transactions: t_1: {bread, cheese, milk}, t_2: {apple, eggs, salt, yogurt}, ..., t_n: {biscuit, eggs, milk}. Concepts: An item: an item/article in a basket. I: the set of all items sold in the store. A transaction: the items purchased in a basket; it may have a TID (transaction ID). A transactional dataset: a set of transactions. 17

18 Transaction data: a set of documents A text document data set. Each document is treated as a bag of keywords. doc1: Student, Teach, School. doc2: Student, School. doc3: Teach, School, City, Game. doc4: Baseball, Basketball. doc5: Basketball, Player, Spectator. doc6: Baseball, Coach, Game, Team. doc7: Basketball, Team, City, Game. 18

19 Measuring Interesting Rules Support: ratio of the number of transactions containing A and B to the total number of transactions, s(A ⇒ B) = |{T ∈ D : A ∪ B ⊆ T}| / |D|. Confidence: ratio of the number of transactions containing A and B to the number of transactions containing A, c(A ⇒ B) = |{T ∈ D : A ∪ B ⊆ T}| / |{T ∈ D : A ⊆ T}|. 19
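The two measures are easy to compute directly from a transaction list. Below is a minimal sketch (not code from the slides) that evaluates them on the five supermarket transactions shown later in the deck (slide 29); the function and variable names are illustrative assumptions.

```python
# Minimal sketch: support and confidence of a rule A => B over a list of transactions.

def support(itemset, transactions):
    """Fraction of transactions that contain every item of `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= set(t) for t in transactions) / len(transactions)

def confidence(lhs, rhs, transactions):
    """support(A ∪ B) / support(A) for the rule A => B."""
    return support(set(lhs) | set(rhs), transactions) / support(lhs, transactions)

if __name__ == "__main__":
    D = [{"bread", "milk"},
         {"bread", "diaper", "beer", "eggs"},
         {"milk", "diaper", "beer", "coke"},
         {"bread", "milk", "diaper", "beer"},
         {"bread", "milk", "diaper", "coke"}]
    print(support({"milk", "diaper"}, D))        # 0.6
    print(confidence({"milk"}, {"diaper"}, D))   # 0.75
```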

20 Measuring Interesting Rules Rules are included/excluded based on two metrics: the minimum support level (how frequently all of the items in a rule appear in transactions) and the minimum confidence level (how frequently the left-hand side of a rule implies the right-hand side). 20

21 ARM Two sub-problems in discovering all association rules: Find all sets of items (itemsets) that have transaction support above minimum support. Itemsets that qualify are called large itemsets, and all others small itemsets. Generate, from each large itemset, rules that use items from the large itemset. Given a large itemset Y and a subset X of Y: take the support of Y and divide it by the support of X; if the ratio c is at least minconf, then X → (Y − X) is satisfied with confidence factor c. 21

22 Frequent Itemsets itemset: any set of items. k-itemset: an itemset containing k items. frequent itemset: an itemset that satisfies a minimum support level. If I contains m items, how many itemsets are there? 22

23 Strong Association Rules Given an itemset, it's easy to generate association rules. Given itemset {Chips, Salsa}: {} => Chips, Salsa; Chips => Salsa; Salsa => Chips; Chips, Salsa => {}. Strong rules are interesting: generally defined as those rules satisfying minimum support and minimum confidence. 23

24 Summary of Association Rule Definitions Association Rule (AR): an implication X → Y where X, Y ⊆ I and X ∩ Y = ∅. Support of AR (s) X → Y: percentage of transactions that contain X ∪ Y. Confidence of AR (α) X → Y: ratio of the number of transactions that contain X ∪ Y to the number that contain X. 24

25 Association Rule Mining Task Given a set of transactions T, the goal of association rule mining is to find all rules having support ≥ minsup threshold and confidence ≥ minconf threshold. Brute-force approach: list all possible association rules, compute the support and confidence for each rule, and prune rules that fail the minsup and minconf thresholds. Computationally prohibitive! 25

26 Generating frequent itemsets 26

27 Frequent Itemset Generation [Figure: the itemset lattice over items A-E, from the empty set (null) at the top through all subsets down to ABCDE.] Given d items, there are 2^d possible candidate itemsets. 27

28 Generating Frequent Itemsets Analysis of the naïve algorithm: 2^m subsets of I, scan n transactions for each subset, O(2^m · n) tests of s being a subset of T. Growth is exponential in the number of items! Can we do it efficiently? 28

29 Frequent Itemset Generation Brute-force approach: each itemset in the lattice is a candidate frequent itemset; count the support of each candidate by scanning the database. Transactions: TID 1: Bread, Milk; TID 2: Bread, Diaper, Beer, Eggs; TID 3: Milk, Diaper, Beer, Coke; TID 4: Bread, Milk, Diaper, Beer; TID 5: Bread, Milk, Diaper, Coke. Match each transaction against every candidate. Complexity ~ O(NMw) => expensive since M = 2^d!!! 29

30 Frequent Itemset Generation Strategies Reduce the number of candidates (M): complete search has M = 2^d; use pruning techniques to reduce M. Reduce the number of transactions (N): reduce the size of N as the size of the itemset increases; used by DHP and vertical-based mining algorithms. Reduce the number of comparisons (NM): use efficient data structures to store the candidates or transactions; no need to match every candidate against every transaction. 30

31 Classical algorithm to discover frequent itemsets: Apriori 31

32 The Apriori Algorithm Progressively identifies large itemsets of different sizes. [Figure: the lattice over items A-D, from the 1-itemsets A, B, C, D up to ABCD.] 32

33 Generating Frequent Itemsets Frequent itemsets satisfy the apriori property: if A is not a frequent itemset, then any superset of A is not a frequent itemset. The support of an itemset never exceeds the support of its subsets. Proof: let n be the number of transactions, and suppose A is a subset of l transactions. If A' ⊇ A, then A' is a subset of l' ≤ l transactions. Thus, if l/n < minimum support, so is l'/n. 33

34 Illustrating Apriori Principle [Figure: in the itemset lattice, once an itemset is found to be infrequent, all of its supersets are pruned.] 34

35 Generating Frequent Itemsets Central idea: Build candidate k-itemsets from frequent (k-1)-itemsets Approach Find all frequent 1-itemsets Extend (k-1)-itemsets to candidate k-itemsets Prune candidate itemsets that do not meet the minimum support. 35

36 Apriori: Candidate Generation-and-Test Any subset of a frequent itemset must also be frequent (an anti-monotone property): a transaction containing {beer, diaper, nuts} also contains {beer, diaper}, so if {beer, diaper, nuts} is frequent, {beer, diaper} must also be frequent. No superset of any infrequent itemset should be generated or tested, so many item combinations can be pruned. 36

37 APRIORI Using the downward closure, we can prune unnecessary branches from further consideration. APRIORI: 1. k = 1. 2. Find the frequent set L_k from C_k, the set of all candidate itemsets. 3. Form C_{k+1} from L_k; k = k + 1. 4. Repeat 2-3 until C_k is empty. Details about steps 2 and 3: Step 2: scan D and count each itemset in C_k; if its count is at least minsup, it is frequent. Step 3: next slide. 37

38 Apriori's Candidate Generation For k = 1, C_1 = all 1-itemsets. For k > 1, generate C_k from L_{k-1} as follows: The join step: C_k = (k-2)-way join of L_{k-1} with itself; if both {a_1, ..., a_{k-2}, a_{k-1}} and {a_1, ..., a_{k-2}, a_k} are in L_{k-1}, then add {a_1, ..., a_{k-2}, a_{k-1}, a_k} to C_k (we keep items sorted). The prune step: remove {a_1, ..., a_{k-2}, a_{k-1}, a_k} if it contains a non-frequent (k-1)-subset. 38

39 How to Generate Candidates? Suppose the items in L_{k-1} are listed in an order. Step 1: self-joining L_{k-1}: insert into C_k select p.item_1, p.item_2, ..., p.item_{k-1}, q.item_{k-1} from L_{k-1} p, L_{k-1} q where p.item_1 = q.item_1, ..., p.item_{k-2} = q.item_{k-2}, p.item_{k-1} < q.item_{k-1}. Step 2: pruning: forall itemsets c in C_k do forall (k-1)-subsets s of c do if (s is not in L_{k-1}) then delete c from C_k. 39

40 Example of Generating Candidates L_3 = {abc, abd, acd, ace, bcd}. Self-joining L_3 * L_3: abcd from abc and abd; acde from acd and ace. Pruning: acde is removed because ade is not in L_3. C_4 = {abcd}. 40
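The join and prune steps can be sketched in a few lines of Python; the function name apriori_gen and the sorted-tuple representation are assumptions of this sketch, not the authors' code. Running it on the L_3 of this example reproduces C_4 = {abcd}, with acde pruned because ade is not in L_3.

```python
from itertools import combinations

def apriori_gen(L_prev, k):
    """Generate candidate k-itemsets from the frequent (k-1)-itemsets (join + prune)."""
    L_prev = [tuple(sorted(s)) for s in L_prev]
    Lset = set(L_prev)
    candidates = set()
    # Join step: merge two (k-1)-itemsets that share their first k-2 items.
    for p in L_prev:
        for q in L_prev:
            if p[:k - 2] == q[:k - 2] and p[k - 2] < q[k - 2]:
                candidates.add(p + (q[k - 2],))
    # Prune step: drop candidates that have an infrequent (k-1)-subset.
    return {c for c in candidates
            if all(tuple(sorted(s)) in Lset for s in combinations(c, k - 1))}

L3 = [("a", "b", "c"), ("a", "b", "d"), ("a", "c", "d"),
      ("a", "c", "e"), ("b", "c", "d")]
print(apriori_gen(L3, 4))   # {('a', 'b', 'c', 'd')} -- acde is pruned (ade not in L3)
```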

41 The Apriori Algorithm Example (minimum support count = 2). Database D: four transactions {1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}. Scan D for C_1 counts: {1}: 2, {2}: 3, {3}: 3, {4}: 1, {5}: 3. L_1: {1}: 2, {2}: 3, {3}: 3, {5}: 3. C_2 from L_1: {1 2}, {1 3}, {1 5}, {2 3}, {2 5}, {3 5}. Scan D for C_2 counts: {1 2}: 1, {1 3}: 2, {1 5}: 1, {2 3}: 2, {2 5}: 3, {3 5}: 2. L_2: {1 3}: 2, {2 3}: 2, {2 5}: 3, {3 5}: 2. C_3: {2 3 5}. Scan D: L_3 = {2 3 5}: 2. 41
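For illustration, here is a compact, self-contained sketch of the level-wise Apriori loop run on the four-transaction database above with a minimum support count of 2. It uses a simple union-based candidate step rather than the prefix join shown earlier (an implementation choice of this sketch), and it reproduces L_1, L_2 and L_3 from the slide.

```python
from collections import defaultdict
from itertools import combinations

def apriori(transactions, min_count):
    transactions = [frozenset(t) for t in transactions]
    # L1: frequent single items.
    counts = defaultdict(int)
    for t in transactions:
        for item in t:
            counts[frozenset([item])] += 1
    L = {s: c for s, c in counts.items() if c >= min_count}
    all_frequent = dict(L)
    k = 2
    while L:
        # Candidate k-itemsets: unions of two frequent (k-1)-itemsets of size k.
        candidates = {a | b for a in L for b in L if len(a | b) == k}
        # Prune candidates with an infrequent (k-1)-subset.
        candidates = {c for c in candidates
                      if all(frozenset(s) in L for s in combinations(c, k - 1))}
        counts = defaultdict(int)
        for t in transactions:          # one scan of D per level
            for c in candidates:
                if c <= t:
                    counts[c] += 1
        L = {s: c for s, c in counts.items() if c >= min_count}
        all_frequent.update(L)
        k += 1
    return all_frequent

D = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
for itemset, count in sorted(apriori(D, 2).items(),
                             key=lambda x: (len(x[0]), sorted(x[0]))):
    print(sorted(itemset), count)
# -> [1] 2, [2] 3, [3] 3, [5] 3, [1,3] 2, [2,3] 2, [2,5] 3, [3,5] 2, [2,3,5] 2
```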

42 Is Apriori Fast Enough? Performance Bottlenecks The core of the Apriori algorithm: use frequent (k-1)-itemsets to generate candidate frequent k-itemsets; use database scans and pattern matching to collect counts for the candidate itemsets. The bottleneck of Apriori: candidate generation. Huge candidate sets: 10^4 frequent 1-itemsets will generate on the order of 10^7 candidate 2-itemsets; to discover a frequent pattern of size 100, e.g., {a_1, a_2, ..., a_100}, one needs to generate on the order of 2^100 ≈ 10^30 candidates. Multiple scans of the database: needs (n + 1) scans, where n is the length of the longest pattern. 42

43 Apriori Adv/Disadv Advantages: Uses large itemset property. Easily parallelized Easy to implement. Disadvantages: Assumes transaction database is memory resident. Requires up to m database scans. 43

44 Improving Apriori's Efficiency Hash-based itemset counting: a k-itemset whose corresponding hashing bucket count is below the threshold cannot be frequent. Transaction reduction: a transaction that does not contain any frequent k-itemset is useless in subsequent scans. Partitioning: any itemset that is potentially frequent in DB must be frequent in at least one of the partitions of DB. Sampling: mine on a subset of the given data; needs a lower support threshold plus a method to determine completeness. Dynamic itemset counting: add new candidate itemsets immediately (unlike Apriori) when all of their subsets are estimated to be frequent. 44

45 Deriving rules from frequent itemsets 45

46 Derive rules from frequent itemsets Frequent itemsets != association rules. One more step is required to find association rules: for each frequent itemset X, for each proper nonempty subset A of X, let B = X − A; A → B is an association rule if confidence(A → B) ≥ minconf, where support(A → B) = support(AB) and confidence(A → B) = support(AB) / support(A). 46

47 Example deriving rules from frequent itemsets Suppose {2, 3, 4} is frequent, with supp = 50%. Proper nonempty subsets: 23, 24, 34, 2, 3, 4, with supp = 50%, 50%, 75%, 75%, 75%, 75% respectively. These generate the association rules: 23 => 4, confidence = 100%; 24 => 3, confidence = 100%; 34 => 2, confidence = 67%; 2 => 34, confidence = 67%; 3 => 24, confidence = 67%; 4 => 23, confidence = 67%. All rules have support = 50%. 47
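The rule-generation step of this example can be sketched directly from the quoted support values; the support dictionary and the rules_from helper below are illustrative assumptions, not code from the lecture.

```python
from itertools import combinations

# Supports as quoted on the slide (fractions of transactions).
support = {
    frozenset({2, 3, 4}): 0.50,
    frozenset({2, 3}): 0.50, frozenset({2, 4}): 0.50, frozenset({3, 4}): 0.75,
    frozenset({2}): 0.75, frozenset({3}): 0.75, frozenset({4}): 0.75,
}

def rules_from(itemset, support, minconf):
    """Yield rules LHS => RHS from one frequent itemset, keeping conf >= minconf."""
    itemset = frozenset(itemset)
    for r in range(1, len(itemset)):
        for lhs in map(frozenset, combinations(itemset, r)):
            conf = support[itemset] / support[lhs]
            if conf >= minconf:
                yield sorted(lhs), sorted(itemset - lhs), conf

for lhs, rhs, conf in rules_from({2, 3, 4}, support, minconf=0.6):
    print(f"{lhs} => {rhs}  conf={conf:.0%}")
# {2,3}=>{4} and {2,4}=>{3} at 100%; the other four rules at ~67%
```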

48 Deriving rules To recap, in order to obtain A → B we need support(AB) and support(A). This step is not as time-consuming as frequent itemset generation. Why? It's also easy to speed up using techniques such as parallel processing. How? Do we really need candidate generation for deriving association rules? Frequent-Pattern Growth (FP-tree) 48

49 Algorithm to Generate ARs 49

50 Computational Complexity Given d unique items: Total number of itemsets = 2^d. Total number of possible association rules: R = Σ_{k=1}^{d-1} [ C(d,k) · Σ_{j=1}^{d-k} C(d-k,j) ] = 3^d − 2^{d+1} + 1. If d = 6, R = 3^6 − 2^7 + 1 = 602 rules. 50
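As a quick sanity check (a sketch, not part of the slides), the double sum can be evaluated numerically and compared against the closed form 3^d − 2^(d+1) + 1:

```python
from math import comb

d = 6
R = sum(comb(d, k) * sum(comb(d - k, j) for j in range(1, d - k + 1))
        for k in range(1, d))
print(R, 3**d - 2**(d + 1) + 1)   # 602 602
```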

51 Efficiency improvement 51

52 Efficiency Improvement Can we improve efficiency? Pruning without checking all (k-1)-subsets? Joining and pruning without looping over the entire L_{k-1}? Yes, one way is to use hash trees: one hash tree is created for each pass k, or one hash tree per k-itemset size, k = 1, 2, ... 52

53 Hash Tree Stores all candidate k-itemsets and their counts. An internal node v at level m contains bucket pointers; which branch next? Use a hash of the m-th item to decide. Leaf nodes contain lists of itemsets and counts. E.g., for C_2 = {12, 13, 15, 23, 25, 35} with the identity hash function: the root {} has edges labelled 1, 2, 3; under edge 1, edges 2, 3, 5 lead to leaves [12:], [13:], [15:]; under edge 2, edges 3, 5 lead to [23:], [25:]; under edge 3, edge 5 leads to [35:]. 53

54 How to join using the hash tree? Only try to join frequent (k-1)-itemsets with common parents in the hash tree. How to prune using the hash tree? Using the hash tree to determine whether a (k-1)-itemset is frequent avoids going through all itemsets of L_{k-1} (the same idea as the previous item). Added benefit: no need to enumerate all k-subsets of transactions; use traversal to limit consideration of such subsets, i.e., enumeration is replaced by tree traversal. 54

55 Further Improvement Speed up searching and matching Reduce number of transactions (a kind of instance selection) Reduce number of passes over data on disk Reduce number of subsets per transaction that must be considered Reduce number of candidates 55

56 Speed up searching and matching Use hash counts to filter candidates (see example). Method: when counting candidate (k-1)-itemsets, also get counts of hash-groups of k-itemsets: use a hash function h on k-itemsets, and for each transaction t and k-subset s of t, add 1 to the count of h(s). Remove any candidate q generated by Apriori if h(q)'s count < minsup. The idea is quite useful for k = 2, but often not so useful elsewhere. (For sparse data, k = 2 can be the most expensive level for Apriori. Why?) 56

57 Hash-based Example Transactions: 1,3,4; 2,3,5; 1,2,3,5; 2,5. Suppose h2 is: h2(x, y) = ((order of x) * 10 + (order of y)) mod 7, e.g., h2(1,4) = 0, h2(1,5) = 1, ... Bucket counts (buckets 0 through 6): 3, 1, 2, 0, 3, 1, 3. Then 2-itemsets hashed to buckets 1 and 5 cannot be frequent (e.g., 15, 12), so remove them from C_2. 57
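A short sketch (illustrative only) reproduces these bucket counts and the resulting filter on the four transactions, assuming a minimum support count of 2:

```python
from itertools import combinations

transactions = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]

def h2(x, y):
    return (10 * x + y) % 7   # items are 1..5, so "order of x" is just x

buckets = [0] * 7
for t in transactions:
    for x, y in combinations(sorted(t), 2):
        buckets[h2(x, y)] += 1
print(buckets)                      # [3, 1, 2, 0, 3, 1, 3]

# Any candidate 2-itemset falling in a bucket with count < 2 is filtered out.
for pair in [(1, 5), (1, 2), (2, 5)]:
    print(pair, "kept" if buckets[h2(*pair)] >= 2 else "filtered")
# (1, 5) and (1, 2) are filtered; (2, 5) survives.
```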

58 Working on transactions Remove transactions that do not contain any frequent k-itemsets in each scan. Remove from transactions those items that are not members of any candidate k-itemset: e.g., if 12, 24, 14 are the only candidate itemsets contained in 1234, then remove item 3; if 12, 24 are the only candidate itemsets contained in transaction 1234, then remove the transaction from the next round of scan. Reducing data size leads to less reading and processing time, but extra writing time. 58

59 Reducing Scans via Partitioning Divide the dataset D into m portions, D1, D2, ..., Dm, so that each portion can fit into memory. Find the frequent itemsets F_i in D_i, with support ≥ minsup, for each i: if an itemset is frequent in D, it must be frequent in some D_i. The union of all F_i forms a candidate set of the frequent itemsets in D; then get their counts. Often this requires only two scans of D. 59
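Below is a hedged sketch of the two-scan partition idea; the helper names and the brute-force local miner (limited to itemsets of size ≤ 3 to keep the example tiny) are assumptions of this sketch rather than the partition algorithm as published.

```python
from itertools import combinations

def local_frequent(partition, min_frac, max_size=3):
    """Brute-force local mining; fine for tiny partitions in this sketch."""
    counts = {}
    for t in partition:
        for k in range(1, max_size + 1):
            for s in combinations(sorted(t), k):
                counts[s] = counts.get(s, 0) + 1
    need = min_frac * len(partition)
    return {s for s, c in counts.items() if c >= need}

def partition_mine(D, min_frac, num_parts=2):
    size = (len(D) + num_parts - 1) // num_parts
    parts = [D[i:i + size] for i in range(0, len(D), size)]
    # Scan 1: locally frequent itemsets form the global candidate set.
    candidates = set().union(*(local_frequent(p, min_frac) for p in parts))
    # Scan 2: count the candidates over the whole database.
    counts = {c: sum(set(c) <= set(t) for t in D) for c in candidates}
    return {c: n for c, n in counts.items() if n >= min_frac * len(D)}

D = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
print(partition_mine(D, min_frac=0.5))   # same frequent itemsets as Apriori above
```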

60 FP-growth 60

61 FP-Growth (Han, Pei, Yin 2000) One problematic aspect of Apriori is candidate generation, a source of exponential growth. Another approach is to use a divide-and-conquer strategy. Idea: compress the database into a frequent-pattern tree representing the frequent items. 61

62 Mining Frequent Patterns Without Candidate Generation Compress a large database into a compact Frequent-Pattern tree (FP-tree) structure: highly condensed, but complete for frequent pattern mining; avoids costly database scans. Develop an efficient, FP-tree-based frequent pattern mining method: a divide-and-conquer methodology that decomposes mining tasks into smaller ones and avoids candidate generation; sub-database test only! 62

63 Construct FP-tree from a Transaction DB (min_support = 0.5) TID 100: items bought {f, a, c, d, g, i, m, p}, ordered frequent items {f, c, a, m, p}. TID 200: {a, b, c, f, l, m, o} → {f, c, a, b, m}. TID 300: {b, f, h, j, o} → {f, b}. TID 400: {b, c, k, s, p} → {c, b, p}. TID 500: {a, f, c, e, l, p, m, n} → {f, c, a, m, p}. Steps: 1. Scan the DB once and find the frequent 1-itemsets (single-item patterns). 2. Order frequent items in frequency-descending order. 3. Scan the DB again and construct the FP-tree. Header table (item: frequency): f: 4, c: 4, a: 3, b: 3, m: 3, p: 3. Resulting tree (root {}): f:4 → c:3 → a:3 → m:2 → p:2, with a:3 also having child b:1 → m:1, f:4 having child b:1, and a second branch c:1 → b:1 → p:1. 63

64 Benefits of the FP-tree Structure Completeness: never breaks a long pattern of any transaction; preserves complete information for frequent pattern mining. Compactness: reduces irrelevant information, since infrequent items are gone; frequency-descending ordering means more frequent items are more likely to be shared; the tree is never larger than the original database (not counting node-links and counts). 64

65 Mining Frequent Patterns Using FP-tree General idea (divide-and-conquer): recursively grow frequent pattern paths using the FP-tree. Method: for each item, construct its conditional pattern base and then its conditional FP-tree; repeat the process on each newly created conditional FP-tree until the resulting FP-tree is empty or contains only one path (a single path will generate all the combinations of its sub-paths, each of which is a frequent pattern). 65

66 Major Steps to Mine FP-tree 1) Construct conditional pattern base for each node in the FP-tree 2) Construct conditional FP-tree from each conditional pattern-base 3) Recursively mine conditional FP-trees and grow frequent patterns obtained so far If the conditional FP-tree contains a single path, simply enumerate all the patterns 66
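The three steps can be illustrated with a compact, self-contained FP-growth sketch (not the authors' implementation). It builds an FP-tree over the five transactions of slide 63 with a minimum support count of 3 and mines it recursively via conditional pattern bases; ties in the frequency ordering are broken alphabetically, so the tree shape may differ cosmetically from the slide's figure while the mined frequent patterns are the same.

```python
from collections import defaultdict

class Node:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count, self.children = 0, {}

def build_tree(transactions, min_count):
    freq = defaultdict(int)
    for t in transactions:
        for item in t:
            freq[item] += 1
    freq = {i: c for i, c in freq.items() if c >= min_count}
    root, header = Node(None, None), defaultdict(list)   # header: item -> nodes
    for t in transactions:
        # Keep frequent items, ordered by descending frequency (ties alphabetical).
        items = sorted((i for i in t if i in freq), key=lambda i: (-freq[i], i))
        node = root
        for item in items:
            if item not in node.children:
                node.children[item] = Node(item, node)
                header[item].append(node.children[item])
            node = node.children[item]
            node.count += 1
    return root, header, freq

def fp_growth(transactions, min_count, suffix=()):
    _, header, freq = build_tree(transactions, min_count)
    patterns = {}
    for item in freq:
        pattern = (item,) + suffix
        patterns[pattern] = freq[item]
        # Conditional pattern base: prefix paths of every node holding `item`.
        cond_base = []
        for node in header[item]:
            path, parent = [], node.parent
            while parent is not None and parent.item is not None:
                path.append(parent.item)
                parent = parent.parent
            cond_base.extend([path] * node.count)
        if any(cond_base):
            patterns.update(fp_growth(cond_base, min_count, pattern))
    return patterns

D = [list("facdgimp"), list("abcflmo"), list("bfhjo"), list("bcksp"), list("afcelpmn")]
for pat, cnt in sorted(fp_growth(D, min_count=3).items(),
                       key=lambda x: (len(x[0]), sorted(x[0]))):
    print("".join(sorted(pat)), cnt)   # 18 frequent patterns, e.g. acfm 3 (= fcam)
```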

67 Step 1: From FP-tree to Conditional Pattern Base Starting at the frequent-item header table of the FP-tree, traverse the FP-tree by following the node-links of each frequent item, and accumulate all of the transformed prefix paths of that item to form its conditional pattern base. Conditional pattern bases (item: conditional pattern base): c: f:3; a: fc:3; b: fca:1, f:1, c:1; m: fca:2, fcab:1; p: fcam:2, cb:1. 67

68 Properties of FP-tree for Conditional Pattern Base Construction Node-link property: for any frequent item a_i, all the possible frequent patterns that contain a_i can be obtained by following a_i's node-links, starting from a_i's head in the FP-tree header. Prefix path property: to calculate the frequent patterns for a node a_i in a path P, only the prefix sub-path of a_i in P needs to be accumulated, and its frequency count should carry the same count as node a_i. 68

69 Step 2: Construct Conditional FP-tree For each pattern base: accumulate the count for each item in the base, then construct the FP-tree over the frequent items of the pattern base. E.g., the m-conditional pattern base is fca:2, fcab:1, giving the m-conditional FP-tree {} → f:3 → c:3 → a:3. All frequent patterns concerning m: m, fm, cm, am, fcm, fam, cam, fcam. 69

70 Mining Frequent Patterns by Creating Conditional Pattern-Bases Item p: conditional pattern-base {(fcam:2), (cb:1)}, conditional FP-tree {(c:3)}|p. Item m: {(fca:2), (fcab:1)}, {(f:3, c:3, a:3)}|m. Item b: {(fca:1), (f:1), (c:1)}, empty. Item a: {(fc:3)}, {(f:3, c:3)}|a. Item c: {(f:3)}, {(f:3)}|c. Item f: empty, empty. 70

71 Step 3: Recursively mine the conditional FP-tree The m-conditional FP-tree is {} → f:3 → c:3 → a:3. Conditional pattern base of am: (fc:3), giving the am-conditional FP-tree {} → f:3 → c:3. Conditional pattern base of cm: (f:3), giving the cm-conditional FP-tree {} → f:3. Conditional pattern base of cam: (f:3), giving the cam-conditional FP-tree {} → f:3. 71

72 Single FP-tree Path Generation Suppose an FP-tree T has a single path P. The complete set of frequent patterns of T can be generated by enumerating all the combinations of the sub-paths of P. E.g., the m-conditional FP-tree is the single path {} → f:3 → c:3 → a:3, so all frequent patterns concerning m are m, fm, cm, am, fcm, fam, cam, fcam. 72

73 Principles of Frequent Pattern Growth Pattern growth property: let α be a frequent itemset in DB, B be α's conditional pattern base, and β be an itemset in B. Then α ∪ β is a frequent itemset in DB iff β is frequent in B. For example, abcdef is a frequent pattern if and only if abcde is a frequent pattern and f is frequent in the set of transactions containing abcde. 73

74 Why Is Frequent Pattern Growth Fast? Our performance study shows that FP-growth is an order of magnitude faster than Apriori, and is also faster than tree projection. Reasoning: no candidate generation, no candidate test; uses a compact data structure; eliminates repeated database scans; the basic operations are counting and FP-tree building. 74

75 FP-growth vs. Apriori: Scalability With the Support Threshold [Figure: run time (sec.) versus support threshold (%) on data set T25I20D10K, comparing D1 FP-growth runtime with D1 Apriori runtime.] 75

76 Association Rules: Advanced Topics 76

77 Alternative Methods for Frequent Itemset Generation Traversal of Itemset Lattice General-to-specific vs Specific-to-general 77

78 Alternative Methods for Frequent Itemset Generation Traversal of Itemset Lattice Equivalent Classes 78

79 Alternative Methods for Frequent Itemset Generation Traversal of Itemset Lattice Breadth-first vs Depth-first 79

80 Alternative Methods for Frequent Itemset Generation Representation of the database: horizontal vs. vertical data layout. Horizontal layout (TID: items): 1: A,B,E; 2: B,C,D; 3: C,E; 4: A,C,D; 5: A,B,C,D; 6: A,E; 7: A,B; 8: A,B,C; 9: A,C,D; 10: B. Vertical layout (item: TID-list): A: 1,4,5,6,7,8,9; B: 1,2,5,7,8,10; C: 2,3,4,5,8,9; D: 2,4,5,9; E: 1,3,6. 80

81 Tree Projection Set enumeration tree: [Figure: the lattice over items A-E arranged as a set enumeration tree, from null at the root down to ABCDE.] Possible extensions: E(A) = {B,C,D,E}; E(ABC) = {D,E}. 81

82 Tree Projection Items are listed in lexicographic order Each node P stores the following information: Itemset for node P List of possible lexicographic extensions of P: E(P) Pointer to projected database of its ancestor node Bitvector containing information about which transactions in the projected database contain the itemset 82

83 Projected Database Original database (TID: items): 1: {A,B}; 2: {B,C,D}; 3: {A,C,D,E}; 4: {A,D,E}; 5: {A,B,C}; 6: {A,B,C,D}; 7: {B,C}; 8: {A,B,C}; 9: {A,B,D}; 10: {B,C,E}. Projected database for node A (TID: items): 1: {B}; 2: {}; 3: {C,D,E}; 4: {D,E}; 5: {B,C}; 6: {B,C,D}; 7: {}; 8: {B,C}; 9: {B,D}; 10: {}. For each transaction T, the projected transaction at node A is T ∩ E(A). 83
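A tiny sketch of the projection step (keeping T ∩ E(A) only for transactions that actually contain A, which is what the table above shows); the dictionary layout is an assumption for illustration:

```python
E_A = {"B", "C", "D", "E"}
db = {1: {"A", "B"}, 2: {"B", "C", "D"}, 3: {"A", "C", "D", "E"}, 4: {"A", "D", "E"},
      5: {"A", "B", "C"}, 6: {"A", "B", "C", "D"}, 7: {"B", "C"}, 8: {"A", "B", "C"},
      9: {"A", "B", "D"}, 10: {"B", "C", "E"}}

# Project each transaction onto node A's extension set E(A).
projected = {tid: (t & E_A if "A" in t else set()) for tid, t in db.items()}
print(projected)   # e.g. TID 3 -> {C, D, E}, TID 7 -> set() (no A)
```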

84 ECLAT For each item, store a list of transaction ids (tids). Horizontal data layout (TID: items): 1: A,B,E; 2: B,C,D; 3: C,E; 4: A,C,D; 5: A,B,C,D; 6: A,E; 7: A,B; 8: A,B,C; 9: A,C,D; 10: B. Vertical data layout (item: TID-list): A: 1,4,5,6,7,8,9; B: 1,2,5,7,8,10; C: 2,3,4,5,8,9; D: 2,4,5,9; E: 1,3,6. 84

85 ECLAT Determine the support of any k-itemset by intersecting the tid-lists of two of its (k-1)-subsets, e.g., tidlist(AB) = tidlist(A) ∩ tidlist(B). Three traversal approaches: top-down, bottom-up and hybrid. Advantage: very fast support counting. Disadvantage: intermediate tid-lists may become too large for memory. 85
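A short sketch (illustrative names, not library code) builds the vertical tid-lists from the horizontal table of the previous slide and obtains the support of AB by intersection:

```python
from collections import defaultdict

horizontal = {1: "ABE", 2: "BCD", 3: "CE", 4: "ACD", 5: "ABCD",
              6: "AE", 7: "AB", 8: "ABC", 9: "ACD", 10: "B"}

# Vertical representation: item -> set of TIDs containing it.
tidlists = defaultdict(set)
for tid, items in horizontal.items():
    for item in items:
        tidlists[item].add(tid)

print(sorted(tidlists["A"]))                    # [1, 4, 5, 6, 7, 8, 9]
print(sorted(tidlists["B"]))                    # [1, 2, 5, 7, 8, 10]
ab = tidlists["A"] & tidlists["B"]
print(sorted(ab), "support(AB) =", len(ab))     # [1, 5, 7, 8]  support(AB) = 4
```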

86 Eclat Algorithm Dynamically process each transaction online, maintaining 2-itemset counts. Transform: partition L_2 using 1-item-prefix equivalence classes: {AB, AC, AD}, {BC, BD}, {CD}; transform the database to vertical form. Asynchronous phase: for each equivalence class E, compute frequent(E). 86

87 Discussion 87

88 Measuring Interestingness - Discussion What are interesting association rules? Novel and actionable ones: association mining aims to look for valid, novel, useful (= actionable) patterns. Support and confidence are not sufficient for measuring interestingness. Large support and confidence thresholds → only a small number of association rules, and they are likely folklore or known facts. Small support and confidence thresholds → too many association rules. 88

89 Additional step (maybe) Two-step approach: 1. Frequent itemset generation: generate all itemsets whose support ≥ minsup. 2. Rule generation: generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset. Additional step (maybe): overcoming the practical limits imposed by thousands or tens of thousands of unique items. 89

90 Overcoming Practical Limits for Association Rules 1. Generate co-occurrence matrix for single items if OJ then soda 2. Generate co-occurrence matrix for two items if OJ and Milk then soda 3. Generate co-occurrence matrix for three items if OJ and Milk and Window Cleaner then soda 4. Etc 90

91 Final Thought on Association Rules: The Problem of Lots of Data Fast Food Restaurant could have 100 items on its menu How many combinations are there with 3 different menu items? 161,700! Supermarket 10,000 or more unique items 50 million 2-item combinations 100 billion 3-item combinations Use of product hierarchies (groupings) helps address this common issue Finally, know that the number of transactions in a given timeperiod could also be huge (hence expensive to analyze) 91

92 How Good is an Association Rule? Are support and confidence enough? Lift (improvement) tells us how much better a rule is at predicting the result than just assuming the result in the first place: Lift = P(LHS ∧ RHS) / (P(LHS) · P(RHS)). When lift > 1, the rule is better at predicting the result than guessing. When lift < 1, the rule is doing worse than informed guessing, and using the negative rule produces a better rule than guessing. 92
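A small sketch (reusing the five supermarket transactions from earlier, with illustrative helper names) shows lift values on either side of 1:

```python
D = [{"bread", "milk"}, {"bread", "diaper", "beer", "eggs"},
     {"milk", "diaper", "beer", "coke"}, {"bread", "milk", "diaper", "beer"},
     {"bread", "milk", "diaper", "coke"}]

def p(itemset):
    """Empirical probability that a transaction contains all items of `itemset`."""
    return sum(set(itemset) <= t for t in D) / len(D)

def lift(lhs, rhs):
    return p(set(lhs) | set(rhs)) / (p(lhs) * p(rhs))

print(lift({"diaper"}, {"beer"}))   # 0.6 / (0.8 * 0.6) = 1.25 -> better than guessing
print(lift({"milk"}, {"beer"}))     # 0.4 / (0.8 * 0.6) ≈ 0.83 -> worse than guessing
```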

93 Computational Complexity Given d unique items: Total number of itemsets = 2^d. Total number of possible association rules: R = Σ_{k=1}^{d-1} [ C(d,k) · Σ_{j=1}^{d-k} C(d-k,j) ] = 3^d − 2^{d+1} + 1. If d = 6, R = 602 rules. 93

94 Usability of Association Rules Explainability: High (intuitive explanations). Accuracy: Moderate (depends on rule quality). Scalability: Moderate/Low (performance of rule systems depends on both the number and the complexity of rules). Embeddability: Moderate/High (can be compiled in many cases). Tolerance for sparse data: Low (support and confidence are both affected). Tolerance for noisy data: Moderate (how do you use outliers?). Development speed: Low/Moderate (needs a lot of filtering). Dependence on experts: Moderate/High (domain experts are needed to filter rules). 94

95 Unique Features of Association Rules Vs. classification: the right-hand side can have any number of items; it can find a classification-like rule X → c in a different way: such a rule is not about differentiating classes, but about what (X) describes class c. Vs. clustering: it does not need class labels; for X → Y, if Y is considered a cluster, it can form different clusters sharing the same description (X). 95

96 Other Association Rules Multilevel association rules: often there exist structures in the data, e.g., the Yahoo hierarchy or a food hierarchy; adjust minsup for each level. Constraint-based association rules: knowledge constraints, data constraints, dimension/level constraints, interestingness constraints, rule constraints. 96

97 Q & A 97
