Mining Association Rules in Large Databases

Association rules

Given a set of transactions D, find rules that predict the occurrence of an item (or a set of items) based on the occurrences of other items in the transaction.

Market-basket transactions:

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke

Examples of association rules: {Diaper} → {Beer}, {Milk, Bread} → {Diaper, Coke}, {Beer, Bread} → {Milk}

An even simpler concept: frequent itemsets

Given a set of transactions D, find combinations of items that occur frequently.

Market-basket transactions:

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke

Examples of frequent itemsets: {Diaper, Beer}, {Milk, Bread}, {Beer, Bread, Milk}

Lecture outline
Task 1: Methods for finding all frequent itemsets efficiently
Task 2: Methods for finding association rules efficiently

Definition: Frequent Itemset

Itemset: a set of one or more items, e.g., {Milk, Bread, Diaper}. A k-itemset is an itemset that contains k items.

Support count (σ): frequency of occurrence of an itemset, i.e., the number of transactions in which it appears. E.g., σ({Milk, Bread, Diaper}) = 2.

Support (s): fraction of the transactions in which an itemset appears. E.g., s({Milk, Bread, Diaper}) = 2/5.

Frequent itemset: an itemset whose support is greater than or equal to a minsup threshold.

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke
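To make the definitions concrete, here is a minimal Python check of the support count and support on the toy table above (the names D, X, and sigma are only illustrative, not from the slides):

```python
# Toy market-basket database from the slide, one set of items per transaction.
D = [{'Bread', 'Milk'},
     {'Bread', 'Diaper', 'Beer', 'Eggs'},
     {'Milk', 'Diaper', 'Beer', 'Coke'},
     {'Bread', 'Milk', 'Diaper', 'Beer'},
     {'Bread', 'Milk', 'Diaper', 'Coke'}]

X = {'Milk', 'Bread', 'Diaper'}
sigma = sum(1 for t in D if X <= t)  # support count: X appears in 2 transactions
s = sigma / len(D)                   # support: 2/5 = 0.4
print(sigma, s)                      # -> 2 0.4
```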

Why do we want to find frequent itemsets? We find all combinations of items that occur together; they might be interesting (e.g., for the placement of items in a store). Frequent itemsets report only positive combinations (we do not report combinations that fail to occur frequently together). Frequent itemsets aim to provide a summary of the data.

Finding frequent sets. Task: given a transaction database D and a minsup threshold, find all frequent itemsets, together with the frequency of each set in this collection. Stated differently: count the number of times each combination of attributes occurs in the data; if the count of a combination is at least minsup, report it. Recall: the input is a transaction database D where every transaction consists of a subset of items from some universe I.

How many itemsets are there? [Figure: the lattice of all itemsets over {A, B, C, D, E}, from the empty set (null) up to ABCDE.] Given d items, there are 2^d possible itemsets.

When is the task sensible and feasible? If minsup = 0, then all subsets of I are frequent, and the collection becomes very large; such a summary may be larger than the original input, and thus not interesting. The task of finding all frequent sets is typically interesting only for relatively large values of minsup.

A simple algorithm for finding all frequent itemsets? [Figure: the same itemset lattice over {A, B, C, D, E}, suggesting exhaustive enumeration.]

Brute-force algorithm for finding all frequent itemsets (a sketch follows below):
Generate all possible itemsets (the lattice of itemsets), starting with 1-itemsets, then 2-itemsets, ..., up to d-itemsets.
Compute the frequency of each itemset from the data, i.e., count in how many transactions each itemset occurs.
If the support of an itemset is above minsup, report it as a frequent itemset.
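The brute-force algorithm can be written directly in Python. This is a naive sketch for illustration only; the function and variable names are not from the slides:

```python
from itertools import combinations

def brute_force_frequent(transactions, minsup):
    """Enumerate every itemset over the item universe and count its support.
    transactions: list of sets; minsup: absolute support count threshold."""
    items = sorted(set().union(*transactions))
    frequent = {}
    for k in range(1, len(items) + 1):       # 1-itemsets, 2-itemsets, ..., d-itemsets
        for cand in combinations(items, k):  # all C(d, k) candidates of size k
            count = sum(1 for t in transactions if set(cand) <= t)
            if count >= minsup:
                frequent[cand] = count
    return frequent
```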

Complexity of the brute-force approach: every candidate is matched against each transaction. For M candidates, N transactions, and maximum transaction width w, the complexity is roughly O(NMw). This is expensive, since M = 2^d!

Speeding up the brute-force algorithm:
Reduce the number of candidates (M): complete search has M = 2^d; use pruning techniques to reduce M.
Reduce the number of transactions (N): reduce N as the size of the itemsets increases; use vertical partitioning of the data to apply the mining algorithms.
Reduce the number of comparisons (NM): use efficient data structures to store the candidates or transactions, so there is no need to match every candidate against every transaction.

Reduce the number of candidates. Apriori principle (main observation): if an itemset is frequent, then all of its subsets must also be frequent. The Apriori principle holds due to the following property of the support measure:

∀ X, Y : (X ⊆ Y) ⇒ s(X) ≥ s(Y)

The support of an itemset never exceeds the support of its subsets. This is known as the anti-monotone property of support.

Example

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke

s(Bread) ≥ s(Bread, Beer)
s(Milk) ≥ s(Bread, Milk)
s(Diaper, Beer) ≥ s(Diaper, Beer, Coke)

Illustrating the Apriori principle. [Figure: itemset lattice in which one itemset is found to be infrequent and all of its supersets are pruned.]

Illustrating the Apriori principle (minsup = 3/5)

Items (1-itemsets):
Item    Count
Bread   4
Coke    2
Milk    4
Beer    3
Diaper  4
Eggs    1

Pairs (2-itemsets), with no need to generate candidates involving Coke or Eggs:
Itemset          Count
{Bread, Milk}    3
{Bread, Beer}    2
{Bread, Diaper}  3
{Milk, Beer}     2
{Milk, Diaper}   3
{Beer, Diaper}   3

Triplets (3-itemsets):
Itemset                Count
{Bread, Milk, Diaper}  3

If every subset is considered: C(6,1) + C(6,2) + C(6,3) = 41 candidates. With support-based pruning: 6 + 6 + 1 = 13.

Exploiting the Apriori principle:
1. Find the frequent 1-itemsets and put them into Lk (k = 1).
2. Use Lk to generate a collection of candidate itemsets Ck+1 of size k+1.
3. Scan the database to find which itemsets in Ck+1 are frequent and put them into Lk+1.
4. If Lk+1 is not empty, set k = k+1 and go to step 2.

R. Agrawal, R. Srikant: "Fast Algorithms for Mining Association Rules", Proc. of the 20th Int'l Conference on Very Large Data Bases, 1994.

The Apriori algorithm

Ck: candidate itemsets of size k
Lk: frequent itemsets of size k

L1 = {frequent 1-itemsets};
for (k = 1; Lk != ∅; k++)
    Ck+1 = GenerateCandidates(Lk)
    for each transaction t in the database do
        increment the count of all candidates in Ck+1 that are contained in t
    endfor
    Lk+1 = candidates in Ck+1 with support ≥ minsup
endfor
return ∪k Lk;

GenerateCandidates

Assume the items in Lk are listed in an order (e.g., alphabetical).

Step 1: self-join Lk (in SQL):

insert into Ck+1
select p.item1, p.item2, ..., p.itemk, q.itemk
from Lk p, Lk q
where p.item1 = q.item1, ..., p.itemk-1 = q.itemk-1, p.itemk < q.itemk

Example of candidate generation. L3 = {abc, abd, acd, ace, bcd}. Self-joining L3*L3: abcd from abc and abd; acde from acd and ace. [Figure: {a,c,d} and {a,c,e} join to {a,c,d,e}, whose 3-subsets are acd, ace, ade, cde.]

GenerateCandidates

Assume the items in Lk are listed in an order (e.g., alphabetical).

Step 1: self-join Lk (in SQL):

insert into Ck+1
select p.item1, p.item2, ..., p.itemk, q.itemk
from Lk p, Lk q
where p.item1 = q.item1, ..., p.itemk-1 = q.itemk-1, p.itemk < q.itemk

Step 2: pruning:

forall itemsets c in Ck+1 do
    forall k-subsets s of c do
        if (s is not in Lk) then delete c from Ck+1

Example of candidate generation. L3 = {abc, abd, acd, ace, bcd}. Self-joining L3*L3: abcd from abc and abd; acde from acd and ace. Pruning: acde is removed because its 3-subset ade is not in L3 (the 3-subsets of acde are acd, ace, ade, cde). Result: C4 = {abcd}.
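A minimal Python sketch of GenerateCandidates (join + prune), assuming each frequent k-itemset is stored as a sorted tuple; the function name is illustrative:

```python
from itertools import combinations

def generate_candidates(Lk, k):
    """Self-join Lk on the first k-1 items, then prune candidates that
    have an infrequent k-subset. Lk: set of sorted k-tuples."""
    joined = {p + (q[k - 1],)
              for p in Lk for q in Lk
              if p[:k - 1] == q[:k - 1] and p[k - 1] < q[k - 1]}
    # Apriori pruning: every k-subset of a candidate must be in Lk.
    return {c for c in joined
            if all(s in Lk for s in combinations(c, k))}

L3 = {('a','b','c'), ('a','b','d'), ('a','c','d'), ('a','c','e'), ('b','c','d')}
print(generate_candidates(L3, 3))  # {('a','b','c','d')}: acde pruned, ade not in L3
```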

The Apriori algorithm (recap)

Ck: candidate itemsets of size k
Lk: frequent itemsets of size k

L1 = {frequent 1-itemsets};
for (k = 1; Lk != ∅; k++)
    Ck+1 = GenerateCandidates(Lk)
    for each transaction t in the database do
        increment the count of all candidates in Ck+1 that are contained in t
    endfor
    Lk+1 = candidates in Ck+1 with support ≥ minsup
endfor
return ∪k Lk;
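Putting the pieces together, here is a compact Apriori sketch in Python, reusing the generate_candidates function above. Counting here naively scans every candidate against every transaction; a real implementation would use the hash tree described next. Names are illustrative:

```python
from collections import defaultdict

def apriori(transactions, minsup):
    """transactions: list of sets; minsup: absolute support count.
    Returns a dict mapping each frequent itemset (sorted tuple) to its count."""
    counts = defaultdict(int)
    for t in transactions:
        for i in t:
            counts[(i,)] += 1
    Lk = {c for c, n in counts.items() if n >= minsup}   # L1
    frequent = {c: counts[c] for c in Lk}
    k = 1
    while Lk:
        Ck1 = generate_candidates(Lk, k)                 # candidates of size k+1
        counts = defaultdict(int)
        for t in transactions:                           # one pass per level
            for c in Ck1:
                if set(c) <= t:
                    counts[c] += 1
        Lk = {c for c in Ck1 if counts[c] >= minsup}     # L(k+1)
        frequent.update({c: counts[c] for c in Lk})
        k += 1
    return frequent
```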

How to count supports of candidates? A naive algorithm matches every transaction against every candidate. Method: candidate itemsets are stored in a hash tree. A leaf node of the hash tree contains a list of itemsets and counts; an interior node contains a hash table. The subset function finds all the candidates contained in a transaction.

Example of the hash tree for C3. [Figure: hash tree with hash function h(item) = item mod 3. The root hashes on the 1st item into branches 1,4,.. / 2,5,.. / 3,6,..; interior nodes hash on the 2nd and 3rd items; leaves hold candidate 3-itemsets such as 145; 124, 457; 125, 458; 159; 234, 567; 345, 356, 689; 367, 368.]

Example of the hash tree for C3, probed with transaction 12345. [Figure: at the root, the transaction hashes on items 1, 2, and 3: the branch for 1 looks for candidates of the form 1XX, the branch for 2 looks for 2XX (using suffix 2345), and the branch for 3 looks for 3XX (using suffix 345).]

Example of the hash tree for C3, probed with transaction 12345 (continued). [Figure: one level down, the branch for 1 looks for 12X, 13X (null), and 14X.] The subset function finds all the candidates contained in a transaction: at the root level it hashes on every item in the transaction; at level i it hashes on every item of the transaction that comes after the i-th hashed item.
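A small Python sketch of such a hash tree, assuming integer items, sorted candidate tuples, and the slide's h(i) = i mod 3 hash function; class and function names are illustrative:

```python
NBUCKETS, MAX_LEAF = 3, 3            # hash function h(i) = i mod 3, small leaves

class Node:
    def __init__(self):
        self.children = None         # interior node: dict bucket -> Node
        self.itemsets = []           # leaf node: list of candidate itemsets

def insert(node, c, depth=0):
    """Insert candidate c (a sorted tuple of ints) into the hash tree."""
    if node.children is not None:                          # interior: descend
        insert(node.children[c[depth] % NBUCKETS], c, depth + 1)
        return
    node.itemsets.append(c)
    if len(node.itemsets) > MAX_LEAF and depth < len(c):   # split overflowing leaf
        node.children = {b: Node() for b in range(NBUCKETS)}
        for s in node.itemsets:
            insert(node.children[s[depth] % NBUCKETS], s, depth + 1)
        node.itemsets = []

def find_contained(node, t, start, matched):
    """Subset function: collect all stored candidates contained in transaction t."""
    if node.children is None:                              # leaf: verify directly
        matched.update(c for c in node.itemsets if set(c) <= set(t))
        return
    for i in range(start, len(t)):                         # hash each later item
        find_contained(node.children[t[i] % NBUCKETS], t, i + 1, matched)

# Usage: build the tree once per level, then count with one pass over the data.
candidates = [(1,4,5), (1,2,4), (4,5,7), (1,2,5), (4,5,8), (1,5,9)]
root = Node()
for c in candidates:
    insert(root, c)
matched = set()
find_contained(root, (1, 2, 3, 4, 5), 0, matched)
print(matched)   # -> {(1,2,4), (1,2,5), (1,4,5)}: candidates inside {1,2,3,4,5}
```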

Discussion of the Apriori algorithm. It is much faster than the brute-force algorithm: it avoids checking all elements in the lattice, although the running time is still O(2^d) in the worst case; pruning really prunes in practice. It makes multiple passes over the dataset, one pass for every level k; multiple passes over the dataset are inefficient when we have thousands of candidates and millions of transactions.

Making a single pass over the data: the AprioriTid algorithm. The database is not used for counting support after the 1st pass! Instead, the information in a data structure C̄k is used for counting support at every step:

C̄k = { ⟨TID, {Xk}⟩ | Xk is a potentially frequent k-itemset in the transaction with id = TID }

C̄1 corresponds to the original database (every item i is replaced by the itemset {i}). The member of C̄k corresponding to transaction t is ⟨t.TID, {c ∈ Ck | c is contained in t}⟩.

The AprioriTid algorithm

L1 = {frequent 1-itemsets}
C̄1 = database D
for (k = 2; Lk-1 ≠ ∅; k++)
    Ck = GenerateCandidates(Lk-1)
    C̄k = {}
    for all entries t ∈ C̄k-1
        Ct = {c ∈ Ck | (c − c[k]) ∈ t.set-of-itemsets and (c − c[k−1]) ∈ t.set-of-itemsets}
        for all c ∈ Ct: c.count++
        if (Ct ≠ {}) append ⟨t.TID, Ct⟩ to C̄k
    endfor
    Lk = {c ∈ Ck | c.count ≥ minsup}
endfor
return ∪k Lk

Here c[k] denotes the k-th item of candidate c, so c − c[k] is c with its k-th item removed.

AprioriTid example (minsup = 2)

Database D:
TID  Items
100  1 3 4
200  2 3 5
300  1 2 3 5
400  2 5

C̄1 (the database with every item as a 1-itemset):
TID  Sets of itemsets
100  {{1}, {3}, {4}}
200  {{2}, {3}, {5}}
300  {{1}, {2}, {3}, {5}}
400  {{2}, {5}}

L1:
itemset  sup.
{1}      2
{2}      3
{3}      3
{5}      3

C2 = {{1 2}, {1 3}, {1 5}, {2 3}, {2 5}, {3 5}}

C̄2:
TID  Sets of itemsets
100  {{1 3}}
200  {{2 3}, {2 5}, {3 5}}
300  {{1 2}, {1 3}, {1 5}, {2 3}, {2 5}, {3 5}}
400  {{2 5}}

L2:
itemset  sup
{1 3}    2
{2 3}    2
{2 5}    3
{3 5}    2

C3 = {{2 3 5}}

C̄3:
TID  Sets of itemsets
200  {{2 3 5}}
300  {{2 3 5}}

L3:
itemset  sup
{2 3 5}  2
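A Python sketch of AprioriTid along these lines, reusing generate_candidates from above; calling apriori_tid({100: {1,3,4}, 200: {2,3,5}, 300: {1,2,3,5}, 400: {2,5}}, 2) reproduces the example. Names are illustrative:

```python
from collections import defaultdict

def apriori_tid(db, minsup):
    """db: dict TID -> set of items; minsup: absolute support count."""
    cbar = {tid: {(i,) for i in t} for tid, t in db.items()}   # C-bar 1 = database
    counts = defaultdict(int)
    for entry in cbar.values():
        for c in entry:
            counts[c] += 1
    Lk = {c for c, n in counts.items() if n >= minsup}
    frequent, k = {c: counts[c] for c in Lk}, 1
    while Lk:
        Ck = generate_candidates(Lk, k)              # candidate (k+1)-itemsets
        counts, new_cbar = defaultdict(int), {}
        for tid, entry in cbar.items():
            # c is contained in transaction tid iff both k-subsets that
            # generated it (drop last item / drop second-to-last item)
            # occur in the C-bar entry for tid -- no database access needed.
            Ct = {c for c in Ck
                  if c[:-1] in entry and c[:-2] + c[-1:] in entry}
            for c in Ct:
                counts[c] += 1
            if Ct:
                new_cbar[tid] = Ct
        cbar = new_cbar                              # C-bar (k+1)
        Lk = {c for c in Ck if counts[c] >= minsup}
        frequent.update({c: counts[c] for c in Lk})
        k += 1
    return frequent
```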

Discussion of the AprioriTid algorithm (see the pseudocode above). It makes one single pass over the data: C̄k is generated from C̄k−1, not from the database. For small values of k, C̄k can be larger than the database! For large values of k, C̄k can be very small.

Apriori vs. AprioriTid: Apriori makes multiple passes over the data, while AprioriTid makes a single pass. AprioriTid needs to store additional data structures that may require more space than Apriori. Both algorithms need to check the frequency of every candidate in every step.

Lecture outline
Task 1: Methods for finding all frequent itemsets efficiently
Task 2: Methods for finding association rules efficiently

Definition: Association Rule

Let D be a database of transactions, e.g.:

Transaction ID  Items
2000            A, B, C
1000            A, C
4000            A, D
5000            B, E, F

Let I be the set of items that appear in the database, e.g., I = {A, B, C, D, E, F}.

A rule is defined by X → Y, where X ⊂ I, Y ⊂ I, and X ∩ Y = ∅; e.g., {B, C} → {A} is a rule.

Definition: Association Rule

An association rule is an implication expression of the form X → Y, where X and Y are non-overlapping itemsets. Example: {Milk, Diaper} → {Beer}.

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke

Rule evaluation metrics:
Support (s): fraction of transactions that contain both X and Y.
Confidence (c): measures how often items in Y appear in transactions that contain X.

Example, for {Milk, Diaper} → {Beer}:
s = σ(Milk, Diaper, Beer) / |T| = 2/5 = 0.4
c = σ(Milk, Diaper, Beer) / σ(Milk, Diaper) = 2/3 = 0.67

Rule measures: support and confidence. [Figure: Venn diagram of customers who buy beer, customers who buy diapers, and customers who buy both.] Find all rules X → Y with minimum support and confidence:
support, s: probability that a transaction contains X ∪ Y;
confidence, c: conditional probability that a transaction having X also contains Y.

TID  Items
100  A, B, C
200  A, C
300  A, D
400  B, E, F

With minimum support 50% and minimum confidence 50%, we have:
A → C (support 50%, confidence 66.6%)
C → A (support 50%, confidence 100%)

Example

TID  date      items_bought
100  10/10/99  {F, A, D, B}
200  15/10/99  {D, A, C, E, B}
300  19/10/99  {C, A, B, E}
400  20/10/99  {B, A, D}

What are the support and confidence of the rule {B, D} → {A}?
Support: percentage of tuples that contain {A, B, D} = 75%.
Confidence: (number of tuples that contain {A, B, D}) / (number of tuples that contain {B, D}) = 100%.
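The same computation in Python, on the slide's table (the helper name rule_metrics is illustrative):

```python
def rule_metrics(transactions, X, Y):
    """Support and confidence of the rule X -> Y over a list of item-sets."""
    n_xy = sum(1 for t in transactions if X | Y <= t)  # transactions with X and Y
    n_x = sum(1 for t in transactions if X <= t)       # transactions with X
    return n_xy / len(transactions), n_xy / n_x

D = [{'F','A','D','B'}, {'D','A','C','E','B'}, {'C','A','B','E'}, {'B','A','D'}]
s, c = rule_metrics(D, {'B','D'}, {'A'})
print(s, c)  # -> 0.75 1.0
```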

Association rule mining task: given a set of transactions D, the goal of association rule mining is to find all rules having support ≥ minsup threshold and confidence ≥ minconf threshold.

Brute-force algorithm for association rule mining: list all possible association rules; compute the support and confidence of each rule; prune rules that fail the minsup or minconf thresholds. Computationally prohibitive!

Computational complexity. Given d unique items in I:
Total number of itemsets = 2^d.
Total number of possible association rules:

R = Σ_{k=1}^{d−1} [ C(d, k) · Σ_{j=1}^{d−k} C(d−k, j) ] = 3^d − 2^(d+1) + 1

If d = 6, R = 602 rules.
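A quick numeric check of the rule-count formula (plain Python arithmetic):

```python
from math import comb

d = 6
R = sum(comb(d, k) * sum(comb(d - k, j) for j in range(1, d - k + 1))
        for k in range(1, d))
print(R)                           # -> 602
assert R == 3**d - 2**(d + 1) + 1  # closed form: 3^d - 2^(d+1) + 1
```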

Mining association rules

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke

Example rules:
{Milk, Diaper} → {Beer} (s = 0.4, c = 0.67)
{Milk, Beer} → {Diaper} (s = 0.4, c = 1.0)
{Diaper, Beer} → {Milk} (s = 0.4, c = 0.67)
{Beer} → {Milk, Diaper} (s = 0.4, c = 0.67)
{Diaper} → {Milk, Beer} (s = 0.4, c = 0.5)
{Milk} → {Diaper, Beer} (s = 0.4, c = 0.5)

Observations: all the above rules are binary partitions of the same itemset, {Milk, Diaper, Beer}. Rules originating from the same itemset have identical support but can have different confidence. Thus, we may decouple the support and confidence requirements.

Mining association rules: a two-step approach.
1. Frequent itemset generation: generate all itemsets whose support ≥ minsup.
2. Rule generation: generate high-confidence rules from each frequent itemset, where each rule is a binary partition of a frequent itemset.

Rule generation, naive algorithm: given a frequent itemset X, find all non-empty subsets Y ⊂ X such that Y → X − Y satisfies the minimum confidence requirement. If {A,B,C,D} is a frequent itemset, the candidate rules are: ABC→D, ABD→C, ACD→B, BCD→A, A→BCD, B→ACD, C→ABD, D→ABC, AB→CD, AC→BD, AD→BC, BC→AD, BD→AC, CD→AB. If |X| = k, then there are 2^k − 2 candidate association rules (ignoring X → ∅ and ∅ → X).

Efficient rule generation. How can we efficiently generate rules from frequent itemsets? In general, confidence does not have an anti-monotone property: c(ABC → D) can be larger or smaller than c(AB → D). But the confidence of rules generated from the same itemset does have an anti-monotone property. Example, for X = {A,B,C,D}: c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD). Why? Confidence is anti-monotone with respect to the number of items on the RHS of the rule.

Rule generation for the Apriori algorithm. [Figure: lattice of rules for one itemset; a low-confidence rule is found and all rules below it are pruned.]

Apriori algorithm for rule generation. A candidate rule is generated by merging two rules that share the same prefix in the rule consequent: join(CD → AB, BD → AC) produces the candidate rule D → ABC. Prune rule D → ABC if there exists a subset rule (e.g., AD → BC) that does not have high confidence.
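A Python sketch of this Apriori-style rule generation, reusing generate_candidates from earlier to merge surviving consequents. It assumes counts holds the support count of every frequent itemset as a sorted tuple; all names are illustrative:

```python
def generate_rules(itemset, counts, minconf):
    """Generate all confident rules X -> Y with X ∪ Y = itemset.
    itemset: sorted tuple; counts: support counts of all frequent itemsets."""
    rules, m = [], 1
    consequents = {(i,) for i in itemset}          # start with 1-item consequents
    while consequents and m < len(itemset):
        survivors = set()
        for Y in consequents:
            X = tuple(i for i in itemset if i not in Y)
            conf = counts[itemset] / counts[X]     # c(X -> Y) = s(itemset) / s(X)
            if conf >= minconf:
                rules.append((X, Y, conf))
                survivors.add(Y)
            # else: pruned -- no superset of Y can yield a confident rule
        consequents = generate_candidates(survivors, m)   # merge consequents
        m += 1
    return rules
```

The merge step mirrors candidate generation for itemsets: two surviving m-item consequents that share their first m−1 items are joined into an (m+1)-item consequent, which is exactly the confidence-based pruning the slides describe.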