Chapter 4. Frequent Patterns in Data Streams
w3.gazi.edu.tr/~suatozdemir
What Is Pattern Discovery?
- Patterns: sets of items, subsequences, or substructures that occur frequently together (or are strongly correlated) in a data set
- Patterns represent intrinsic and important properties of data sets
- Pattern discovery: uncovering patterns from massive data sets
- Motivating examples:
  - What products are often purchased together?
  - What are the subsequent purchases after buying an iPad?
  - Which code segments likely contain copy-and-paste bugs?
  - Which word sequences likely form phrases in this corpus?
Pattern Discovery: Why Is It Important?
- Finds inherent regularities in a data set
- Foundation for many essential data mining tasks:
  - Association, correlation, and causality analysis
  - Mining sequential and structural (e.g., sub-graph) patterns
  - Pattern analysis in spatiotemporal, multimedia, time-series, and stream data
  - Classification: discriminative pattern-based analysis
  - Cluster analysis: pattern-based subspace clustering
- Broad applications: market basket analysis, cross-marketing, catalog design, sale campaign analysis, Web log analysis, biological sequence analysis
Basic Concepts: k-Itemsets and Their Supports
- Itemset: a set of one or more items
- k-itemset: X = {x_1, …, x_k}. Ex. {Beer, Nuts, Diaper} is a 3-itemset
- (Absolute) support (count) of X, sup(X): the number of occurrences of itemset X
  - Ex. sup({Beer}) = 3; sup({Diaper}) = 4; sup({Beer, Diaper}) = 3; sup({Beer, Eggs}) = 1
- (Relative) support, s(X): the fraction of transactions that contain X (i.e., the probability that a transaction contains X)
  - Ex. s({Beer}) = 3/5 = 60%; s({Diaper}) = 4/5 = 80%; s({Beer, Eggs}) = 1/5 = 20%

Tid | Items bought
10  | Beer, Nuts, Diaper
20  | Beer, Coffee, Diaper
30  | Beer, Diaper, Eggs
40  | Nuts, Eggs, Milk
50  | Nuts, Coffee, Diaper, Eggs, Milk
Basic Concepts: Frequent Itemsets (Patterns)
- An itemset (or pattern) X is frequent if the support of X is no less than a minsup threshold σ
- Let σ = 50% for the 5-transaction dataset above:
  - All frequent 1-itemsets: Beer: 3/5 (60%); Nuts: 3/5 (60%); Diaper: 4/5 (80%); Eggs: 3/5 (60%)
  - All frequent 2-itemsets: {Beer, Diaper}: 3/5 (60%)
  - All frequent 3-itemsets: none
- Why do these itemsets form the complete set of frequent k-itemsets (patterns) for every k?
- Observation: we need an efficient method to mine the complete set of frequent patterns
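The frequent itemsets on this 5-transaction example can be verified by brute-force enumeration; a minimal Python sketch (function and variable names are ours, not from the slides):

```python
from itertools import combinations

transactions = [
    {"Beer", "Nuts", "Diaper"},
    {"Beer", "Coffee", "Diaper"},
    {"Beer", "Diaper", "Eggs"},
    {"Nuts", "Eggs", "Milk"},
    {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"},
]

def frequent_itemsets(transactions, minsup):
    """Enumerate every itemset whose absolute support meets minsup (brute force)."""
    items = sorted(set().union(*transactions))
    frequent = {}
    for k in range(1, len(items) + 1):
        for candidate in combinations(items, k):
            support = sum(1 for t in transactions if set(candidate) <= t)
            if support >= minsup:
                frequent[candidate] = support
    return frequent

# sigma = 50% of 5 transactions -> absolute minsup = 3
print(frequent_itemsets(transactions, 3))
# Beer:3, Nuts:3, Diaper:4, Eggs:3 and {Beer, Diaper}:3, as on the slide
```

Brute force is exponential in the number of distinct items, which is exactly why scalable methods such as the FP-tree (later in this chapter) are needed.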
From Frequent Itemsets to Association Rules
- Compared with itemsets, rules can be more telling. Ex. Diaper → Beer: buying diapers likely leads to buying beer
- How strong is this rule? Measured by (support, confidence)
- Association rule: X → Y (s, c), where both X and Y are itemsets
- Support, s: the probability that a transaction contains X ∪ Y
  - Ex. s({Diaper, Beer}) = 3/5 = 0.6 (i.e., 60%)
- Confidence, c: the conditional probability that a transaction containing X also contains Y; c = sup(X ∪ Y) / sup(X)
  - Ex. c = sup({Diaper, Beer}) / sup({Diaper}) = 3/4 = 0.75
- Note: X ∪ Y is the union of the two itemsets, e.g., {Beer} ∪ {Diaper} = {Beer, Diaper}; a transaction supports the rule only if it contains both X and Y
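The support and confidence of Diaper → Beer can be computed directly from the dataset; a short sketch (helper names are ours):

```python
transactions = [
    {"Beer", "Nuts", "Diaper"},
    {"Beer", "Coffee", "Diaper"},
    {"Beer", "Diaper", "Eggs"},
    {"Nuts", "Eggs", "Milk"},
    {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"},
]

def support(itemset):
    """Relative support: fraction of transactions containing the itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(X, Y):
    """Confidence of X -> Y: sup(X u Y) / sup(X)."""
    return support(X | Y) / support(X)

s = support({"Diaper", "Beer"})        # 3/5 = 0.6
c = confidence({"Diaper"}, {"Beer"})   # 3/4 = 0.75
print(f"Diaper -> Beer (s={s:.0%}, c={c:.0%})")
```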
Mining Frequent Itemsets and Association Rules
- Association rule mining: given two thresholds minsup and minconf, find all rules X → Y (s, c) such that s ≥ minsup and c ≥ minconf
- Let minsup = 50% on the dataset above:
  - Frequent 1-itemsets: Beer: 3, Nuts: 3, Diaper: 4, Eggs: 3
  - Frequent 2-itemsets: {Beer, Diaper}: 3
- Let minconf = 50%:
  - Beer → Diaper (60%, 100%)
  - Diaper → Beer (60%, 75%)
  - (Q: are these all the rules?)
- Observations: mining association rules and mining frequent patterns are closely related problems; scalable methods are needed for mining large datasets
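Rule generation from the frequent itemsets above can be sketched as follows (a simplification restricted to single-item consequents; full association rule mining also enumerates multi-item consequents):

```python
def rules(freq, minconf, n_transactions=5):
    """Emit rules X -> {y} from frequent itemsets; freq maps frozenset -> count."""
    out = []
    for itemset, count in freq.items():
        if len(itemset) < 2:
            continue
        for y in itemset:
            X = itemset - {y}
            conf = count / freq[X]          # c = sup(X u Y) / sup(X)
            if conf >= minconf:
                out.append((X, y, count / n_transactions, conf))
    return out

# Frequent itemsets found with minsup = 50% on the 5-transaction dataset
freq = {
    frozenset({"Beer"}): 3, frozenset({"Nuts"}): 3,
    frozenset({"Diaper"}): 4, frozenset({"Eggs"}): 3,
    frozenset({"Beer", "Diaper"}): 3,
}
for X, y, s, c in rules(freq, 0.5):
    print(f"{set(X)} -> {y} (s={s:.0%}, c={c:.0%})")
# Beer -> Diaper (60%, 100%) and Diaper -> Beer (60%, 75%)
```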
Challenge: There Are Too Many Frequent Patterns!
- A long pattern contains a combinatorial number of sub-patterns
- How many frequent itemsets does the following TDB_1 contain, assuming (absolute) minsup = 1?
  TDB_1: T_1: {a_1, …, a_50}; T_2: {a_1, …, a_100}
- Let's try:
  - 1-itemsets: {a_1}: 2, {a_2}: 2, …, {a_50}: 2, {a_51}: 1, …, {a_100}: 1
  - 2-itemsets: {a_1, a_2}: 2, …, {a_1, a_50}: 2, {a_1, a_51}: 1, …, {a_99, a_100}: 1
  - …
  - 99-itemsets: {a_1, a_2, …, a_99}: 1, …, {a_2, a_3, …, a_100}: 1
  - 100-itemset: {a_1, a_2, …, a_100}: 1
- The total number of frequent itemsets is C(100,1) + C(100,2) + … + C(100,100) = 2^100 − 1 ≈ 1.27 × 10^30: far too huge a set for anyone to compute or store!
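Since every non-empty subset of {a_1, …, a_100} occurs in T_2, each one is frequent with minsup = 1; the count follows from the binomial theorem and can be checked quickly:

```python
from math import comb

# Sum of C(100, k) over all non-empty subset sizes k
total = sum(comb(100, k) for k in range(1, 101))
assert total == 2**100 - 1
print(total)   # about 1.27e30: far too many itemsets to compute or store
```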
Example: Construct FP-tree from a Transaction DB (min_support = 3)

TID | Items in the transaction | Ordered, frequent itemlist
100 | {f, a, c, d, g, i, m, p} | f, c, a, m, p
200 | {a, b, c, f, l, m, o}    | f, c, a, b, m
300 | {b, f, h, j, o, w}       | f, b
400 | {b, c, k, s, p}          | c, b, p
500 | {a, f, c, e, l, p, m, n} | f, c, a, m, p

1. Scan the DB once to find the frequent single items: f:4, c:4, a:3, b:3, m:3, p:3
2. Sort the frequent items in descending frequency order to obtain the F-list: f-c-a-b-m-p
3. Scan the DB again and construct the FP-tree: the ordered frequent itemlist of each transaction is inserted as a branch, with shared prefixes merged and counts accumulated

A header table stores each frequent item, its count (f:4, c:4, a:3, b:3, m:3, p:3), and the head of a linked list of that item's nodes in the tree.

After inserting the 1st frequent itemlist (f, c, a, m, p), the tree is a single branch:
{} - f:1 - c:1 - a:1 - m:1 - p:1
Example: Construct FP-tree from a Transaction DB (continued)
After inserting the 2nd frequent itemlist (f, c, a, b, m), the shared prefix f-c-a is reused (counts become f:2, c:2, a:2) and a new sub-branch b:1 - m:1 is added under a, next to the existing m:1 - p:1 branch.
Example: Construct FP-tree from a Transaction DB (continued)
After inserting all five frequent itemlists, the complete FP-tree is:

{}
|- f:4
|  |- c:3
|  |  `- a:3
|  |     |- m:2 - p:2
|  |     `- b:1 - m:1
|  `- b:1
`- c:1
   `- b:1 - p:1
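The three construction steps can be sketched in Python as follows (class and variable names are ours; ties in the frequency sort are broken explicitly to match the slide's F-list, where f precedes c although both have count 4):

```python
from collections import Counter

transactions = [
    {"f", "a", "c", "d", "g", "i", "m", "p"},
    {"a", "b", "c", "f", "l", "m", "o"},
    {"b", "f", "h", "j", "o", "w"},
    {"b", "c", "k", "s", "p"},
    {"a", "f", "c", "e", "l", "p", "m", "n"},
]

class FPNode:
    def __init__(self, item, parent):
        self.item, self.count, self.parent = item, 0, parent
        self.children = {}

def build_fptree(transactions, min_support):
    # Step 1: scan the DB once to count single items
    counts = Counter(item for t in transactions for item in t)
    # Step 2: F-list = frequent items in descending frequency order
    # (tie between f and c broken to follow the slide's F-list)
    flist = sorted((i for i in counts if counts[i] >= min_support),
                   key=lambda i: (-counts[i], "fcabmp".index(i)))
    # Step 3: scan again, inserting each ordered frequent itemlist as a branch;
    # shared prefixes are merged and counts accumulated
    root = FPNode(None, None)
    for t in transactions:
        node = root
        for item in (i for i in flist if i in t):
            child = node.children.setdefault(item, FPNode(item, node))
            child.count += 1
            node = child
    return root, flist

root, flist = build_fptree(transactions, 3)
print(flist)                     # ['f', 'c', 'a', 'b', 'm', 'p']
print(root.children["f"].count)  # 4
```

A full implementation would also maintain the header table's node links for traversing all occurrences of an item; they are omitted here for brevity.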
Mining FP-Tree: Divide and Conquer Based on Patterns and Data (min_support = 3)
- Pattern mining can be partitioned according to the current patterns:
  - Patterns containing p → mined from p's conditional database: fcam:2, cb:1
  - Patterns containing m but not p → mined from m's conditional database: fca:2, fcab:1
  - …and so on down the F-list
- An item's conditional database (i.e., the database under the condition that the item exists) consists of the transformed prefix paths of that item in the FP-tree
- Conditional database of each item:
  Item | Conditional database
  c    | f:3
  a    | fc:3
  b    | fca:1, f:1, c:1
  m    | fca:2, fcab:1
  p    | fcam:2, cb:1
Mine Each Conditional Database Recursively (min_support = 3)
- For each conditional database:
  - Mine the single-item patterns
  - Construct its FP-tree and mine it recursively
- p's conditional DB: fcam:2, cb:1 → frequent item: c:3
- m's conditional DB: fca:2, fcab:1 → frequent items f:3, c:3, a:3; m's FP-tree is the single branch {} - f:3 - c:3 - a:3
- b's conditional DB: fca:1, f:1, c:1 → no frequent item (∅)
- Mining m's FP-tree recursively yields cm's, am's, and cam's FP-trees, each again a single branch of f:3 (and c:3 where applicable)
- In fact, for a single-branch FP-tree, all frequent patterns can be generated in one shot:
  m: 3; fm: 3, cm: 3, am: 3; fcm: 3, fam: 3, cam: 3; fcam: 3
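The "one shot" observation can be made concrete: since m's FP-tree is the single path f:3 - c:3 - a:3, every subset of {f, c, a} joined with m is frequent with support 3. A sketch:

```python
from itertools import chain, combinations

branch = ["f", "c", "a"]   # m's conditional FP-tree: one path, every count 3

# All patterns containing m: m itself plus m joined with each subset of the path
patterns = {
    tuple(sorted(set(subset) | {"m"})): 3
    for subset in chain.from_iterable(
        combinations(branch, k) for k in range(len(branch) + 1))
}
print(sorted(patterns))
# 8 patterns, matching the slide: m:3, fm:3, cm:3, am:3, fcm:3, fam:3, cam:3, fcam:3
```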
Mining Frequent Itemsets from Data Streams
- The core difficulty in mining frequent itemsets from data streams is that itemsets that were infrequent in the past may become frequent, and itemsets that were frequent in the past may become infrequent.
- Three main approaches:
  - Approaches that do not distinguish recent items from older ones (landmark windows)
  - Approaches that give more importance to recent transactions (sliding windows or decay factors)
  - Approaches that mine at different time granularities
LossyCounting Algorithm
- A one-pass algorithm for computing frequency counts exceeding a user-specified threshold over data streams.
- The output is approximate, but the error is guaranteed not to exceed a user-specified parameter.
- LossyCounting accepts two user-specified parameters: a support threshold s ∈ (0, 1) and an error parameter ε ∈ (0, 1) such that ε ≪ s.
- At any point in time, LossyCounting can produce a list of item(set)s along with their estimated frequencies.
LossyCounting Algorithm
Let N denote the current length of the stream. The answers produced have the following guarantees:
- All items whose true frequency exceeds s·N are output (no false negatives).
- No item whose true frequency is less than (s − ε)·N is output.
- Estimated frequencies underestimate the true frequencies by at most ε·N.
LossyCounting Algorithm
- Divide the stream into buckets (windows) of width w = ⌈1/ε⌉; the current bucket ID is b = ⌈N/w⌉.
- Maintain a set of entries (e, f, Δ), where e is an item, f is its counted frequency since the entry was inserted, and Δ is the maximum possible undercount of f.
- For each incoming item: if an entry for it exists, increment f; otherwise insert a new entry (e, 1, b − 1).
- At each bucket boundary (whenever N ≡ 0 mod w), delete every entry with f + Δ ≤ b.
- On a query with threshold s, output every entry with f ≥ (s − ε)·N.
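A minimal single-item Lossy Counting sketch under the standard formulation (bucket width w = ⌈1/ε⌉, entries (e, f, Δ), pruning at bucket boundaries; class and method names are ours):

```python
import math

class LossyCounter:
    """Single-item Lossy Counting sketch (per Manku & Motwani's formulation)."""

    def __init__(self, epsilon):
        self.epsilon = epsilon
        self.width = math.ceil(1 / epsilon)   # bucket width w = ceil(1/eps)
        self.n = 0                            # stream length seen so far
        self.entries = {}                     # item -> (f, delta)

    def add(self, item):
        self.n += 1
        bucket = math.ceil(self.n / self.width)   # current bucket ID b
        if item in self.entries:
            f, d = self.entries[item]
            self.entries[item] = (f + 1, d)
        else:
            self.entries[item] = (1, bucket - 1)  # new entry (e, 1, b - 1)
        if self.n % self.width == 0:              # bucket boundary: prune
            self.entries = {e: (f, d) for e, (f, d) in self.entries.items()
                            if f + d > bucket}

    def output(self, s):
        """Items with counted frequency at least (s - eps) * n."""
        return {e: f for e, (f, d) in self.entries.items()
                if f >= (s - self.epsilon) * self.n}

lc = LossyCounter(epsilon=0.1)
for ch in "a" * 60 + "b" * 30 + "c" * 10:   # 100-item stream
    lc.add(ch)
print(lc.output(0.5))   # {'a': 60}: only 'a' exceeds (0.5 - 0.1) * 100 = 40
```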
Frequent Itemsets using LossyCounting
- Depending on the application, LossyCounting may treat a tuple as a single item or as a set of items.
- In the latter case, the input stream is not processed transaction by transaction. Instead, the available main memory is filled with as many transactions as possible, and the resulting batch of transactions is processed together.
Frequent Itemsets using LossyCounting
- Let β denote the number of buckets held in main memory in the current batch.
- For each maintained entry (set, f, Δ): count the occurrences of set in the batch, update f, and delete the entry if f + Δ ≤ b (the current bucket ID).
- For each itemset X with no entry that occurs more than β times in the batch, insert a new entry (X, f, b − β).
- Requiring at least β occurrences within a batch before an itemset is tracked keeps the number of maintained entries small.
Mining Recent Frequent Itemsets
- Chang and Lee (2005) propose the estWin algorithm to maintain frequent itemsets over a sliding window.
- The itemsets generated by estWin are maintained in a prefix-tree structure, D.
- An itemset X in D has three fields: freq(X), err(X), and tid(X):
  - freq(X): the frequency of X in the current window since X was inserted into D
  - err(X): an upper bound on the frequency of X in the current window before X was inserted into D
  - tid(X): the ID of the transaction being processed when X was inserted into D
Mining Recent Frequent Itemsets
- For each incoming transaction Y with ID tid_t, estWin increments the computed frequency of each subset of Y in D.
- Let N be the number of transactions in the window and tid_1 be the ID of the first transaction in the current window.
- An itemset X and all of X's supersets are pruned if X is:
  - old (inserted before the current window began) and not frequent over the whole stream seen so far, or
  - new but not frequent within the current window.
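A sketch of the per-transaction bookkeeping, using a dict as a stand-in for the prefix tree D (names and structure are ours; the paper's exact pruning thresholds are intentionally omitted):

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Entry:
    freq: int   # frequency of X in the current window since X entered D
    err: int    # upper bound on X's frequency before X entered D
    tid: int    # ID of the transaction processed when X entered D

def process_transaction(D, Y, tid):
    """Increment the computed frequency of every subset of Y present in D."""
    items = sorted(Y)
    for k in range(1, len(items) + 1):
        for subset in combinations(items, k):
            if subset in D:
                D[subset].freq += 1

# Hypothetical state: two itemsets already tracked in D
D = {("a",): Entry(2, 0, 1), ("a", "b"): Entry(1, 0, 2)}
process_transaction(D, {"a", "b", "c"}, tid=5)
print(D[("a",)].freq, D[("a", "b")].freq)   # 3 2
```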
The FP-Stream Algorithm
- Designed for stream mining at different time granularities.
- FP-Stream maintains frequent patterns under a tilted-time window framework in order to answer time-sensitive queries.
- The frequent patterns are compressed and stored in a tree structure similar to the FP-tree and updated incrementally with incoming transactions.
The FP-Stream Algorithm
- The frequency of an itemset I over a period T is the number of transactions in T in which I occurs; the support of I is this frequency divided by the total number of transactions observed in T.
- Given a minimum support σ and a maximum error ε, patterns fall into three categories:
  - Frequent patterns: support ≥ σ
  - Subfrequent patterns: ε ≤ support < σ
  - Infrequent patterns: support < ε
The FP-Stream Algorithm
- The FP-stream structure consists of two parts: a global FP-tree held in main memory, and tilted-time windows embedded in this pattern tree.
- Incremental updates can be performed on both parts; they occur when some infrequent patterns become (sub)frequent, or vice versa.
- At any moment, the set of frequent patterns over a period can be obtained from the FP-stream.
Tilted-Time Window
- The design of the tilted-time window rests on the observation that people are usually interested in recent changes at a fine granularity, but in long-term changes at a coarse granularity.
- Example: keep the most recent 4 quarters of an hour, then the last 24 hours, then the last 31 days.
- This model registers only 4 + 24 + 31 = 59 units of time, an acceptable trade-off for lower granularity at distant times.
- For each tilted-time window, a collection of patterns and their frequencies is maintained.
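The 4 + 24 + 31 scheme can be sketched as cascading counters that merge into the next coarser level when a level fills (a simplification of the natural tilted-time window; class and method names are ours):

```python
from collections import deque

class NaturalTiltedWindows:
    """Quarter-hour / hour / day tilted-time windows: 4 + 24 + 31 = 59 counters."""

    def __init__(self):
        self.quarters = deque(maxlen=4)   # most recent 4 quarters of an hour
        self.hours = deque(maxlen=24)     # last 24 hours
        self.days = deque(maxlen=31)      # last 31 days

    def add_quarter(self, count):
        if len(self.quarters) == self.quarters.maxlen:
            # an hour is complete: merge its 4 quarters into one hour counter
            self._roll_hour(sum(self.quarters))
            self.quarters.clear()
        self.quarters.append(count)

    def _roll_hour(self, count):
        if len(self.hours) == self.hours.maxlen:
            # a day is complete: merge its 24 hours into one day counter
            self.days.append(sum(self.hours))   # oldest day falls off after 31
            self.hours.clear()
        self.hours.append(count)

w = NaturalTiltedWindows()
for _ in range(16):          # 16 quarter-hours, one occurrence each
    w.add_quarter(1)
print(list(w.quarters), list(w.hours), list(w.days))
# [1, 1, 1, 1] [4, 4, 4] []  - recent quarters fine-grained, older ones merged
```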
The FP-Stream Algorithm
- A compact tree representation of the pattern collections, called a pattern tree, can be used.
- Each node in the pattern tree represents a pattern (the path from the root to that node), and its frequency is recorded in the node.
- The pattern tree shares a similar structure with the FP-tree; the difference is that it stores patterns instead of transactions.
The FP-Stream Algorithm
- The patterns in adjacent time windows are likely to be very similar, so the tree structures for different tilted-time windows will likely have considerable overlap.
- Embedding the tilted-time window structure into each node therefore saves considerable space: FP-stream uses a single pattern tree in which each node maintains one frequency per tilted-time window.
The FP-Stream Algorithm: Tail Pruning
- Let t_1, …, t_n be the tilted-time windows that group the batches seen so far, and let w_i denote the number of transactions in t_i.
- The goal is to mine all frequent itemsets with support larger than σ over a period T = t_k ∪ t_{k+1} ∪ … ∪ t_{k′}, where 1 ≤ k ≤ k′ ≤ n.
- The size of T, denoted W, is the sum of the sizes of all time windows considered in T.
- It is not possible to store all possible itemsets over all periods, so FP-stream drops the tail (oldest) windows of a pattern's tilted-time table when every window in the tail is individually subfrequent (frequency below σ·w_i) and the tail is cumulatively negligible (total frequency below ε times the total window size).
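The tail-pruning test can be sketched as follows (a simplification, not the paper's exact pseudocode; the σ and ε values are illustrative, and `freqs`/`widths` are assumed to be ordered most recent first, so the tail sits at the end):

```python
def droppable_tail(freqs, widths, sigma=0.05, eps=0.01):
    """Number of oldest windows that can be dropped for one pattern.

    A tail is droppable when each of its windows is individually subfrequent
    (f < sigma * w) and the tail is cumulatively negligible
    (sum of f < eps * sum of w).
    """
    drop = f_acc = w_acc = 0
    for f, w in zip(reversed(freqs), reversed(widths)):   # walk oldest first
        f_acc += f
        w_acc += w
        if f < sigma * w and f_acc < eps * w_acc:
            drop += 1
        else:
            break
    return drop

# 4 windows of 1000 transactions each, most recent first:
# the 3 oldest windows of this pattern are subfrequent and negligible
print(droppable_tail([100, 2, 0, 0], [1000, 1000, 1000, 1000]))   # 3
```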
Possible Reading
- Mining frequent itemsets in a stream
- Mining frequent itemsets over distributed data streams by continuously maintaining a global synopsis
- Mining Frequent Itemsets Over Tuple-evolving Data Streams