Data Analysis in the Internet of Things


Chapter 4. Frequent Patterns in Data Streams (w3.gazi.edu.tr/~suatozdemir)

What Is Pattern Discovery?

What are patterns? Patterns are sets of items, subsequences, or substructures that occur frequently together (or are strongly correlated) in a data set. Patterns represent intrinsic and important properties of datasets. Pattern discovery is the task of uncovering such patterns in massive data sets.

Motivating examples:
- What products were often purchased together?
- What are the subsequent purchases after buying an iPad?
- What code segments likely contain copy-and-paste bugs?
- What word sequences likely form phrases in this corpus?

Pattern Discovery: Why Is It Important?

Finding inherent regularities in a data set is the foundation for many essential data mining tasks:
- association, correlation, and causality analysis;
- mining sequential and structural (e.g., sub-graph) patterns;
- pattern analysis in spatiotemporal, multimedia, time-series, and stream data;
- classification: discriminative pattern-based analysis;
- cluster analysis: pattern-based subspace clustering.

Broad applications include market basket analysis, cross-marketing, catalog design, sales campaign analysis, Web log analysis, and biological sequence analysis.

Basic Concepts: k-itemsets and Their Supports

Itemset: a set of one or more items. A k-itemset has the form X = {x_1, ..., x_k}. Ex. {Beer, Nuts, Diaper} is a 3-itemset.

Tid  Items bought
10   Beer, Nuts, Diaper
20   Beer, Coffee, Diaper
30   Beer, Diaper, Eggs
40   Nuts, Eggs, Milk
50   Nuts, Coffee, Diaper, Eggs, Milk

(Absolute) support (count) of X, sup{X}: the frequency, i.e., the number of occurrences, of itemset X.
Ex. sup{Beer} = 3; sup{Diaper} = 4; sup{Beer, Diaper} = 3; sup{Beer, Eggs} = 1.

(Relative) support, s{X}: the fraction of transactions that contain X (i.e., the probability that a transaction contains X).
Ex. s{Beer} = 3/5 = 60%; s{Diaper} = 4/5 = 80%; s{Beer, Eggs} = 1/5 = 20%.
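To make these definitions concrete, here is a minimal Python sketch that computes absolute and relative support over the 5-transaction database above (the function names are illustrative, not from any library):

```python
# Toy transaction database from the slide: TID -> items bought.
db = {
    10: {"Beer", "Nuts", "Diaper"},
    20: {"Beer", "Coffee", "Diaper"},
    30: {"Beer", "Diaper", "Eggs"},
    40: {"Nuts", "Eggs", "Milk"},
    50: {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"},
}

def abs_support(itemset, db):
    """sup{X}: the number of transactions that contain itemset X."""
    return sum(1 for items in db.values() if itemset <= items)

def rel_support(itemset, db):
    """s{X}: the fraction of transactions that contain X."""
    return abs_support(itemset, db) / len(db)

print(abs_support({"Beer", "Diaper"}, db))  # 3
print(rel_support({"Beer", "Eggs"}, db))    # 0.2, i.e., 20%
```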

Basic Concepts: Frequent Itemsets (Patterns)

An itemset (or pattern) X is frequent if the support of X is no less than a minsup threshold σ. Let σ = 50%. For the 5-transaction dataset above:
- All frequent 1-itemsets: Beer: 3/5 (60%); Nuts: 3/5 (60%); Diaper: 4/5 (80%); Eggs: 3/5 (60%)
- All frequent 2-itemsets: {Beer, Diaper}: 3/5 (60%)
- All frequent 3-itemsets: none

Why do these itemsets form the complete set of frequent k-itemsets (patterns) for any k? Observation: we need an efficient method to mine the complete set of frequent patterns.

From Frequent Itemsets to Association Rules

Compared with itemsets, rules can be more telling. Ex. Diaper → Beer: buying diapers likely leads to buying beer. How strong is this rule? (support, confidence)

Measuring an association rule X → Y (s, c), where both X and Y are itemsets:
- Support, s: the probability that a transaction contains X ∪ Y. Ex. s{Diaper, Beer} = 3/5 = 0.6 (i.e., 60%)
- Confidence, c: the conditional probability that a transaction containing X also contains Y. Calculation: c = sup(X ∪ Y) / sup(X). Ex. c = sup{Diaper, Beer} / sup{Diaper} = 3/4 = 0.75

Note: X ∪ Y denotes the union of the two itemsets, so a transaction counted toward the support must contain both X and Y. (Ex. {Beer} ∪ {Diaper} = {Beer, Diaper}.)

Mining Frequent Itemsets and Association Rules

Association rule mining: given two thresholds, minsup and minconf, find all rules X → Y (s, c) such that s ≥ minsup and c ≥ minconf.

Let minsup = 50%:
- Frequent 1-itemsets: Beer: 3, Nuts: 3, Diaper: 4, Eggs: 3
- Frequent 2-itemsets: {Beer, Diaper}: 3

Let minconf = 50%:
- Beer → Diaper (60%, 100%)
- Diaper → Beer (60%, 75%)
(Q: Are these all the rules?)

Observations: mining association rules and mining frequent patterns are closely related problems, and scalable methods are needed for mining large datasets.
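A brute-force Python sketch of this task on the same toy database (enumerating every candidate itemset is only feasible for such tiny examples; Apriori or FP-growth, discussed below, replace it on real data):

```python
from itertools import combinations

db = [  # the same 5 transactions as above
    {"Beer", "Nuts", "Diaper"},
    {"Beer", "Coffee", "Diaper"},
    {"Beer", "Diaper", "Eggs"},
    {"Nuts", "Eggs", "Milk"},
    {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"},
]

def frequent_itemsets(db, minsup):
    """All itemsets X with relative support s{X} >= minsup."""
    items = sorted(set().union(*db))
    freq = {}
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            s = sum(1 for t in db if set(cand) <= t) / len(db)
            if s >= minsup:
                freq[frozenset(cand)] = s
    return freq

def rules(freq, minconf):
    """All rules X -> Y (s, c) with c = s{X u Y} / s{X} >= minconf."""
    for itemset, s in freq.items():
        for k in range(1, len(itemset)):
            for lhs in map(frozenset, combinations(itemset, k)):
                c = s / freq[lhs]  # subsets of a frequent set are frequent
                if c >= minconf:
                    yield set(lhs), set(itemset - lhs), s, c

freq = frequent_itemsets(db, minsup=0.5)
for lhs, rhs, s, c in rules(freq, minconf=0.5):
    print(lhs, "->", rhs, f"(s={s:.0%}, c={c:.0%})")
# {'Beer'} -> {'Diaper'} (s=60%, c=100%)
# {'Diaper'} -> {'Beer'} (s=60%, c=75%)
```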

Challenge: There Are Too Many Frequent Patterns!

A long pattern contains a combinatorial number of sub-patterns. How many frequent itemsets does the following TDB_1 contain?

TDB_1: T_1 = {a_1, ..., a_50}; T_2 = {a_1, ..., a_100}

Assuming (absolute) minsup = 1, let's have a try:
- 1-itemsets: {a_1}: 2, {a_2}: 2, ..., {a_50}: 2, {a_51}: 1, ..., {a_100}: 1
- 2-itemsets: {a_1, a_2}: 2, ..., {a_1, a_50}: 2, {a_1, a_51}: 1, ..., {a_99, a_100}: 1
- ...
- 99-itemsets: {a_1, a_2, ..., a_99}: 1, ..., {a_2, a_3, ..., a_100}: 1
- 100-itemset: {a_1, a_2, ..., a_100}: 1

Since every non-empty subset of T_2 occurs at least once, the total number of frequent itemsets is C(100,1) + C(100,2) + ... + C(100,100) = 2^100 − 1: far too huge a set for anyone to compute or store!

Example: Construct FP-tree from a Transaction DB

TID  Items in the transaction    Ordered frequent itemlist
100  {f, a, c, d, g, i, m, p}    f, c, a, m, p
200  {a, b, c, f, l, m, o}       f, c, a, b, m
300  {b, f, h, j, o, w}          f, b
400  {b, c, k, s, p}             c, b, p
500  {a, f, c, e, l, p, m, n}    f, c, a, m, p

Let min_support = 3.
1. Scan the DB once and find the single-item frequent patterns: f:4, a:3, c:4, b:3, m:3, p:3.
2. Sort the frequent items in descending frequency order into the f-list: f-c-a-b-m-p.
3. Scan the DB again and construct the FP-tree: the ordered frequent itemlist of each transaction is inserted as a branch, with shared prefixes merged and counts accumulated.

Header table (item: frequency, each entry heading the linked list of that item's nodes): f:4, c:4, a:3, b:3, m:3, p:3.

After inserting the 1st frequent itemlist (f, c, a, m, p):

{}
└─ f:1
   └─ c:1
      └─ a:1
         └─ m:1
            └─ p:1

Example: Construct FP-tree from a Transaction DB (cont.)

After inserting the 2nd frequent itemlist (f, c, a, b, m), the shared prefix f-c-a is reused and its counts are incremented:

{}
└─ f:2
   └─ c:2
      └─ a:2
         ├─ m:1
         │  └─ p:1
         └─ b:1
            └─ m:1

Example: Construct FP-tree from a Transaction DB (cont.)

After inserting all five frequent itemlists:

{}
├─ f:4
│  ├─ c:3
│  │  └─ a:3
│  │     ├─ m:2
│  │     │  └─ p:2
│  │     └─ b:1
│  │        └─ m:1
│  └─ b:1
└─ c:1
   └─ b:1
      └─ p:1
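A minimal Python sketch of the two-scan construction just described (FPNode, build_fptree, and the tie-breaking choice are illustrative, not a library API):

```python
from collections import Counter, defaultdict

class FPNode:
    def __init__(self, item, parent):
        self.item, self.parent, self.count = item, parent, 0
        self.children = {}                  # item -> child FPNode

def build_fptree(transactions, min_support):
    # Scan 1: count items; keep frequent ones in descending-count order.
    counts = Counter(i for t in transactions for i in t)
    flist = [i for i, c in counts.most_common() if c >= min_support]
    rank = {i: r for r, i in enumerate(flist)}
    # Scan 2: insert each ordered frequent itemlist as a branch,
    # merging shared prefixes and accumulating counts.
    root, header = FPNode(None, None), defaultdict(list)
    for t in transactions:
        node = root
        for item in sorted((i for i in t if i in rank), key=rank.get):
            child = node.children.get(item)
            if child is None:
                child = node.children[item] = FPNode(item, node)
                header[item].append(child)  # node-link for the header table
            child.count += 1
            node = child
    return root, header, flist

# Transactions as ordered strings of single-letter items.
db = ["facdgimp", "abcflmo", "bfhjow", "bcksp", "afcelpmn"]
root, header, flist = build_fptree(db, 3)
print(flist)  # ['f', 'c', 'a', 'm', 'p', 'b']: ties broken by first
              # occurrence; the slide's f-c-a-b-m-p is an equally valid order
print(root.children["f"].count)  # 4
```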

Mining FP-Tree: Divide and Conquer Based on Patterns and Data

Pattern mining can be partitioned according to the current patterns (min_support = 3):
- patterns containing p → p's conditional database: fcam:2, cb:1;
- patterns containing m but not p → m's conditional database: fca:2, fcab:1;
- and so on for each remaining item.

An item's conditional database (i.e., the database under the condition that the item exists) consists of the transformed prefix paths of that item in the FP-tree:

Item  Conditional database
c     f:3
a     fc:3
b     fca:1, f:1, c:1
m     fca:2, fcab:1
p     fcam:2, cb:1

Mine Each Conditional Database Recursively

min_support = 3. Conditional databases (from the table above): p: fcam:2, cb:1; m: fca:2, fcab:1; b: fca:1, f:1, c:1.

For each conditional database: mine its single-item patterns, then construct its FP-tree and mine it recursively.
- p's conditional DB (fcam:2, cb:1) yields c:3, so {c, p} is frequent.
- b's conditional DB (fca:1, f:1, c:1) yields no frequent item (∅).
- m's conditional DB (fca:2, fcab:1) yields fca:3; m's FP-tree is the single branch f:3 → c:3 → a:3, and recursing produces cm's, am's, and cam's FP-trees (each again a single f:3 branch or its prefix).

In fact, for a single-branch FP-tree, all the frequent patterns can be generated in one shot: m:3; fm:3, cm:3, am:3; fcm:3, fam:3, cam:3; fcam:3.
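The divide-and-conquer recursion can be sketched directly over conditional databases represented as (prefix itemlist, count) pairs; fpgrowth below is an illustrative sketch, not a library function:

```python
from collections import Counter

def fpgrowth(cond_db, suffix, min_support, results):
    """Mine one conditional database: find its frequent items, then
    recurse into each item's own conditional (prefix-path) database."""
    counts = Counter()
    for items, cnt in cond_db:
        for i in set(items):
            counts[i] += cnt
    for item, cnt in counts.items():
        if cnt < min_support:
            continue
        pattern = suffix | {item}
        results[frozenset(pattern)] = cnt
        # item's conditional database: the prefixes preceding item.
        new_db = [(items[:items.index(item)], c)
                  for items, c in cond_db
                  if item in items and items.index(item) > 0]
        fpgrowth(new_db, pattern, min_support, results)

# Ordered frequent itemlists of the five transactions (f-list order):
db = [(list("fcamp"), 1), (list("fcabm"), 1), (list("fb"), 1),
      (list("cbp"), 1), (list("fcamp"), 1)]
results = {}
fpgrowth(db, frozenset(), 3, results)
print(results[frozenset("fcam")])  # 3
print(frozenset("cp") in results)  # True: {c, p}: 3
```

When a conditional database reduces to a single branch, this recursion simply enumerates all sub-patterns of that branch, which is what the "one shot" remark above refers to.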

Mining Frequent Itemsets from Data Streams

The most difficult problem in mining frequent itemsets from data streams is that itemsets that were infrequent in the past might become frequent, and itemsets that were frequent in the past might become infrequent. There are three main approaches:
- approaches that do not distinguish recent items from older ones (landmark windows);
- approaches that give more importance to recent transactions (sliding windows or decay factors);
- approaches that mine at different time granularities.

LossyCounting algorithm

A one-pass algorithm for computing frequency counts exceeding a user-specified threshold over data streams. Although the output is approximate, the error is guaranteed not to exceed a user-specified parameter. LossyCounting accepts two user-specified parameters: a support threshold s ∈ [0, 1] and an error parameter ε ∈ [0, 1] such that ε ≪ s. At any point in time, LossyCounting can produce a list of item(set)s along with their estimated frequencies.

Let N denote the current length of the stream. The answers produced have the following guarantees (the standard LossyCounting guarantees):
- every item whose true frequency exceeds s·N is output, so there are no false negatives;
- no item whose true frequency is less than (s − ε)·N is output;
- each estimated frequency undercounts the true frequency by at most ε·N.

The algorithm divides the stream into windows (buckets) of width w = ⌈1/ε⌉ and maintains an entry (item, f, Δ) for each tracked item, where f is the count accumulated since the item was inserted and Δ is an upper bound on the count it may have had before insertion. At every window boundary, entries with f + Δ ≤ b, where b is the current window index, are deleted.
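A Python sketch of the single-item case, following Manku and Motwani's description (class and method names are illustrative):

```python
from math import ceil

class LossyCounting:
    """Approximate frequency counting; undercount is at most epsilon * N."""

    def __init__(self, epsilon):
        self.epsilon = epsilon
        self.w = ceil(1 / epsilon)   # bucket (window) width
        self.n = 0                   # stream length N so far
        self.entries = {}            # item -> [f, delta]

    def add(self, item):
        self.n += 1
        b = ceil(self.n / self.w)    # current bucket id
        if item in self.entries:
            self.entries[item][0] += 1
        else:
            # delta = b - 1: maximum count the item may already have had
            self.entries[item] = [1, b - 1]
        if self.n % self.w == 0:     # bucket boundary: prune
            self.entries = {i: fd for i, fd in self.entries.items()
                            if fd[0] + fd[1] > b}

    def output(self, s):
        """Report items with counted frequency >= (s - epsilon) * N."""
        return {i: fd[0] for i, fd in self.entries.items()
                if fd[0] >= (s - self.epsilon) * self.n}
```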

Frequent Itemsets using LossyCounting

Depending on the application, LossyCounting may treat a tuple as a single item or as a set of items. In the latter case the input stream is not processed transaction by transaction: instead, the available main memory is filled with as many transactions as possible, and such a batch of transactions is then processed together.

Let β denote the number of buckets in memory. In outline (following Manku and Motwani's batched variant): each existing entry (set, f, Δ) is updated with the set's count in the batch and deleted if f + Δ ≤ b, where b is the current bucket id; a new itemset is inserted only if it occurs at least β times in the batch, with Δ = b − β.
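A hedged sketch of this batched itemset update (the entry layout and the Δ = b − β rule follow the outline above; the subset enumeration is brute force and only viable for small transactions, where a real implementation would prune candidates):

```python
from collections import Counter
from itertools import chain, combinations

def subsets(t):
    """All non-empty subsets of a small transaction (brute force)."""
    return chain.from_iterable(combinations(sorted(t), k)
                               for k in range(1, len(t) + 1))

def process_batch(entries, batch, beta, b):
    """entries: itemset -> [f, delta]; batch: the transactions currently
    held in main memory; beta: number of buckets the batch spans;
    b: current bucket id."""
    counts = Counter(frozenset(s) for t in batch for s in subsets(t))
    # Update existing entries, pruning those that cannot be frequent.
    for itemset in list(entries):
        f, delta = entries[itemset]
        f += counts.pop(itemset, 0)
        if f + delta <= b:
            del entries[itemset]
        else:
            entries[itemset] = [f, delta]
    # Insert itemsets that occur at least beta times in this batch.
    for itemset, c in counts.items():
        if c >= beta:
            entries[itemset] = [c, b - beta]
```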

Mining Recent Frequent Itemsets

Chang and Lee (2005) propose the estWin algorithm to maintain frequent itemsets over a sliding window. The itemsets generated by estWin are maintained in a prefix-tree structure, D. An itemset X in D has three fields:
- freq(X): the frequency of X in the current window since X was inserted into D;
- err(X): an upper bound on the frequency of X in the current window before X was inserted into D;
- tid(X): the ID of the transaction being processed when X was inserted into D.

For each incoming transaction Y with ID tid_t, estWin increments the computed frequency of each subset of Y in D. Let N be the number of transactions in the window and tid_1 the ID of the first transaction in the current window. An itemset X and all of X's supersets are pruned if X is either:
- old and not frequent over the whole window: tid(X) ≤ tid_1 and freq(X) < ⌈ε·N⌉; or
- new but not frequent since its insertion: tid(X) > tid_1 and freq(X) < ⌈ε·(N − (tid(X) − tid_1))⌉.
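A rough sketch of this bookkeeping, using the pruning thresholds as reconstructed above (they should be checked against Chang and Lee, 2005, before reuse; insertion of new itemsets and window sliding are omitted):

```python
from math import ceil

def process_transaction(D, Y, tid_t, tid_1, N, epsilon):
    """D maps each monitored itemset X to a dict with keys 'freq',
    'err', and 'tid', the three fields described above."""
    # Increment the counted frequency of every monitored subset of Y.
    for X, e in D.items():
        if X <= Y:
            e["freq"] += 1
    # Prune itemsets (their supersets vanish with them, since supersets
    # are only maintained while X itself is monitored).
    for X, e in list(D.items()):
        if e["tid"] <= tid_1:
            limit = ceil(epsilon * N)                         # old itemset
        else:
            limit = ceil(epsilon * (N - (e["tid"] - tid_1)))  # new itemset
        if e["freq"] < limit:
            del D[X]
```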

The FP-Stream Algorithm

FP-Stream targets stream mining at different time granularities. It was designed to maintain frequent patterns under a tilted-time window framework in order to answer time-sensitive queries. The frequent patterns are compressed and stored in a tree structure similar to an FP-tree and updated incrementally with incoming transactions.

FP-Stream distinguishes three categories of patterns: frequent, subfrequent, and infrequent. The frequency of an itemset I over a period of time T is the number of transactions in T in which I occurs; the support of I is this frequency divided by the total number of transactions observed in T. Given a support threshold σ and an error ε, a pattern is frequent if its support is at least σ, subfrequent if its support lies between ε and σ, and infrequent otherwise.

The FP-stream structure consists of two parts: a global FP-tree held in main memory, and tilted-time windows embedded in this pattern tree. Incremental updates can be performed on both parts, and they occur when some infrequent patterns become (sub)frequent, or vice versa. At any moment, the set of frequent patterns over a period can be obtained from the FP-stream.

Tilted-time window

The design of the tilted-time window is based on the fact that people are often interested in recent changes at a fine granularity but in long-term changes at a coarse granularity: e.g., the most recent 4 quarters of an hour, then the last 24 hours, then the last 31 days. This model registers only 4 + 24 + 31 = 59 units of time, an acceptable trade-off for lower granularity at distant times. For each tilted-time window, a collection of patterns and their frequencies can be maintained.
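One simple way to realize this natural tilted-time window in Python (an illustrative sketch; counts migrate to the next coarser granularity at each boundary, and the FP-Stream paper also describes a logarithmic variant):

```python
class TiltedTimeWindow:
    """Natural tilted-time window: up to 4 quarter-hour counts roll up
    into up to 24 hourly counts, which roll up into up to 31 daily
    counts -- at most 4 + 24 + 31 = 59 counters in total."""

    def __init__(self):
        self.quarters, self.hours, self.days = [], [], []  # newest first

    def add_quarter(self, count):
        self.quarters.insert(0, count)
        if len(self.quarters) == 4:      # an hour has completed
            self._add_hour(sum(self.quarters))
            self.quarters = []

    def _add_hour(self, count):
        self.hours.insert(0, count)
        if len(self.hours) == 24:        # a day has completed
            self.days.insert(0, sum(self.hours))
            self.hours = []
            if len(self.days) > 31:
                self.days.pop()          # forget counts older than 31 days
```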

A compact tree representation of the pattern collections, called a pattern tree, can be used. Each node in the pattern tree represents a pattern (read from the root to that node), and the pattern's frequency is recorded in the node. This tree has a structure similar to an FP-tree; the difference is that it stores patterns instead of transactions.

The patterns in adjacent time windows will likely be very similar, so the tree structures for different tilted-time windows will likely have considerable overlap. Embedding the tilted-time window structure into each node therefore saves considerable space: FP-Stream uses a single pattern tree in which each node maintains one frequency per tilted-time window.

The FP-Stream Algorithm: Tail Pruning

Let t_1, ..., t_n be the tilted-time windows that group the batches seen so far, and denote the number of transactions in t_i by w_i. The goal is to mine all frequent itemsets with support larger than σ over a period T = t_k ∪ t_{k+1} ∪ ... ∪ t_{k'}, where 1 ≤ k ≤ k' ≤ n. The size of T, denoted W, is the sum of the sizes of all time windows considered in T. It is not possible to store all possible itemsets over all periods, so FP-stream drops the tail of an itemset I's frequency records, f_I(t_m), ..., f_I(t_n), when for every i with m ≤ i ≤ n both f_I(t_i) < σ·w_i and Σ_{j=m}^{i} f_I(t_j) < ε·Σ_{j=m}^{i} w_j hold.

Possible Reading
- Mining frequent itemsets in a stream
- Mining frequent itemsets over distributed data streams by continuously maintaining a global synopsis
- Mining Frequent Itemsets Over Tuple-evolving Data Streams