Efficient Remining of Generalized Multi-supported Association Rules under Support Update


WEN-YANG LIN and MING-CHENG TSENG
Dept. of Information Management, Institute of Information Engineering, I-Shou University
1, Section 1, Hsueh-Cheng Rd., Ta-Hsu Hsiang, Kaohsiung 84041, TAIWAN

Abstract: Mining generalized association rules among items in the presence of a taxonomy and with non-uniform minimum supports has been recognized as an important model in data mining. In our previous work, we investigated this problem and proposed two algorithms, MMS_Cumulate and MMS_Stratify. In real applications, however, discovering interesting association rules is an iterative process: analysts must repeatedly adjust the minimum support and/or minimum confidence constraints to discover the truly informative rules. Reducing the response time of each remining pass thus becomes a crucial issue. In this paper, we examine the problem of maintaining the discovered multi-supported generalized association rules when the multiple minimum support constraint is updated. Empirical evaluation showed that the proposed RM_MMS algorithm is very efficient and has good linear scale-up characteristics.

Key-Words: Data remining, generalized association rules, multiple minimum supports, taxonomy.

1 Introduction
Mining association rules from a large database of business data, such as transaction records, has been a popular topic within the area of data mining [1, 2, 6]. The problem was originally motivated by an application known as market basket analysis, which finds correlations among items purchased by customers. An association rule is an expression of the form X ⇒ Y, where X and Y are sets of items. Such a rule reveals that transactions in the database containing the items in X tend to also contain the items in Y, and the probability, measured as the fraction of transactions containing X that also contain Y, is called the confidence of the rule.
The support of the rule is the fraction of the transactions that contain all items in both X and Y. For an association rule to be valid, it should satisfy a user-specified minimum support, called ms, and a minimum confidence, called minconf. The problem of mining association rules is to discover all association rules that satisfy ms and minconf. For example, the association rule Desktop ⇒ Ink-jet (sup = 30%, conf = 60%) says that 30% (support) of customers purchase both a Desktop PC and an Ink-jet printer together, and 60% (confidence) of the customers who purchase a Desktop PC also purchase an Ink-jet printer. Such information is useful in many aspects of market management, such as store layout planning, target marketing, and understanding customers' purchasing behavior.

In many applications, there are taxonomies (hierarchies), explicit or implicit, over the items. In some applications, it may be more useful to find associations at different levels of the taxonomy than only at the primitive concept level [3, 7]. For example, consider Figure 1, the taxonomy of items from which the previous association rule was derived. It may happen that the rule Desktop ⇒ Ink-jet (sup = 30%, conf = 60%) does not hold when the minimum support is set to 40%, while the more general rule PC ⇒ Printer is valid.

Fig. 1 An example of taxonomy T: Printer generalizes Non-impact and Dot-matrix printers; Non-impact generalizes Laser and Ink-jet printers; PC generalizes Desktop and Notebook; Scanner is a primitive item.

In our previous work, we investigated the problem of mining generalized association rules across different levels of a taxonomy with multiple minimum supports [8]. We proposed two efficient algorithms, MMS_Cumulate and MMS_Stratify, which not only can discover associations that span different hierarchy levels but also have a high potential to produce rare but informative item rules. In real applications, however, discovering interesting association rules is an iterative process.
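The support and confidence measures defined above can be computed directly from a transaction list. The sketch below is our own illustration, not the paper's code; the ten-transaction toy list is invented so that the Desktop ⇒ Ink-jet rule comes out at exactly sup = 30%, conf = 60%:

```python
def support(itemset, transactions):
    """Fraction of transactions that contain every item of `itemset`."""
    items = set(itemset)
    return sum(items <= set(t) for t in transactions) / len(transactions)

def confidence(X, Y, transactions):
    """conf(X => Y) = sup(X union Y) / sup(X)."""
    return support(set(X) | set(Y), transactions) / support(X, transactions)

# Invented toy database: 3 transactions with both items, 2 with Desktop
# only, 5 with neither -> sup(Desktop, Ink-jet) = 30%, sup(Desktop) = 50%.
transactions = ([{"Desktop", "Ink-jet"}] * 3
                + [{"Desktop", "Scanner"}] * 2
                + [{"Scanner"}] * 5)
```

With this data, `confidence({"Desktop"}, {"Ink-jet"}, transactions)` evaluates to 0.6, matching the rule quoted in the text.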
The analysts need to repeatedly adjust the minimum support and/or minimum confidence constraints to discover the truly informative rules. Reducing the response time of each remining pass thus becomes a crucial issue. One way to deal with this issue is to apply the current mining approach and re-mine the database from scratch, which, however, has the following disadvantages: 1. It is not cost-effective, because the previously discovered frequent itemsets are not utilized. 2. It is not acceptable in general, since generating frequent itemsets is time-consuming. To be more realistic and cost-effective, it is better to run an association mining algorithm once to generate the initial association rules and, when an update to the multiple minimum supports occurs, apply an updating method to rebuild the discovered rules. The challenge lies in deploying an efficient updating algorithm to facilitate the whole mining process. This problem is nontrivial, because an update may invalidate some of the discovered association rules and turn previously weak rules into strong ones.

In this paper, we take the updating approach and propose an algorithm called RM_MMS (ReMining under Multiple Minimum Supports update). By utilizing the discovered frequent itemsets, our algorithm can significantly reduce the number of candidate itemsets and the amount of database rescanning. Empirical evaluation showed that our algorithm is very efficient and has linear scalability.

The remainder of this paper is organized as follows. A review of related work is given in Section 2. The problem statement is formalized in Section 3. In Section 4, we explain how to remine generalized association rules under a multiple minimum supports update and propose an algorithm. The evaluation of the proposed algorithm is described in Section 5. Finally, our conclusion is stated in Section 6.

2 Related Work
The problem of mining association rules in the presence of taxonomy information was first addressed, independently, in [4, 7].
The work in [7] aims at finding associations among items at any level of the taxonomy under the ms and minconf constraints. The work in [4] is primarily devoted to mining associations level by level in a fixed hierarchy, but it generalizes the uniform minimum support constraint to a form of level-wise assignment. Another association rule model with multiple minimum supports was proposed in [5]. Their method allows users to specify a different minimum support for each item and can find rules involving both frequent and rare items. However, their model considers no taxonomy at all, and hence fails to find generalized association rules.

3 Problem Formulation
3.1 Mining generalized association rules with multiple minimum supports
Let I = {i_1, i_2, ..., i_m} be a set of items and DB = {t_1, t_2, ..., t_n} be a set of transactions, where each transaction t_i = ⟨tid, A⟩ has a unique identifier tid and a set of items A (A ⊆ I). We assume that the taxonomy of items, T, is available and is denoted as a directed acyclic graph on I ∪ J, where J = {j_1, j_2, ..., j_p} represents the set of generalized items derived from I. An edge in T denotes an is-a relationship; that is, if there is an edge from j to i, we call j a parent (generalization) of i and i a child of j. For example, in Figure 1, I = {Laser printer, Ink-jet printer, Dot matrix printer, Desktop PC, Notebook, Scanner} and J = {Non-impact printer, Printer, Personal computer}.

Definition 1 Given a transaction t = ⟨tid, A⟩, we say an itemset B is in t if every item in B is in A or is an ancestor of some item in A. An itemset B has support s, denoted as s = sup(B), in the transaction set DB if s% of the transactions in DB contain B.

Definition 2 Given a set of transactions DB and a taxonomy T, a generalized association rule is an implication of the form A ⇒ B, where A, B ⊆ I ∪ J, A ∩ B = ∅, and no item in B is an ancestor of any item in A. The support of this rule, sup(A ⇒ B), is equal to the support of A ∪ B.
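Definition 1's containment test amounts to extending each transaction with the ancestors of its items before checking set inclusion. A minimal sketch, with the taxonomy of Fig. 1 encoded as a child-to-parent dictionary of our own devising:

```python
# Child -> parent edges of the Fig. 1 taxonomy (our own encoding).
TAXONOMY = {
    "Laser": "Non-impact", "Ink-jet": "Non-impact",
    "Non-impact": "Printer", "Dot matrix": "Printer",
    "Desktop": "PC", "Notebook": "PC",
}

def ancestors(item):
    """All generalizations of `item` under the is-a taxonomy T."""
    result = set()
    while item in TAXONOMY:
        item = TAXONOMY[item]
        result.add(item)
    return result

def contains(transaction_items, itemset):
    """Definition 1: True iff every item of `itemset` is purchased in the
    transaction or is an ancestor of some purchased item."""
    extended = set(transaction_items)
    for a in transaction_items:
        extended |= ancestors(a)
    return set(itemset) <= extended
```

For instance, a transaction {Notebook, Laser} contains the itemset {Notebook, Non-impact}, since Non-impact is an ancestor of Laser.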
The confidence of the rule, conf(A ⇒ B), is the ratio of sup(A ∪ B) to sup(A). The condition in Definition 2 that no item in B is an ancestor of any item in A is essential; otherwise, a rule of the form a ⇒ ancestor(a) always has 100% confidence and is trivial.

Definition 3 Let ms(a) denote the minimum support of an item a in I ∪ J. An itemset A = {a_1, a_2, ..., a_k}, where a_i ∈ I ∪ J, is frequent if the support of A is no less than the lowest minimum support of the items in A, i.e., sup(A) ≥ min_{a_i ∈ A} ms(a_i).

Definition 4 A multi-supported, generalized association rule A ⇒ B is strong if sup(A ∪ B) ≥ min_{a_i ∈ A ∪ B} ms(a_i) and conf(A ⇒ B) ≥ minconf.

Definition 5 Given a set of transactions DB, a taxonomy T, the user-specified minimum supports for all items in T, {ms(a_1), ms(a_2), ..., ms(a_n)}, and minconf, the problem of mining multi-supported,

generalized association rules is to find all association rules that are strong.

Example 1 Suppose the shopping transaction database DB in Table 1 consists of items I = {Laser printer, Ink-jet printer, Dot matrix printer, Desktop PC, Notebook, Scanner} with the taxonomy T shown in Figure 1. Let the minimum confidence (minconf) be 60% and the minimum support (ms) associated with each item in the taxonomy be as follows: ms(Printer) = 80%, ms(Non-impact) = 65%, ms(Laser) = 25%, ms(Dot matrix) = 70%, ms(Ink-jet) = 60%, ms(PC) = 35%, ms(Desktop) = 25%, ms(Notebook) = 25%, ms(Scanner) = 15%. The generalized association rule PC, Laser ⇒ Dot matrix (sup = 16.7%, conf = 50%) fails because its support is less than min{ms(PC), ms(Laser), ms(Dot matrix)} = 25%. But another rule, PC ⇒ Laser (sup = 33.3%, conf = 66.7%), holds because both its support and its confidence are no less than min{ms(PC), ms(Laser)} = 25% and minconf, respectively. Table 2 lists the frequent itemsets and the resulting strong rules for this example.

Table 1 A transaction database (DB).
TID  Items Purchased
11   Notebook, Laser printer
12   Scanner, Dot matrix printer
13   Dot matrix printer, Ink-jet printer
14   Notebook, Dot matrix printer, Laser printer
15   Scanner
16   Desktop computer

Table 2 Frequent itemsets and resulting association rules for Example 1.
Frequent Itemsets        min ms (%)  sup (%)
{Scanner}                15          33.3
{Notebook}               25          33.3
{Laser}                  25          33.3
{PC}                     35          50.0
{Scanner, Printer}       15          16.7
{Scanner, Dot matrix}    15          16.7
{Laser, PC}              25          33.3
{Notebook, Printer}      25          33.3
{Notebook, Non-impact}   25          33.3
{Notebook, Laser}        25          33.3
Rules
PC ⇒ Laser (sup = 33.3%, conf = 66.7%)
Laser ⇒ PC (sup = 33.3%, conf = 100%)
Notebook ⇒ Printer (sup = 33.3%, conf = 100%)
Notebook ⇒ Non-impact (sup = 33.3%, conf = 100%)
Notebook ⇒ Laser (sup = 33.3%, conf = 100%)
Laser ⇒ Notebook (sup = 33.3%, conf = 100%)

As noted in [1], the work of association rule mining can be decomposed into two phases: 1.
Frequent itemset generation: find all itemsets whose supports are no less than the ms. 2. Rule construction: from the frequent itemsets, generate all association rules having confidence no less than minconf. Since the second phase is straightforward and less expensive, we concentrate only on the first phase of finding all frequent itemsets.

3.2 Remining generalized multi-supported association rules under support update
As stated in the previous section, analysts need to repeatedly adjust the minimum support constraint to discover the truly informative rules. This implies that when the minimum support constraint is adjusted, some previously discovered associations may become invalid and some undiscovered associations should be generated; that is, the discovered association rules must be updated to reflect the new setting. As with association mining itself, this problem can be reduced to the problem of updating the frequent itemsets.

Example 2 Consider Example 1 again. The ms of each item is updated as follows: ms(Printer) = 60%, ms(Non-impact) = 55%, ms(Laser) = 10%, ms(Dot matrix) = 50%, ms(Ink-jet) = 20%, ms(PC) = 55%, ms(Desktop) = 25%, ms(Notebook) = 35%, ms(Scanner) = 15%. Table 3 lists the frequent itemsets and the resulting strong rules.

Table 3 Frequent itemsets and resulting association rules for Example 2.
Frequent Itemsets              min ms (%)  sup (%)
{Laser}                        10          33.3
{Scanner}                      15          33.3
{Dot matrix}                   50          50.0
{Printer}                      60          66.7
{Laser, Notebook}              10          33.3
{Laser, Dot matrix}            10          16.7
{Laser, PC}                    10          33.3
{Scanner, Printer}             15          16.7
{Scanner, Dot matrix}          15          16.7
{Laser, Notebook, Dot matrix}  10          16.7
{Laser, Dot matrix, PC}        10          16.7
Rules
Laser ⇒ PC (sup = 33.3%, conf = 100%)
Laser ⇒ Notebook (sup = 33.3%, conf = 100%)
Laser, Dot matrix ⇒ PC (sup = 16.7%, conf = 100%)
Laser, Dot matrix ⇒ Notebook (sup = 16.7%, conf = 100%)
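The tables above can be cross-checked mechanically. The sketch below is our own illustration, not the authors' implementation: it applies Definition 3 to the Table 1 transactions, generalized through the Fig. 1 taxonomy, under the updated ms values of Example 2:

```python
# Updated minimum supports (%) from Example 2.
MS = {"Printer": 60, "Non-impact": 55, "Laser": 10, "Dot matrix": 50,
      "Ink-jet": 20, "PC": 55, "Desktop": 25, "Notebook": 35, "Scanner": 15}

# Child -> parent edges of the Fig. 1 taxonomy (our own encoding).
PARENT = {"Laser": "Non-impact", "Ink-jet": "Non-impact",
          "Non-impact": "Printer", "Dot matrix": "Printer",
          "Desktop": "PC", "Notebook": "PC"}

# The six transactions of Table 1.
DB = [{"Notebook", "Laser"}, {"Scanner", "Dot matrix"},
      {"Dot matrix", "Ink-jet"}, {"Notebook", "Dot matrix", "Laser"},
      {"Scanner"}, {"Desktop"}]

def extend(t):
    """Add every ancestor of each purchased item (Definition 1)."""
    out = set(t)
    for i in t:
        while i in PARENT:
            i = PARENT[i]
            out.add(i)
    return out

def sup(itemset, db):
    """Support of `itemset` in percent, over the extended transactions."""
    return 100 * sum(set(itemset) <= extend(t) for t in db) / len(db)

def frequent(itemset, db):
    """Definition 3: sup(A) >= min of the items' minimum supports."""
    return sup(itemset, db) >= min(MS[a] for a in itemset)
```

For example, `frequent({"Laser", "Dot matrix"}, DB)` is True (sup 16.7% against a threshold of 10%), while `frequent({"Notebook"}, DB)` is False (sup 33.3% against 35%), matching the rows added to and dropped from Table 3.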

Comparing Table 3 with Table 2, we observe that four frequent itemsets of Table 2, {Notebook}, {PC}, {Notebook, Printer}, and {Notebook, Non-impact}, are discarded, while five frequent itemsets, {Dot matrix}, {Printer}, {Laser, Dot matrix}, {Laser, Notebook, Dot matrix}, and {Laser, Dot matrix, PC}, are added in Table 3.

4 The Proposed Method
4.1 Algorithm basics
The primary challenge in devising an effective association rule remining algorithm is how to reuse the discovered frequent itemsets and avoid re-scanning the database. Let a k-itemset denote an itemset with k items. The basic process of our generalized association rule remining algorithm under a multiple minimum supports update follows the level-wise approach adopted by most Apriori-like algorithms. However, the well-known Apriori pruning technique based on the concept of downward closure does not work for multiple support specifications. For example, consider four items a, b, c, and d with minimum supports ms(a) = 15%, ms(b) = 20%, ms(c) = 4%, and ms(d) = 6%. Clearly, a 2-itemset {a, b} with 10% support is discarded because 10% < min{ms(a), ms(b)}. Under downward closure, the 3-itemsets {a, b, c} and {a, b, d} would then be pruned, even though their supports may be no less than ms(c) and ms(d), respectively. To solve this problem, in our previous work we adopted the sorted closure property [5] for mining generalized association rules with multiple minimum supports. Hereafter, to distinguish it from a traditional itemset, a sorted k-itemset is denoted ⟨a_1, a_2, ..., a_k⟩.

Lemma 1 If a sorted k-itemset ⟨a_1, a_2, ..., a_k⟩, for k ≥ 2 and ms(a_1) ≤ ms(a_2) ≤ ... ≤ ms(a_k), is frequent, then all of its sorted subsets with k − 1 items are frequent, except the subset ⟨a_2, a_3, ..., a_k⟩.

Lemma 2 For k = 2, the procedure apriori-gen(L_1) fails to generate all candidate 2-itemsets in C_2. For example, consider a sorted candidate 2-itemset ⟨a, b⟩.
It is easy to see that, to generate this itemset from L_1, both items a and b must be included in L_1; that is, each must occur more frequently than its own minimum support, ms(a) and ms(b) respectively. Clearly, in the case ms(a) ≤ sup(b) < ms(b), apriori-gen fails to generate ⟨a, b⟩ in C_2 even if sup(⟨a, b⟩) ≥ ms(a).

Lemma 3 For k ≥ 3, any k-itemset A = ⟨a_1, a_2, ..., a_k⟩ generated by procedure apriori-gen(L_{k−1}) can be pruned if there exists one (k − 1)-subset of A, say ⟨a_{i1}, a_{i2}, ..., a_{i(k−1)}⟩, such that ⟨a_{i1}, a_{i2}, ..., a_{i(k−1)}⟩ ∉ L_{k−1} and a_{i1} = a_1 or ms(a_{i1}) = ms(a_{i2}).

For more details, please refer to [8].

4.2 Algorithm RM_MMS
The basic process of the RM_MMS algorithm is as follows. First, count all 1-itemsets that are infrequent with respect to the original multiple minimum support (ms) setting. Second, create the frequent 1-itemsets L′_1 according to the new multiple minimum support (ms′) setting. Next, create the frontier set F and use it to generate the candidate 2-itemsets C′_2. Then, generate the frequent 2-itemsets L′_2 by scanning part of C′_2 against DB. Finally, for k ≥ 3, repeatedly generate candidate k-itemsets C′_k from L′_{k−1} and create frequent k-itemsets L′_k until no more frequent itemsets are found. In each iteration k of C′_k generation, the additional pruning techniques with the enhancements applied in [8] are performed.

Lemma 4 An itemset is frequent with respect to ms′ if it is frequent with respect to ms and ms′ ≤ ms.

Lemma 5 An itemset is infrequent with respect to ms′ if it is infrequent with respect to ms and ms′ ≥ ms.

Lemma 6 An itemset is of uncertain frequency with respect to ms′ if it is frequent with respect to ms and ms′ > ms, or if it is infrequent with respect to ms and ms′ < ms.

For itemsets satisfying Lemma 4, there is no need to rescan the database DB to determine whether they are frequent.
In Lemma 6, one case is that the itemsets which are frequent with respect to ms and have ms′ > ms do not require a rescan of the database DB, but their stored supports must be re-checked against ms′ to determine whether they remain frequent; the other case is that the itemsets which are infrequent with respect to ms and have ms′ < ms require a rescan of the database DB.

The main steps of the RM_MMS algorithm are as follows:
Inputs: (1) DB: the database; (2) the original multiple minimum support (ms) setting; (3) the new multiple minimum support (ms′) setting; (4) T: the item taxonomy; (5) L = ∪_k L_k: the set of frequent itemsets with respect to ms.
Output: L′ = ∪_k L′_k: the set of frequent itemsets corresponding to ms′.
Steps:

1. Divide the set of 1-itemsets C_1 into two parts: X_1, consisting of the items in L_1, and Y_1, containing those not in L_1.
2. Add the generalized items in T into DB, yielding DB*.
3. Count the supports of Y_1 in DB*.
4. Sort C_1 in increasing order of ms′ and create the frontier set F and L′_1.
5. Generate the set of candidate 2-itemsets C′_2 from F.
6. Divide C′_2 into two parts: the itemsets in L_2 and those not in L_2. Further divide the itemsets in L_2 into X_a, for ms′ ≤ ms, and X_b, for ms′ > ms; further divide those not in L_2 into Y_a, for ms′ < ms, and Y_b, for ms′ ≥ ms.
7. Count the supports of the itemsets in Y_a over DB*.
8. Create L′_2 by combining X_a with those itemsets of X_b and Y_a that are frequent.
9. Generate candidates C′_3 from L′_2.
10. Repeat Steps 6-9 for candidates C′_k until no frequent k-itemsets L′_k are created.

For more details on generating the frontier set F, please refer to [8]. An example illustrating the RM_MMS algorithm is provided in the Appendix, where item A stands for Printer, B for Non-impact printer, C for Laser printer, D for Dot matrix printer, E for Ink-jet printer, F for Personal computer, G for Desktop PC, H for Notebook, and I for Scanner.

5 Experiments
In this section, we evaluate the performance of the RM_MMS algorithm using a synthetic dataset generated by the IBM data generator [2]. The parameter settings are shown in Table 4. The experiments were performed on an Intel Pentium-IV 2.80GHz machine with 2096MB RAM, running Windows 2000. We conducted an experiment to compare the efficiency of the three algorithms under various database sizes. The minimum supports were assigned to items randomly, ranging from 1.0% to 6.0%. Here, we adopt the ordinary case in which the minimum support of an item a is no larger than that of any of its ancestors â, i.e., ms(a) ≤ ms(â). We ran the experiment in two cases: case 1 (the worst case), with ms′ < ms, and case 2 (the best case), with ms′ ≥ ms. The result is depicted in Figure 2.
RM_MMS outperforms both MMS_Stratify and MMS_Cumulate in execution time and in scalability.

Table 4 Parameter settings for synthetic data.
Parameter                           Default
|DB|  Number of transactions        100,000
|t|   Average size of transactions  5
N     Number of items               200
R     Number of groups              30
L     Number of levels              3
F     Fanout                        5

Fig. 2 Execution time for various sizes of transactions (10,000 to 90,000), comparing MMS_Cumulate, MMS_Stratify, RM_MMS (Case 1), and RM_MMS (Case 2).

6 Conclusion
We have investigated in this paper the problem of remining association rules under a multiple minimum supports update and presented an efficient algorithm, RM_MMS. Empirical evaluation showed that the algorithm is very effective and has good linear scale-up characteristics, compared with applying our previously proposed mining algorithms, MMS_Cumulate and MMS_Stratify, to redo the remining process from scratch under the updated multiple minimum supports.

References:
[1] R. Agrawal, T. Imielinski, and A. Swami, Mining association rules between sets of items in large databases, Proc. 1993 ACM-SIGMOD Int. Conf. Management of Data, 1993, pp. 207-216.
[2] R. Agrawal and R. Srikant, Fast algorithms for mining association rules, Proc. 20th Int. Conf. Very Large Data Bases, 1994, pp. 487-499.
[3] D.W. Cheung, S.D. Lee, and B. Kao, A general incremental technique for maintaining discovered association rules, Proc. DASFAA'97, 1997, pp. 185-194.
[4] J. Han and Y. Fu, Discovery of multiple-level association rules from large databases, in Proc.

21st Int. Conf. Very Large Data Bases, Zurich, Switzerland, 1995, pp. 420-431.
[5] B. Liu, W. Hsu, and Y. Ma, Mining association rules with multiple minimum supports, Proc. 5th Int. Conf. Knowledge Discovery and Data Mining, 1999, pp. 337-341.
[6] A. Savasere, E. Omiecinski, and S. Navathe, An efficient algorithm for mining association rules in large databases, Proc. 21st Int. Conf. Very Large Data Bases, 1995, pp. 432-444.
[7] R. Srikant and R. Agrawal, Mining generalized association rules, Proc. 21st Int. Conf. Very Large Data Bases, 1995, pp. 407-419.
[8] M.C. Tseng and W.Y. Lin, Mining generalized association rules with multiple minimum supports, Proc. Int. Conf. Data Warehousing and Knowledge Discovery, 2001, pp. 11-20.

Appendix
Table 5 An example for illustration of RM_MMS, using the taxonomy of Fig. 1 with items renamed A through I as in Section 4.2.
Database (DB):
TID  Items Purchased
11   H, C
12   I, D
13   D, E
14   H, D, C
15   I
16   G
Minimum supports:
Item  ms (%)  ms′ (%)
A     80      60
B     65      55
C     25      10
D     70      50
E     60      20
F     35      55
G     25      25
H     25      35
I     15      15

Fig. 3 Illustration of algorithm RM_MMS. The 1-itemsets C_1 = {A, B, C, D, E, F, G, H, I} are split into those in L_1 = {C, F, H, I}, whose supports are loaded, and those not in L_1, whose supports are obtained by scanning DB* (A 66.7%, B 50.0%, D 50.0%, E 16.7%, G 16.7%). The frontier set F = ⟨C, I, E, G, H, D, F, B, A⟩ yields L′_1 = {C, I, D, A} and the candidate 2-itemsets C′_2 = {CI, CE, CG, CH, CD, CF, IE, IG, IH, ID, IF, IB, IA, DF, DB}. Of these, ID, IA, CH, CF are in L_2 and are resolved by calculation; CI, CE, CG, CD, DF, DB are counted by scanning DB* (CI 0%, CE 0%, CG 0%, CD 16.7%, DF 16.7%, DB 33.3%); IE, IG, IH, IF, IB need no scan. This gives L′_2 = {CH, CD, CF, ID, IA}. Then C′_3 = {CHD, CDF} is counted by scanning DB* (CHD 16.7%, CDF 16.7%), giving L′_3 = {CHD, CDF}, after which C′_4 is empty and the algorithm stops.
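Step 6 of RM_MMS, the four-way split of the candidates that encodes Lemmas 4-6, can be sketched as follows. This is our own illustration, not the authors' code; the function and parameter names are invented, and each itemset's threshold min_{a ∈ A} ms(a) is assumed to be precomputed into the ms_old/ms_new maps:

```python
def partition(candidates, old_frequent, ms_old, ms_new):
    """Split candidates into (X_a, X_b, Y_a, Y_b) per Step 6 of RM_MMS.

    old_frequent: the itemsets frequent under the original setting (L_k).
    ms_old/ms_new: each candidate's minimum-support threshold before and
    after the update.
    """
    X_a, X_b, Y_a, Y_b = [], [], [], []
    for c in candidates:
        if c in old_frequent:
            # Lemma 4: still frequent if the threshold did not increase;
            # otherwise (Lemma 6) re-check the stored support, no rescan.
            (X_a if ms_new[c] <= ms_old[c] else X_b).append(c)
        else:
            # Lemma 5: still infrequent if the threshold did not decrease;
            # only Y_a (threshold lowered, Lemma 6) needs a DB rescan.
            (Y_a if ms_new[c] < ms_old[c] else Y_b).append(c)
    return X_a, X_b, Y_a, Y_b
```

On a fragment of the Appendix example, ID and IA (frequent before, thresholds unchanged) land in X_a and are kept by calculation; CD (infrequent before, threshold lowered from 25% to 10%) lands in Y_a and must be counted against DB*; a candidate like IE (threshold unchanged) lands in Y_b and is discarded without scanning.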