An Efficient Reduced Pattern Count Tree Method for Discovering the Most Accurate Set of Frequent Itemsets

Similar documents
Discovery of Multi Dimensional Quantitative Closed Association Rules by Attributes Range Method

Finding the boundaries of attributes domains of quantitative association rules using abstraction- A Dynamic Approach

Mining of Web Server Logs using Extended Apriori Algorithm

EFFICIENT ALGORITHM FOR MINING FREQUENT ITEMSETS USING CLUSTERING TECHNIQUES

CS570 Introduction to Data Mining

Discovery of Multi-level Association Rules from Primitive Level Frequent Patterns Tree

AN IMPROVISED FREQUENT PATTERN TREE BASED ASSOCIATION RULE MINING TECHNIQUE WITH MINING FREQUENT ITEM SETS ALGORITHM AND A MODIFIED HEADER TABLE

Sensitive Rule Hiding and InFrequent Filtration through Binary Search Method

Comparing the Performance of Frequent Itemsets Mining Algorithms

Graph Based Approach for Finding Frequent Itemsets to Discover Association Rules

Association Rule Mining. Introduction 46. Study core 46

An Algorithm for Frequent Pattern Mining Based On Apriori

Available online at ScienceDirect. Procedia Computer Science 45 (2015 )

Lecture 2 Wednesday, August 22, 2007

A Survey on Moving Towards Frequent Pattern Growth for Infrequent Weighted Itemset Mining

Performance Analysis of Apriori Algorithm with Progressive Approach for Mining Data

DESIGN AND IMPLEMENTATION OF ALGORITHMS FOR FAST DISCOVERY OF ASSOCIATION RULES

Data Structures. Notes for Lecture 14 Techniques of Data Mining By Samaher Hussein Ali Association Rules: Basic Concepts and Application

PTclose: A novel algorithm for generation of closed frequent itemsets from dense and sparse datasets

Data Mining: Mining Association Rules. Definitions. .. Cal Poly CSC 466: Knowledge Discovery from Data Alexander Dekhtyar..

Improved Algorithm for Frequent Item sets Mining Based on Apriori and FP-Tree

A Novel Texture Classification Procedure by using Association Rules

An Algorithm for Mining Large Sequences in Databases

Association mining rules

Association Rules Apriori Algorithm

Model for Load Balancing on Processors in Parallel Mining of Frequent Itemsets

Concurrent Processing of Frequent Itemset Queries Using FP-Growth Algorithm

Data Structure for Association Rule Mining: T-Trees and P-Trees

An Evolutionary Algorithm for Mining Association Rules Using Boolean Approach

Infrequent Weighted Itemset Mining Using SVM Classifier in Transaction Dataset

A mining method for tracking changes in temporal association rules from an encoded database

Temporal Weighted Association Rule Mining for Classification

ETP-Mine: An Efficient Method for Mining Transitional Patterns

Lecture Topic Projects 1 Intro, schedule, and logistics 2 Data Science components and tasks 3 Data types Project #1 out 4 Introduction to R,

Algorithm for Efficient Multilevel Association Rule Mining

Optimization using Ant Colony Algorithm

Parallel Popular Crime Pattern Mining in Multidimensional Databases

Performance Based Study of Association Rule Algorithms On Voter DB

DESIGN AND CONSTRUCTION OF A FREQUENT-PATTERN TREE

Improved Frequent Pattern Mining Algorithm with Indexing

On Multiple Query Optimization in Data Mining

Mining Frequent Itemsets Along with Rare Itemsets Based on Categorical Multiple Minimum Support

PSON: A Parallelized SON Algorithm with MapReduce for Mining Frequent Sets

Parallel Mining of Maximal Frequent Itemsets in PC Clusters

An Improved Apriori Algorithm for Association Rules

2. Discovery of Association Rules

Mining Frequent Patterns with Counting Inference at Multiple Levels

Medical Data Mining Based on Association Rules

PLT- Positional Lexicographic Tree: A New Structure for Mining Frequent Itemsets

Transforming Quantitative Transactional Databases into Binary Tables for Association Rule Mining Using the Apriori Algorithm

To Enhance Projection Scalability of Item Transactions by Parallel and Partition Projection using Dynamic Data Set

Improved Aprori Algorithm Based on bottom up approach using Probability and Matrix

Pattern Mining. Knowledge Discovery and Data Mining 1. Roman Kern KTI, TU Graz. Roman Kern (KTI, TU Graz) Pattern Mining / 42

Association Rule Mining. Entscheidungsunterstützungssysteme

Appropriate Item Partition for Improving the Mining Performance

An Improved Algorithm for Mining Association Rules Using Multiple Support Values

Survey: Efficent tree based structure for mining frequent pattern from transactional databases

Mining Rare Periodic-Frequent Patterns Using Multiple Minimum Supports

Mining Quantitative Association Rules on Overlapped Intervals

Efficient Remining of Generalized Multi-supported Association Rules under Support Update

A Decremental Algorithm for Maintaining Frequent Itemsets in Dynamic Databases *

620 HUANG Liusheng, CHEN Huaping et al. Vol.15 this itemset. Itemsets that have minimum support (minsup) are called large itemsets, and all the others

The Transpose Technique to Reduce Number of Transactions of Apriori Algorithm

IMPROVING APRIORI ALGORITHM USING PAFI AND TDFI

Weighted Support Association Rule Mining using Closed Itemset Lattices in Parallel

MINING ASSOCIATION RULE FOR HORIZONTALLY PARTITIONED DATABASES USING CK SECURE SUM TECHNIQUE

International Journal of Computer Trends and Technology (IJCTT) volume 27 Number 2 September 2015

Data Mining Part 3. Associations Rules

An Efficient Algorithm for finding high utility itemsets from online sell

Review paper on Mining Association rule and frequent patterns using Apriori Algorithm

Item Set Extraction of Mining Association Rule

Novel Techniques to Reduce Search Space in Multiple Minimum Supports-Based Frequent Pattern Mining Algorithms

Memory issues in frequent itemset mining

CSCI6405 Project - Association rules mining

Discovery of Frequent Itemset and Promising Frequent Itemset Using Incremental Association Rule Mining Over Stream Data Mining

STUDY ON FREQUENT PATTEREN GROWTH ALGORITHM WITHOUT CANDIDATE KEY GENERATION IN DATABASES

Integration of Candidate Hash Trees in Concurrent Processing of Frequent Itemset Queries Using Apriori

Study on Mining Weighted Infrequent Itemsets Using FP Growth

An Efficient Algorithm for Finding the Support Count of Frequent 1-Itemsets in Frequent Pattern Mining

Mining Frequent Patterns without Candidate Generation

Mining High Average-Utility Itemsets

On Frequent Itemset Mining With Closure

Mining Imperfectly Sporadic Rules with Two Thresholds

Mining Spatial Gene Expression Data Using Association Rules

WIP: mining Weighted Interesting Patterns with a strong weight and/or support affinity

Data Access Paths for Frequent Itemsets Discovery

Mining Temporal Indirect Associations

Monotone Constraints in Frequent Tree Mining

Enhanced SWASP Algorithm for Mining Associated Patterns from Wireless Sensor Networks Dataset

A Novel method for Frequent Pattern Mining

Discovery of Frequent Itemsets: Frequent Item Tree-Based Approach

DMSA TECHNIQUE FOR FINDING SIGNIFICANT PATTERNS IN LARGE DATABASE

Apriori Algorithm. 1 Bread, Milk 2 Bread, Diaper, Beer, Eggs 3 Milk, Diaper, Beer, Coke 4 Bread, Milk, Diaper, Beer 5 Bread, Milk, Diaper, Coke

Hybrid Approach for Improving Efficiency of Apriori Algorithm on Frequent Itemset

INFREQUENT WEIGHTED ITEM SET MINING USING NODE SET BASED ALGORITHM

Associated Sensor Patterns Mining of Data Stream from WSN Dataset

Optimized Frequent Pattern Mining for Classified Data Sets

Roadmap DB Sys. Design & Impl. Association rules - outline. Citations. Association rules - idea. Association rules - idea.

Chapter 4: Mining Frequent Patterns, Associations and Correlations

Performance Analysis of Frequent Closed Itemset Mining: PEPP Scalability over CHARM, CLOSET+ and BIDE


An Efficient Reduced Pattern Count Tree Method for Discovering the Most Accurate Set of Frequent Itemsets

1 Geetha M, 2 R. J. D'Souza
1 Member IAENG, Department of Computer Science, M.I.T., Manipal, Karnataka, India.
2 Professor, Department of MACS, N.I.T.K., Surathkal, Karnataka, India.

Abstract

Discovery of association rules from large volumes of data is an important data mining problem. In this paper a novel method for discovering the most accurate set of frequent itemsets using a partition algorithm is implemented. The method uses the concepts of the Reduced Pattern Count Tree, ladder merging, and reduced minimum support to discover the most accurate set of frequent itemsets in a single scan of the database. The algorithm is a modified version of the existing Partition Algorithm, but it leads to a significant reduction in time and disk input/output and has lower memory requirements than some existing algorithms.

Index Terms: Frequent itemset, Path, Reduced minimum support, Reduced Pattern Count Tree.

I. INTRODUCTION

The discovery of association rules is a well-studied and important problem in data mining. Let I = {i1, i2, i3, ..., im} be a set of items, and let D be a set of transactions, where each transaction T contains a set of items. A transaction t is said to support an item ii if ii is present in t; t is said to support a subset of items X ⊆ I if t supports each item in X. An itemset X ⊆ I has support s in D if s% of the transactions in D support X. An itemset with at least the user-defined minimum support is called a frequent itemset. A frequent itemset is said to be maximal if it is frequent and no superset of it is frequent. Every subset of a maximal frequent itemset is frequent, by the downward closure property [1] of frequent itemsets.

An association rule [2] is an implication of the form X → Y, where X ⊂ I, Y ⊂ I and X ∩ Y = φ. The rule X → Y holds in the database D with support s if s% of the transactions in D contain X ∪ Y, and with confidence c if c% of the transactions in D that contain X also contain Y. Mining association rules means finding all rules whose support and confidence are greater than or equal to the user-specified minimum support and minimum confidence, respectively [1]. The first step in the discovery of association rules is to find all frequent itemsets with respect to the user-specified minimum support; the second step, forming the association rules from the frequent itemsets, is straightforward, as described in [1].

There are many interesting algorithms for finding frequent itemsets. A key feature of all of them is that each assumes the underlying database is enormous, and each involves either a candidate-generation or a non-candidate-generation process. The algorithms with candidate generation require multiple passes over the database and are not storage efficient. The well-known algorithms Apriori [2] and Partition [2] use candidate generation and a pruning function at every iteration. The number of database passes in Apriori depends on the size of the largest frequent itemset: when any frequent itemset becomes long, the algorithm has to go through many iterations and, as a result, performance degrades.
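As a hedged illustration of the support and confidence definitions earlier in this section, the following minimal Python sketch computes both quantities over a toy transaction database. The transactions and the helper names (`support`, `confidence`) are our own, not the paper's.

```python
# Minimal sketch of the support/confidence definitions (illustrative only;
# the transactions and helper names are hypothetical, not from the paper).

def support(itemset, transactions):
    """Fraction of transactions containing every item of `itemset`."""
    itemset = set(itemset)
    hits = sum(1 for t in transactions if itemset <= set(t))
    return hits / len(transactions)

def confidence(X, Y, transactions):
    """Fraction of transactions containing X that also contain Y."""
    return support(set(X) | set(Y), transactions) / support(X, transactions)

transactions = [
    {"bread", "milk"},
    {"bread", "diaper", "beer", "eggs"},
    {"milk", "diaper", "beer", "coke"},
    {"bread", "milk", "diaper", "beer"},
    {"bread", "milk", "diaper", "coke"},
]

print(support({"diaper", "beer"}, transactions))       # 0.6
print(confidence({"diaper"}, {"beer"}, transactions))  # 0.75
```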
The Partition algorithm [2], even though it involves a candidate-generation procedure, is considered better than Apriori, since the size of the global candidate set is considerably smaller than the set of all possible itemsets and the number of database scans required is exactly two. The Partition algorithm works in two phases, as shown in Figure 1. A partition P of the database is any subset of the transactions contained in the database. The local support of an itemset is the fraction of transactions in a partition that contain that itemset, and an itemset is a local frequent itemset if its local support in a partition is at least the user-defined minimum support. In Phase I, the Partition algorithm divides the whole database into a number of non-overlapping partitions, each with an equal number of transactions. The partitions are considered one at a time and all local frequent itemsets of each partition are discovered and stored separately in memory. At the end of Phase I, these local frequent itemsets are merged to generate a global set of all potentially frequent itemsets; only at the end of this step are the local frequent itemsets of all partitions merged, and the resulting set contains no repeated local frequent itemset. In Phase II, the actual support of these itemsets is computed and the frequent itemsets are identified.

II. PROPOSED ALGORITHM

The proposed algorithm uses the concepts of the Reduced Pattern Count tree, reduced minimum support, and ladder merging. The Pattern Count tree (PC-tree) [5], a contribution of V. S. Ananthanarayana et al., is a complete and compact representation of the database; they discovered frequent itemsets by converting the PC-tree to a LOFP-tree [5] in a single database scan. The PC-tree [5] is a data structure that stores all the patterns occurring in the tuples of a transaction database, with a count field associated with each item in every pattern, which makes a compact realization of the database possible. The completeness property of the PC-tree motivated us to discover all frequent itemsets using a Reduced Pattern Count tree, which is a complete representation of the frequent itemsets of a given database with respect to a minimum threshold s.

This method divides the whole database into a number of non-overlapping partitions, each with an equal number of transactions. The partitions are read into memory one at a time. As each transaction of a partition is read into memory, a branch of the PC-tree is constructed, so at the end of every partition we have in memory a PC-tree representing all the transactions of the partition just read. Based on the local support, the PC-tree is then reduced so that it contains only local frequent itemsets; the result is called the Reduced Pattern Count Tree. By scanning the Reduced Pattern Count Tree, all local frequent itemsets of that partition are discovered along with their frequencies of occurrence. The local frequent itemsets of each partition are progressively merged with the local frequent itemsets of the previous partitions before another partition is processed; this step eliminates the superfluous storing of itemsets and is called Ladder Merging [3]. When local frequent itemsets are merged with those of the previous partitions, their counts are incremented by the number of times they occur in the current partition; for newly seen frequent itemsets, their counts are simply stored along with them.
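The ladder-merging step just described can be pictured as a running dictionary of itemset → count into which each partition's local frequent itemsets are folded. A minimal sketch, assuming the local frequent itemsets of each partition are already available as a mapping; the function and variable names here are ours, not the paper's:

```python
# Minimal sketch of Ladder Merging (illustrative; names are hypothetical).
# Local frequent itemsets of each partition are folded into one running
# dictionary so that no itemset is ever stored twice.

from collections import Counter

def ladder_merge(partition_lfis):
    """partition_lfis: iterable of {frozenset(itemset): local_count} maps,
    one per partition, produced by scanning that partition's RPC-tree."""
    merged = Counter()
    for lfi in partition_lfis:          # partitions processed one at a time
        for itemset, count in lfi.items():
            merged[itemset] += count    # existing itemset: increment count;
                                        # new itemset: stored with its count
    return merged

# Example: local frequent itemsets of two partitions.
p1 = {frozenset({"bread"}): 40, frozenset({"bread", "milk"}): 25}
p2 = {frozenset({"bread"}): 35, frozenset({"beer"}): 30}
print(ladder_merge([p1, p2]))
```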
After all the partitions have been processed, we have the merged set of local frequent itemsets from all partitions along with their total counts. Because this approach stores the count of each local frequent itemset in order to avoid a second scan of the database, and our aim is to discover frequent itemsets from the merged set using the global support, we may miss some of the global frequent itemsets. To lessen this possibility, we use reduced minimum supports in a small neighborhood of the global support to find the most accurate set of frequent itemsets. This requires a few iterations of discovering frequent itemsets at different minimum supports, starting with the global support, and the process continues until we obtain the most accurate set of frequent itemsets. In Figure 2, LFI stands for Local Frequent Itemsets, and P1, P2, P3, P4 are partitions of the database D, i.e. D = P1 ∪ P2 ∪ P3 ∪ P4 and Pi ∩ Pj = φ for i ≠ j.
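A hedged sketch of the reduced-minimum-support idea: starting from the global support threshold, the merged counts are re-thresholded at slightly lower supports in its neighborhood. The step size and stopping point below (0.1% steps down to 0.5% below the global support) are one concrete reading taken from the experiments in Section III; the helper names are our own.

```python
# Sketch of re-thresholding the merged counts at reduced minimum supports
# (illustrative; step/span values follow the experiments in Section III,
# helper names are hypothetical).

def frequent_at(merged, n_transactions, support_pct):
    """Itemsets whose merged count meets a support threshold given in %."""
    min_count = support_pct * n_transactions / 100.0
    return {s: c for s, c in merged.items() if c >= min_count}

def reduced_support_sweep(merged, n_transactions, global_pct,
                          step=0.1, span=0.5):
    """Threshold at global_pct, global_pct - 0.1, ..., global_pct - span."""
    results = {}
    for k in range(int(round(span / step)) + 1):
        pct = global_pct - k * step          # e.g. 2.0, 1.9, ..., 1.5
        results[round(pct, 2)] = frequent_at(merged, n_transactions, pct)
    return results
```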

Algorithm

The following algorithm discovers frequent itemsets using the concepts of the Reduced Pattern Count Tree, Ladder Merging, and reduced minimum support.

Input:
D = the given database, D = P1 ∪ P2 ∪ P3 ∪ ... ∪ Pn, where Pi, i = 1, 2, ..., n, are partitions of D and Pi ∩ Pj = φ for i ≠ j
N = total number of transactions in D
n = number of partitions
L = holds all local frequent itemsets of every partition
G = holds all merged local frequent itemsets, i.e. G = { c | c is a local frequent itemset of at least one of the partitions }
L_G = holds all frequent itemsets of D
s(c) = actual support of an itemset c
local_sup = user-defined support for an itemset to be frequent in a partition
Global_support = (local_sup * N) / (number of transactions per partition)

Output: the most accurate set of frequent itemsets.

Step 1: /* Partition-wise scan of the database */
for i = 1 to n do
begin
    Read_in_partition(Pi)
    construct the Reduced Pattern Count Tree
    find all frequent itemsets of Pi, along with their support counts s(c), by scanning the Reduced Pattern Count Tree, and store them in L
    G = G ∪ L  /* add only those itemsets of L that are not already in G; for itemsets already present in G, update their support counts */
end

Step 2: /* Find global frequent itemsets */
freq_is = number of frequent itemsets in G
value = [(Global_support in percentage - 0.5%) * N] / 100
while (Global_support != value) do
begin
    /* scan the freq_is itemsets of G */
    L_G = { c ∈ G | s(c) >= Global_support }
    Global_support = [(Global_support in percentage - 0.1%) * N] / 100
end
Answer = L_G

Construction of the Reduced Pattern Count Tree

The Reduced Pattern Count tree algorithm (RPCA) does not generate any candidate itemsets, and yet it discovers all frequent itemsets. The steps for finding the frequent itemsets using the Reduced Pattern Count tree (RPC-tree) are:

Step 1: Construct the Pattern Count tree [5].

Step 2: Reduce the Pattern Count tree so that it contains only frequent items.

A. Removal of infrequent items. The items that are not frequent are removed from the PC-tree:

for all items I in the PC-tree do
begin
    if I is not frequent, delete node I from the PC-tree
end

B. Removal of repeated transaction heads. A transaction head in the tree is a node through which one or more transactions originate; assume there are n transaction heads in the PC-tree. Step A may leave the PC-tree with repeated transaction heads, which the following merging algorithm removes:

for all transaction heads h from 1 to n-1 do
begin
    for all transaction heads j from h+1 to n do
    begin
        if (label(h) == label(j)) append(h, j)  /* append the transactions of head h to head j */
    end
end

C. Removal of repeated siblings. The PC-tree reduced in the above step may still contain repeated sibling heads, i.e., even though the PC-tree has distinct transaction heads, two siblings under a particular head may carry the same label. The following removes repeated siblings:

for all transaction heads h from 1 to n do
begin
    remove transaction head h from the PC-tree; the result is a tree called the XPC-tree
    append all transactions beginning with h in the removed branch to the transaction head h1 that has the label of h
    attach the transaction head h1 obtained above to the XPC-tree
end

The tree resulting at the end of this step is called the Reduced PC-tree (RPC-tree).
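A minimal sketch of Steps 1 and 2A under our own simplified tree representation: transactions are inserted as prefix-sharing branches with per-node counts, and infrequent items are then pruned. This is only an illustration of the idea; the node layout, class names, and prefix-sharing policy are assumptions, not the exact structure of the paper or of [5].

```python
# Sketch of PC-tree construction (Step 1) and infrequent-item removal
# (Step 2A). Node layout and prefix sharing are our simplification.

class Node:
    def __init__(self, item):
        self.item = item
        self.count = 0
        self.children = {}          # item -> Node

def build_pc_tree(transactions):
    root = Node(None)
    for t in transactions:          # one branch per transaction, prefixes shared
        node = root
        for item in t:              # items kept in transaction order
            node = node.children.setdefault(item, Node(item))
            node.count += 1
    return root

def prune_infrequent(node, frequent):
    """Step 2A: delete nodes of infrequent items, splicing their children up."""
    new_children = {}
    for child in list(node.children.values()):
        prune_infrequent(child, frequent)
        if child.item in frequent:
            new_children[child.item] = child      # keep frequent node
        else:
            # lift grandchildren up (label clashes ignored here; merging
            # duplicate heads/siblings is what Steps 2B and 2C handle)
            new_children.update(child.children)
    node.children = new_children

tree = build_pc_tree([["a", "b", "c"], ["a", "c"], ["b", "d"]])
prune_infrequent(tree, frequent={"a", "b", "c"})
```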
Step 3: Discover frequent itemsets by finding all possible paths and their frequencies.

This step involves several sub-steps. Sort the set f of frequent 1-itemsets in descending order of item number, and assume there are n frequent 1-itemsets in f. In what follows, p is a path structure that stores all possible paths; h is a transaction head in the Reduced PC-tree; I is an item in the set f; T holds the elements that are frequent together with I; m is the number of elements in T; count[I] contains the frequency of an item I; r is a path structure that stores closed frequent itemsets; fcount[k] represents the frequency of the reduced path r_k; N is the number of reduced paths in r; and s is the user-defined support.
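Before the formal loops below, a minimal sketch of the core operation of this step: a preorder scan that collects every root-to-node path ending with a given item I, together with that node's count. It uses the simplified Node tree from the previous sketch; function names are ours, and the real procedure also filters by transaction head and prunes infrequent elements, as the pseudocode that follows spells out.

```python
# Sketch of the preorder path collection used in Step 3 (illustrative;
# builds on the simplified Node class from the previous sketch).

def paths_ending_with(node, target, prefix=None, out=None):
    """Collect (count, path) for every node labelled `target`,
    where `path` is the root-to-node item sequence."""
    if out is None:
        out, prefix = [], []
    for child in node.children.values():
        branch = prefix + [child.item]
        if child.item == target:
            out.append((child.count, branch))   # count[I] first, then the path
        paths_ending_with(child, target, branch, out)
    return out

# e.g. paths_ending_with(tree, "c") -> [(1, ['a', 'b', 'c']), (1, ['a', 'c'])]
```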

for every item I in f do
begin
    do a preorder scan of the Reduced PC-tree to collect all possible paths ending with item number I:
    for all transaction heads h in the Reduced PC-tree do
    begin
        if (h >= I) remove h from the Reduced PC-tree;
        else { store count[I] as the first element, followed by all elements that precede I and the element I itself, as the next path in p }
        continue the search for I in the Reduced PC-tree until the end of the tree
    end

    /* Reduce all the paths in p to contain only the frequent items associated with I. */
    for all paths in p do
    begin
        for all items I' in the path do
        begin
            if I' is not frequent, remove I' from the path; else add I' to the set T
        end
    end

    /* Find the associations among the elements of T. */
    for all Ij in T, j from 1 to m do
    begin
        for all reduced paths in p do
        begin
            if Ij is in the path, put all elements after Ij and before I into r
            count[j] = count[Ij]
        end
        /* Reduce r. */
        for all paths in r do
        begin
            for all items I' in the path do
            begin
                if I' is not frequent with Ij, remove I' from the path
            end
            if the path contains more than two frequent items then count[r] = count[I]
        end
    end

    /* Find the frequency of the reduced paths. */
    for all reduced paths r_k, k from 1 to N do
    begin
        for all reduced paths r_l, l from 1 to N, k != l do
        begin
            if (r_k == r_l)
            begin
                N = N - 1  /* delete r_l */
                count[k] = count[k] + count[l]
                fcount[k] = count[k]
            end
            if (r_k is contained in r_l)
            begin
                count[k] = count[k] + count[l]
                fcount[k] = count[k]
            end
        end
    end

    /* Discovery of frequent itemsets. */
    apply the downward closure property to the paths resulting from the above step
end

III. PERFORMANCE ANALYSIS

a) Synthetic data generator

For the performance analysis, the data is generated using a synthetic data generator. The synthetic data simulates the buying patterns of customers in a retail environment.

b) Parameters and data sets used

The above algorithm was verified on data sets containing 1/2K, 1K, and 2K transactions, with 100 transactions per partition, against a Global_support of 2%; the reduced minimum supports used were 1.9%, 1.8%, 1.7%, and 1.6%. Table 3 shows the total number of frequent itemsets discovered by the existing Partition Algorithm, the Single Scan Partition Algorithm using Reduced Minimum Support (SSPARS) [4], and the Reduced Pattern Count Tree Algorithm. Figure 3 shows the frequent-itemsets graph for all three algorithms.
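The paper does not give the generator's parameters. As a rough illustration of the kind of retail-style data described in (a) above, here is a minimal sketch of a synthetic transaction generator; the item count, pattern count, and transaction-length distribution are entirely our assumptions.

```python
# Sketch of a retail-style synthetic transaction generator (illustrative;
# all parameters here are our assumptions, the paper does not specify them).

import random

def generate_transactions(n_transactions=1000, n_items=50,
                          avg_len=4, n_patterns=10, seed=7):
    rng = random.Random(seed)
    # A few "buying patterns" that transactions tend to draw from.
    patterns = [rng.sample(range(n_items), rng.randint(2, 5))
                for _ in range(n_patterns)]
    transactions = []
    for _ in range(n_transactions):
        t = set(rng.choice(patterns))            # start from a common pattern
        while len(t) < max(1, int(rng.gauss(avg_len, 1))):
            t.add(rng.randrange(n_items))        # plus some random items
        transactions.append(sorted(t))
    return transactions

db = generate_transactions(n_transactions=500)   # e.g. the 1/2K data set size
```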

Table 4 gives the storage space needed by all three algorithms for the data sets set_1, set_2, and set_3, with a local support value of 2% and 100 transactions per partition. Figure 4 shows the storage space needed by the three algorithms, and Figure 5 shows their execution times. The experiment was carried out in the neighborhood of Global_support = 2%, and the results show that this new approach is more time and space efficient than the existing Partition Algorithm and SSPARS, and that the most accurate sets of frequent itemsets are obtained for minimum supports greater than or equal to (Global_support - 0.5%).

Theoretical comparisons with some algorithms:

1. For the Apriori algorithm, the database has to be read at least k times to find frequent itemsets of length k. Our method requires only one scan to discover the most accurate set of frequent itemsets, which saves considerable execution time.

2. If N is the number of transactions in the given database, our method requires exactly one scan to read the N transactions and construct the Reduced Pattern Count Tree.

Therefore our algorithm is O(N), whereas Apriori, which depends on the size of the largest frequent itemsets Lk, is O(k*N), and the existing Partition Algorithm is O(2N), since it requires two scans of the database to discover all frequent itemsets.

3. Since our algorithm does not generate any candidates, it can also handle very large databases.

In view of the above, our approach is highly effective at efficiently mining the most accurate set of frequent itemsets, and it gracefully handles very low support values, even on dense datasets.

IV. CONCLUSION

We have described an algorithm for discovering the most accurate set of frequent itemsets in databases that is more space and time efficient than the existing SSPARS and Partition algorithms. In addition, our algorithm discovers the most accurate set of frequent itemsets in a single scan of the database, and since it does not generate any candidates, it can be used to discover frequent itemsets from large volumes of data.

REFERENCES

[1] R. Agrawal, T. Imielinski, and A. Swami, "Mining association rules between sets of items in large databases," in Proceedings of the 1993 ACM SIGMOD International Conference on Management of Data, pages 207-216, Washington, DC, May 26-28, 1993.
[2] Arun K. Pujari, Data Mining Techniques.
[3] Geetha M, R. J. D'Souza, and Ananthanarayana V. S., "Partition Algorithm with Ladder Merging," 2nd National Conference on Intelligent Systems and Networks (ISN-2005).
[4] Geetha M, Sana Haque, Satyra Singh, and R. J. D'Souza, "Single Scan Partition Algorithm Using Reduced Minimum Support," Asian Conference on Intelligent Systems and Networks, Haryana Engineering College (AISN-2006).
[5] Ananthanarayana V. S., Subramanian, D. K., and Narasimha Murthy, M., "Scalable, distributed and dynamic mining of association rules using PC tree," IISc CSA Technical Report, 2000.
[6] Rakesh Agrawal and Ramakrishnan Srikant, "Fast algorithms for mining association rules," in Proceedings of the 20th International Conference on Very Large Databases, Santiago.
[7] Han, J., Pei, J., and Yin, Y., "Mining Frequent Patterns without Candidate Generation," Proc. of ACM SIGMOD, 2000.