Graph Based Approach for Finding Frequent Itemsets to Discover Association Rules

Manju
Department of Computer Engg., CDL Govt. Polytechnic Education Society, Nathusari Chopta, Sirsa

Abstract

The discovery of association rules is an important task in data mining and knowledge discovery. Several algorithms have been developed for finding frequent itemsets and mining comprehensive association rules from databases. The efficiency of these algorithms has long been a major issue and has captured the interest of a large community of researchers. This paper presents a new approach that can mine frequent itemsets or patterns in less time and in a straightforward way. The majority of the algorithms developed for finding frequent itemsets scan the database repeatedly and are based on the concept of a minimum support threshold. The proposed approach is based on a graph and finds frequent itemsets without repeatedly scanning the database. The algorithm finds frequent itemsets irrespective of the support level and can be used for finding the largest most frequent itemset. The proposed approach performs a single scan of the database in the first phase and draws a graph in which edges are labeled with the respective transaction ids. In the second phase, a table is constructed which holds all distinct labels and the corresponding itemsets. The largest most frequent itemset is selected from this table according to the given selection criterion.

Index Terms: data mining, association rules, knowledge and data management, database

I. INTRODUCTION

Advancements in technology and computation power, together with the computerization of most commercial activities, have given rise to huge amounts of data. The availability of powerful database systems and the ease of managing and using such databases have also contributed to the growth of extensively large data repositories. This in turn demands equally powerful tools and techniques for extracting information and knowledge, and as a result data mining has become an interesting research area [1]. Data mining is the process of extracting useful, comprehensive, previously unknown and interesting information from stored data. By comprehensive we mean that the mined information should be wide-ranging, hidden in the data, and should provide facts and insights that can be used for management decision making and process control [2]. The task of data mining includes methods that mine different types of knowledge from databases; the kinds of knowledge that can be exposed include association rules, classification rules, generalizations and summarizations. The discovery of association rules is an area that has fascinated many researchers into finding innovative approaches for detecting hidden and interesting associations that exist in the collected data. Several algorithms have been developed for mining association rules that are significant and provide important information for planning and control. The work reported in this paper is a step towards finding association rules quickly from databases in a simple and straightforward way.

II. RELATED WORK

Association rule mining is one of the most important and well-researched techniques of data mining for finding important correlations among data items. The first algorithm to generate all frequent itemsets was proposed by Agrawal et al. [3] and named AIS (after the names of its proposers, Agrawal, Imielinski and Swami). The algorithm generates all the possible itemsets at each level of traversal.
Thus it generates and stores frequent as well as infrequent itemsets in each pass. The generation of infrequent itemsets was undesirable and a major drawback for its performance. AIS was later improved upon and renamed Apriori by Agrawal et al. The new algorithm uses a level-wise, breadth-first search for generating association rules. The Apriori and AprioriTid algorithms generate the candidate itemsets using only the itemsets found large in the previous pass, without using the transactional database. Apriori uses the downward closure property of itemset support to prune the itemset lattice: the property that all subsets of a frequent itemset must themselves be frequent [4]. A similar algorithm called Dynamic Itemset Counting (DIC) was proposed by Brin et al. DIC partitions the database into several blocks marked by start points and repeatedly scans the database. Unlike Apriori, DIC can add new candidate itemsets at any start point, instead of only at the beginning of a new database scan. At each start point, DIC estimates the support of all itemsets that are currently counted and adds a new itemset to the set if all of its subsets are estimated to be frequent [9]. Vu et al. proposed a rule-based prediction technique to predict the user's featured location, but this method generates more candidate itemsets than required; since the database must be scanned multiple times, the algorithm is expensive in terms of run time and I/O load [5].

The problem of multiple scanning was alleviated by using a compact tree structure and finding frequent itemsets directly from the data structure. The algorithm scans the database twice: the first scan discards the infrequent items and the second pass constructs the tree. The method encodes the dataset using a compact data structure called an FP-tree and extracts the frequent itemsets directly from it. This FP-growth algorithm, proposed by Han et al., gained a lot of popularity [6]. To improve on the main memory requirement, Pei et al. proposed another algorithm called H-mine, which uses array and tree based data structures. A distinct feature of this method is that it has very limited and precisely predictable space overhead and runs very fast in a memory-based setting. It can be scaled up to very large databases by database partitioning, and when the dataset becomes dense an FP-tree can be constructed dynamically as part of the mining process [7]. Sahaphong and Boonjing proposed a new algorithm which constructs a pattern base using a method different from the pattern base of FP-growth and mines frequent itemsets using a new combination method without the recursive construction of a conditional FP-tree [12]. Another approach, based on the FP-tree and co-occurrence frequent items (COFI), was proposed by V.K. Shrivastava et al. to find frequent itemsets in a multilevel concept hierarchy using a non-recursive mining process [13].

Toivonen proposed the sampling algorithm for finding association rules in order to reduce database activity. The sampling algorithm applies a level-wise method to search for frequent itemsets in a sample that fits in main memory, so only one scan of the transactions is required. The algorithm uses a lower minimum support threshold to mine the frequent itemsets [8]. The Pincer-Search algorithm was proposed by Lin et al. in 1997 and can efficiently discover the maximum frequent set. Pincer-Search combines both the bottom-up and top-down directions. The main search direction is still bottom-up, but a restricted search is conducted in the top-down direction. This search is used only for maintaining and updating a new data structure designed for this purpose, the maximum frequent candidate set, which is used to prune the candidates in the bottom-up search. Another very important feature of this algorithm is that it is not necessary to explicitly examine every frequent itemset, so it performs well even when some maximal frequent sets are long. The algorithm reduces both the number of times the database is scanned and the number of candidates considered [10]. Parallel algorithms for the discovery of association rules, using clustering techniques to approximate the set of potentially maximal frequent itemsets, were first proposed by Zaki et al. in 1997.
This approach uses two clustering schemes, based on equivalence classes and maximal hypergraph cliques, studies two lattice traversal techniques based on bottom-up and hybrid search, and also uses a vertical database layout to cluster related transactions together [11]. Most of the algorithms that mine frequent itemsets use a horizontal data layout; however, many researchers use a vertical layout. The Eclat algorithm proposed by Zaki et al. generates all frequent itemsets by intersecting transaction-id lists in the vertical layout, using the joining step from the Apriori property until no further candidate itemsets can be found. Eclat is very efficient for large data sets but less efficient for small data sets [14]. The Partition algorithm for mining association rules differs from the classical algorithms: it requires only two database scans to mine the frequent itemsets and also reduces I/O overhead. The algorithm partitions the database into small chunks which can be held in main memory; the partitions are considered one at a time and all large itemsets are generated for each partition. In the first scan these local large itemsets are merged to create a set of all potentially large itemsets, and in the second scan their global support is obtained [15].

III. DATA MINING AND ASSOCIATION RULES

Data mining is the nontrivial process of extracting useful, unknown and hidden information from data in databases. It is also called knowledge discovery in databases, meaning that interesting knowledge, regularities, or high-level information can be extracted from relevant sets of data in databases and investigated from different angles. Large databases thus serve as rich and reliable sources for knowledge generation and verification [16]. Different methods and techniques are needed to mine different kinds of knowledge. Based on the kind of knowledge to be mined, data mining can be divided into several tasks, such as summarization, classification, clustering, association and trend analysis. Association rules are used to discover relationships and correlations among items or attributes. These rules can be very effective in revealing unknown relationships and hidden patterns, providing results that can be used for forecasting and decision making.

ASSOCIATION RULE MINING (ARM)

Association rule mining is one of the most important and nontrivial tasks of data mining. It aims to mine interesting relationships, frequent patterns, associations or causal structure between sets of items in a transactional database. These rules have proven to be very useful tools for an enterprise as it strives to improve its competitiveness and profitability [17]. Mining association rules from databases has attracted the attention of a large community of researchers in the past few years [4], [6]. The formal model, or statement of the problem, was given by Agrawal et al. in 1993. To define the problem of ARM, let I = {i1, i2, ..., in} be a set of n distinct items, and let the database D contain transaction records, where each transaction T contains a set of items from I. An association rule is an implication of the form X ⇒ Y, where X and Y are sets of items (itemsets) from I, and X and Y are disjoint sets having no common items. X is called the antecedent, Y is called the consequent, and the rule is read "X implies Y".

INTERESTINGNESS MEASURES

There are two important measures of the interestingness of association rules: support and confidence [18]. The support of a rule X ⇒ Y is the fraction of transactions that contain both X and Y out of the total number of transactions, i.e. support(X ⇒ Y) = |{T in D : X ∪ Y ⊆ T}| / |D|. The confidence is the fraction of transactions that contain both X and Y out of the transactions that contain X, i.e. confidence(X ⇒ Y) = support(X ∪ Y) / support(X). Because of the large volume of data in the database, users are interested only in those items that are purchased frequently, so threshold values are usually defined for support and confidence to drop the rules that are not of much interest. Since mining association rules may require scanning the database repeatedly, the amount of processing can be huge, and performance improvement becomes an essential concern [16].

IV. DISCOVERING FREQUENT ITEM SETS

The most time-consuming operation in the process of discovering association rules is the computation of the frequency of interesting subsets of items. The task of mining useful association rules is essentially to find strong rules, and it can be divided into two major phases. The first phase finds all the itemsets that satisfy the minimum support threshold; these are the frequent itemsets. The second phase is rule generation, in which high-confidence rules are generated from the frequent itemsets found in the previous phase [18]. The overall performance of mining association rules is determined by the first step: once the large itemsets are identified, the corresponding association rules can be derived in a straightforward way. Thus the efficient computation of large frequent itemsets is the major concern in discovering association rules. Several approaches for discovering frequent itemsets have been proposed, each with its pros and cons [6], [7], [10], [12]. Most of the early approaches have two major bottlenecks: first, the database has to be scanned several times, which is time-consuming; second, enormous numbers of large itemsets are generated and a lot of CPU time is wasted in calculating their support repeatedly.
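To make the two-phase decomposition above concrete, the following minimal Python sketch is illustrative only: the function names, the 0.75 confidence threshold and the rule-generation loop are not taken from the paper, and the transactions are those of the sample database introduced later as Table 1 in Section V.

```python
from itertools import combinations

# The four transactions of the sample database introduced later as Table 1 (Section V)
transactions = [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"}, {"B", "E"}]

def support(itemset):
    """Fraction of transactions that contain every item of `itemset`."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def rules_from(frequent_itemset, min_conf):
    """Phase 2 (rule generation): split a frequent itemset into antecedent X and
    consequent Y and keep the rules X => Y whose confidence reaches min_conf."""
    items = set(frequent_itemset)
    for k in range(1, len(items)):
        for antecedent in map(set, combinations(sorted(items), k)):
            consequent = items - antecedent
            confidence = support(items) / support(antecedent)
            if confidence >= min_conf:
                yield sorted(antecedent), sorted(consequent), confidence

for X, Y, conf in rules_from({"B", "C", "E"}, min_conf=0.75):
    print(X, "=>", Y, round(conf, 2))
```

Run on the frequent itemset {B, C, E}, this sketch yields the rules {B, C} ⇒ {E} and {C, E} ⇒ {B}, both with confidence 1.0.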
V. PROPOSED ALGORITHM

As discussed earlier, the most time-consuming and important part of the rule discovery process is the discovery of large frequent itemsets. The database is arranged as a set of transaction records and kept on disk. To find the frequent itemsets, the database is normally scanned many times, calculating the support of different sets of items and comparing it with the threshold value. To reduce the database scans and to generate frequent itemsets in a straightforward way, a new graph-based approach is proposed to identify the frequently occurring sets. The approach works in two phases: graph construction, and generation of the itemsets corresponding to the graph. The major steps of the approach are outlined in the following sub-sections.

GRAPH CONSTRUCTION

As a first step, a graph of the database items is constructed. Items are represented by nodes, and nodes belonging to the same transaction are connected by an edge. This edge is labeled with the corresponding transaction id. If the nodes (items) belonging to the same transaction have already been connected by an edge, the new transaction id is added to the existing label. Thus the label of an edge contains the ids of all the transactions in which the two items (the end nodes of the edge) appear together.

Let there be five items A, B, C, D, E and let the database have four transactions T1, T2, T3 and T4 (or 1, 2, 3 and 4), each containing some of the items. The transactions and the corresponding items of the sample database are shown in Table 1.

TABLE 1. Sample Database
Transaction Id   Items
T1               A, C, D
T2               B, C, E
T3               A, B, C, E
T4               B, E

The graph is constructed as follows:
a) Draw as many nodes in the graph as there are items in the database and label each node with its item name.
b) Repeat the following steps for each transaction:
   i) Find all possible unordered pairs of nodes corresponding to the items of the same transaction.
   ii) Draw an edge between all such pairs of nodes. If a new edge is drawn, label it with the corresponding transaction id. If, for some pair of nodes, the edge has already been drawn, expand its label by adding the corresponding transaction id.

The graph for Table 1 is shown in Figure 1.

Figure 1. Graph for the sample database

The first transaction T1 contains three items A, C and D. Corresponding to this transaction, there are three possible unordered pairs of items (nodes): (A, C), (A, D), and (C, D). The edges with end nodes (A, C), (A, D), and (C, D) are assigned the label <1>, corresponding to transaction id T1. The second transaction T2 contains three items B, C and E. The unordered pairs of nodes corresponding to this transaction are (B, C), (B, E), and (C, E), and the three edges with these end nodes are assigned the label <2>. Similarly, for transaction T3, six edges are drawn in the graph with end nodes (A, B), (A, C), (A, E), (B, C), (B, E), and (C, E). The edges (A, B) and (A, E) are drawn for the first time, so they are labeled <3>. The edges (A, C), (B, C), (B, E), and (C, E) were already drawn while processing transactions T1 and T2, so their labels are updated by appending the transaction id 3, giving the new labels <1, 3>, <2, 3>, <2, 3> and <2, 3> respectively. For transaction T4, the corresponding edge (B, E) is also already drawn, and hence the transaction id 4 is appended to its existing label <2, 3>, updating it to <2, 3, 4>.
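The construction just described can be sketched programmatically. The following minimal Python sketch is illustrative only (the data structures and names are not taken from the paper): the graph is kept as a dictionary mapping each unordered item pair (edge) to the list of transaction ids forming its label, so the database of Table 1 is read exactly once.

```python
from itertools import combinations

# Sample database of Table 1: transaction id -> items
transactions = {
    1: {"A", "C", "D"},
    2: {"B", "C", "E"},
    3: {"A", "B", "C", "E"},
    4: {"B", "E"},
}

def build_graph(transactions):
    """Single pass over the database: each edge (an unordered item pair)
    accumulates the ids of all transactions containing both of its items."""
    edges = {}  # frozenset({item1, item2}) -> list of transaction ids (the label)
    for tid, items in transactions.items():
        for pair in combinations(sorted(items), 2):
            edges.setdefault(frozenset(pair), []).append(tid)
    return edges

graph = build_graph(transactions)
# The edge (B, E) ends up labeled <2, 3, 4>, as in the worked example above.
print(sorted(graph[frozenset({"B", "E"})]))  # [2, 3, 4]
```

Because each edge accumulates its full label in this single pass, no further database scans are needed in the second phase.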

GENERATION OF ITEMSETS

After the graph is constructed by the above process, a table is created for finding the most frequent itemset. The table has six columns with the headings Label, Length (L), Frequency (F), L × F, Edges and Itemset, as shown in Table 2. The attributes of the table are defined as follows:
i) Label: the label of the edges generated by the graph construction procedure described above.
ii) Length (L): the length of the Label, i.e. the number of transaction ids that are part of the Label.
iii) Frequency (F): the number of occurrences of the Label in the graph, i.e. the number of edges carrying it.
iv) L × F: the product of Length and Frequency; it is the selection criterion for finding the frequently occurring itemsets.
v) Edges: the edges that have been labeled with the given Label.
vi) Itemset: the corresponding itemset; it contains the distinct items that are part of the corresponding Edges.

FINDING THE FREQUENT ITEMSETS

The method for finding frequent itemsets is straightforward. Once the table is constructed, frequent itemsets can be found by analyzing the proposed selection criterion L × F. The row having the highest value of L × F gives the largest and most frequent itemset. In our example, the highest value of L × F is given by the second row of Table 2, so the most frequent largest itemset is {B, C, E}. This frequent itemset is computed irrespective of any given threshold. The same itemset is produced [16] when the Apriori algorithm, with a support threshold of 2, is applied to the sample data given in Table 1. If one is interested in the most frequent itemset of length (size) 2, it can easily be selected from Table 2: the most frequent itemset of length 2 is {B, E}, represented by the first row of the table, which corresponds to the highest value of L × F among itemsets of length 2.
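Continuing the sketch above, the second phase groups the edges by their label and ranks the resulting itemsets by the L × F criterion. The code below is illustrative only; it rebuilds the content of the paper's Table 2 from the worked example (the row order of the published table is not reproduced) and reports the same results: {B, C, E} overall and {B, E} among itemsets of size 2.

```python
def build_label_table(edges):
    """Group edges by their label and compute Length (L), Frequency (F) and L * F."""
    grouped = {}
    for pair, tids in edges.items():
        label = tuple(sorted(tids))
        row = grouped.setdefault(label, {"edges": [], "itemset": set()})
        row["edges"].append(tuple(sorted(pair)))
        row["itemset"].update(pair)
    rows = []
    for label, row in grouped.items():
        L, F = len(label), len(row["edges"])
        rows.append({"label": label, "L": L, "F": F, "LxF": L * F,
                     "edges": row["edges"], "itemset": sorted(row["itemset"])})
    return rows

rows = build_label_table(graph)  # `graph` comes from the previous sketch
best = max(rows, key=lambda r: r["LxF"])
print(best["itemset"])           # ['B', 'C', 'E']  (highest L x F overall)

# Most frequent itemset of size 2: highest L x F among rows with a 2-item itemset
best2 = max((r for r in rows if len(r["itemset"]) == 2), key=lambda r: r["LxF"])
print(best2["itemset"])          # ['B', 'E']
```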
VI. EFFICIENCY OF THE PROPOSED APPROACH

The proposed approach is efficient from the point of view of database scans. Most of the previous methods perform repeated scans of the database, which wastes CPU time and degrades the overall performance. Another criterion for measuring efficiency is the effect of a change in the support threshold value. Several previous methods, including Apriori, require the complete procedure to be repeated if the user-specified threshold value is changed, but the present approach is free from such drawbacks. Once the graph and the table for finding frequent itemsets are constructed, the most frequent itemsets of any given length can be found, even when the threshold value changes, without any rework. When applied to the sample databases given in [16], [19], [20], the proposed approach reported the same frequent itemsets in a single database scan as those produced by the respective approaches using multiple database scans. Using the proposed approach, even the next most frequent itemsets, which are not discussed in the previous approaches, can be found; the next most frequent itemset corresponds to the next highest value of L × F for any given length. The proposed approach can be applied directly to small and medium-sized databases. For large databases, vertical partitioning can be used as a preprocessing step to filter out the infrequent items; this preprocessing step keeps the numbers of nodes and edges manageable.

VII. CONCLUSION

Finding the frequent itemsets is a major phase of association rule mining. This paper presents a new, graph-based approach for finding the frequent itemsets which overcomes two major drawbacks of previous approaches. The method adopted to identify the largest frequent itemsets is simple and straightforward: a graph is used to represent the entire database, and the graph can be constructed by scanning the database only once. Changing the minimum support (threshold value) does not affect the constructed graph or the table, so no additional work is needed. Also, unlike other approaches, the approach can find frequent itemsets without generating candidate itemsets, reducing the number of repeated database scans and hence the run time and memory consumption. The method also suits databases that are updated frequently, because adding new transactions or items only requires adding a few nodes and edges to the graph and amending the table.

REFERENCES

[1] U.M. Fayyad et al., Advances in Knowledge Discovery and Data Mining, AAAI/MIT Press, 1996.
[2] G. Piatetsky-Shapiro and W.J. Frawley, Knowledge Discovery in Databases, AAAI/MIT Press, 1991.
[3] R. Agrawal, T. Imielinski and A. Swami, "Mining association rules between sets of items in large databases," Proceedings of the ACM SIGMOD International Conference on Management of Data, pp. 207-216, 1993.
[4] R. Agrawal and R. Srikant, "Fast algorithms for mining association rules," Proceedings of the 20th International Conference on Very Large Data Bases, pp. 487-499, 1994.
[5] T.H.N. Vu et al., "Spatiotemporal pattern mining technique for location-based service system," ETRI Journal, pp. 421-431, 2008.
[6] J. Han, J. Pei and Y. Yin, "Mining frequent patterns without candidate generation," Proceedings of the ACM SIGMOD International Conference on Management of Data, pp. 1-12, 2000.
[7] J. Pei et al., "H-Mine: Hyper-structure mining of frequent patterns in large databases," Proceedings of the IEEE International Conference on Data Mining, pp. 441-448, 2001.
[8] H. Toivonen, "Sampling large databases for association rules," Proceedings of the 22nd VLDB Conference, pp. 134-145, 1996.
[9] S. Brin et al., "Dynamic itemset counting and implication rules for market basket data," Proceedings of the ACM SIGMOD, pp. 255-264, 1997.
[10] D.I. Lin et al., "Pincer search: a new algorithm for discovering the maximum frequent set," Proceedings of the 6th International Conference on Extending Database Technology: Advances in Database Technology, pp. 105-119, 1998.
[11] M.J. Zaki et al., "Parallel algorithms for discovery of association rules," Data Mining and Knowledge Discovery, pp. 343-373, 1997.
[12] S. Sahaphong and V. Boonjing, "The combination approach to frequent itemsets mining," Proceedings of the 3rd International Conference on Convergence and Hybrid Information Technology, pp. 565-570, 2008.
[13] V.K. Shrivastava et al., "FP-tree and COFI based approach for mining of multiple level association rules in large database," International Journal of Computer Science and Information Security, pp. 273-279, 2010.
[14] M.J. Zaki, "Scalable algorithms for association mining," IEEE Transactions on Knowledge and Data Engineering, pp. 372-390, 2000.
[15] A. Savasere et al., "An efficient algorithm for mining association rules in large databases," Proceedings of the 21st International Conference on Very Large Data Bases (VLDB), pp. 432-444, 1995.
[16] M.-S. Chen et al., "Data mining: an overview from a database perspective," IEEE Transactions on Knowledge and Data Engineering, pp. 866-882, 1996.
[17] META Group, Data Warehouse Marketing Trends/Opportunities: An In-Depth Analysis of Key Market Trends, META Group, 1998.
[18] J. Han and M. Kamber, Data Mining: Concepts and Techniques, Elsevier, pp. 227-231, 2006.
[19] Q. Zhao and S.S. Bhowmick, "Association rule mining: a survey," Technical Report, CAIS, Nanyang Technological University, Singapore, No. 2003116, pp. 1-20, 2003.
[20] S. Sahaphong and V. Boonjing, "IIS-Mine: a new efficient method for mining frequent itemsets," Maejo International Journal of Science and Technology, 6(01), pp. 130-151, 2012.