Copyright 2015, American-Eurasian Network for Scientific Information publisher JOURNAL OF APPLIED SCIENCES RESEARCH ISSN: 1819-544X EISSN: 1816-157X Journal home page: http://www.aensiweb.com/jasr 2015 October; 11(19): pages 84-88. Published Online 10 November 2015 Research Article

A Novel Algorithm To Achieve Privacy In High Utility Item Sets Based On Transaction Splitting

1 S.R. Chitra Devi and 2 P. Tharcis
1 PG Scholar, Department of Computer Science and Engineering, Christian College of Engineering and Technology, Dindigul, Tamilnadu-624619, India.
2 Assistant Professor, Department of Electronics and Communication Engineering, Christian College of Engineering and Technology, Dindigul, Tamilnadu-624619, India.
Received: 23 September 2015; Accepted: 25 October 2015
2015 AENSI PUBLISHER All rights reserved

ABSTRACT
Data mining methods are very useful for efficiently discovering hidden, interesting and necessary information in huge databases. Extracting high utility itemsets (HUIs), that is, itemsets with high profit, is a demanding task for data mining techniques. In many cases the large number of HUIs reduces efficiency. To overcome this issue we introduce new algorithms: AprioriHC (an Apriori-based algorithm for mining Closed High utility itemsets), AprioriHC-D (AprioriHC with Discarding unpromising and isolated items) and CHUD (Closed High Utility itemset Discovery).
Another method, called DAHU (Derive All High Utility itemsets), is introduced to recover all HUIs from the set of closed high utility itemsets (CHUIs) without accessing the original database. However, mining high utility itemsets in industrial applications raises a privacy issue. To avoid this issue we introduce the concept of splitting long transactions rather than truncating them. This algorithm achieves high privacy for high utility itemsets with minimum time complexity.

Keywords: Frequent itemset mining, Utility mining, Data mining

INTRODUCTION

Frequent itemset mining (FIM) treats all items as having the same importance (unit profit or weight) and assumes that every item in a transaction appears in binary form, i.e., an item is either present or absent in a transaction, which does not indicate its purchase quantity. Hence, FIM cannot satisfy the requirement of users who wish to discover itemsets with high utilities, such as high profits. The utility of an itemset represents its importance, which can be measured in terms of weight, profit, cost, quantity or other information depending on user preference. An itemset is called a high utility itemset (HUI) if its utility is no less than a user-specified minimum utility threshold; otherwise, it is called a low utility itemset. Utility mining is an important task with a wide range of applications, such as website click stream analysis, cross marketing in retail stores and mobile commerce. However, HUI mining is not an easy task, since the downward closure property of FIM does not hold in utility mining. In other words, the search space for mining HUIs cannot be directly reduced as it is in FIM, because a superset of a low utility itemset can still be a high utility itemset. Many studies [2], [6], [13] have been proposed for mining HUIs, but they often present a large number of high utility itemsets to users, which makes it difficult for users to comprehend the results.
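The utility-threshold definition above can be illustrated with a minimal sketch. The database layout (per-transaction purchase quantities plus a separate unit-profit table) and the names `utility` and `min_util` are illustrative assumptions, not the paper's notation, and the brute-force enumeration is for exposition only.

```python
# Minimal sketch of high utility itemset (HUI) filtering.
# Assumed layout: each transaction maps item -> purchase quantity,
# and a separate table gives each item's unit profit.
from itertools import combinations

transactions = [
    {"a": 2, "b": 1},           # quantities per item
    {"a": 1, "b": 3, "c": 2},
    {"b": 2, "c": 1},
]
profit = {"a": 5, "b": 2, "c": 3}  # unit profit per item

def utility(itemset, db):
    """Sum of quantity * unit profit over all transactions containing the itemset."""
    total = 0
    for t in db:
        if all(i in t for i in itemset):
            total += sum(t[i] * profit[i] for i in itemset)
    return total

def high_utility_itemsets(db, min_util):
    """Brute-force enumeration: keep itemsets whose utility meets the threshold."""
    items = sorted({i for t in db for i in t})
    huis = {}
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            u = utility(cand, db)
            if u >= min_util:       # HUI: utility no less than min_util
                huis[cand] = u
    return huis

print(high_utility_itemsets(transactions, min_util=15))
```

With `min_util=15`, the single item ('b',) has utility 12 and falls below the threshold, yet its superset ('a','b') has utility 23 and qualifies; this is exactly why the downward closure property fails for utility mining.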
It may also cause the algorithms to become inefficient in terms of time and memory, or even to run out of memory: the more high utility itemsets the algorithms generate, the more processing they require. The performance of the mining task degrades greatly for low minimum utility thresholds or when dealing with dense databases.

Corresponding Author: S.R. Chitra Devi, PG Scholar, Department of Computer Science and Engineering, Christian College of Engineering and Technology, Dindigul, Tamilnadu-624619, India. E-mail: chitra@gmail.com

In FIM, to reduce the computational cost of the mining task and present fewer but more important patterns to users, many studies have focused on developing concise representations. A high utility itemset is based on the profit of the product. While considering utility, redundancy may occur, so the redundancy in the high utility itemsets needs to be reduced. After finding the high utility itemsets, the closed itemsets, which group related itemsets, are found. The unit profits and purchase quantities of the items are not taken into consideration in frequent itemset mining.

Fig. 1: Process flow for finding HUIs and FIM.

Fig. 1 illustrates that there are two databases, one storing the transactions and another the profit details. High utility itemsets are found using AprioriHC-D (AprioriHC with Discarding unpromising and isolated items), which removes infrequent itemsets. With the help of CHUD (Closed High Utility itemset Discovery), the closed itemsets are found. An itemset is closed if none of its immediate supersets has the same support as the itemset. An itemset is maximal frequent if none of its immediate supersets is frequent. After finding the closed itemsets, the unpromising items are removed to reduce redundancy. AprioriHC and AprioriHC-D are based on the Apriori algorithm; they use a horizontal database and explore the search space of CHUIs in a breadth-first search. AprioriHC is regarded as a baseline algorithm in this work and AprioriHC-D is an improved version of it. DAHU (Derive All High Utility itemsets) is used to derive all the high utility itemsets from the CHUI set. For a long transaction, truncation may cause information loss; instead, splitting methods are used to split the long transaction. Finally, the frequent itemsets are found and the result is displayed. The remainder of this paper is organized as follows.
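The closed-itemset condition described above (no immediate superset with the same support) can be checked with a minimal sketch; the helper names `support` and `is_closed` are illustrative, not taken from the CHUD algorithm.

```python
# Minimal sketch: an itemset is closed if no immediate superset
# (the itemset plus exactly one extra item) has the same support,
# where support = number of transactions containing the itemset.
transactions = [
    {"a", "b", "c"},
    {"a", "b"},
    {"b", "c"},
]

def support(itemset, db):
    return sum(1 for t in db if itemset <= t)

def is_closed(itemset, db):
    all_items = set().union(*db)
    s = support(itemset, db)
    # closed iff every immediate superset has strictly smaller support
    return all(support(itemset | {x}, db) < s for x in all_items - itemset)

print(is_closed({"a"}, transactions))       # "a" always co-occurs with "b" here
print(is_closed({"a", "b"}, transactions))
```

Here {"a"} is not closed because {"a","b"} has the same support (2), while {"a","b"} is closed; this is how closed itemsets give a concise representation without losing support information.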
Section II describes the related work. Section III summarizes the conclusion.

II. Related Work:

Different algorithms have been proposed for finding high utility itemsets and for frequent itemset mining; their pros and cons are listed below.

A. High Utility Itemsets and Frequent Itemsets:

1. CTU-PROL algorithm: Improves performance by reducing both the search space and the number of candidates. UP-Growth in particular not only reduces the number of candidates effectively but also substantially outperforms other algorithms in terms of runtime. However, it does not provide comprehensible results to the users, many useless candidates are generated before being pruned, and it carries more computation and is thus slower.

2. Sparse vector mechanism: Discovers frequent itemsets using a small portion of the privacy budget. The algorithm uses the remaining privacy budget to efficiently and effectively build a differentially private FP-tree.
For long transactions the computation may take a long time, and if truncation is done then information loss may occur; users expect the result in a concise manner without any information loss.

3. Projection-based algorithm: Each candidate sequence can be thought of as a query condition, and the tuples in the database that contain these sequences are projected. The support count for each candidate sequence can also be easily obtained from the tuples. It effectively skips unpromising itemsets and thus further saves execution time. The indexing mechanism can imitate traditional projection algorithms to achieve the aim of projecting sub-databases for mining. If transactions are long, however, the performance degrades; the user does not get a concise result and redundancy may occur.

4. PrivBasis algorithm: It meets the challenge of high dimensionality by projecting the input dataset onto a small number of selected dimensions that one cares about. In fact, PrivBasis often uses several sets of dimensions for such projections, to avoid any one set containing too many dimensions. Each basis in B corresponds to one such set of dimensions for projection. These techniques enable one to select which sets of dimensions are most helpful for finding the k most frequent itemsets. When the number of columns N is large, the truncated frequency approach is completely ineffective. The reason the Truncation Frequency (TF) method does not scale is that when one needs to select the top k itemsets from a large set U of candidates, the large size of U causes two difficulties. The first concerns running time; the second concerns accuracy: even if every single low-frequency itemset in U is chosen with only a small probability, the sheer number of such low-frequency itemsets means that the k selected itemsets likely include many infrequent ones.
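The truncation problem noted above, where items dropped from long transactions are simply lost, motivates the paper's splitting alternative. A minimal sketch, assuming a hypothetical maximum transaction length `max_len` and a naive contiguous partition (real differentially private splitting schemes choose the partition more carefully to preserve frequent patterns):

```python
# Minimal sketch: instead of truncating a long transaction to max_len items
# (discarding the rest), split it into several sub-transactions so that
# every item is preserved.
def truncate(transaction, max_len):
    return transaction[:max_len]            # items beyond max_len are lost

def split(transaction, max_len):
    # contiguous chunks of at most max_len items; no item is discarded
    return [transaction[i:i + max_len]
            for i in range(0, len(transaction), max_len)]

long_tx = ["a", "b", "c", "d", "e"]
print(truncate(long_tx, 2))   # loses c, d, e
print(split(long_tx, 2))      # keeps all items across three sub-transactions
```

Splitting bounds the per-transaction length (which is what limits sensitivity in differentially private mining) while keeping every item available to the miner, at the cost of breaking co-occurrences across chunk boundaries.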
The TF technique tries to address the running time challenge by pruning the search space, but it does not address the accuracy challenge; it treats only one symptom caused by a large candidate set, not the root cause.

5. IUPG algorithm: These algorithms were evaluated on synthetic and real datasets for different support thresholds. The IUPG algorithm performs better than the UPG algorithm for different support values, and it scales well as the size of the transaction database increases. The goal of utility mining is to discover all the high utility itemsets whose utility values are beyond a user-specified threshold in a transaction database. Although the DGU and DGN strategies efficiently reduce the number of candidates in Phase 1 (i.e., the global UP-Tree), they cannot be applied during the construction of the local UP-Tree (Phase 2). Instead, the DLU strategy (Discarding Local Unpromising items) is used to discard the utilities of low utility items from the path utilities, and the DLN strategy (Discarding Local Node utilities) to discard the item utilities of descendant nodes during local UP-Tree construction.

6. ClaSP (Closed Sequential Patterns algorithm): The task of discovering the set of all frequent sequences in large databases is challenging, as the search space is extremely large. Different strategies have been proposed so far, among which SPADE, exploiting a vertical database format [9], and FreeSpan and PrefixSpan, based on projected pattern growth [3,5], are the most popular. These strategies show good performance on databases containing short frequent sequences or when the support threshold is not very low. Unfortunately, when long sequences are mined, or when a very low support threshold is used, the performance of such algorithms decreases dramatically and the number of frequent patterns increases sharply, resulting in too many meaningless and redundant patterns.
Even worse, sometimes it is impossible to complete the algorithm execution due to a memory overflow. There are two main changes in ClaSP with respect to SPADE: (1) a step to check whether the subtree of a pattern can be skipped; and (2) a step where the remaining non-closed patterns are eliminated. For non-closed pattern elimination, the process uses a hash function with the support of a pattern as key and the pattern itself as value. If two patterns have the same support, we check whether one contains the other, and if so we remove the shorter pattern. However, ClaSP does not find the items which have high utility.

7. SPADE (Sequential PAttern Discovery using Equivalence classes): The existing solutions to this problem make repeated database scans and use complex hash structures which have poor locality. SPADE utilizes combinatorial properties to decompose the original problem into smaller sub-problems that can be independently solved in main memory using efficient lattice search techniques and simple join operations. Thus, depending on the amount of main memory available, large classes can be recursively partitioned into smaller ones until each class is small enough to be solved independently in main memory. However, to implement the subsequence pruning step efficiently, all frequent sequences from the previous levels must be stored in some sort of hash structure, and compression cannot be done in SPADE.

8. Private Frequent Pattern Mining algorithm: It is well recognized that simple anonymization schemes that only remove obvious identifiers carry serious risks to privacy. Even privacy-preserving graph mining techniques based on k-anonymity are now often considered to offer insufficient privacy under strong attack models. Recently, the model of differential privacy was proposed to restrict the inference of private information even in the presence of a strong adversary. It requires that the output of a differentially private algorithm be nearly identical (in a probabilistic sense) whether or not a participant contributes her data to the dataset. The key challenge of handling graph datasets is the unknown output space when applying the exponential mechanism. The Diff-FPM algorithm meets this challenge by unifying frequent pattern mining and differential privacy in an MCMC sampling framework. However, removing a node in a star graph may result in an empty graph.

9. CHARM: It is not necessary to mine all frequent itemsets in the first step; it is sufficient to mine the set of closed frequent itemsets, which is much smaller than the set of all frequent itemsets. It is also not necessary to mine the set of all possible rules, since any rule between itemsets is equivalent to some rule between closed itemsets, so many redundant rules can be eliminated. CHARM is unique in that it simultaneously explores both the itemset space and the tidset space, unlike previous association mining methods, which only exploit the itemset space. When the number of closed itemsets itself becomes very large, the set of all closed itemsets cannot be kept in memory. CHARM does better at lower supports.

10.
EFIM (EFficient high-utility Itemset Mining): EFIM introduces several new ideas to discover high-utility itemsets more efficiently, in both execution time and memory. It relies on two upper bounds, named sub-tree utility and local utility, to prune the search space more effectively. It also introduces a novel array-based utility counting technique, Fast Utility Counting, to calculate these upper bounds in linear time and space. Transaction merging is obviously desirable; however, a key problem is to implement it efficiently. The naive approach to identifying identical transactions is to compare all transactions with each other, but this is inefficient (O(n^2), where n is the number of transactions). To find identical transactions in O(n) time, the original database is sorted according to a new total order T on transactions. Sorting is achieved in O(n log n) time and is performed only once. Projected databases generated by EFIM are often very small due to transaction merging.

Conclusion:

Different algorithms have been proposed for finding high utility itemsets, but due to redundancy they may not be effective. The proposed algorithms are therefore used to find HUIs without redundancy, and DAHU is used to derive all high utility itemsets from the closed ones. Closed itemsets play a vital role in HUI mining, and they are identified with the help of these algorithms. In previous work a truncation technique was applied to long transactions; since truncation may lose information, a splitting technique is instead introduced for long transactions.

References

1. Su, S. and S. Xu, 2015. "Differentially private frequent itemset mining via transaction splitting", IEEE Trans. Knowl. Data Eng., 27(7).
2. Ahmed, C.F., S.K. Tanbeer, B.S. Jeong and Y.K. Lee, 2009. "Efficient tree structures for high utility pattern mining in incremental databases", IEEE Trans. Knowl. Data Eng., 21(12): 1708-1721.
3. Tseng, V.S., 2015. "Efficient algorithms for mining the concise and lossless representation of high utility itemsets", IEEE Trans. Knowl. Data Eng., 27(3).
4. Lan, G.C., T.P. Hong and V.S. Tseng, 2014. "An efficient projection-based indexing approach for mining high utility itemsets", Knowl. Inf. Syst., 38(1): 85-107.
5. Chan, R., Q. Yang and Y. Shen, "Mining high utility itemsets", Proc. IEEE Int. Conf. Data Min., 19-26.
6. Li, N., W. Qardaji, D. Su and J. Cao, 2012. "PrivBasis: Frequent itemset mining with differential privacy", Proc. VLDB Endowment, 5(11): 1340-1351.
7. Bonomi, L. and L. Xiong, 2013. "A two-phase algorithm for mining sequential patterns with differential privacy", Proc. 22nd ACM Conf. Inf. Knowl. Manage., 269-278.
8. Shen, E. and T. Yu, 2013. "Mining frequent graph patterns with differential privacy", Proc. ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, 545-553.
9. Lin, C.W., T.P. Hong and W.H. Lu, 2011. "An effective tree structure for mining high utility itemsets", Expert Syst. Appl., 38(6): 7419-7424.
10. Agrawal, R. and R. Srikant, "Fast algorithms for mining association rules", Proc. 20th Int. Conf. Very Large Data Bases, 487-499.
11. Erwin, A., R.P. Gopalan and N.R. Achuthan, "Efficient mining of high utility itemsets from large datasets", Proc. Pacific-Asia Conf. Knowl. Discovery Data Mining, 554-561.
12. Liu, Y., W. Liao and A. Choudhary, "A fast high utility itemsets mining algorithm", Proc. Utility-Based Data Mining Workshop, 90-99.
13. Lucchese, C., S. Orlando and R. Perego, 2006. "Fast and memory efficient mining of frequent closed itemsets", IEEE Trans. Knowl. Data Eng., 18(1): 21-36.
14. Tseng, V.S., C.W. Wu, B.E. Shie and P.S. Yu, "UP-Growth: An efficient algorithm for high utility itemset mining", Proc. ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, 253-262.
15. Wu, C.W., B.E. Shie, V.S. Tseng and P.S. Yu, "Mining top-k high utility itemsets", Proc. ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, 78-86.
16. Wu, C.W., P. Fournier-Viger, P.S. Yu and V.S. Tseng, "Efficient mining of a concise and lossless representation of high utility itemsets", Proc. IEEE Int. Conf. Data Mining, 824-833.