Evaluating the Classification Accuracy of Data Mining Algorithms for Anonymized Data


International Journal of Computer Science and Telecommunications [Volume 3, Issue 8, August 2012], ISSN 2047-3338

Evaluating the Classification Accuracy of Data Mining Algorithms for Anonymized Data

M. Sridhar (1) and Dr. B. Raveendra Babu (2)
Department of Computer Applications, R.V.R. & J.C. College of Engineering, Guntur, India
(1) mandapati_s@yahoo.com, (2) rbhogapathi@yahoo.com

Abstract: Recent advances in hardware technology have increased the capacity to store and record personal data about individuals. This has created fears that such data could be misused. To alleviate these concerns, data is anonymized, and many techniques have recently been proposed for performing data mining tasks in ways that preserve privacy. Anonymization techniques draw on a variety of related topics such as data mining, cryptography and information hiding. Data is anonymized through methods such as randomization, k-anonymity and l-diversity. Several privacy-preserving data mining algorithms are available in the literature. This paper investigates the classification accuracy of data with and without k-anonymization in order to compare the efficiency of privacy-preserving mining. Classification accuracy is evaluated using k-nearest neighbor, J48 and Bagging.

Index Terms: Privacy-Preserving Data Mining, k-Anonymity, k-Nearest Neighbor, J48, Bagging

I. INTRODUCTION

To maintain the privacy of data during the data mining process, the data is anonymized. Several techniques for privacy-preserving data mining are available in the literature [1], [2], [3]. Most privacy-preserving methods use some form of data transformation to ensure privacy. Usually such methods reduce the granularity of representation in order to preserve privacy. This reduction causes some loss of effectiveness in data management or mining algorithms, a trade-off between information loss and privacy.
Some commonly used anonymization techniques are given below.

The randomization method: Randomization was originally used to distort data by a probability distribution in surveys subject to an evasive-answer bias arising from privacy concerns [4], [5]. In privacy-preserving data mining, randomization adds noise to the data to mask records' attribute values [6], [7]. The added noise is large enough that individual record values cannot be recovered. Instead, techniques derive aggregate distributions from the perturbed records, and data mining techniques are then developed to work with such aggregate distributions.

The k-anonymity model and l-diversity: The k-anonymity model was developed because of the possibility of indirect record identification from public databases, where combinations of record attributes can be used to identify individual records. In the k-anonymity method, the granularity of data representation is reduced using techniques such as generalization and suppression, so that any record maps onto at least k other data records. In generalization, attribute values are generalized to reduce representation granularity; for example, a date of birth can be generalized to a range such as the year of birth, in order to reduce identification risk. In suppression, the attribute value is removed entirely. These methods lower the risk of identification through public records, while reducing the accuracy of applications run on the transformed data. The l-diversity model addresses a weakness of the k-anonymity model: protecting identities to the level of k individuals is not the same as protecting the corresponding sensitive values when the sensitive values within a group are homogeneous. To handle this, intra-group diversity of sensitive values is promoted within the anonymization scheme [8].
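The two checks described above, at least k records per quasi-identifier combination and at least l distinct sensitive values per group, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the table, column names and thresholds are invented for the example.

```python
# Sketch: checking k-anonymity and l-diversity of a toy generalized table.
from collections import defaultdict

def equivalence_classes(rows, quasi_identifiers):
    """Group rows by their quasi-identifier values."""
    groups = defaultdict(list)
    for row in rows:
        key = tuple(row[q] for q in quasi_identifiers)
        groups[key].append(row)
    return groups

def is_k_anonymous(rows, quasi_identifiers, k):
    """Every quasi-identifier combination must occur at least k times."""
    return all(len(g) >= k
               for g in equivalence_classes(rows, quasi_identifiers).values())

def is_l_diverse(rows, quasi_identifiers, sensitive, l):
    """Every equivalence class must contain at least l distinct sensitive values."""
    return all(len({r[sensitive] for r in g}) >= l
               for g in equivalence_classes(rows, quasi_identifiers).values())

# Generalized records: exact ages already replaced by ranges (generalization).
table = [
    {"age": "30-39", "country": "United-States", "income": "<=50K"},
    {"age": "30-39", "country": "United-States", "income": ">50K"},
    {"age": "40-49", "country": "Caribbean", "income": "<=50K"},
    {"age": "40-49", "country": "Caribbean", "income": ">50K"},
]

print(is_k_anonymous(table, ["age", "country"], k=2))               # True
print(is_l_diverse(table, ["age", "country"], "income", l=2))       # True
```

Each equivalence class here holds two records and two distinct income values, so the table satisfies both 2-anonymity and 2-diversity.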
Hence, the l-diversity technique was proposed to maintain a minimum group size of k while also maintaining the diversity of the sensitive attributes. The t-closeness model improves on the l-diversity concept. In [9], a t-closeness model was proposed using the property that the distance between the distribution of a sensitive attribute within an anonymized group and its global distribution should not exceed a threshold. The Earth Mover distance metric is used to quantify the distance between two distributions. The t-closeness approach is more effective than other privacy-preserving data mining methods for numeric attributes.

Distributed privacy preservation: In some cases, individual entities wish to derive aggregate results from data sets partitioned across them. The goal of distributed methods for privacy-preserving data mining is to allow the computation of aggregate statistics over a data set without compromising the privacy of any participant's individual data set. Participants may thus collaborate to obtain aggregate results without trusting each other with the distribution of their own data sets. The partitioning can be horizontal, where records are distributed across multiple entities, or vertical, where attributes are distributed across multiple entities. While individual entities may not share their entire data sets, they may consent to limited sharing through various protocols. These methods ensure

privacy for every individual entity while computing aggregate results over the entire data.

Downgrading application effectiveness: In some cases, even though the data itself may be unavailable, the output of applications such as association rule mining, classification or query processing may lead to privacy violations. This has prompted research on downgrading application effectiveness by modifying either the data or the application. Examples of such techniques are association rule hiding [10], classifier downgrading [11] and query auditing [12]. Two approaches are used for association rule hiding. Distortion: in distortion [13], the entry for a given transaction is changed to another value. Blocking: in blocking [14], the entry remains the same but is left incomplete. Classifier downgrading modifies the data so that classification accuracy is reduced while data utility is retained for other applications. Query auditing denies one or more queries from a sequence; the queries to be denied are chosen so that the sensitivity of the underlying data is preserved.

Data anonymization techniques for structured data, including tabular, graph and item-set data, have been under investigation recently. They publish detailed information, permitting ad hoc queries and analyses, while guaranteeing the privacy of sensitive information against a variety of attacks.

This paper investigates the classification accuracy of data with and without k-anonymization in order to compare the efficiency of privacy-preserving mining in extracting the desired patterns from a dataset. Classification accuracy is evaluated using k-nearest neighbor, J48 and Bagging, with the Adult dataset used for evaluation. The paper is organized as follows: Section II reviews related work in the literature, Section III details the methodology, Section IV gives the results and Section V concludes the paper.

II. LITERATURE REVIEW

Aggarwal, et al. [15] reviewed the state-of-the-art methods for privacy.
The study discussed: methods for randomization, k-anonymization and distributed privacy-preserving data mining; cases where the output of data mining applications needs to be sanitized for privacy-preservation purposes; methods for distributed privacy-preserving mining that handle horizontally and vertically partitioned data; issues in downgrading the effectiveness of data mining and data management applications; and limitations of the privacy problem.

Bayardo, et al. [16] proposed an optimization algorithm for k-anonymization. Optimal k-anonymization of records is NP-hard, which poses significant computational challenges. The proposed method explores the space of possible anonymizations and develops strategies to reduce computation. Census data was used to evaluate the algorithm. Experiments show that the method finds optimal k-anonymizations under a wide range of k, and the effects of different coding approaches on anonymization quality and performance were studied.

LeFevre, et al. [17] proposed a suite of anonymization algorithms for producing an anonymous view based on a target class of workloads, consisting of several data mining tasks and selection predicates. The proposed algorithms preserve the utility of the data while creating an anonymous data snapshot. The following expressive workload characteristics were incorporated: classification and regression, for predicting categorical and numeric attributes; multiple target models, to predict several different attributes; and selection and projection, to ensure that only a subset of the data remains useful for a particular task. The results show that the method produces high-quality data for a variety of workloads.

Inan, et al. [18] proposed a new approach for building classifiers from anonymized data by modeling anonymized data as uncertain data.
In the proposed method, no probability distribution over the data is assumed. Instead, the necessary statistics are collected during anonymization and released together with the anonymized data; it was demonstrated that releasing these statistics does not violate anonymity. The aim was to compare the accuracy of (1) various classification models constructed over anonymized data sets transformed through various heuristics, and (2) the approach of modeling anonymized data as uncertain data using expected distance functions. The experiments investigate the relationship between accuracy and the anonymization parameters: the anonymity requirement k; the number of quasi-identifiers; and, especially for distributed data mining, the number of data holders. Experiments were conducted with Support Vector Machine (SVM) and Instance-Based learning (IB) classification methods, using the Java source code of the libsvm SVM library and the IB1 instance-based classifier from Weka. DataFly was selected for anonymization as it is a widely known solution. Experiments spanning many alternatives in both local and distributed data mining settings revealed that the method performed better than the heuristic approaches used to handle anonymized data.

Fung, et al. [19] presented Top-Down Specialization (TDS), a practical algorithm for determining a generalized version of the data that conceals sensitive information while remaining usable for modeling classification. Data generalization is implemented by specializing, i.e., detailing, the information level in a top-down manner until a minimum privacy requirement would be violated. TDS generalizes by iteratively specializing the data from its most general state. At every step, a general (parent) value is specialized into a specific (child) value for a categorical attribute, or an interval is divided into two sub-intervals for a continuous attribute. This process repeats until further specialization would violate the anonymity requirement. Top-down specialization handles categorical and continuous attributes efficiently. The approach exploits the idea that data contains structures that are redundant for classification: while generalization may eliminate some structures, others emerge to help. Experimental results show that classification

quality can be preserved even for highly restrictive privacy requirements. TDS scales well to large data sets and complex anonymity requirements. This work is applicable in the public and private sectors, where information is shared for mutual benefit and productivity.

III. METHODOLOGY

A. Dataset

The Adult dataset used for evaluation is obtained from the UCI Machine Learning Repository [20]. The dataset contains 48,842 instances, with both categorical and integer attributes drawn from the 1994 Census information. The portion used here contains about 32,000 rows with four numerical columns and the following ranges: age {17-90}, fnlwgt {10000-1500000}, hrsweek {1-100} and edunum {1-16}. The age and native-country columns are anonymized using the principles of k-anonymization. Tables I and II show the original data and the modified attribute data. Both the original and the k-anonymized datasets are classified using 10-fold cross-validation.

B. k-Nearest Neighbor

In k-nearest neighbor classification, the algorithm finds the set of k objects in the training set that are closest to the input and classifies the input by the majority class in that group [21]. The main elements required for this approach are a set of labeled objects, a distance metric and the number k. The k-nearest neighbor classification algorithm is:

Input: a training set D of labeled objects (x_i, y_i) and a test object (x', y').
Process: compute the distance d(x', x_i) between the test object and every training object; select D_z, the set of the k training objects closest to the test object.
Output: y' = argmax_v sum of I(v = y_i) over (x_i, y_i) in D_z,

where v is a class label and I is the indicator function.

C. J48

The J48 algorithm is a version of the C4.5 decision tree learner [22]; decision tree models are formed by the J48 implementation. The decision trees are built with a greedy technique by analyzing the training data.
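The greedy, information-theoretic attribute selection used by such tree learners can be sketched in a few lines. The sketch below computes plain information gain, which C4.5/J48 refines into the gain ratio; the toy records and column names are illustrative, not the paper's data.

```python
# Sketch: the information-theoretic split measure behind C4.5-style trees.
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attribute, target):
    """Reduction in label entropy obtained by splitting rows on `attribute`."""
    base = entropy([r[target] for r in rows])
    remainder = 0.0
    for value in {r[attribute] for r in rows}:
        subset = [r[target] for r in rows if r[attribute] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return base - remainder

rows = [
    {"age": "adult",       "country": "US",    "cls": "<=50K"},
    {"age": "adult",       "country": "US",    "cls": "<=50K"},
    {"age": "middle-aged", "country": "US",    "cls": ">50K"},
    {"age": "middle-aged", "country": "other", "cls": ">50K"},
]

# "age" separates the two classes perfectly, so it is the greedy choice.
print(information_gain(rows, "age", "cls"))      # 1.0
print(information_gain(rows, "country", "cls"))  # ~0.31
```

The attribute with the highest gain becomes the split node, and the procedure recurses on each resulting subset, as described next.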
The nodes of the decision tree evaluate the significance of the features. An input is classified by following a path from the root to a leaf of the decision tree, the leaf yielding the input's class. The trees are built top-down by selecting the most suitable attribute at each step; the classification power of each feature is computed by an information-theoretic measure. Once a feature is chosen, subsets of the training data are formed based on the different values of the selected feature, and the process is iterated on every subset until the majority of the instances in a subset belong to the same class.

Table I: The Original Attributes of the Adult Dataset

age | native-country | Class
39 | United-States | <=50K
50 | United-States | <=50K
38 | United-States | <=50K
53 | United-States | <=50K
28 | Cuba | <=50K
37 | United-States | <=50K
49 | Jamaica | <=50K
52 | United-States | >50K
31 | United-States | >50K
42 | United-States | >50K

Table II: The k-Anonymous Dataset

age | native-country | Class
middle aged | United-States | <=50K
middle aged | United-States | <=50K
adult | US-oth | <=50K
middle aged | African | <=50K
middle aged | United-States | >50K
adult | United-States | >50K
adult | United-States | >50K

Decision tree induction generally learns a highly accurate set of rules, which ensures accuracy but can produce an excessive number of rules. The trees are therefore pruned, generalizing the decision tree to balance flexibility and accuracy. J48 uses two pruning methods: subtree replacement, in which nodes are replaced with a leaf, working backwards towards the root; and subtree raising, in which a node is moved upwards towards the root of the tree.

D. Bagging

Bagging improves classification [23] by combining models learned from different subsets of a dataset. Overfitting and

variance are considerably reduced by applying bagging. The instability of the classifier is exploited to perturb the training set, producing different classifiers with the same learning algorithm. For a training set A of size t, n new training sets A_i, each of size t' (t' <= t), are generated by sampling instances from A uniformly with replacement. Because of sampling with replacement, some instances are repeated in each A_i; such subsets are called bootstrap samples. One model is fitted on each of the n bootstrap samples, and the output is obtained by aggregating the results of the n models.

IV. RESULT AND DISCUSSION

The classification accuracy obtained with k-nearest neighbor, J48 and Bagging is tabulated in Table III and shown in Fig. 1. The classification accuracy does not change considerably and remains within manageable limits for all the classifiers: the variation in classification accuracy is no more than 0.4% for any classifier. The root mean square error is shown in Fig. 2. The precision, recall and f-measure are tabulated in Table IV and shown in Fig. 3 and Fig. 4.

Table III: Classification Accuracy

Technique used | Classification Accuracy
knn without anonymization | 79.32%
J48 without anonymization | 85.32%
Bagging without anonymization | 85.01%
knn with anonymization | 79.56%
J48 with anonymization | 85.13%
Bagging with anonymization | 84.68%

Table IV: Precision, Recall and f-measure

Technique used | Precision | Recall | f-measure
knn without anonymization | 0.791 | 0.793 | 0.792
J48 without anonymization | 0.848 | 0.853 | 0.849
Bagging without anonymization | 0.844 | 0.85 | 0.845
knn with anonymization | 0.796 | 0.796 | 0.796
J48 with anonymization | 0.845 | 0.851 | 0.845
Bagging with anonymization | 0.84 | 0.847 | 0.841

Fig. 1: Classification accuracy for various techniques
Fig. 2: The root mean square error
Fig. 3: Precision and Recall for different techniques
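The evaluation protocol used for Table III, 10-fold cross-validation of a plain classifier and a bagged ensemble, can be sketched in miniature. The sketch below is not the paper's Weka setup: it substitutes a 1-NN base learner for J48 and uses a synthetic two-cluster dataset, so it illustrates only the protocol.

```python
# Sketch: 10-fold cross-validation of a 1-NN classifier and a bagged ensemble.
import random
from collections import Counter

def knn_predict(train, x, k=1):
    """Majority class among the k nearest training points (squared Euclidean)."""
    nearest = sorted(train,
                     key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def bagged_predict(train, x, n_models=11):
    """Vote of n_models 1-NN classifiers, each fit on a bootstrap sample."""
    rng = random.Random(0)
    votes = []
    for _ in range(n_models):
        sample = [rng.choice(train) for _ in train]  # sample with replacement
        votes.append(knn_predict(sample, x))
    return Counter(votes).most_common(1)[0][0]

def cross_val_accuracy(data, predict, folds=10):
    """Plain k-fold cross-validation: each point is tested exactly once."""
    data = data[:]
    random.Random(42).shuffle(data)
    correct = 0
    for i in range(folds):
        test = data[i::folds]
        train = [d for j, d in enumerate(data) if j % folds != i]
        correct += sum(predict(train, x) == y for x, y in test)
    return correct / len(data)

# Two noisy clusters as stand-in data: label equals the cluster center.
rng = random.Random(7)
data = [((rng.gauss(c, 1.0), rng.gauss(c, 1.0)), c)
        for c in (0, 4) for _ in range(30)]

print(cross_val_accuracy(data, knn_predict))
print(cross_val_accuracy(data, bagged_predict))
```

As in the paper's results, the single classifier and the bagged version reach similar accuracy on easy data; bagging's benefit shows mainly with unstable learners on harder datasets.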

Fig. 4: f-measure for different techniques

V. CONCLUSION

In this paper, the classification accuracy of k-nearest neighbor, J48 and Bagging was compared on a dataset with and without anonymization. The Adult dataset was used for evaluating classification accuracy, and the dataset was k-anonymized. The experimental results demonstrate that the classification accuracy of the classifiers is not diminished by anonymization of the data.

REFERENCES

[1] Verykios V. S., Elmagarmid A., Bertino E., Saygin Y., Dasseni E.: Association Rule Hiding. IEEE Transactions on Knowledge and Data Engineering, 16(4), 2004.
[2] Sweeney L.: Privacy-Preserving Bio-terrorism Surveillance. AAAI Spring Symposium, AI Technologies for Homeland Security, 2005.
[3] Newton E., Sweeney L., Malin B.: Preserving Privacy by De-identifying Facial Images. IEEE Transactions on Knowledge and Data Engineering, February 2005.
[4] Liew C. K., Choi U. J., Liew C. J.: A Data Distortion by Probability Distribution. ACM TODS, 10(3):395-411, 1985.
[5] Warner S. L.: Randomized Response: A Survey Technique for Eliminating Evasive Answer Bias. Journal of the American Statistical Association, 60(309):63-69, March 1965.
[6] Agrawal R., Srikant R.: Privacy-Preserving Data Mining. Proceedings of the ACM SIGMOD Conference, 2000.
[7] Agrawal D., Aggarwal C. C.: On the Design and Quantification of Privacy Preserving Data Mining Algorithms. ACM PODS Conference, 2002.
[8] Machanavajjhala A., Gehrke J., Kifer D., Venkitasubramaniam M.: l-Diversity: Privacy Beyond k-Anonymity. ICDE, 2006.
[9] Li N., Li T., Venkatasubramanian S.: t-Closeness: Privacy Beyond k-Anonymity and l-Diversity. ICDE Conference, 2007.
[10] Verykios V. S., Elmagarmid A., Bertino E., Saygin Y., Dasseni E.: Association Rule Hiding. IEEE Transactions on Knowledge and Data Engineering, 16(4), 2004.
[11] Moskowitz I., Chang L.: A Decision Theoretic System for Information Downgrading. Joint Conference on Information Sciences, 2000.
[12] Adam N., Wortmann J. C.: Security-Control Methods for Statistical Databases: A Comparison Study. ACM Computing Surveys, 21(4), 1989.
[13] Oliveira S. R. M., Zaiane O., Saygin Y.: Secure Association-Rule Sharing. PAKDD Conference, 2004.
[14] Saygin Y., Verykios V., Clifton C.: Using Unknowns to Prevent Discovery of Association Rules. ACM SIGMOD Record, 30(4), 2001.
[15] Aggarwal C. C., Yu P. S. (eds.): Privacy-Preserving Data Mining: Models and Algorithms. Springer, New York, 2008.
[16] Bayardo R. J., Agrawal R.: Data Privacy through Optimal k-Anonymization. Proceedings of the ICDE Conference, pp. 217-228, 2005.
[17] LeFevre K., DeWitt D., Ramakrishnan R.: Workload-Aware Anonymization. KDD Conference, 2006.
[18] Inan A., Kantarcioglu M., Bertino E.: Using Anonymized Data for Classification. ICDE, 2009.
[19] Fung B., Wang K., Yu P.: Top-Down Specialization for Information and Privacy Preservation. ICDE Conference, 2005.
[20] Frank A., Asuncion A.: UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science, 2010.
[21] Fix E., Hodges J. L., Jr.: Discriminatory Analysis, Nonparametric Discrimination. USAF School of Aviation Medicine, Randolph Field, Tex., Project 21-49-004, Rept. 4, Contract AF41(128)-31, February 1951.
[22] Witten I. H., Frank E.: Data Mining: Practical Machine Learning Tools and Techniques, 2nd ed. Morgan Kaufmann, San Francisco, CA, 2005.
[23] Breiman L.: Bagging Predictors. Machine Learning, 24(2):123-140, 1996.