A HYBRID APPROACH FOR DATA CLUSTERING USING DATA MINING TECHNIQUES

Available Online at www.ijcsmc.com

International Journal of Computer Science and Mobile Computing
A Monthly Journal of Computer Science and Information Technology
IJCSMC, Vol. 3, Issue 11, November 2014, pg. 81-88

RESEARCH ARTICLE                                             ISSN 2320-088X

K. PRABHA*
Research Scholar in Computer Science, Vivekanandha College for Women, Unjanai, Tiruchengode, India
prabha.bca2007@gmail.com

K. RAJESWARI, M.Sc., M.Phil.
Assistant Professor in Computer Science, Vivekanandha College for Women, Unjanai, Tiruchengode, India

ABSTRACT

Data clustering is the process of arranging similar data into groups. It is a common technique for data analysis and is used in many fields, including data mining, pattern recognition and image analysis. In this paper a hybrid clustering algorithm based on k-means is described. K-means clustering is a common and simple approach to data clustering, but it has limitations such as convergence to local optima and sensitivity to the choice of initial points. The proposed algorithm therefore uses k-means clustering to refine the centroids and clusters. The experimental results show the accuracy and capability of the proposed algorithm for data clustering.

Keywords: Data clustering, K-means, Data mining, Hybrid algorithm

I. INTRODUCTION

Data mining is the key step for discovering knowledge in the knowledge discovery process over a data set. Data mining provides useful patterns or models for extracting important and useful information from an entire database, and different algorithms are used to extract this valuable data. Database Management Systems (DBMS) and Data Mining (DM) are two emerging technologies in today's information world. Knowledge is obtained through the collection of information.

Information is abundant in today's business world. To maintain this information, a systematic approach such as a database is used, in which collections of data are organized in the form of tuples and attributes. To obtain knowledge from such collections of data, business intelligence methods are used. Data mining is a powerful technology with great potential to help business environments focus only on the essential information in their data warehouses; using data mining technology, decision making becomes easier by improving business intelligence.

Cluster analysis is an effective method for analyzing and finding useful information by grouping objects drawn from large amounts of data. Many algorithms have been proposed to group data into clusters, such as the k-means algorithm, fuzzy c-means, evolutionary algorithms and the EM method. These clustering algorithms group the data into classes or clusters so that objects within a cluster are similar to one another and dissimilar to objects in other clusters; the objects are thus grouped based on their similarity and dissimilarity.

In this paper, a new hybrid algorithm for a business intelligence recommender system is proposed, based on knowledge of users and frequent items. The algorithm works in three phases, namely preprocessing, modelling and obtaining intelligence. First, the users are filtered based on the user's profile and knowledge, such as needs and preferences defined in the form of rules; this performs feature selection and data reduction on the dataset. Second, the filtered users are clustered using the k-means clustering algorithm as the modelling phase. Third, the nearest neighbour of each active user is identified and recommendations are generated by finding the most frequent items in the identified cluster of users. The algorithm is experimentally tested with an e-commerce application for better decision making by recommending the top n products to active users.

II. RELATED WORKS

Alexandre et al. presented a framework for mining association rules from transactions consisting of categorical items where the data has been randomized to preserve the privacy of individual transactions. They analyzed the nature of privacy breaches and proposed a class of randomization operators that are much more effective than uniform randomization in limiting such breaches.

Jiaqi Wang et al. noted that support vector machines (SVM) have been applied to build classifiers that help users make well-informed business decisions. Their paper speeds up the response of SVM classifiers by reducing the number of support vectors, using the k-means SVM (KMSVM) algorithm proposed in the paper. The KMSVM algorithm combines the k-means clustering technique with SVM and requires one additional input parameter: the number of clusters.

M. H. Marghny et al. stated that clustering analysis plays an important role in scientific research and commercial applications. They proposed a technique for handling large-scale data that selects the initial cluster centres purposefully using genetic algorithms (GAs), reduces the sensitivity to isolated points, avoids splitting large clusters, and to some degree overcomes the skew of the data caused by disproportionate partitioning due to the adoption of multi-sampling.

Ravindra Jain explained that data clustering is a process of arranging similar data into groups; a clustering algorithm partitions a data set into several groups such that the similarity within a group is higher than between groups. The paper describes a hybrid clustering algorithm based on k-means and k-harmonic means (KHM), and the results obtained from the proposed hybrid algorithm were much better than those of the traditional k-means and KHM algorithms.

David et al. described a clustering method for the unsupervised classification of objects in large data sets. The methodology combines the mixture-likelihood approach with a sampling and sub-sampling strategy in order to cluster large data sets efficiently. The method is quick and reliable and produces classifications comparable to previous work on these data using supervised clustering.

Hong Yu et al. performed a comparative study of data mining for individual credit risk evaluation. Credit risk refers to the risk of loss when a debtor does not fulfil its debt contract, and it is of natural interest to practitioners in banks as well as to regulators.

Ji Dan et al. developed a synthesized data mining algorithm based on clustering and decision trees. Abundant agricultural information has been accumulated owing to the vast territory and the diversity of crop resources, but only a small quantity of this data can be exploited for lack of useful tools.

III. CLUSTERING ANALYSIS

Clustering is the division of data into groups of similar objects. Representing the data by fewer clusters necessarily loses certain fine details, but achieves simplification. Clustering is the process of partitioning a set of objects into a finite number of k clusters so that the objects within each cluster are similar, while objects in different clusters are dissimilar. Cluster analysis is an effective tool in scientific or managerial inquiry.

The k-means clustering method is one of the simplest unsupervised learning algorithms for solving the well-known clustering problem. The goal is to divide the data points in a data set into K clusters, fixed a priori, such that some metric relative to the centroids of the clusters (called the fitness function) is minimized. The algorithm consists of two stages: an initial stage and an iterative stage. The initial stage involves defining K initial centroids, one for each cluster. These centroids must be selected carefully, because differing initial centroids lead to differing results; one policy is to place the initial centroids as far from each other as possible. The iterative stage repeatedly assigns each data point to its nearest centroid and then recalculates the K centroids according to the new assignments. The iteration stops when a certain criterion is met, for example when there is no further change in the assignment of the data points.

Given a set of n data samples to be classified into K groups, the algorithm aims to minimize a fitness function such as the squared error function

J = \sum_{j=1}^{K} \sum_{i=1}^{n} \left\| x_i^{(j)} - c_j \right\|^2,

where \| x_i^{(j)} - c_j \|^2 is a chosen distance measure between the i-th data point x_i^{(j)} (which has been assigned to the j-th group) and the j-th cluster centre c_j, with 1 <= i <= n and 1 <= j <= K, and J is an indicator of the distance of the n data samples from their respective cluster centroids.
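As an illustration of this fitness function, the short Python sketch below computes J for a given assignment of points to centroids. It is only a minimal sketch: the array names, shapes and the toy data are assumptions made for the example and are not taken from the paper.

import numpy as np

def kmeans_fitness(X, centroids, labels):
    """Squared-error fitness J: the sum of squared distances from each
    data point to the centroid of the cluster it is assigned to."""
    # X: (n, d) data matrix; centroids: (K, d); labels: (n,) cluster index per point
    diffs = X - centroids[labels]        # difference of each point from its own centroid
    return float(np.sum(diffs ** 2))     # sum of squared Euclidean distances

# Tiny synthetic example with two well-separated groups
X = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.1], [4.8, 5.3]])
centroids = np.array([[1.1, 0.9], [4.9, 5.2]])
labels = np.array([0, 0, 1, 1])
print(kmeans_fitness(X, centroids, labels))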

The k-means clustering algorithm can be summarized in the following steps:

(1) Place K points into the space represented by the objects being clustered; these points represent the initial group centroids.
(2) Assign each object to the group that has the closest centroid.
(3) When all objects have been assigned, recalculate the positions of the K centroids.
(4) Repeat steps (2) and (3) until a certain criterion is met, such as the centroids no longer moving or a preset number of iterations having been performed.

This results in the separation of the objects into groups for which the score of the fitness function is minimized.

IV. HYBRID ALGORITHM

In the business world there exists a great deal of information, and it is necessary to maintain this information for decision making in the business environment. Decision making draws on two kinds of data: Online Analytical Processing (OLAP) and Online Transactional Processing (OLTP). The former contains historical data about the business from its beginning, while the latter contains only the day-to-day business transactions. Based on these two kinds of data, the decision-making process can be carried out by means of a new hybrid algorithm based on frequent itemset mining and clustering with the k-means algorithm, together with knowledge of the users, in order to improve business intelligence.

Algorithm: Hybrid Algorithm
Input: The number of clusters k; dataset D with n objects.
Output: A set of clusters C_k.

Begin
  Identify the dataset D = (A) = {a_1, a_2, ..., a_n} of attributes/objects.
  Outline the Consideration Columns (CC) from D: CC = D' = (A') = {a'_1, a'_2, ..., a'_m}.
  Repeat
    Formulate the rules for identifying similar objects: (R) = (R) σ(a_ij ∈ (A') / D'), where i = 1 to n, j = 1 to m.
    S = f(x) / D, where S is the sample set containing the identified columns.
    FIS = value(S) > (SUP(X) and/or SUP(X ∪ Y) / SUP(X)), where FIS is the set of frequent itemsets identified.
    C_n(a_ij ∈ (A')) > T from FIS, where T specifies the threshold value.
    Generate the resultant dataset D''.
  Until no further partition is possible in CC.
  Identify the k initial mean vectors (centroids) from the objects of D''.
  Repeat
    Compute the distance between each object a_i and the centroids c_j.
    Assign each object to the cluster whose centroid c_j is at minimum distance among all clusters.
    Recalculate the k new centroids c_j from the newly formed clusters.
  Until convergence is reached.
  Find the nearest neighbour of the active object.
  Generate recommendations from the most frequent items of the nearest neighbour.
End

Figure 1. Flow chart of the hybrid algorithm.
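To make the three phases of Figure 1 more concrete, the Python sketch below walks through an illustrative filter-cluster-recommend pipeline using scikit-learn's KMeans. It is a sketch under stated assumptions, not the authors' implementation: the support threshold, the data layout, the field names and the toy purchase histories are all assumptions made for the example.

# Illustrative sketch of the three-phase hybrid pipeline (filter -> cluster -> recommend).
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def frequent_items(transactions, min_support):
    """Phase 1 (preprocessing): keep items whose support (fraction of
    transactions that contain them) meets min_support."""
    n = len(transactions)
    counts = Counter(item for t in transactions for item in set(t))
    return {item for item, c in counts.items() if c / n >= min_support}

def recommend(user_vectors, active_index, transactions, k=3, min_support=0.3, top_n=2):
    # Phase 1: restrict attention to frequent items.
    freq = frequent_items(transactions, min_support)
    # Phase 2 (modelling): cluster the user feature vectors with k-means.
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(user_vectors)
    labels = model.labels_
    # Phase 3 (obtaining intelligence): look at the active user's cluster and
    # recommend the most frequent items bought by its neighbours.
    cluster = labels[active_index]
    neighbours = [i for i in range(len(transactions)) if labels[i] == cluster and i != active_index]
    counts = Counter(item for i in neighbours for item in transactions[i] if item in freq)
    return [item for item, _ in counts.most_common(top_n)]

# Toy example: 4 users described by 2 numeric features, plus their purchase histories.
users = np.array([[1.0, 0.9], [1.1, 1.0], [5.0, 5.2], [4.9, 5.0]])
purchases = [["milk", "bread"], ["milk", "eggs"], ["tv", "hdmi"], ["tv", "soundbar"]]
print(recommend(users, active_index=0, transactions=purchases, k=2))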

V. PROPOSED WORK

K-means clustering is a well-known partitioning method in which objects are classified as belonging to one of K groups. The result of a partitioning method is a set of K clusters, with each object of the data set belonging to exactly one cluster. Each cluster may have a centroid or a cluster representative. For real-valued data, the arithmetic mean of the attribute vectors of all objects within a cluster provides an appropriate representative; alternative types of centroid may be required in other cases.

Steps of K-Means Clustering

The k-means algorithm was developed by MacQueen and aims to find the cluster centres (c_1, ..., c_K) that minimize the sum of the squared distances (the distortion, D) of each data point x_i to its nearest cluster centre c_k, as shown in the equation in step (4) below, where d is some distance function; typically, d is chosen as the Euclidean distance. The steps of the k-means algorithm are as follows:

(1) Initialize K centre locations (c_1, ..., c_K).
(2) Assign each x_i to its nearest cluster centre c_k.
(3) Update each cluster centre c_k as the mean of all x_i that have been assigned as closest to it.
(4) Calculate the distortion D = \sum_{i=1}^{n} \left[ \min_{k \in \{1, ..., K\}} d(x_i, c_k) \right]^2.
(5) If the value of D has converged, return (c_1, ..., c_K); otherwise go to step (2).

Main advantages:
1. K-means clustering is fast, robust and easy to understand; if the data are well separated, it gives the best results.
2. The clusters do not overlap and are non-hierarchical in nature.

Main disadvantages:
1. The complexity of the algorithm is higher than that of some alternatives.
2. The cluster centres need to be predefined.
3. Handling empty clusters: a further problem with k-means clustering is that empty clusters can be generated during execution, if no data points are allocated to a cluster during the assignment phase.

The experimental results demonstrated that the proposed ranking-based k-means algorithm produces better results than the existing k-means algorithm.
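As a concrete illustration of steps (1)-(5), the Python sketch below implements the loop with a distortion-based stopping rule. It is a minimal sketch: the random initialization from the data points, the convergence tolerance and the toy data are assumptions made for the example, not details given in the paper.

import numpy as np

def kmeans(X, K, max_iter=100, tol=1e-6, seed=0):
    """Basic k-means following steps (1)-(5): initialize, assign, update,
    compute the distortion D, and stop when D has converged."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=K, replace=False)]  # (1) initialize K centre locations
    prev_D = np.inf
    for _ in range(max_iter):
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                        # (2) assign each x_i to its nearest centre
        for k in range(K):                                   # (3) update each centre as the mean of its points
            if np.any(labels == k):                          # keep the old centre if a cluster becomes empty
                centres[k] = X[labels == k].mean(axis=0)
        new_dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        D = np.sum(new_dists.min(axis=1) ** 2)               # (4) D = sum_i [min_k d(x_i, c_k)]^2
        if abs(prev_D - D) < tol:                            # (5) stop once D has converged
            break
        prev_D = D
    return centres, labels

# Example usage on a small two-cluster data set
X = np.array([[1.0, 1.0], [1.2, 0.9], [0.8, 1.1], [5.0, 5.0], [5.2, 4.9], [4.8, 5.1]])
centres, labels = kmeans(X, K=2)
print(centres, labels)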

VI. EXPERIMENTAL RESULTS

Experiments have been performed on five data sets, Iris, WDBC, Sonar, Glass and Wine, selected from the standard UCI repository. Their characteristics are described below.

Iris (Fisher's iris plants database): this data set concerns the recognition of iris flowers; it has three classes, each consisting of 50 samples, and every sample has four attributes.

WDBC (Wisconsin diagnostic breast cancer): this data set is about breast cancer and was collected at the University of Wisconsin. It has two classes containing 357 and 212 samples, and each sample has 30 features.

Sonar: this data set contains submarine sonar signals, with 208 samples in total. The signals are divided into two classes of 111 and 97 samples, with 60 features each.

Glass (glass identification database): this data set concerns several types of glass and has 214 samples in 6 classes: building_windows_float_processed, vehicle_windows_float_processed, containers, tableware, building_windows_non_float_processed and headlamps. Each sample has 9 attributes.

Wine (wine recognition data): this data set concerns the recognition of wines; it has 178 samples classified into three classes of 59, 71 and 48 samples, respectively, and each sample has 13 attributes.

Table 1: Comparison of intra-cluster distance between different methods for the Iris data set.
k-means   97.32   102.57   11.34

Table 2: Comparison of intra-cluster distance between different methods for the WDBC data set.
k-means   152647.25   179794.25   55222.17

Table 3: Comparison of intra-cluster distance between different methods for the Sonar data set.
k-means   234.77   235.06   0.15

Table 4: Comparison of intra-cluster distance between different methods for the Glass data set.
k-means   213.42   241.03   25.32

Table 5: Comparison of intra-cluster distance between different methods for the Wine data set.
k-means   16555.68   17662.73   1878.07
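For readers who wish to reproduce numbers of the kind reported in Tables 1-5, the short Python sketch below computes one intra-cluster distance value for k-means on the Iris data set. The exact distance definition and the meaning of each table column are not stated in the extracted text, so the sum of Euclidean distances from each sample to its assigned centroid is an assumption, and scikit-learn is used here purely for convenience rather than because it was the authors' tool.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X = load_iris().data
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Sum of Euclidean distances from each sample to its own cluster centre.
intra = np.linalg.norm(X - model.cluster_centers_[model.labels_], axis=1).sum()
print(f"intra-cluster distance on Iris: {intra:.2f}")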

VII. CONCLUSION

In this paper, a new hybridized method based on k-means clustering was proposed to cluster data. The proposed method first finds suitable cluster centres and then initializes the k-means algorithm with these centres in order to refine them. The method was applied to five data sets. Experimental results for optimizing a fitness function related to the intra-cluster distance showed that the proposed method obtains results that are relatively stable across different runs.

REFERENCES

[1] K. C. Wong and G. C. L. Li, "Simultaneous Pattern and Data Clustering for Pattern Cluster Analysis," IEEE Transactions on Knowledge and Data Engineering, Vol. 20, pp. 911-923, June 2008.
[2] D. W. van der Merwe and A. P. Engelbrecht, "Data Clustering Using Particle Swarm Optimization," in Proc. 2003 Congress on Evolutionary Computation, Vol. 1, pp. 215-220, December 2003.
[3] Jiaqi Wang, Xindong Wu and Chengqi Zhang, "Support vector machines based on K-means clustering for real-time business intelligence systems," International Journal of Business Intelligence and Data Mining, Vol. 1, No. 1, 2005.
[4] Chonghui Guo and Li Peng, "A Hybrid Clustering Algorithm Based on Dimensional Reduction and K-Harmonic Means," IEEE, 2008.
[5] C. Y. Tsai and C. C. Chiu, "Developing a feature weight self-adjustment mechanism for a K-Means clustering algorithm," Computational Statistics and Data Analysis, Vol. 52, pp. 4658-4672, 2008.
[6] Carlos Ordonez, "Integrating K-Means Clustering with a Relational DBMS Using SQL," IEEE Transactions on Knowledge and Data Engineering, Vol. 18, No. 2, pp. 188-201, 2006.
[7] Y. T. Kao, E. Zahara and I. W. Kao, "A hybridized approach to data clustering," Expert Systems with Applications, Vol. 34, No. 3, pp. 1754-1762, 2008.
[8] Ji Dan and Qiu Jianlin, "A Synthesized Data Mining Algorithm Based on Clustering and Decision Tree," 10th IEEE International Conference on Computer and Information Technology (CIT), 2010.