FINAL REPORT: K-MEANS CLUSTERING
SAPNA GANESH (sg1368), VAIBHAV GANDHI (vrg5913)


Overview

Clustering is the partitioning of data points into small groups according to certain features of the points; that is, data points that are similar with respect to those features are grouped together. Clustering has many applications, and the growth of data has made it applicable in fields ranging from artificial intelligence to economics. The K-means algorithm partitions such data points into k clusters. It is an unsupervised learning algorithm, and it requires the user to input k, the number of clusters. The outline of the method is: find k centers in the space of data points and assign each data point to the nearest cluster center. K-means is one of the most popular clustering methods because its computations use only the distances between the points, which are plotted based on their features. Distance calculations use the Euclidean distance

    d(x, p) = sqrt( (x_1 - p_1)^2 + ... + (x_D - p_D)^2 ),

where x is a data point and p is the position of a cluster center.

The K-means algorithm is as follows:
1) Initialize k cluster centers.
2) Assign each data point to the closest center.
3) Update each center to the mean of the data points in its cluster.
4) Iterate until the centers no longer change.
(A code sketch of one iteration appears below, after the Computational Problem section.)

The final output depends heavily on the initialization of the centers, so the algorithm does not give the same result every time it is run; it converges to a local optimum of the clustering objective, not necessarily the global one.

Computational Problem

The traditional approach is to develop a simple program that runs sequentially, but this has several problems. Current computers have extremely fast multi-core processors but are limited in memory, which would require us to partition and process the data separately; the time required would then go up significantly. In the K-means algorithm there are several computations that are independent of each other, and running them sequentially gives a high time complexity. The most intensive computation is the calculation of the distances between the points and the cluster centers. This can be implemented in parallel, as each such distance is independent of the others.
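
As a concrete illustration of steps 2 and 3 above, the following is a minimal sketch of one K-means iteration in plain Java. The class and method names are ours for illustration only; they are not the classes used in our programs.

public class KMeansStep
{
    // Squared Euclidean distance. Omitting the square root does not
    // change which center is nearest, so we skip it for speed.
    static double dist2(double[] x, double[] c)
    {
        double s = 0.0;
        for (int d = 0; d < x.length; ++d)
        {
            double diff = x[d] - c[d];
            s += diff * diff;
        }
        return s;
    }

    // One iteration: assign every point to its nearest center, then
    // move each center to the mean of its assigned points (in place).
    static void iterate(double[][] points, double[][] centers)
    {
        int K = centers.length, D = centers[0].length;
        double[][] sum = new double[K][D];
        int[] count = new int[K];
        for (double[] x : points)
        {
            int best = 0;
            for (int c = 1; c < K; ++c)
                if (dist2(x, centers[c]) < dist2(x, centers[best]))
                    best = c;
            ++count[best];
            for (int d = 0; d < D; ++d) sum[best][d] += x[d];
        }
        for (int c = 0; c < K; ++c)
            if (count[c] > 0)              // leave an empty cluster's center where it is
                for (int d = 0; d < D; ++d)
                    centers[c][d] = sum[c][d] / count[c];
    }
}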

After the distance calculation, the centers need to be updated to the mean of the data points in each group. This needs to be done sequentially, as the computation involves the relationship among all the data points.

Analysis of Research Papers

Research Paper 1 [1]

The K-means algorithm is highly sensitive to the initial placement of the cluster centers, so initialization is a very important factor. This paper extensively discusses several approaches to initializing the cluster centers, divided by time complexity into linear-time methods and log-linear-time (O(n log n)) methods.

Some of the linear-time initialization methods:
1. Forgy's method: assign the points to one of the k clusters uniformly at random; the centers are the centroids of these initial clusters.
2. Jancey's method: assign to each cluster a synthetic point in the data space. Empty clusters can be a problem.
3. MacQueen's method: either take the first k points as centers (which is data-order dependent), or choose k random points from the data; points in high-density regions have a higher chance of being selected. (A sketch of this random variant appears after the discussion of Research Paper 2 below.)

Some of the log-linear-time (O(n log n)) initialization methods:
1. Hartigan's method: sort the points; the i-th center is the (1 + (i-1)N/K)-th point. This is invariant to data ordering and gives well-separated centers.
2. Al-Daoud's variance-based method: sort the points on the attribute with the greatest variance, then partition them into K groups along that dimension.
3. The ROBIN method: avoids choosing outliers as centers.

The paper convinces us that many of the linear-time methods do not perform well, but the performance of every method depends on the dataset.

Research Paper 2 [2]

When distributing data from the dataset to the different workers, some workers might get data that takes more processing time than others. This paper addresses such load balancing using the Master-Worker approach. There are three parallel strategies: disk parallel, task parallel, and both data and disk parallel. The Master reads the dataset from the file and selects the initial centers. The Slave executes the clustering operation for the received data and returns the clustering results to the Master. The Master then partitions a new sub-dataset and sends it to the Slave. This continues until there is no more data in the Master.
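
Returning to MacQueen's second variant above: choosing k distinct random points can be done with a partial Fisher-Yates shuffle. This is a minimal sketch in plain Java, with names of our own choosing; it assumes k is at most the number of points.

import java.util.Arrays;
import java.util.Random;

public class RandomInit
{
    // Pick k distinct data points, uniformly at random, as the initial centers.
    static double[][] randomCenters(double[][] points, int k, long seed)
    {
        Random rng = new Random(seed);
        int n = points.length;
        int[] idx = new int[n];
        for (int i = 0; i < n; ++i) idx[i] = i;
        // Partial Fisher-Yates shuffle: after k steps, the first k entries
        // of idx are a uniform random sample without replacement.
        for (int i = 0; i < k; ++i)
        {
            int j = i + rng.nextInt(n - i);
            int tmp = idx[i]; idx[i] = idx[j]; idx[j] = tmp;
        }
        double[][] centers = new double[k][];
        for (int i = 0; i < k; ++i)
            centers[i] = Arrays.copyOf(points[idx[i]], points[idx[i]].length);
        return centers;
    }
}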

Load Balancing: If the data division is static, data skew may result and some processors will be idle. The available method is that, after a Slave completes the computation for its assigned sub-dataset, it immediately applies to the Master for the next sub-dataset of the same size, until there is no more data to be allocated. This balances the load better.

Research Paper 3 [3]

This paper identifies the sequential and parallel parts of the algorithm and explains a MapReduce implementation that solves the K-means problem on a parallel cluster. In this identification, the calculation of the distances from the centroids is the parallel part, and updating the centroids is the sequential part. The framework is explained in detail below; a sketch of these three steps appears after the list.

1) map(key, value). Input: the global variable centers, the offset key, and the sample value. Output: a <key, value> pair, where key is the index of the closest center and value is a string comprising the sample information.
2) combine(key, V). Input: key is the index of the cluster; V is the list of the samples assigned to that cluster. Output: a <key, value> pair, where key is the index of the cluster and value is a string comprising the sum of the samples in the cluster and the number of samples.
3) reduce(key, V). Input: key is the index of the cluster; V is the list of the partial sums from the different hosts. Output: a <key, value> pair, where key is the index of the cluster and value is a string representing the new center.

The paper also illustrates the performance in terms of speedup, scaleup, and sizeup.
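
To make these three steps concrete, here is a minimal single-process simulation of the map, combine, and reduce logic in plain Java. In the paper these steps run on separate hosts under Hadoop; all names here are ours, and the string encoding of the values is omitted.

public class MapReduceSketch
{
    static double dist2(double[] x, double[] c)
    {
        double s = 0.0;
        for (int d = 0; d < x.length; ++d)
        {
            double diff = x[d] - c[d];
            s += diff * diff;
        }
        return s;
    }

    // hostData[h] holds the samples assigned to host h.
    static double[][] newCenters(double[][][] hostData, double[][] centers)
    {
        int K = centers.length, D = centers[0].length;
        double[][] total = new double[K][D];   // reduce-side accumulators
        int[] count = new int[K];
        for (double[][] samples : hostData)
        {
            // combine: per-host partial sum and sample count for each cluster.
            double[][] partial = new double[K][D];
            int[] partialCount = new int[K];
            for (double[] x : samples)
            {
                // map: key = index of the closest center, value = the sample.
                int best = 0;
                for (int c = 1; c < K; ++c)
                    if (dist2(x, centers[c]) < dist2(x, centers[best]))
                        best = c;
                ++partialCount[best];
                for (int d = 0; d < D; ++d) partial[best][d] += x[d];
            }
            // reduce: merge this host's partial sums into the totals.
            for (int c = 0; c < K; ++c)
            {
                count[c] += partialCount[c];
                for (int d = 0; d < D; ++d) total[c][d] += partial[c][d];
            }
        }
        // reduce (continued): the new center is the total divided by the count.
        for (int c = 0; c < K; ++c)
            if (count[c] > 0)
                for (int d = 0; d < D; ++d) total[c][d] /= count[c];
        return total;
    }
}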

Usage Details

DEVELOPERS MANUAL

To compile the code, run the following commands. The program should be run on the nessie, kraken, or champ computers, as we run on more than 4 cores. In the folder containing the java files:

Set the build path for the Parallel Java 2 library:
$ export CLASSPATH=.:/var/tmp/parajava/pj2/pj2.jar
$ export PATH=/usr/local/dcs/versions/jdk1.7.0_11_x64/bin:$PATH

Compile the software:
$ javac *.java

Refer to the Users Manual to run the program.

USERS MANUAL

Sequential program:
$ java pj2 KMeansSeq <number of clusters> <number of iterations> <number of lines from dataset> <number of columns> <input filename.csv>

Parallel program:
$ java pj2 threads=<threads> debug=makespan schedule=dynamic KMeansSmp <number of clusters> <number of iterations> <number of inputs from the dataset> <input filename.csv>

For reference, use /var/tmp/vrg5913/dataset.csv on nessie.
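
For example, a sequential run that clusters the first 30,000 lines of a 4-column dataset into 5 clusters for 50 iterations, and the corresponding parallel run on 4 threads, would look like the following (the argument values here are hypothetical):

$ java pj2 KMeansSeq 5 50 30000 4 /var/tmp/vrg5913/dataset.csv
$ java pj2 threads=4 debug=makespan schedule=dynamic KMeansSmp 5 50 30000 /var/tmp/vrg5913/dataset.csv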

Design and Operation of Sequential Program

The general design of the sequential program is as follows. The program takes in the number of clusters and the number of iterations, among other arguments; the number of lines and the number of columns are also specified in the arguments.

The main method initializes a Kmeans object. The constructor initializes all the data points and the number of clusters from the CSV file. The data is read by the filereader function, which calls the stringtodouble function and returns an ArrayList of data. The stringtodouble function converts the data, which is read as strings, into the double datatype in order to perform computations. The clustering function performs the K-means clustering: it iteratively updates the cluster positions and the assignments of the data points, and it also prints the output. The mean function calculates the mean of the given numbers, and the EuclideanDistance function calculates the distances.

Design and Operation of Parallel Program

We implemented the MapReduce framework, but it does not work here because it is not meant to be used iteratively; that is, the output of one iteration cannot be used as the input to the next. The master-worker configuration had similar issues, and in addition we had several problems splitting the dataset for distribution to the cluster: the data had to be read every time such an operation took place, which in MapReduce is handled by the library. We settled for a single-node multi-core program.

The main method takes in the arguments, reads the data, and converts it into an array list. It also initializes the clusters. It performs the iterations, which form the sequential part of the program. Inside this iterative loop, we run the parallelfor method to compute the distances for the new clusters; at the end of the parallelfor loop, the new clusters are updated. The stringtodouble function converts the data read as strings into the double datatype, and computedist calculates the distances. We also use a reduction variable myvbl (which extends Vbl from the pj2 library); it has two data members, an array and a counter. This variable takes care of reducing the means calculated by the parallelfor. A sketch of this parallel structure appears below.
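
The following is a minimal sketch of how the assignment step can be partitioned so that each thread scans only its own slice of the points, with a sequential merge of per-thread partial sums (the reduction) at the end. It uses plain Java threads rather than the PJ2 parallelFor and Vbl classes, and all names are ours; it illustrates the intended structure, not our actual program. Dividing the index range this way is exactly what the Issues sections below identify as missing from our implementation.

public class ParallelAssign
{
    static double dist2(double[] x, double[] c)
    {
        double s = 0.0;
        for (int d = 0; d < x.length; ++d)
        {
            double diff = x[d] - c[d];
            s += diff * diff;
        }
        return s;
    }

    static void iterate(final double[][] points, final double[][] centers,
                        final int T) throws InterruptedException
    {
        final int K = centers.length, D = centers[0].length, N = points.length;
        // One private accumulator per thread, so the threads never contend.
        final double[][][] sum = new double[T][K][D];
        final int[][] count = new int[T][K];
        Thread[] threads = new Thread[T];
        for (int t = 0; t < T; ++t)
        {
            final int rank = t;
            threads[t] = new Thread()
            {
                public void run()
                {
                    // This thread's own slice of the points.
                    int lo = rank*N/T, hi = (rank + 1)*N/T;
                    for (int i = lo; i < hi; ++i)
                    {
                        double[] x = points[i];
                        int best = 0;
                        for (int c = 1; c < K; ++c)
                            if (dist2(x, centers[c]) < dist2(x, centers[best]))
                                best = c;
                        ++count[rank][best];
                        for (int d = 0; d < D; ++d) sum[rank][best][d] += x[d];
                    }
                }
            };
            threads[t].start();
        }
        for (Thread th : threads) th.join();
        // Sequential part: merge the per-thread partials and update the centers.
        for (int c = 0; c < K; ++c)
        {
            int n = 0;
            double[] s = new double[D];
            for (int t = 0; t < T; ++t)
            {
                n += count[t][c];
                for (int d = 0; d < D; ++d) s[d] += sum[t][c][d];
            }
            if (n > 0)
                for (int d = 0; d < D; ++d) centers[c][d] = s[d] / n;
        }
    }
}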

Strong Scaling

The code was run on inputs ranging from 30,000 to 150,000 lines, on 1 through 8 cores (except for the 30,000-line input, where we ran on 1 through 16 cores). Speedup is the sequential running time divided by the running time on N cores, and efficiency is the speedup divided by N. We notice that the efficiency falls off markedly after 3 or 4 cores. The data is tabulated below; below every table is its graph.

[Table and graph: strong scaling for 30,000 lines; columns: Cores, Time (msec), Speedup, Efficiency]

[Table and graph: strong scaling for 50,000 lines; columns: Cores, Time (msec), Speedup, Efficiency]

[Table and graph: strong scaling for 75,000 lines; columns: Cores, Time (msec), Speedup, Efficiency]

[Table and graph: strong scaling for 100,000 lines; columns: Cores, Time (msec), Speedup, Efficiency]

[Table and graph: strong scaling for 150,000 lines; columns: Cores, Time (msec), Speedup, Efficiency]

Issues with Strong Scaling

The K-means algorithm requires updating the new center positions, which means all the threads must complete execution before a new iteration begins; essentially, the parallel part of the program runs inside the sequential loop. This may cause some loss in efficiency. We timed the sequential part, the parallel part, and the total run time of the program for several inputs. However, the sequential part does not take as much time as expected, so it is not the outer loop that causes the major delay in execution. As we add more threads, the run time and the total time remain similar: the time required for the completion of each thread is added to the total run time. This is because the data is not divided among the threads, so every thread performs all the computation. We conclude that the parallel loop in the program caused the lack of efficiency. The loop is nevertheless implemented as the K-means algorithm requires, and we intend to work on it in the future.

Weak Scaling

The code was run on 5 base input sizes of 10,000, 15,000, 20,000, 25,000, and 30,000 lines, on 1 through 8 cores. As shown in the tables, each base input size is multiplied by 1 through 8 for cores 1 through 8, respectively. We notice that the efficiency falls off markedly after 3 or 4 cores. The data is tabulated below; below every table is its graph.

[Table and graph: weak scaling for a base size of 10,000 lines; columns: Multiplier, Cores, Time (msec), Sizeup, Efficiency]

[Table and graph: weak scaling for a base size of 15,000 lines; columns: Multiplier, Cores, Time (msec), Sizeup, Efficiency]

[Table and graph: weak scaling for a base size of 20,000 lines; columns: Multiplier, Cores, Time (msec), Sizeup, Efficiency]

[Table and graph: weak scaling for a base size of 25,000 lines; columns: Multiplier, Cores, Time (msec), Sizeup, Efficiency]

[Table and graph: weak scaling for a base size of 30,000 lines; columns: Multiplier, Cores, Time (msec), Sizeup, Efficiency]

Issues with Weak Scaling

As the data size multiplies, we assume that the threads balance the load of the additional lines of input. But as mentioned under the strong scaling issues, every thread performs reading and computation on all the data. Since the load is not being balanced correctly, the weak scaling efficiency is poor.

Future Work

In future work, we intend to increase the efficiency of the program as implemented. We also intend to extend the current program from a single-node multi-core program to a cluster program.

Lessons Learned

We primarily learned the complete working of the K-means algorithm. We also learned the implementation of the MapReduce framework and its correct usage. We implemented the master-worker configuration unsuccessfully, but in doing so we came to understand several subtle errors that were taking place. We also learned several different methods for initializing the cluster positions in the K-means algorithm. We explored the Parallel Java 2 library, especially its MapReduce framework.

Statement on Contributions of the Team Members

Sapna Ganesh: Summarized the first two papers, implemented part of the sequential program and performed debugging, implemented the master-worker configuration, and implemented part of the multicore program. Ran the program for weak scaling and obtained the results.

Vaibhav Gandhi: Summarized the third paper, implemented part of the sequential program, aided in developing the master-worker configuration, implemented the MapReduce framework and performed debugging on it, and implemented part of the multicore program. Ran the program for strong scaling and obtained the results.

References

[1] M. Emre Celebi, Hassan A. Kingravi, and Patricio A. Vela. A Comparative Study of Efficient Initialization Methods for the K-Means Clustering Algorithm. Expert Systems with Applications, 40(1), 2013.

[2] Yufang Zhang, Zhongyang Xiong, Jiali Mao, and Ling Ou. The Study of Parallel K-Means Algorithm. Proceedings of the 6th World Congress on Intelligent Control and Automation, Dalian, China, June 21-23, 2006.

[3] Weizhong Zhao, Huifang Ma, and Qing He. Parallel K-Means Clustering Based on MapReduce. International Conference on Cloud Computing Technology and Science (CloudCom), 2009.
