Analysis of Extended Performance for Clustering of Satellite Images Using Big Data Platform Spark

PL. Marichamy, M.Phil Research Scholar, Department of Computer Application, Alagappa University, Karaikudi, India. plmarichamy23@gmail.com

Abstract - With the recent emergence of big data, clustering techniques have been widely adopted in many real-world data analysis applications, such as customer behavior analysis, targeted marketing and digital forensics. As satellite imagery is generated at a higher rate than in previous decades, better solutions are needed in terms of both accuracy and performance. In this paper, we propose a solution over big data which performs the clustering of images using three methods, viz. Scalable K-means++, Bisecting K-means and Gaussian Mixture. Since the number of clusters is not known in advance in any of these methods, we also propose a better approach of validating the number of clusters using the Simplified Silhouette Index algorithm, and thus provide the best clustering possible.

Keywords: Images, Distributed Processing, Scalable K-means++, K-means Clustering, Big Data, Data Mining, Security, Gaussian Mixture

I. INTRODUCTION

Image clustering plays an important role in the field of remote sensing. Deforestation, ecosystems, land cover and climate change are a few areas where clustering is useful. Among image processing techniques such as image segmentation, image compression and classification, clustering is the vital step. The aim of clustering is to create clusters in such a way that pixels in one cluster are more similar to each other than to pixels in other clusters. Clustering differs from classification in that it involves learning based on observation. K-means, a basic algorithm, is the most popular and widely used clustering technique across all fields, including remote sensing [1].

Industry experts face challenges in processing the large amount of data generated as satellite imagery. Hadoop is a distributed platform to store and process big data. It distributes the data and processes it on distinct nodes in the cluster, thereby minimizing network transfers. The Hadoop Distributed File System (HDFS) helps in distributing the parts of a file effectively [2]. Apache Spark is another distributed processing framework, and it can read from HDFS. Researchers have performed various studies to run K-means on Hadoop [3] [4], and studies have been conducted to run the algorithm effectively on Hadoop to improve its performance and scalability [5] [6]. Prior work explores algorithms that run multiple parallel Scalable K-means++ clusterings of satellite images for different values of k [7]. As the number of clusters is usually not known in advance, experiments are conducted by selecting an initial value of k and then incrementing it a certain number of times; the appropriate number of clusters is then determined by validation algorithms such as the Elbow method and the Silhouette Index [8] [9].

In this paper, we propose a solution which uses Apache Spark as the distributed computing framework and finds the best clustering among different clustering algorithms. We have used three clustering algorithms, viz. Scalable K-means++, Bisecting K-means and the Gaussian Mixture Model. Spark MLlib provides support for various clustering algorithms, K-means being one of them [10] [11]; the MLlib library provides an implementation of K-means [5].
Bisecting K-means is a divisive hierarchical clustering algorithm and a variation of K-means; as with K-means, the number of clusters must be predefined. The Gaussian mixture clustering algorithm assembles clusters based on the Gaussian Mixture Model and has been used to improve the performance of image segmentation [12]. Like K-means and bisecting K-means, Spark's implementation of Gaussian mixture clustering requires a predefined number of clusters. All three of the mentioned algorithms are implemented in the Spark MLlib library.
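Since bisecting K-means may be the least familiar of the three, the following is a self-contained NumPy sketch of the divisive idea, our own illustration rather than Spark's implementation: start with all points in one cluster and repeatedly split the largest cluster with a plain 2-means until k clusters exist.

import numpy as np

def two_means(pts, iters=10, seed=0):
    # Plain 2-means (Lloyd's algorithm) used as the splitting step.
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), size=2, replace=False)].astype(float)
    labels = np.zeros(len(pts), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)           # assign each point to its nearer center
        for j in (0, 1):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return labels

def bisecting_kmeans(pts, k):
    # Divisive: start from one cluster, repeatedly split the largest one.
    clusters = [pts]
    while len(clusters) < k:
        i = max(range(len(clusters)), key=lambda j: len(clusters[j]))
        big = clusters.pop(i)
        labels = two_means(big)
        clusters.append(big[labels == 0])
        clusters.append(big[labels == 1])
    return clusters

# Toy usage: split 200 random 3-band "pixels" into 4 clusters.
parts = bisecting_kmeans(np.random.default_rng(1).random((200, 3)), 4)
print([len(p) for p in parts])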

In the next section, the paper provides the background and related work. Section 3 discusses the Hadoop ecosystem with basic information on Apache Spark. Section 4 presents the methodology and the proposed solution. Section 5 provides the details of the experiments, and the last section concludes our findings.

II. LITERATURE SURVEY

K-means has been one of the most effective clustering algorithms and has proven its effectiveness in almost all scientific applications, including remote sensing. Groups are created by taking the similarity of data points into account. Along with its simplicity and effectiveness, the algorithm has certain limitations and drawbacks. The first limitation is the choice of initial centroids: if they are not chosen appropriately, K-means may converge to a merely local optimum. Also, the number of clusters needs to be known in advance. The Gaussian Mixture Model has the same problem as far as image segmentation is concerned [12]. The survey in [13] discusses the different approaches that have been used to determine the number of clusters.

Al-Daoud and Roberts proposed an initialization method based on the density of uniformly partitioned data [14]. The algorithm partitions the data into N cells, and centroids are then chosen from each cell on the basis of the density proportion of that particular cell. Kaufman also came up with a greedy approach to initialize the k points [15]. In 2007, Arthur and Vassilvitskii proposed the K-means++ clustering algorithm [16], which uses a careful seeding methodology to select initial points close to the optimal result: the first point is chosen at random, and each subsequent center is sampled with probability proportional to its squared distance from the closest already chosen center.

In [3], researchers proposed a K-means clustering algorithm that runs in parallel based on MapReduce. Zhenhua et al. [1] ran MapReduce K-means clustering on satellite images; with the help of the MapReduce execution framework, the algorithm scaled quite well on commodity hardware. Later, [5] introduced Scalable K-means++ to optimize it further by sampling several points in each iteration instead of a single point; even though the number of iterations decreases, many iterations are still needed. The paper [4] proposed an efficient K-means++ approximation with MapReduce, in which a single MapReduce job suffices for the initialization phase instead of multiple iterations. The Gaussian Mixture Model has also proven its utility in the segmentation of MRI images [17], and some studies have evaluated K-means and its variants such as Bisecting K-means and Fuzzy C-means [18].
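To make that seeding step concrete, here is a minimal NumPy sketch of the D-squared sampling described by Arthur and Vassilvitskii; it illustrates the idea only and is not the MLlib implementation, and the function name and toy data are ours.

import numpy as np

def kmeans_pp_seed(points, k, rng):
    # First center: chosen uniformly at random from the data.
    centers = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        # Squared distance of every point to its closest chosen center.
        d2 = np.min([np.sum((points - c) ** 2, axis=1) for c in centers], axis=0)
        # The next center is drawn with probability proportional to d2.
        centers.append(points[rng.choice(len(points), p=d2 / d2.sum())])
    return np.array(centers)

# Toy usage: seed 3 initial centers from 100 random 2-D points.
rng = np.random.default_rng(0)
print(kmeans_pp_seed(rng.random((100, 2)), 3, rng))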
III. HADOOP ECOSYSTEM

Hadoop is a framework for storing large amounts of data and processing it in a distributed manner. A large file is split into multiple parts on different nodes of the cluster, and the same task is executed on each part of the file simultaneously. Hadoop is based on two core concepts: the Hadoop Distributed File System (HDFS) and the distributed processing framework MapReduce. Since its creation by Doug Cutting, Hadoop has evolved considerably and has built a well-integrated ecosystem around these core concepts.

A. Hadoop Distributed File System (HDFS)

HDFS works with the help of two types of nodes: the name node and the data nodes. The name node stores the details of all the files on the cluster; the Hadoop client talks to this node to read from and write to the data nodes. The name node stores the locations of all the splits of a large file, while the data nodes actually store the parts of the file. The block size is the maximum size of the file portion that can be stored on one node; in earlier versions of Hadoop the default block size was 64 MB, which has since been increased to 128 MB.

B. MapReduce

The MapReduce layer consists of a job tracker and task trackers. The job tracker is initialized every time a job starts executing, and it is responsible for initializing task trackers on every node where a split of the file exists. The task trackers then execute the desired task on their respective nodes. In this way, Hadoop achieves parallel execution on commodity hardware. For reading binary data there are other file formats specific to Hadoop, such as the SequenceFile format, which stores data as binary key-value pairs. Since our dataset consists of small and medium-sized images, SequenceFiles are appropriate for our experiment [2].

C. Apache Spark

Similar to Hadoop MapReduce, Spark is another cluster computing framework [11]. Spark can be deployed on HDFS as well as standalone. Spark has in-memory processing capabilities, although it does not store all the data in memory. In our experiments, we have deployed Spark over HDFS.
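As a brief illustration of how such a sequence file of images can be consumed from Spark, the following PySpark sketch assumes Text keys (file names) and BytesWritable values (image bytes), as described later in Section IV; the HDFS path and the use of Pillow for decoding are our own assumptions.

from io import BytesIO

import numpy as np
from PIL import Image  # assumption: Pillow decodes the stored image bytes
from pyspark import SparkContext

sc = SparkContext(appName="satellite-image-clustering")

# Read (file name, image bytes) records; PySpark exposes Text keys as
# strings and BytesWritable values as bytearrays. The path is hypothetical.
records = sc.sequenceFile("hdfs:///user/demo/landsat.seq")

def to_pixel_matrix(record):
    name, raw = record
    img = Image.open(BytesIO(bytes(raw)))        # decode the image bytes
    arr = np.asarray(img, dtype=np.float64)      # H x W x bands
    return name, arr.reshape(-1, arr.shape[-1])  # one row per pixel

# pixels_per_image holds (file name, pixel matrix) pairs for the next phase.
pixels_per_image = records.map(to_pixel_matrix)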

The core concept that has helped Apache Spark achieve high performance is the Resilient Distributed Dataset (RDD). RDDs are distributed, lazy and can be persisted. Since the data is distributed over multiple HDFS nodes, processing can happen on the individual nodes, minimizing the transfer of data over the network; this is similar to the MapReduce approach. Another property is laziness: Spark records all the operations required for a particular output and does not execute them until the output is explicitly requested. The operations are not executed individually, which means the data need not be stored on disk in order to be passed to the next operation. RDDs can be persisted in memory as well as on disk.

IV. METHODOLOGY

In this paper, we propose a solution over Hadoop to find the best possible clustering of a satellite image using the Spark library. We have implemented a solution which reads a sequence file containing multiple images and then applies different clustering algorithms for different values of k, where k is the number of desired clusters. The proposed framework has three phases, viz. Reading, Clustering and Best of Breed. Figure 1 shows the schematic diagram of the methodology followed in this paper.

Figure 1. Process Methodology

Spark comes with different libraries meant for special purposes: Spark SQL for structured data processing, GraphX for graph processing and Spark Streaming for streaming. Similarly, MLlib provides out-of-the-box capability for common learning algorithms such as classification, regression, clustering and collaborative filtering. As a prerequisite for our Spark-based experiment, the satellite images were converted into a sequence file, in which the images were stored as values in key-value pairs with the file name as the key. The sequence file records were then read as byte arrays and converted into BufferedImages. The following are the three phases of the experiment.

A. Reading Sequence Files

In the first phase, the sequence file is read from HDFS using the Spark context, creating an RDD.

B. Clustering Phase

For each image read from the sequence file, we run the different clustering algorithms available in the Spark MLlib library. The library includes a parallelized version of K-means++ [10]; GMM and Bisecting K-means are also part of the library. We initialize the value of k to 2 and then increment it until k equals 7; for each value of k, all three clustering algorithms are run and the centroids are calculated, as sketched below.
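A minimal sketch of this clustering phase using the RDD-based MLlib API follows; it assumes each image's pixels have already been turned into an RDD of per-pixel vectors (for example with sc.parallelize), and it uses default algorithm parameters.

from pyspark.mllib.clustering import BisectingKMeans, GaussianMixture, KMeans

def cluster_image(pixel_rdd):
    # Run all three algorithms for k = 2..7, keeping each fitted model
    # keyed by (algorithm, k) so the validation phase can compare them.
    models = {}
    for k in range(2, 8):
        models[("kmeans", k)] = KMeans.train(
            pixel_rdd, k, initializationMode="k-means||")  # Scalable K-means++
        models[("bisecting", k)] = BisectingKMeans.train(pixel_rdd, k)
        models[("gmm", k)] = GaussianMixture.train(pixel_rdd, k)
    return models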

C. Extended Performance - Validating Clusters

The output of the second phase is a set of clusterings produced by the three clustering algorithms. We need to find the best clustering for the image, meaning both the best possible number of clusters and the algorithm that produced the clustered image. In this third phase, we use the Silhouette Index algorithm to validate the consistency of the data clusters. The index measures the cohesion among the objects in a cluster; for better clustering, cohesion needs to be higher. For validating the clusters, [6] chose the Simplified Silhouette Index (SSI), a variant of the original Silhouette Index in which the distance is calculated between the centroid and each data point instead of calculating all pairwise distances between data points. The Elbow method, by contrast, sometimes fails when the dataset is not clustered properly.
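The simplified index can be computed from the centroids alone. The following NumPy sketch is our own illustration of that computation: a(i) is the distance from a point to its own centroid, b(i) the distance to the nearest other centroid, and the mean of (b - a) / max(a, b) scores a candidate clustering, with the highest-scoring (algorithm, k) pair winning.

import numpy as np

def simplified_silhouette(points, centroids, labels):
    # Distance from every point to every centroid: shape (n_points, k).
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    idx = np.arange(len(points))
    a = d[idx, labels]        # distance to the point's own centroid
    d[idx, labels] = np.inf   # exclude the own centroid, then...
    b = d.min(axis=1)         # ...take the nearest other centroid
    return float(np.mean((b - a) / np.maximum(a, b)))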
V. EXPERIMENTS

TABLE I. NODE CONFIGURATION

Node type   # of Cores   Number of Nodes   RAM
Name Node   8            1                 32GB
Data Node   8            15                32GB

Our cluster ran Spark 2.0.2, with the executor memory set at 16 GB and 10 executor workers running per data node. We ran our experiments on 80 Landsat images with sizes varying from 10 MB to 200 MB. The objective of the experiment was not only to propose the best-of-breed solution but also to verify its scalability.

A sequence file was first created from the satellite images of varying sizes and then ingested into the framework in order to find the best clustering of each of the images. Three Spark jobs were run simultaneously, corresponding to the three clustering methods, with each record of the sequence file being one image. Since the exact number of clusters for an image is not known, the value of k varied from two to seven, and each of the concerned algorithms was run for each value of k. The results were then passed through the implementation of the Simplified Silhouette Index, which indicated the clustering algorithm that produced the best clustering for each image. Out of the eighty satellite images, K-means was suggested as the best clustering for twenty-four images, thirty-six images were best clustered by Bisecting K-means, and the Gaussian Mixture Model was suggested for the remaining images.

Since many algorithms were run in parallel over large images, we also evaluated the scalability of the proposed solution by calculating scaleup and speedup values. Scaleup was measured by increasing the number of images in the same proportion as the resources. We took two sets of images: in the first set, each image was around 20 MB, and we measured the total time taken by the proposed solution for 10, 20, 40 and 80 images on 4-node, 8-node, 12-node and 16-node clusters respectively. The scaleup ratio was found to be 1.3. The same process was repeated for 80 MB images, giving a ratio of 1.23. For the speedup evaluation, we multiplied the resources while keeping the input size fixed, taking 10 images of 30 MB and of 95 MB each and repeating the experiment on 4-node, 8-node and 16-node clusters. The speedups were 1.12 and 1.23 respectively.

VI. CONCLUSION & FUTURE SCOPE

In our experiment we have created a solution which suggests the best clustering algorithm to apply to a particular satellite image, along with the appropriate number of clusters for image processing. We ran three clustering algorithms on the images and then found the best among them using the Simplified Silhouette Index. We also showed that the proposed solution is scalable for images of less than 150 MB in size. In our experiment, we assumed that an image is processed on a single node only. We are now working on extending the solution so that larger images spanning two or more nodes can be processed on multiple nodes. We further plan to add more clustering algorithms that are not present in Apache Spark, and to evaluate these algorithms on the basis of their scalability and performance.

REFERENCES

[1] Z. Lv, Y. Hu, Z. Haidong, J. Wu, B. Li and H. Zhao, "Parallel K-Means Clustering of Remote Sensing Images Based on MapReduce," in 2010 International Conference on Web Information Systems and Mining (WISM 2010), 2010.
[2] T. White, "Hadoop I/O: File-Based Data Structures," in Hadoop - The Definitive Guide, O'Reilly.
[3] W. Zhao, H. Ma and Q. He, "Parallel K-Means Clustering Based on MapReduce," in CloudCom, Beijing, 2009.
[4] Y. Xu, W. Qu, G. Min, K. Li and Z. Liu, "Efficient k-means++ Approximation with MapReduce," IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 12, December 2014.
[5] B. Bahmani, B. Moseley, A. Vattani, R. Kumar and S. Vassilvitskii, "Scalable k-means++," in International Conference on Very Large Databases, 2012.
[6] K. D. Garcia and M. C. Naldi, "Multiple Parallel MapReduce k-means Clustering with Validation and Selection," in Brazilian Conference on Intelligent Systems, IEEE, 2014.
[7] T. Shama, V. Shokeen and S. Mathur, "Multiple K-Means++ Clustering of Satellite Image Using Hadoop MapReduce and Spark," International Journal of Advanced Studies in Computer Science and Engineering, vol. 5, no. 4, 2016.
[8] Wikipedia, "Elbow Method (Clustering)." [Online]. Available: https://en.wikipedia.org/wiki/elbow_method_(clustering).
[9] Wikipedia, "Silhouette (Clustering)." [Online]. Available: https://en.wikipedia.org/wiki/silhouette_(clustering).
[10] Apache Spark, "Clustering - RDD-based API." [Online]. Available: http://spark.apache.org/docs/latest/mllibclustering.html.
[11] Apache Spark. [Online]. Available: http://spark.apache.org/docs/1.3.0/mllib-clustering.html#kmeans.