Cooperation between Data Modeling and Simulation Modeling for Performance Analysis of Hadoop


Byeong Soo Kim and Tag Gon Kim
Department of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
E-mail: {kevinzzang, tkim}@kaist.ac.kr

Abstract
Performance analysis of a complex computing system requires a great deal of time and effort, and many studies related to such analysis have been conducted. First, there are studies on workload modeling, which model and analyze observed workloads using statistical summaries. There are also studies that analyze system behavior and procedures through simulation modeling. However, using only one approach has disadvantages when a complex system must be analyzed accurately. This paper therefore presents cooperation between data modeling and simulation modeling for performance analysis of such a system. The target system, Hadoop, is one of the representative big data platforms and illustrates the complexity of the analysis. We first identify the characteristics of Hadoop and divide its components into two parts according to those characteristics. We then model the parts with the appropriate modeling approaches and integrate them. The paper presents experiments that show the advantages of the cooperative modeling in accuracy, execution time, and modeling extensibility.

Keywords: cooperative modeling; performance analysis; data modeling; simulation modeling; Hadoop

I. INTRODUCTION
Performance analysis of a complex computing system requires a deep understanding of the system. The larger and more complex the system, the more cost and effort the performance analysis needs. Its results are important because they help predict the future behavior of the computing system and maximize its performance; they can also be used for resource planning, parameter tuning, and so on [1].

Meanwhile, demand for big data computing platforms such as Hadoop is exploding in the big data era. Hadoop is an open-source software framework for distributed storage and processing of large data sets [2]. It consists of MapReduce and the Hadoop Distributed File System (HDFS). While Hadoop is efficient and reliable for data-intensive computing, there are numerous parameters to configure in a Hadoop cluster for efficient execution. It is also difficult to set up a physical Hadoop cluster to evaluate the scalability of an application up to a thousand nodes. Performance analysis of Hadoop is therefore one of the most important steps in deciding on a set of optimal parameters for good performance.

When modeling such a system for performance analysis, the model can be classified into a workload model and a system model, as shown in Figure 1. Generally, workload means the amount of work assigned to the system in a given time period [3]. Workload modeling creates a statistical summary of these workloads through observation of the system. It can be applied to all workload attributes, such as CPU usage, memory usage, I/O behavior, and network traffic, and it provides the ability to change model parameters while reducing file size compared to raw workload traces. A system model is a conceptual model, produced by system modeling, that describes and represents the structure, process, and characteristics of the system [4]. Depending on the abstraction level of the system or the purpose of the analysis, the performance analysis of a Hadoop system can be performed through both models together or through each model alone.
The performance analysis of the Hadoop framework has been addressed previously. First, some research analyzes the performance of Hadoop using workload models [5, 6]; this can be called data modeling using observed data (traces). These studies performed statistical analysis and modeling on real workload traces, extracted job features from the traces, and generated realistic synthetic workloads for prediction. Such models predict workload patterns well, but they consider only map and reduce performance and do not reflect platform models that include cluster information or hardware; they can be seen as a high-abstraction view of MapReduce. On the other hand, there are existing Hadoop simulators built from system knowledge [7]; this approach can be called simulation modeling using prior knowledge. HSim [8] and MRPerf [9] are representative simulators. They can simulate the dynamic behavior of Hadoop clusters and expose many Hadoop parameters, including hardware and cluster parameters. However, they do not incorporate workload models built using data modeling. For example, the characteristics of each Hadoop application and the disk I/O behavior, which are difficult to model with low-level knowledge, are only crudely reflected in these simulators. This can hurt prediction accuracy and scalability in performance analysis.

Fig. 1. Concept of the proposed cooperative model: a workload model (data modeling) built from real workload traces (observed data) and a system model (simulation modeling) built from structure and process (prior knowledge) cooperate within the Hadoop framework model for performance analysis.

In order to overcome the disadvantages of each approach, we need a way to obtain enhanced results through cooperation between data modeling of the workload model, which uses actual data, and simulation modeling of the system model, which uses knowledge of the system components. In this paper, we propose a cooperative modeling of the two approaches for the performance analysis of Hadoop. We first perform conceptual modeling and partition the system according to the characteristics of the Hadoop components. We then model each partitioned part with the corresponding modeling method and integrate the results. Because we maximize the benefits of each modeling method, prediction accuracy and scalability can both be improved.

This paper is organized as follows: background knowledge about Hadoop and the two modeling methods is briefly introduced; then the proposed cooperation of the two modeling methods is described; finally, experiments are provided to show the contributions of the work.

II. PRELIMINARIES

A. Hadoop Overview
Hadoop is a representative big data platform for reliable, scalable, large-scale distributed computing [4]. MapReduce is a computing framework for large-scale distributed data processing based on the divide-and-conquer paradigm; it works by breaking the processing into map and reduce steps [10]. The MapReduce framework executes the map and reduce tasks in parallel on different machines within the Hadoop cluster. Map performs data filtering and sorting, and reduce performs summary operations. Users can define the map and reduce functions as well as the types of input and output. Figure 2 shows the concept of MapReduce.

Fig. 2. Concept of MapReduce: input splits are processed by map tasks, sorted, shuffled, and merged, and reduce tasks produce the results.

HDFS is the distributed file system of Hadoop; it stores data reliably on commodity clusters [11]. Input data stored on HDFS are split into fixed-size blocks, and each block is allocated to a map task. The map task processes each key-value pair in the block and outputs the result as a list of key-value pairs. The output of the map is then partitioned by key, and the partitions are transferred to the corresponding reduce tasks; this process is called shuffle. The gathered records are merged and sorted at the reduce task of each node. The user-specified reduce function reads and processes the key-value pairs sequentially. Finally, the outputs of the reduce tasks are written to HDFS.
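To make the map/shuffle/reduce flow above concrete, the following minimal Python sketch runs a word count entirely in memory. It is an illustrative toy under simplifying assumptions, not Hadoop code: map_fn, shuffle, and reduce_fn are hypothetical names chosen for this sketch, and the hash-based partitioning stands in for Hadoop's key partitioner.

    from collections import defaultdict

    def map_fn(split):
        # Map: filter/transform each record, emitting (word, 1) pairs.
        for line in split:
            for word in line.split():
                yield (word, 1)

    def shuffle(mapped, num_reducers):
        # Shuffle: partition the map output by key so that all pairs
        # with the same word arrive at the same reduce task.
        partitions = [defaultdict(list) for _ in range(num_reducers)]
        for key, value in mapped:
            partitions[hash(key) % num_reducers][key].append(value)
        return partitions

    def reduce_fn(partition):
        # Reduce: summarize the merged value list of each key.
        return {word: sum(counts) for word, counts in partition.items()}

    splits = [["to be or not to be"], ["that is the question"]]
    mapped = [pair for split in splits for pair in map_fn(split)]
    print([reduce_fn(p) for p in shuffle(mapped, num_reducers=2)])

In real Hadoop the splits come from HDFS blocks and the map and reduce tasks run on different machines, but the data flow is the same.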
B. Data Modeling and Simulation Modeling
Data modeling is used to build models that complement theory-based simulation models. A data model determines the correlation between a system's inputs and outputs using a training data set that is representative of all the behaviors found in the system [12]. Once the model is learned, it can be tested on a separate data set to determine how well it generalizes to unseen data. Data modeling consists of acquisition, modeling, validation, and prediction processes. It has seen wide use in science, engineering, economics, industry, and other fields to predict the future behavior of a system [12]. Simulation modeling, on the other hand, is a knowledge-based approach generally used in the simulation field. To build a model, it uses theories, physical laws, operational laws, and so on. Because a theory is a statement of what causes what and why, simulation modeling can clearly represent the causality between a set of inputs and outputs of the system, in contrast to data modeling [13].

Fig. 3. Definition of data modeling and simulation modeling: a data model is trained from input data X and output data Y by minimizing the error between Y and the predicted Ypre, while a simulation model is built by abstracting the real system from knowledge and maps input X to the predicted output Ypre.

The two approaches each have pros and cons. First, the simulation model enables a higher level of analysis, such as prescriptive analysis through causal relationships, whereas the data model generally remains at predictive analysis through correlations between variables [13]. Also, the simulation model can represent a dynamic map from input and state to output, while the data model represents only a static map from input variables to output variables. For a valid prediction with data modeling, the system structure should remain unchanged between training and prediction, and it is difficult to reflect abnormal behavior or a system that does not yet exist. Simulation modeling, on the other hand, requires sufficient knowledge of the system for a valid prediction, and accurate prediction can be difficult due to the assumptions or constraints of that knowledge. In the next section, we present a cooperative modeling of Hadoop that considers the features of the two modeling methods.

III. PROPOSED COOPERATIVE HADOOP MODEL
The proposed work is divided into four parts. The first is conceptual modeling, which identifies the overall characteristics of the Hadoop system. The second and third parts give a detailed description of each modeling method: data modeling and simulation modeling. The last part presents the integration and implementation of the data model and simulation model.

A. Conceptual Modeling: Overall Structure
To model the Hadoop system, one must build a conceptual model that expresses the structure, abstraction level, and system elements according to the objectives of the analysis. The conceptual model should be partitioned into two models (a data model and a simulation model) according to the objective of the analysis and the level at which data and knowledge can be acquired. Hadoop can be divided into two types of models: a workload model and a system model. The workload model consists of an application model and a disk I/O model; the system model consists of a MapReduce model, an HDFS model, and a platform model. The system model can be represented as the simulation model because sufficient knowledge about its components can be obtained. The workload model, on the other hand, can be represented as the data model given the acquisition of environmental data. The classification of models may vary depending on the purpose of the analysis. Table 1 shows the model partitioning and the description of each model. The partitioned models are then built with the corresponding modeling approach, as follows.

TABLE I. MODEL CLASSIFICATION OF HADOOP

Workload / Application model:
- Hadoop application program (WordCount, TeraSort, TestDFSIO, etc.)
- Difficult to learn internal operations
- Requires many assumptions for modeling

Workload / Disk I/O model:
- Storage model for file write, read, and shuffle
- Requires low-level knowledge of storage
- Possible to use existing simulators

System / MapReduce model:
- MapReduce framework (map -> shuffle -> reduce)
- Enough knowledge for system modeling
- Elements (parameters, algorithms) must be reflected after modeling

System / HDFS model:
- Operation of name node and data nodes
- Structure of the distributed file system
- Data placement algorithms
- Enough knowledge for system modeling
- Elements must be reflected after modeling

System / Platform model:
- Structure of the platform
- Hardware model, including network
- Coupling relationships among master node and slave nodes

B. Part of Data Modeling: Workload Model
The workload model of Hadoop consists of an application model and a disk I/O model. The application model describes the Hadoop application program, for example WordCount, TeraSort, or TestDFSIO. Modeling such an application requires understanding its internal operation mechanisms, including the hardware performance, but these are very complex and require low-level knowledge. Also, because simulation modeling of applications demands many assumptions, it can cause a drop in accuracy. It is therefore more appropriate to use the data modeling method for the application model than the simulation modeling method. The disk I/O model is similar: modeling disk I/O requires low-level knowledge of the storage system. It is possible to use existing simulators such as DiskSim [14], but they can inflate the simulation time or resource usage, which does not fit the purpose of the simulation. The disk I/O model can therefore also be created through data modeling. In this paper, we use Artificial Neural Networks (ANNs) to perform the data modeling (Figure 4).
ANN is one of the representative data modeling approaches, inspired by the biological neural networks of the human brain. It is composed of a large number of highly interconnected neurons. ANN models are built by training the network to represent the relationships and processes that are inherent within the data [15]. During training, the strengths of the neuron connections (called weights) are changed in order to calibrate the model.

Fig. 4. Artificial Neural Networks (ANNs): input layer, hidden layer, and output layer connected by weights.

Fig. 5. Process and result of data modeling using ANNs: the application model maps configuration inputs (number of nodes, size of input data, number of files, chunk size) to outputs such as size ratios, processing times, and variance of the map, sort, and reducer stages for WordCount and TeraSort; the I/O model maps similar inputs to shuffle time per node (sec/MB) and disk write and read rates (MB/sec).

To perform data modeling with an ANN, we first identify the input and output parameters of the target models. Then, we collect and extract environmental data from executions of the Hadoop application to use as training data.
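As a rough illustration of this training step, the sketch below fits a one-hidden-layer network to synthetic configuration/output pairs by minimizing the mean squared error. It is a simplified stand-in: the paper uses the Levenberg-Marquardt algorithm [16], while this sketch substitutes plain gradient descent for brevity, and the four input features and the target function are hypothetical placeholders for the parameters of Figure 5.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical normalized configurations: columns stand in for
    # (# of nodes, size of input data, # of files, chunk size).
    X = rng.uniform(0.0, 1.0, size=(200, 4))
    w_true = np.array([[0.4], [0.3], [0.2], [0.1]])
    Y = np.tanh(X @ w_true)           # synthetic stand-in for measured outputs

    W1 = rng.normal(0, 0.1, (4, 8)); b1 = np.zeros(8)   # input -> hidden
    W2 = rng.normal(0, 0.1, (8, 1)); b2 = np.zeros(1)   # hidden -> output

    lr = 0.05
    for epoch in range(2001):
        H = np.tanh(X @ W1 + b1)      # hidden activations
        P = H @ W2 + b2               # predictions Ypre
        err = P - Y
        if epoch % 500 == 0:
            print(epoch, np.mean(err ** 2))   # MSE: learning performance
        # Backpropagate the MSE gradient and adjust the weights.
        dP = 2 * err / len(X)
        dW2 = H.T @ dP; db2 = dP.sum(0)
        dH = (dP @ W2.T) * (1 - H ** 2)
        dW1 = X.T @ dH; db1 = dH.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2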

After that, each data model of the application and disk I/O is created through a training process using the acquired data set. Figure 5 presents the input/output parameters of each model. For the data modeling, we use the Levenberg-Marquardt optimization technique as the learning algorithm [16] and mean squared error as the measurement of learning performance.

C. Part of Simulation Modeling: System Model
The MapReduce and HDFS models can be modeled using domain knowledge. MapReduce operations are performed through the map, shuffle, and reduce processes, which run independently in parallel; a detailed description of each process is given in Section II. In MapReduce, the unit of work that a client wants to perform is called a job, and it consists of input data, a MapReduce program, and configuration information. To control job execution, there are two kinds of nodes: a job tracker and a number of task trackers. The job tracker schedules the tasks to be performed by the task trackers so that all jobs are performed in the system as a whole. The task trackers perform each task and send progress reports to the job tracker, which keeps the entire history of each job as one record. If a task fails, the job tracker reschedules it on another task tracker.

HDFS operates in a master-slave fashion: it has a name node as the master node and data nodes as slave nodes. The name node manages the namespace of the file system. It maintains a file system tree and metadata for all the files and directories in that tree; this information is persistently stored in two files on the local disk, a namespace image and an edit log. The name node also knows which data nodes hold the blocks of a given file. A data node is responsible for the actual operations of the file system: it stores and retrieves blocks when requested by a client or the name node, and it periodically reports its list of stored blocks to the name node. The platform model includes hardware models, such as a network model and a topology model, and thus describes the connections between the clusters.

When these processes and structures are modeled by data modeling, such details become highly abstracted, which makes behavior analysis and structural change difficult. It is also difficult to represent the heterogeneous computing environments of a Hadoop platform with numerous nodes. Therefore, simulation modeling through understanding of the whole process is needed rather than simple data modeling through data acquisition. In this paper, we use the Discrete Event System Specification (DEVS) formalism for the simulation modeling of these models [17]. DEVS is a set-theoretic specification of discrete event systems that has been widely used for modeling many applications in science and engineering. The formalism is hierarchical, modular, and object-oriented, and it is therefore suitable for modeling the system model of Hadoop. It consists of atomic DEVS models, which represent system behavior, and coupled DEVS models, which represent system structure. The structure of the Hadoop system model using the DEVS formalism is shown in Figure 6, and the DEVS models of the internal components are shown in Figure 7 (a schematic code sketch is given at the end of this section).

Fig. 6. Structure of the Hadoop system model: a master node (JobTracker, NameNode) and slave nodes (TaskTracker, DataNode) connected through a network, with clients attached.

Fig. 7. System model: example of the Hadoop DEVS model, e.g., a MasterNode coupled model containing NameNode and JobTracker coupled models and a Message Switcher atomic model, taking input data and system parameters (parameters, algorithms).

D. Integration and Implementation
After the data model and simulation model are built, they must be integrated. The models can be connected through predefined input/output relationships. They can be implemented separately in heterogeneous environments and interoperated through middleware, or they can be developed and integrated within one homogeneous environment. In this paper, we develop the Hadoop model in a single environment. The integrated model is illustrated in Figure 8, which shows the components and the connections among the models.

Fig. 8. Integrated Hadoop model: data models (application and disk I/O) combined with simulation models (master node, slave nodes, network, and clients) into one integrated Hadoop model.
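To make the DEVS description concrete, here is a schematic atomic model in Python with the standard DEVS characteristic functions (external transition, internal transition, output, and time advance). It is a sketch written for this description, not the authors' implementation; the TaskTracker-like behavior, the state names, and the message format are hypothetical.

    INFINITY = float("inf")

    class TaskTrackerAtomic:
        # Schematic DEVS atomic model: accept a task, stay BUSY for its
        # processing time, then report completion to the job tracker.
        def __init__(self):
            self.state = "IDLE"
            self.sigma = INFINITY     # time until the next internal event

        def ext_transition(self, elapsed, task):
            # External transition: a task arrives on the input port.
            if self.state == "IDLE":
                self.state = "BUSY"
                self.sigma = task["proc_time"]

        def output(self):
            # Output function: fired just before the internal transition.
            return {"port": "report", "msg": "task_done"}

        def int_transition(self):
            # Internal transition: processing finished, become idle again.
            self.state = "IDLE"
            self.sigma = INFINITY

        def time_advance(self):
            return self.sigma

    tt = TaskTrackerAtomic()
    tt.ext_transition(0.0, {"proc_time": 3.0})
    print(tt.time_advance())          # -> 3.0

A coupled DEVS model, such as the MasterNode in Figure 7, would then connect the ports of several such atomic models.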

IV. EXPERIMENT
This section presents experiments on the Hadoop model built with the proposed modeling approach, in which data modeling and simulation modeling cooperate. To demonstrate the effectiveness of the proposed work, three experiments were designed, as shown in Table 2. The first experiment shows the prediction accuracy of the proposed work: we predict job completion time and throughput according to the number of data nodes and the size of the input data. In the second experiment, we compare the real execution times of each simulation. The final experiment compares the model extensibility of the proposed method with that of the existing methods.

TABLE II. EXPERIMENTAL DESIGN

Design: Prediction Accuracy
- Objective: accuracy of output using RMSE
- Parameter X: # of data nodes, total data size
- Parameter Y: job completion time

Design: Execution Time
- Objective: simulation execution time (simulation speed)
- Parameter X: # of data nodes
- Parameter Y: simulation execution time

Design: Extension
- Objective: extensibility and behavior analysis
- Parameter X: data placement algorithm
- Parameter Y: job completion time

A. Prediction Accuracy
The most important factor that determines the performance of the simulation is the prediction accuracy of the output. Prediction accuracy can be compared through the root mean squared error (RMSE), which is calculated from the difference between the real execution results and the simulation results; the smaller the value, the closer the predicted result is to the actual result (a minimal computation is sketched after Table V). In this experiment, we compare the prediction accuracy of the proposed model with that of the control groups by simulating the job completion time according to the number of data nodes and the data size. The real execution of Hadoop was conducted on a homogeneous cluster of 16 nodes, consisting of one master node and 15 data nodes. The parameters used in the experiment are shown in Table 4.

TABLE IV. PARAMETERS USED IN EXPERIMENT A

- Application: WordCount
- # of Map / Reduce: 30 / 1
- Chunk size: 64 MB
- Total data size: 0.5 ~ 16 GB
- # of data nodes: 1 ~ 1024

For these experiments, models built using only simulation modeling and using only data modeling serve as control groups. In the first control model, built using only simulation modeling, the system model is created using the DEVS formalism in the same way as in the proposed work, and the workload model is also created through simulation modeling: the application model that constitutes the workload model is made into a simple simulation model through an abstraction process, and the disk model uses the existing DiskSim created by domain experts [14]. The model built using only data modeling, on the other hand, is simpler: it is modeled at once using the entire input and output data of Hadoop, without distinction between the workload model and the system model. A detailed description of each model is given in Table 3.

TABLE III. MODELS USED IN THE EXPERIMENT

Proposed Work (Data + Sim.)
- Workload model: data modeling using ANNs
- System model: simulation modeling using DEVS

Only Simulation Modeling
- Workload model: abstracted application model; I/O model: DiskSim
- System model: simulation modeling using DEVS

Only Data Modeling
- Data modeling of the entire system at once using ANNs

Fig. 9. Experimental result: prediction accuracy, job completion time (sec) versus (a) # of data nodes and (b) data size (GB), for the execution result and the Data+Sim., Data, and Simulation models.

TABLE V. COMPARISON OF PREDICTION ACCURACY (RMSE)

(a) # of Data Nodes: Data + Sim. 24.3 (lowest); Data 35.9; Simulation 32.1
(b) Data Size: Data + Sim. 86.4 (lowest); Data 177.6; Simulation -
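For reference, the RMSE used in this comparison can be computed as in the sketch below; the job completion times shown are placeholders, not the paper's measurements.

    import math

    def rmse(real, predicted):
        # Root mean squared error between real execution results and
        # simulation results; smaller values mean closer predictions.
        n = len(real)
        return math.sqrt(sum((r - p) ** 2 for r, p in zip(real, predicted)) / n)

    real_times = [120.0, 95.0, 80.0]  # placeholder job completion times (sec)
    simulated = [131.0, 90.0, 84.0]   # placeholder simulation outputs (sec)
    print(rmse(real_times, simulated))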

Figure 9 and Table 5 show the simulation results comparing the prediction accuracy. The proposed model has a lower RMSE than the other two models; in other words, the cooperative model of data modeling and simulation modeling improves the prediction accuracy compared to the other models. This holds for both the number of data nodes and the data size. In the first experiment, the data model even produces negative job completion times, because it is difficult to express the boundary conditions accurately with only the input/output data used for data modeling. The simulation model shows relatively accurate predictions, though its accuracy varies with how strongly the application affects the overall process of the system: because the application model in the simulation model is an abstracted one, the application's impact on the overall system is underrepresented. As a result, we can see that cooperation between data modeling and simulation modeling gives better prediction results than using either modeling method alone.

B. Simulation Execution Time
Simulation execution time is also a very important factor in performance evaluation. As the number of simulated nodes or the number of experimental designs increases, the simulation time increases exponentially, which costs time resources. In this experiment, we compare the simulation time according to the number of nodes for the Hadoop models created by simulation modeling only, by data modeling only, and by the proposed approach. The parameters used in this experiment are shown in Table 6.

TABLE VI. PARAMETERS USED IN EXPERIMENT B

- Application: WordCount
- # of Map / Reduce: 30 / 1
- Chunk size: 64 MB
- Total data size: 2 GB
- # of data nodes: 1 ~ 1024

Figure 10 shows the execution times of each simulator. In the real system, the execution time decreases as the number of data nodes increases. In the simulation, by contrast, the run-time increases with the number of data nodes when the data size is constant, because the computing resources required for the simulation grow with the number of nodes. The data model has the highest simulation speed because it does not consider the cluster topology: the number of nodes, the topology, and the specifications are simply abstracted numerically inside the model. The low speed of the simulation model is caused by DiskSim serving as the I/O model. The proposed model has an intermediate speed between the two: it is slower than the data model but more accurate.

C. Extensibility
Data modeling and simulation modeling generally differ in purpose and features, and one difference concerns model extensibility. As discussed earlier, the simulation model can take algorithms and object models as inputs, as well as parameters, which makes it easy to run experiments on changes to system algorithms or models. The data model, however, can reflect only parameter changes; it is difficult to use algorithms or object models as its inputs. To consider them in the data model, new data would have to be collected and the data modeling process performed again. It is also difficult to analyze system behavior, such as failure analysis and topology analysis, with the data model. The proposed model, in contrast, makes these analyses possible with high extensibility.
Since the model has the advantages of simulation modeling, it can use various types of inputs: in addition to numerical parameters, it is possible to simulate the Hadoop system by changing algorithms and object models. In this experiment, we perform simulations that change the data placement algorithm as an input; this shows an extensibility that the pure data model cannot offer. We use a Round-Robin algorithm and a capacity-based algorithm as the data placement algorithm (illustrative sketches of both are given below, after the figure captions). Figure 11 shows the experimental results.

Fig. 10. Experimental result: simulation execution time (min) versus # of data nodes for the execution result and the Data+Sim., Data, and Sim. models.

Fig. 11. Experimental result: extensibility.
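To illustrate the kind of algorithm swapped in as an input here, the sketch below contrasts round-robin block placement with a simple capacity-based variant. The paper does not detail its placement algorithms, so both functions are hypothetical stand-ins written for this sketch.

    import itertools

    def round_robin_placement(blocks, nodes):
        # Assign blocks to data nodes in cyclic order.
        cycle = itertools.cycle(nodes)
        return {block: next(cycle) for block in blocks}

    def capacity_placement(blocks, capacities):
        # Greedy capacity-based variant: place each block on the node
        # with the most remaining capacity (counted in blocks).
        remaining = dict(capacities)
        placement = {}
        for block in blocks:
            node = max(remaining, key=remaining.get)
            placement[block] = node
            remaining[node] -= 1
        return placement

    blocks = ["blk_%d" % i for i in range(6)]
    print(round_robin_placement(blocks, ["dn1", "dn2", "dn3"]))
    print(capacity_placement(blocks, {"dn1": 4, "dn2": 2, "dn3": 1}))

Exchanging one such function for another changes the simulated data distribution without retraining any data model, which is exactly the kind of experiment the data model alone cannot support.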

V. CONCLUSION
This paper has presented cooperation between data modeling and simulation modeling for the performance analysis of Hadoop. There is research related to data modeling that analyzes observed workloads using statistical summaries, and there are studies that simulate system behavior and procedures through simulation modeling. Because each approach has disadvantages, we complement their shortcomings through their cooperation. To do so, we first identify the characteristics of Hadoop and classify its components into two parts according to those characteristics; we then model the parts with the corresponding modeling approaches and integrate them. The paper presented three experiments that show the characteristics of the cooperative modeling: prediction accuracy, execution time, and modeling extensibility. From these experiments, we can see that the cooperation of the two modeling methods gives better prediction results than using either method alone. We can also see that the proposed model has advantages in execution speed over pure simulation modeling and in model extensibility over pure data modeling. In future work, we will add components that were not reflected in this paper and study a methodology for developing cooperative models of various systems.

REFERENCES
[1] L. E. B. Villalpando, A. April, and A. Abran, "Performance analysis model for big data applications in cloud computing," Journal of Cloud Computing, vol. 3, no. 1, p. 19, 2014.
[2] Apache Hadoop, http://hadoop.apache.org (last accessed: 10.03.17).
[3] D. G. Feitelson, "Workload modeling for performance evaluation," IFIP International Symposium on Computer Performance Modeling, Measurement and Evaluation, pp. 114-141, Springer Berlin Heidelberg, September 2002.
[4] H. Gronniger and B. Rumpe, "Definition of the System Model," UML 2 Semantics and Applications, p. 61, 2009.
[5] H. Yang, Z. Luan, W. Li, and D. Qian, "MapReduce workload modeling with statistical approach," Journal of Grid Computing, vol. 10, no. 2, pp. 279-310, 2012.
[6] T. A. de Ruiter, A Workload Model for MapReduce, M.Sc. thesis, Delft University of Technology, 2012.
[7] X. Wu, Y. Liu, and I. Gorton, "Exploring performance models of Hadoop applications on cloud architecture," Proceedings of the 11th International ACM SIGSOFT Conference on Quality of Software Architectures, pp. 93-101, 2015.
[8] Y. Liu, M. Li, N. K. Alham, and S. Hammoud, "HSim: a MapReduce simulator in enabling cloud computing," Future Generation Computer Systems, vol. 29, no. 1, pp. 300-308, 2013.
[9] G. Wang, A. R. Butt, P. Pandey, and K. Gupta, "Using realistic simulation for performance analysis of MapReduce setups," Proceedings of the 1st ACM Workshop on Large-Scale System and Application Performance, 2009.
[10] J. Dean and S. Ghemawat, "MapReduce: Simplified data processing on large clusters," Communications of the ACM, vol. 51, no. 1, pp. 107-113, 2008.
[11] K. Shvachko, H. Kuang, S. Radia, and R. Chansler, "The Hadoop Distributed File System," 2010 IEEE 26th Symposium on Mass Storage Systems and Technologies, Incline Village, USA, May 3-7, 2010.
[12] R. J. Abrahart, L. M. See, and D. P. Solomatine, Practical Hydroinformatics: Computational Intelligence and Technological Developments in Water Applications, vol. 68, Springer Science & Business Media, 2008.
[13] B. S. Kim, B. G. Kang, S. H. Choi, and T. G. Kim, "Data modeling versus simulation modeling in the big data era: case study of a greenhouse control system," to appear in SIMULATION: Transactions of The Society for Modeling and Simulation International, 2017.
[14] J. S. Bucy, J. Schindler, S. W. Schlosser, G. R. Ganger, and Contributors, The DiskSim Simulation Environment Version 4.0 Reference Manual, Carnegie Mellon University, http://www.pdl.cmu.edu/disksim/, 2008.
[15] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4-27, 1990.
[16] D. W. Marquardt, "An algorithm for least-squares estimation of nonlinear parameters," Journal of the Society for Industrial and Applied Mathematics, vol. 11, no. 2, pp. 431-441, 1963.
[17] B. P. Zeigler, H. Praehofer, and T. G. Kim, Theory of Modeling and Simulation, 2nd ed., Academic Press, 2001.