
The study on magnanimous data-storage system based on cloud computing
Dongping Cheng
Information Department, The First Affiliated Hospital of Soochow University, Suzhou 215006, Jiangsu, (CHINA)
BioTechnology: An Indian Journal, BTAIJ 10(11), 2014, 5368-5376. ISSN: 0974-7435. Trade Science Inc.

ABSTRACT
Cloud computing is both a computing paradigm and a brand-new business model, and it raises the problem of storing very large amounts of data. To better address this storage problem, this paper studies cloud computing and storage systems, focusing on Hadoop, the distributed computing framework of the Apache open-source organization. It first examines the storage infrastructure of the HDFS file system, which underlies the design of Hadoop; it then analyzes the MapReduce programming model and HBase in order to explore methods of system design; finally, it builds a massive data-storage model rooted in cloud computing, tests the resulting system, and compares it with a traditionally designed system in terms of CPU load and memory usage. The results show that the system's effect on CPU load stays within an acceptable range, while it offers a clear memory-usage advantage on the NameNode.

KEYWORDS
Cloud computing; Hadoop; System testing; CPU load; Memory usage.

INTRODUCTION

At the end of 2006, Google became the first company to put forward the concept of the "cloud", in order to solve the problem of storing massive data on the Internet. As a computing model for sharing resources, cloud computing offers powerful information storage and processing capabilities, which has attracted much research from relevant scholars. Building on studies of large-scale data-storage technology, this paper explores a more convenient and effective system design for big-data storage based on cloud computing.

Research on cloud computing and storage systems reflects the efforts of many people, which have promoted the development of cloud computing to a certain extent. Among them: Tuo Shouheng [1] designed a generic architecture for cloud computing after introducing the definition and characteristics of existing cloud systems; he focused on the structural model of cloud computing and its storage system, analyzed two new storage technologies, and examined in depth the development trend of cloud computing and storage [1]. Xu Xiaoyuan et al. [2] focused on the big-data storage problems caused by the wide use of the Internet; based on cloud computing, they analyzed the computational model of the Hadoop framework and MapReduce, proposed an architecture diagram for massive data storage based on cloud computing, and further analyzed its storage layer [2]. Duan Wanlong et al. [3] studied massive data-storage systems for books based on cloud computing, analyzed the main features of modern digital libraries as well as the application of cloud computing technology in library data management, and proposed appropriate guidance [3]. Building on these previous studies, this paper investigates a massive data-storage system under cloud computing, in order to explore its advantages over traditional systems.

OVERVIEW OF CLOUD COMPUTING SYSTEM AND THE STRUCTURE OF STORAGE SYSTEM

Yuan Guojun [4] pointed out that cloud computing is not only a computing model but also an entirely new business model; it is the evolution of distributed processing, parallel processing and grid computing, or can be regarded as the realization of those computing concepts [4]. The cloud computing platform is a powerful "cloud" network that connects a large number of concurrent network computations and services; it can use virtualization technology to expand the capacity of each server, pool their resources through the cloud platform, and provide supercomputing and storage capability, as shown in Figure 1, a generic cloud computing architecture.

Figure 1 : A generic cloud computing architecture.
In Figure 1, the cloud user interface provides users with an entry point for requesting services: users can register, log in, customize services, and configure and manage their accounts via a Web browser. The service catalog is the list of services a user can select or customize after obtaining the appropriate permissions. The management system and deployment tools provide management and services: they manage cloud users (authorization, authentication, login) and the available computing resources and services; they accept requests sent by users and forward each to the appropriate application; and they schedule resources, intelligently deploy resources and applications, and dynamically deploy, configure and reclaim resources. They also monitor and meter the usage of cloud computing in order to react quickly, completing node-configuration synchronization, load-balancing configuration and resource monitoring to ensure that the resource allocation fits the user.

The server cluster refers to the virtual or physical servers under the control of the management system; it handles highly concurrent user requests, computationally intensive jobs and Web application services. Cloud data is stored after being split by a partitioning algorithm, and uploads and downloads of big data are performed in parallel. Wang Shengchuan [5] pointed out that the user chooses the required services through the client, the management system schedules resources according to the request, and the deployment tool then distributes the request and configures the Web application [5].

Tang Jian [6] pointed out that, compared with traditional storage devices, a cloud storage system is not merely hardware but a complex system composed of many parts, mainly including network devices, storage devices, servers, applications, public access interfaces, access networks and client programs. The core of these parts is the storage device, which provides data storage and access services for the business according to the user's request, through software applications. Figure 2 shows the four layers of the cloud storage system.

Figure 2 : Cloud storage system mechanism

Yang Liting [7] pointed out that, compared with a cloud computing system, a cloud storage system has one more storage layer in its architectural model; moreover, its basic management layer adds many data-related functions, such as data management and data security. The two systems share the same access layer and application interface layer.

ARCHITECTURE ANALYSIS OF HADOOP

Chen Yong [8] pointed out that Hadoop is a distributed computing framework of the Apache open-source organization: applications can run on large clusters of low-cost hardware, and the framework provides them with a stable and reliable interface, aiming to build distributed systems with high reliability and good scalability. Besides being open source, Hadoop is also scalable, economical, reliable and efficient. Hadoop is mainly composed of three parts, HDFS, MapReduce and HBase, which are the main components of its architecture; the sections below analyze these three parts.

System Analysis of HDFS

HDFS is the abbreviation of the Hadoop Distributed File System and is the storage basis of distributed computing. Its design premises and targets are as follows:
(1) Hardware failure should be treated as the norm rather than the exception.
(2) Streaming data access: batch processing is used instead of interactive access to the data, placing the emphasis on high-throughput data access.
(3) Large data sets: a typical HDFS file may range from gigabytes to terabytes, so HDFS supports large-file storage; even when a data set spans more than a hundred nodes, the system still provides a strong overall storage guarantee.
(4) A simple coherency model.
(5) Moving computation is cheaper than moving data, and for large files the advantage of moving the computation is even more obvious.
(6) Portability across heterogeneous hardware and software platforms, which makes it convenient to adopt HDFS as a platform for massive data.

HDFS exposes a file system namespace: users store their data as files, which are split into blocks of data within the namespace. Liu Peng [9] pointed out that the file system namespace is maintained by the NameNode; the NameNode also determines the mapping of data blocks to specific DataNodes, while the DataNodes serve the read requests of file system clients [9]. The architecture of the HDFS file system is shown in Figure 3.

Figure 3 : The architecture of HDFS file system

As shown in Figure 3, when a client wants to access a file, it first obtains the list of data-block locations of the file from the NameNode and then reads the file's data directly from the DataNodes; the NameNode is in charge of all HDFS metadata and never takes part in the transmission of the data itself.
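To make this read path concrete, the following is a minimal sketch (not from the paper; it assumes a Hadoop client library on the classpath and a hypothetical cluster address hdfs://namenode:9000) of an application reading a file through the org.apache.hadoop.fs.FileSystem API; the NameNode metadata lookup and the direct DataNode reads described above happen inside open() and the returned stream:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode address; in practice this comes from core-site.xml.
        conf.set("fs.default.name", "hdfs://namenode:9000");
        FileSystem fs = FileSystem.get(conf);

        // open() asks the NameNode for the block locations of the file;
        // the returned stream then reads those blocks directly from DataNodes.
        Path file = new Path("/data/sample.txt");
        FSDataInputStream in = fs.open(file);
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
        reader.close();
        fs.close();
    }
}
```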

The Programming Model of MapReduce

The MapReduce framework uses two functions, map and reduce, to implement the partitioning and merging of data. The MapReduce model is highly abstract, and it has three core parts: the map function, the reduce function, and the <key, value> pair.

The map function: the user-defined map function receives a <key, value> pair from an input data slice and produces a series of intermediate <key, value> pairs; MapReduce collects all intermediate values with the same intermediate key and passes them to the reduce function.

The reduce function: the reduce function receives an intermediate key together with the set of its corresponding values, which is supplied to the user's reduce function through an iterator; the user-defined reduce function summarizes and outputs the values. Reduce is commonly used to aggregate data, turning the massive input into a smaller result; for example, a reduce that implements the "+" operation returns the sum of the values in its input list.

The <key, value> pair: a <key, value> pair consists of two parts, a value representing the data related to the task and a key representing the group number of the value; the value has to be placed into the corresponding group to take part in the computation. The <key, value> pair can therefore be regarded as the communication interface for programmers.

Zhao Chunyan [10] pointed out that a MapReduce job consists of two stages, mapping and reduction. In the mapping stage, the user's input data are split by MapReduce into M fragments, corresponding to M map tasks. Each map task takes as input the <key, value> pairs <K1, V1> of its data fragment; the map operation calls the user-defined map function and outputs a collection of intermediate <key, value> pairs <K2, V2>. The intermediate output is then sorted by the new key K2, generating <K2, list(V2)> tuples so that all values corresponding to the same key are brought together; finally, these tuples are partitioned according to the range of K2 into R fragments, corresponding to the number of reduce tasks. The principle of how Hadoop runs MapReduce is shown in Figure 4.

Figure 4 : The principle of how Hadoop operates MapReduce

As Figure 4 shows, the job submission process realized by JobClient's submitJob() method proceeds as follows: first, it requests a new job ID from the JobTracker by calling getNewJobId(); then it checks the job's output specification, computes the input splits, and partitions the job's resources; finally, the job is ready to execute. The operational execution process of MapReduce is shown in Figure 5.

Figure 5 : The operation execution process of MapReduce

In Figure 5: Splitting — the input files are first divided into M pieces, each about 16 MB to 64 MB, by the MapReduce library in the user's program, and copies of the program are then started on the cluster. Task assignment — one copy of the program acts as the master; the master assigns map or reduce tasks to idle worker machines. Reading — a worker assigned a map task processes its input block, passing each <key, value> pair to the user-specified map function; the intermediate pairs produced by the function are temporarily buffered in memory and written to local disk, and the reduce workers are responsible for receiving the positions of these local writes. Remote reading — when a reduce worker is told these positions, it uses a remote procedure call, coordinated by the master, to read the buffered intermediate data from the map workers' local disks; after reading all the intermediate values, it sorts them by the intermediate key so that all values with the same key are arranged together, resorting to an external sort when the intermediate data are too large. Writing — iterating over the sorted intermediate data in key order, the reduce worker passes each key and the corresponding set of intermediate values to the user's reduce function, and the result of the reduce function is appended to a final output file. Completion — when all map and reduce tasks have finished, the master wakes up the user's program.
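As a concrete instance of the model just described, the canonical WordCount job is sketched below (this is the standard Hadoop example, not code from the paper): the map function emits an intermediate <word, 1> pair for each word of its input, and the reduce function implements the "+" summary over each key's value list.

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Map: for each word in the input line, emit an intermediate <word, 1> pair.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce: the framework groups all values with the same key;
    // summing the value list yields the word's total count.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Packaged into a jar, such a job would typically be submitted with a command of the form "hadoop jar wordcount.jar WordCount <input dir> <output dir>".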

Analysis of HBase

HBase is an open-source distributed database based on the Bigtable storage model, and it is an important part of the Apache Hadoop project. As shown in TABLE 1, the logical storage model of HBase is the same as that of Bigtable: a table in HBase stores the user's data rows, each containing a sortable row key, an optional timestamp, and a number of columns that may hold data. A data row has three basic parts: the row key, the timestamp, and the columns. The row key is the unique identifier of a row in the table; the timestamp is the timestamp associated with each data operation; and a column is defined by the two parts <family>:<label>, which together specify where a piece of data is stored. Defining or modifying a column family requires administrator rights, but labels can be added at any time. HBase stores data on disk grouped by column family, and all items of one column family must share the same read/write properties. HBase updates carry timestamps, so a client can retrieve the latest data as of some point in time, or all versions, by timestamp, as in TABLE 1.

TABLE 1 : Logical view of a database table

Row key      | Timestamp | Column "contents:" | Column "anchor:"              | Column "mime:"
com.cnn.www  | T9        |                    | anchor:cnnsi.com = "CNN"      |
             | T8        |                    | anchor:my.look.ca = "CNN.com" |
             | T6        | "<html>..."        |                               | "text/html"
             | T5        | "<html>..."        |                               |
             | T3        | "<html>..."        |                               |
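To illustrate the logical model of TABLE 1, the sketch below (an illustrative example, not code from the paper) writes and reads the com.cnn.www row using the era-appropriate, pre-1.0 HBase client API; the table name "webtable" and the cluster configuration are hypothetical, and the column families are assumed to already exist.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class WebtableExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Assumes a table "webtable" with column families "contents" and "anchor"
        // has already been created (e.g. via the HBase shell).
        HTable table = new HTable(conf, "webtable");

        // The row key identifies the page; family plus label names the cell,
        // and each write is versioned with a timestamp by the server.
        Put put = new Put(Bytes.toBytes("com.cnn.www"));
        put.add(Bytes.toBytes("anchor"), Bytes.toBytes("cnnsi.com"), Bytes.toBytes("CNN"));
        put.add(Bytes.toBytes("contents"), Bytes.toBytes(""), Bytes.toBytes("<html>...</html>"));
        table.put(put);

        // A Get with no timestamp returns the most recent version of each cell.
        Get get = new Get(Bytes.toBytes("com.cnn.www"));
        Result result = table.get(get);
        byte[] anchor = result.getValue(Bytes.toBytes("anchor"), Bytes.toBytes("cnnsi.com"));
        System.out.println(Bytes.toString(anchor)); // prints "CNN"

        table.close();
    }
}
```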

ANALYSIS OF MASSIVE DATA STORAGE BASED ON CLOUD COMPUTING

Design philosophy

The data-storage requirements are as follows:
(1) Considerable storage capacity and an efficient storage process.
(2) A data-storage model and computation that make reasonable use of a cluster environment.
(3) Support for a variety of data formats.
(4) The ability to recover the data after a hardware failure.
(5) A high level of fault tolerance, to avoid the time wasted by operational errors.
(6) Good maintainability, scalability and usability.

The design philosophy of this paper is to satisfy the above requirements; the resulting massive data-storage model is shown in Figure 6.

Figure 6 : Massive data storage model

As shown in Figure 6, the massive data-storage model has many advantages: it collects jobs in order, collects DFS directly in the system, reads the NFS backup during idle time, and has strong resilience, since even when an operation fails it can still carry out distribution.

System testing

The configuration files of Hadoop are listed in TABLE 2 (a minimal sketch of two of these files appears after TABLE 3 below).

TABLE 2 : List of Hadoop configuration files

Filename                  | Format                   | Description
hadoop-env.sh             | Bash script              | Environment variables used by the scripts that run Hadoop
core-site.xml             | Hadoop configuration XML | Core settings, such as the I/O settings common to HDFS and MapReduce
hdfs-site.xml             | Hadoop configuration XML | Configuration settings for the HDFS daemons
mapred-site.xml           | Hadoop configuration XML | Configuration settings for the MapReduce daemons
masters                   | Plain text               | List of machines that run a secondary NameNode
slaves                    | Plain text               | List of machines that run a DataNode and a TaskTracker
hadoop-metrics.properties | Java properties          | Properties controlling how Hadoop publishes metrics
log4j.properties          | Java properties          | Properties for the system log files, the NameNode audit log and the TaskTracker log

In the process of system testing, an SSH client was first downloaded from its official website, and an account was registered at http://hadoop.nchc.org.tw/index.php; this account was used to log in to the remote HDFS cluster file system and complete the test. The commands used in the process are listed in TABLE 3.

TABLE 3 : Basic Hadoop commands

Basic command  | Function
start-all.sh   | Start the whole Hadoop environment
stop-all.sh    | Stop the whole Hadoop environment
start-dfs.sh   | Start the HDFS environment
stop-dfs.sh    | Stop the HDFS environment
start-mapred.sh| Start the MapReduce environment
stop-mapred.sh | Stop the MapReduce environment
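As a point of reference, a minimal sketch of two of the XML files from TABLE 2 is shown below. The values are illustrative only: the NameNode address is hypothetical, and a replication factor of 3 is merely the common default, not a setting taken from the paper.

```xml
<!-- core-site.xml: names the default file system (hypothetical NameNode address) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode:9000</value>
  </property>
</configuration>

<!-- hdfs-site.xml: block replication factor used by the HDFS daemons -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```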

In this test six map tasks were launched in total, and nnbench was used to test three sets of data. The test covered the read and write operations on the three data sets, in order to analyze the improvement of the system and its CPU load and memory usage. The three tested groups of data are shown in TABLE 4. (As a consistency check, 2,000 files of 61 MB is roughly 122 GB, and 100,000 files of 1 MB, like 1,600,000 files of 64 KB, is roughly 100 GB.)

TABLE 4 : The experimental data

Group   | Number of documents | File size | Aggregate capacity
Group 1 | 2,000               | 61 MB     | 122 GB
Group 2 | 100,000             | 1 MB      | 100 GB
Group 3 | 1,600,000           | 64 KB     | 100 GB

The CPU load and memory usage of the traditional system and of the system designed in this paper during the read operation are shown in TABLE 5.

TABLE 5 : CPU load and memory usage during the read operation

Group   | NN memory (traditional) | NN memory (this paper) | NN CPU (traditional) | NN CPU (this paper) | Execution time (traditional) | Execution time (this paper)
Group 1 | 1.4 GB                  | 0.7 GB                 | 56%                  | 54%                 | 0.5 min                      | 0.7 min
Group 2 | 1.2 GB                  | 1.1 GB                 | 77%                  | 88%                 | 4.0 min                      | 5.0 min
Group 3 | 0.9 GB                  | 0.8 GB                 | 79%                  | 79%                 | 6.0 min                      | 7.0 min

The CPU load and memory usage of the two systems during the write operation are shown in TABLE 6.

TABLE 6 : CPU load and memory usage during the write operation

Group   | NN memory (traditional) | NN memory (this paper) | NN CPU (traditional) | NN CPU (this paper) | Execution time (traditional) | Execution time (this paper)
Group 1 | 1.6 GB                  | 0.7 GB                 | 75%                  | 84%                 | 3.5 min                      | 4.0 min
Group 2 | 1.3 GB                  | 0.9 GB                 | 82%                  | 88%                 | 4.0 min                      | 5.0 min
Group 3 | 1.0 GB                  | 1.0 GB                 | 89%                  | 94%                 | 5.0 min                      | 6.0 min

The system designed in this paper reduces the memory usage of the NameNode, at the cost of some CPU performance compared with the traditional system; this influence on the CPU is within an acceptable range. Reducing the NameNode's memory usage increases the extensibility of the whole system and raises the overall number of documents the NameNode can handle. For applications with large files, the reduction in NameNode memory usage is even more pronounced.

CONCLUSION

This paper first summarized the cloud computing system and the structure of the storage system, and then focused on the analysis of the Hadoop framework, mainly studying the HDFS system, the MapReduce programming model and HBase. Building on this study, a massive data-storage design was developed around the six requirements stated above. The massive data-storage model has many advantages: it collects jobs in order, collects DFS directly in the system, reads the NFS backup during idle time, and has strong resilience, since even when an operation fails it can still carry out distribution. On the basis of the Hadoop configuration-file information and the interpretation of the basic Hadoop commands, the author tested the massive data-storage design of this paper, compared the results with a traditional storage system, and found that the system designed here has an advantage in NameNode memory usage.

REFERENCES
[1] Tuo Shouheng; Research on Cloud Computing and Storage Transportation Technology [J]. The Development and Application of Computer, 23(9), 1-4 (2010).
[2] Xu Xiaoyuan; The Mass Data Storage Based on Cloud Computing [J]. Manufacturing Automation, 35(7), 82-85 (2013).
[3] Duan Wanlong; Research on Mass Data Storage Library Based on Cloud Computing [J]. Computer Knowledge and Technology, 10(13), 2909-2931 (2014).
[4] Yuan Guojun; Cloud Computing: Application and Development [J]. Laboratory Science, 15(6), 320-322 (2009).
[5] Wang Shengchuan; Research Based on the Cloud Storage Technology [J]. Computer Knowledge and Technology, 2, 53-55 (2011).
[6] Tang Jian; Analysis and Application Research of Cloud Storage System [J]. Computer Knowledge and Technology, 7(5), 5337-5340 (2009).
[7] Yang Liting; Research Based on Cloud Computing Data Storage Technology [D]. University of North Central China, (2011).
[8] Chen Yong; The Communication Data Distributed Query Algorithm Based on Hadoop Platform's Design and Implementation [D]. Beijing Jiaotong University, (2009).
[9] Liu Peng; Cloud Computing [M]. Electronic Industry Press, (2010).
[10] Zhao Chuanyan; Research and Implementation of Job Scheduling Algorithm Under Cloud Computing [D]. Beijing Jiaotong University, (2009).