A Novel Architecture for Efficient Utilization of the Hadoop Distributed File System for Small Files


Vaishali 1, Prem Sagar Sharma 2
1 M.Tech Scholar, Dept. of CSE, BSAITM Faridabad (HR), India
2 Assistant Professor, Dept. of CSE, BSAITM Faridabad (HR), India

Abstract: Hadoop is an open source distributed computing platform, and HDFS, the Hadoop Distributed File System, has powerful data storage capacity that makes it suitable for cloud storage systems. HDFS was designed for streaming access to large files, so its storage efficiency for massive numbers of small files is low. To address this problem, the HDFS file storage process is improved: files are judged before being uploaded to the HDFS cluster. If a file is small, it is merged and its index information is stored in an index file as key-value pairs; otherwise it goes directly to the HDFS client. Once all files have been processed and none remain to be merged, the merged files are written to HDFS.[7]

Keywords: Hadoop, HDFS, small file storage, file processing, file storing

I. INTRODUCTION
Hadoop is an open source distributed computing platform. It was designed for managing big data, and it changes the way organizations store, process and analyze data. The Hadoop architecture is scalable, reliable and flexible; it allows data to be stored and analyzed at very high speed and provides services such as data processing, data access, data governance and security.[1]

HDFS stands for Hadoop Distributed File System. As the name suggests, it is a distributed file system that stores data on commodity machines and provides very high aggregate bandwidth across the cluster. It is highly fault tolerant and offers native support for large data sets stored on commodity hardware.[1] It is a file system specially designed for storing huge datasets on a cluster of commodity hardware with streaming access patterns. A streaming access pattern means "write once, read any number of times": the content of a file is not changed after it is written. HDFS has powerful data storage capacity, which makes it suitable for cloud storage systems. However, HDFS was originally developed for large files, so its storage efficiency for a large number of small files is low.

A. HDFS Architecture
The HDFS architecture (shown in Fig-1) is suitable for distributed processing and storage, and it provides file authentication and permissions. HDFS comprises the NameNode, Secondary NameNode and Job Tracker (master services) and the DataNode and Task Tracker (slave services).

Fig-1: Architecture of HDFS (master services: NameNode, Secondary NameNode, Job Tracker; slave services: DataNode, Task Tracker)
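The write-once/read-many, streaming access pattern just described is what applications see through Hadoop's Java FileSystem API. The following minimal sketch illustrates that usage before the individual components in Fig-1 are discussed; the NameNode address (hdfs://namenode:9000) and the example path are assumptions made for illustration only and are not part of this paper.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Minimal write-once / read-many access against HDFS via the Java FileSystem API. */
public class HdfsAccessExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed NameNode address; on a real cluster this comes from core-site.xml.
        conf.set("fs.defaultFS", "hdfs://namenode:9000");
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/user/demo/sample.txt");

        // Write once: HDFS files are written sequentially and are not edited in place.
        try (FSDataOutputStream out = fs.create(file, true /* overwrite */)) {
            out.write("hello hdfs\n".getBytes(StandardCharsets.UTF_8));
        }

        // Read many times: sequential, streaming reads of the stored content.
        try (FSDataInputStream in = fs.open(file);
             BufferedReader reader =
                     new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
            System.out.println(reader.readLine());
        }
        fs.close();
    }
}
```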

The communication between master services and slave services is shown in the figure: all master services can communicate with each other, and likewise the slave services can communicate with each other. The NameNode communicates with the DataNode and vice versa, and the Job Tracker communicates with the Task Tracker and vice versa.

1) NameNode: The NameNode acts as a manager. It creates and maintains the metadata of the data and tells the client where to store data in the available space. The NameNode maintains three replicas of the data, including the original copy; if any replica is lost due to a failure, the NameNode creates another replica of it.
2) Secondary NameNode: This is a helper node that supports the functioning of the NameNode. Its purpose is to keep a checkpoint of the file system metadata present on the NameNode; it is also called the checkpoint node.
3) Job Tracker: The Job Tracker monitors everything and knows how many jobs are running. It assigns tasks to the Task Trackers.
4) DataNode: A DataNode sends block reports and heartbeats to the NameNode. If a DataNode does not send a heartbeat, the NameNode assumes it is dead.
5) Task Tracker: A Task Tracker receives tasks from the Job Tracker. Every Task Tracker sends a heartbeat back to the Job Tracker every three seconds to signal that it is still alive. If it does not, the Job Tracker assumes that it is either running very slowly or dead.

II. LITERATURE SURVEY

Table-1: Summary of various research papers

1) "A Novel Approach to Improve the Performance of Hadoop in Handling of Small Files" [1] - Parth Gohil, Bakul Panchal, J. S. Dhobi. Analysis: the drawback of the HAR approach is removed by excluding big files from archiving, and indexing is used for the sequential file that is created. Findings: performance improves by ignoring files whose size is larger than the Hadoop block size.
2) "An Improved HDFS for Small File" - Liu Changtong (China). Analysis: the small file problem of the original HDFS is eliminated by judging files before they are uploaded to the HDFS cluster; if a file is small, it is merged and its index information is stored in an index file as key-value pairs. Findings: the access efficiency of the NameNode increases.
3) "Dealing with Small Files Problem in Hadoop Distributed File System" - Sachin Bende, Rajashree Shedge. Analysis: a comparative study of possible solutions to the small file problem. Findings: CombineFileInputFormat provides the best performance.
4) "Hadoop Architecture and Applications" - Kusum Munde, Nusrat Jahan. Analysis: Hadoop was designed to manage big data and changes the way enterprises store, process and analyze data; it provides services for large data sets such as data processing, data access, data governance and security. Findings: Hadoop supports distributed storage and distributed computing; data is stored in HDFS, which uses block replicas and is therefore also fault tolerant.

5) "An Approach to Solve a Small File Problem in Hadoop by Using Dynamic Merging and Indexing Scheme" - Shubham Bhandari, Suraj Chougale, Deepak Pandit, Suraj Sawat. Analysis: instead of using one NameNode to store the metadata, NHAR uses multiple NameNodes, which significantly reduces the load on any single NameNode. Findings: because NHAR uses multiple NameNodes, the load per NameNode falls considerably.
6) "Analysis of Hadoop over SAP Software Solutions" - Veena V. Deolankar, Nupoor Deshpande, Mandar Lokhande. Analysis: the paper attempts to show that Hadoop can be used for large-scale data processing as an alternative to SAP software solutions. Findings: it gives a brief introduction to Hadoop and the disadvantages of using SAP in large-scale industries.
7) "Google File System and Hadoop Distributed File System - An Analogy" - Dr. A. P. Mittal, Dr. Vanita Jain, Tanuj Ahuja. Analysis: with ever-increasing data, a reliable and easy-to-use storage solution has become a major concern for computing; distributed file systems address this issue and provide a means to efficiently store and process huge datasets. Findings: effectively handling hard drive failures, power failures, router failures, network maintenance, bad memory, rack moves, misconfigurations and datacenter migrations across hundreds of thousands or millions of machines requires significantly better error monitoring, tooling and auto-recovery.
8) "Data Security in Hadoop Distributed File System" - Sharifnawaj Y. Inamdar, Ajit H. Jadhav, Rohit B. Desai, Pravin S. Shinde, Indrajeet M. Ghadage, Amit A. Gaikwad. Analysis: the security of HDFS is implemented by encrypting files before they are stored in HDFS using a real-time encryption algorithm; AES encryption roughly doubles the file size and hence increases upload time, and the proposed technique removes this drawback. Findings: encryption/decryption, authentication and authorization are the techniques most supportive of securing information in HDFS; future work aims to equip Hadoop with a wider range of security techniques for securing information and for secure job execution.

A. Limitations in the Existing System
1) Performance decreases when both small and large files (i.e. files whose size is smaller or larger than the Hadoop block size, respectively) are accepted.
2) The access time for reading a file is very high.
3) The access efficiency of the NameNode is low.
4) There is no automatic fault tolerance for NameNode failure.

5) Compared to GFS, a Hadoop cluster provides less error monitoring, tooling and auto-recovery for effectively handling hard drive failures, power failures, router failures, network maintenance, bad memory, rack moves, misconfigurations and datacenter migrations across hundreds of thousands or millions of machines.
6) Hadoop does not have a built-in security system, so techniques such as encryption/decryption, authentication and authorization are used to secure information in the Hadoop Distributed File System.

III. PROPOSED ARCHITECTURE FOR SMALL FILE STORAGE
The improved HDFS architecture (shown in Fig-2) has three layers: the user layer, the data processing layer and the storage layer based on HDFS.

Fig-2: Proposed architecture of HDFS for small files (user layer: client; data processing layer: file judging unit, file processing unit, file merging unit, HDFS client; storage layer: HDFS cluster with NameNode and DataNodes)

1) User Layer: This layer is the entry point for input to the whole storage system; it gives the user an interface to upload, browse and download files.
2) Storage Layer: This is where the data resides and it is the most critical layer of the whole storage system. It contains the HDFS server, which provides reliable and persistent storage capabilities.

A. Processing Layer
Because HDFS has only limited capability to support small files, this layer is designed for exactly that purpose. It has four functional units: 1) the File Judging Unit, 2) the File Processing Unit, 3) the File Merging Unit and 4) the HDFS Client.

B. Index File for Files

1) File Judging Unit: Its main purpose is to determine the size of a file and check whether the merging process is needed. If the user's file is small, merging is needed and the file is sent to the File Processing Unit; otherwise the file is sent directly to the HDFS Client.[7]
2) File Processing Unit: This unit receives files from the File Judging Unit and counts their sizes. Based on the order and size of the small files, it forms an incremental offset from the start and generates a temporary index file (TempIndex). Once the size and offset of the small files are stored, it sends the small files and the corresponding temporary index sequentially to the File Merging Unit.[7]
3) File Merging Unit: The main function of this unit is to merge files. It merges the small files in order, and the temporary index files are merged into a single merged index file, as shown in Fig-4. The actual content of the small files is stored in the merged file, and the merged index records the small files in <key, value> format. The key is a unique value used to retrieve a small file; the corresponding value records the critical information of the small file, namely its offset and length within the merged file, so the end position of the small file can be derived as offset + length. When a small file is read, its key in the merged index file is obtained from the file name, the value is looked up by the key and resolved into the offset and length, the end position is derived, and the small file is then extracted from the merged file using its start and end positions. There is also a block that checks off the processed files, updates the counter by reporting to the File Judging Unit that the files have been processed, and at the same time computes the remaining files; when no files remain, they are sent directly to the HDFS Client, otherwise the files go back to the File Judging Unit.
4) HDFS Client: The HDFS Client writes the received data to the storage layer. It establishes a connection between the NameNode and DataNodes through a distributed file system instance, and the NameNode indicates which DataNodes are used to write the data blocks. The write operation is completed by calling the relevant file-operation API provided by HDFS. If the file is large, the processing is the same as an ordinary write; otherwise the merged file and the corresponding merged index file are stored together on the same DataNode. The merged file and the merged index file share the same id, and the merged index file is maintained by the DataNode and is transparent to the NameNode. When the index file has been merged successfully, the merged file is loaded into memory; to speed up subsequent reads after the first read of a small file, the merged index file is loaded into the cache. The merged block index file is very small, and the number of data blocks on a DataNode is limited compared with the entire cluster, so these index files occupy very little DataNode memory.[7]
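The paper describes the merged index only at the <key, value> level; a minimal sketch of that idea is given below. The class and method names (MergedIndexSketch, IndexEntry, readSmallFile) and the use of an in-memory map are illustrative assumptions, not the authors' implementation.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * Sketch of the merged-index idea: each small file is keyed by its name and the
 * value records its offset and length inside the merged file, so the byte range
 * [offset, offset + length) can be read back directly.
 */
public class MergedIndexSketch {

    /** Index value: offset and length of one small file inside the merged file. */
    static final class IndexEntry {
        final long offset;
        final long length;
        IndexEntry(long offset, long length) { this.offset = offset; this.length = length; }
        long endPosition() { return offset + length; }   // end position = offset + length
    }

    // key = small-file name, value = <offset, length>, as described in the paper
    private final Map<String, IndexEntry> mergedIndex = new HashMap<>();

    void addEntry(String smallFileName, long offset, long length) {
        mergedIndex.put(smallFileName, new IndexEntry(offset, length));
    }

    /** Read one small file back out of the merged file using its index entry. */
    byte[] readSmallFile(FileSystem fs, Path mergedFile, String smallFileName) throws Exception {
        IndexEntry entry = mergedIndex.get(smallFileName);
        byte[] content = new byte[(int) entry.length];
        try (FSDataInputStream in = fs.open(mergedFile)) {
            in.readFully(entry.offset, content);          // positioned read starting at the offset
        }
        return content;
    }
}
```

Keeping the index as <name, offset/length> pairs means a small-file read costs one index lookup plus one positioned read inside the merged block, rather than a separate NameNode metadata entry per small file.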

The file processing flow in the processing layer is shown in Fig-3.

Fig-3: Processing flow in the Data Processing Layer (get the data/file from the client and compute its size; small files go to the merging process, big files go directly to the HDFS Client; once all files are processed, the merged data is sent to the HDFS Client)

The system takes input from the user, i.e. it gets the data/file from the user and then computes the file size. If the file is small, it is sent to the merging process; if the file is big, the data is sent directly to the HDFS Client. After small files have been merged by the merging process, the system checks whether all files have been processed. If all files have been processed, the merged data is sent directly to the HDFS Client; otherwise the next file is sent to the size-computation step, its size is checked again, and the merge operation is performed if needed.

PROCEDURE: Data Processing Layer [for all files of the client]

STEP 1: Read the i-th file/document of the client.
        size_of_file_i = FileJudgingUnit(file_i)        [File Processing Unit]
STEP 2: Check the condition for merging.
        If (size_of_file_i < threshold)                 // file is small
            merged_block_index = FileMergingUnit(file_i, size_of_file_i)
            If (all_files_processed == true)
                data = merged_block_index
            Else
                continue with the next file (STEP 1)
            End If
        Else                                            // file is big
            data = file_i
        End If
STEP 3: The HDFS Client writes the received data to HDFS in the storage layer: hdfs_client(data)
STEP 4: Repeat Steps 1-3 until all files are processed.
STEP 5: Exit.

STEP 1: Read the client's file/document; the file goes to the File Judging Unit, which checks its size so that the file can be processed further by the File Processing Unit.
STEP 2: Check the condition for merging. If the threshold value is greater than the file size, the file is small; while not all files have been processed, the File Merging Unit receives the file and its size and adds it to merged_block_index, from which data is taken. Otherwise, if the file size is greater than the threshold, the client's file is stored directly in data.
STEP 3: The HDFS Client writes the received data to HDFS in the storage layer, i.e. hdfs_client(data).
STEP 4: Repeat Steps 1-3 until all files are processed.
STEP 5: Exit.
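A compact sketch of this loop is given below, assuming a local staging step before the HDFS write. The threshold value (the default 128 MB HDFS block size), the HdfsClient interface and the in-memory merge buffer are illustrative assumptions standing in for the File Judging, File Merging and HDFS Client units described above, not a definitive implementation.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch of the data-processing-layer loop from the procedure above:
 * small files are appended to an in-memory merge buffer and indexed by
 * offset/length; large files (and the final merged block) go to the HDFS client.
 */
public class DataProcessingLayerSketch {

    // Assumed threshold for "small": files below the default 128 MB HDFS block size.
    static final long SMALL_FILE_THRESHOLD = 128L * 1024 * 1024;

    public static void process(List<Path> clientFiles, HdfsClient hdfsClient) throws IOException {
        List<byte[]> mergeBuffer = new ArrayList<>();
        Map<String, long[]> index = new HashMap<>();   // name -> {offset, length}
        long nextOffset = 0;

        for (Path file : clientFiles) {
            long size = Files.size(file);              // STEP 1: file judging unit
            if (size < SMALL_FILE_THRESHOLD) {         // STEP 2: small file -> merge and index
                byte[] bytes = Files.readAllBytes(file);
                mergeBuffer.add(bytes);
                index.put(file.getFileName().toString(), new long[] {nextOffset, size});
                nextOffset += size;
            } else {                                   // big file -> straight to the HDFS client
                hdfsClient.write(file.getFileName().toString(), Files.readAllBytes(file));
            }
        }
        // STEP 3: all files processed -> write the merged block and its index together.
        hdfsClient.writeMerged(mergeBuffer, index);
    }

    /** Placeholder for the storage-layer client; the real system would call the HDFS API here. */
    public interface HdfsClient {
        void write(String name, byte[] data) throws IOException;
        void writeMerged(List<byte[]> mergedParts, Map<String, long[]> index) throws IOException;
    }
}
```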

C. Comparative Study and Analysis
Table-2 shows a comparative analysis of five methods for dealing with the small files problem in HDFS. The first method, HAR, provides high scalability by reducing namespace usage and improves the reading efficiency of files, but there is a drastic change in the reduce operation before and after archiving, which shows an increase in processing time. With the proposed architecture for efficient utilization of the Hadoop Distributed File System, the writing and accessing performance of small files increases greatly, and the average memory usage ratio of the proposed architecture decreases compared to the original HDFS.

Table-2: Comparison and analysis

Paper / Parameter                                                            | Method            | Positioning | Memory Usage  | Reading Efficiency / Addressing Time | Performance | Overhead
Reduction of data at NameNode in HDFS using Harballing Technique [41]       | Archive-based     | NameNode    | Very Low      | Moderate                             | Moderate    | Slight
An improved small file processing method for HDFS [42]                      | InputFormat-based | NameNode    | Low           | Moderate                             | Moderate    | High
Efficient Way for Handling Small Files using Extended HDFS [43]             | Index-based       | DataNode    | Moderate      | High                                 | High        | Low
Improving Performance of Small-File Accessing in Hadoop [44]                | Index-based       | NameNode    | Slightly High | High                                 | High        | Low
An Innovative Strategy for Improved Processing of Small Files in Hadoop [45]| Archive-based     | NameNode    | High          | Very High                            | Very High   | Slight

The proposed HDFS architecture allows greater utilization of HDFS resources by providing more efficient metadata management for small files. It maintains only the file metadata for each small file, not the block metadata; the block metadata is maintained by the NameNode for the single combined file alone rather than for every small file. This accounts for the reduced memory usage of the proposed architecture. It can improve the efficiency of accessing small files and reduces the metadata footprint in the NameNode's main memory.

IV. CONCLUSION AND FUTURE WORK
This paper gives an insight into an architecture for storing small files in the Hadoop Distributed File System by providing efficient metadata management. Files are stored across the three layers of the proposed HDFS architecture, and the HDFS file storage process is improved. The architecture maintains only the file metadata for each small file, not the block metadata; the block metadata is maintained by the NameNode for the single combined file alone rather than for every small file, which accounts for the reduced memory usage. The architecture provides better access and storage efficiency for small files. In future work, the above design will be implemented.

REFERENCES
[1] Munde, Kusum; Jahan, Nusrat; "Hadoop Architecture and Applications", IJIRSET (International Journal of Innovative Research in Science, Engineering and Technology), Nov. 2016, ISSN (Online): 2319-8753, ISSN (Print): 2347-6710, pp. 19090-19094.
[2] Bhandari, Shubham; Chougale, Suraj; Pandit, Deepak; Sawat, Suraj; "An Approach to Solve a Small File Problem in Hadoop by Using Dynamic Merging and Indexing Scheme", IJRITCC (International Journal on Recent and Innovation Trends in Computing and Communication), Nov. 2016, ISSN: 2321-8169, pp. 227-230.
[3] Inamdar, Sharifnawaj Y.; Jadhav, Ajit H.; Desai, Rohit B.; Shinde, Pravin S.; Ghadage, Indrajeet M.; Gaikwad, Amit A.; "Data Security in Hadoop Distributed File System", IRJET (International Research Journal of Engineering and Technology), Apr. 2016, e-ISSN: 2395-0056, p-ISSN: 2395-0072, pp. 939-944.
[4] Mittal, A. P.; Jain, Vanita; Ahuja, Tanuj; "Google File System and Hadoop Distributed File System - An Analogy", IJIACS (International Journal of Innovations & Advancement in Computer Science), Mar. 2015, ISSN: 2347-8616, pp. 626-636.
[5] Dhaulakhandi, Prachi; "A Study of Hadoop Ecosystem", IJRSM (International Journal of Research & Management), Aug. 2016, ISSN: 2349-5197, pp. 9-12.
[6] Deolankar, Veena V.; Deshpande, Nupoor; Lokhande, Mandar; "Analysis of Hadoop over SAP Software Solutions", IJRSR (International Journal of Recent Scientific Research), Mar. 2016, ISSN: 0976-3031, pp. 9212-9215.
[7] Changtong, Liu; "An Improved HDFS for Small File", ICACT, Feb. 2016, pp. 478-481.
[8] Gohil, P.; Panchal, B.; Dhobi, J. S.; "A Novel Approach to Improve the Performance of Hadoop in Handling of Small Files", 2015 IEEE International Conference on Electrical, Computer and Communication Technologies (ICECCT), pp. 1-5.