TITLE: Implement sort algorithm and run it using HADOOP

PRE-REQUISITE: Preliminary knowledge of clusters and an overview of Hadoop and its basic functionality.

THEORY

1. Introduction to Hadoop

Apache Hadoop is a framework that allows for the distributed processing of large data sets across clusters of computers using a simple programming model. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. The Hadoop framework transparently provides applications with both reliability and data motion. Hadoop implements a computational paradigm named Map/Reduce, in which the application is divided into many small fragments of work, each of which may be executed or re-executed on any node in the cluster. In addition, it provides a distributed file system (HDFS) that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both Map/Reduce and the Hadoop Distributed File System are designed so that node failures are handled automatically by the framework.

Data and processing are the two concerns driving this kind of computation, and when it comes to Big Data the most popular name is Hadoop. Hadoop is a software framework used to process huge amounts of data; when we say huge, we are talking in magnitudes of terabytes to petabytes of raw data. Hadoop is capable of storing and analyzing such volumes using commodity hardware. For example, Facebook uses an 1100-machine cluster with 8800 cores and about 12 PB of raw storage for Hadoop processing, where each (commodity) node has 8 cores and 12 TB of storage. See http://wiki.apache.org/hadoop/poweredby for more examples of Hadoop setups.

There are several customized distributions of Hadoop, some of which are listed below:
- Apache Hadoop
- Cloudera's Distribution including Apache Hadoop (that is the official name)
- IBM Distribution of Apache Hadoop
- DataStax Brisk
- Amazon Elastic MapReduce
Apache Hadoop is by far the most popular Hadoop distribution currently available.

Hadoop primarily runs in three modes:
- Standalone mode: all the daemons (processes) for MapReduce run in a single JVM (not for production use).
- Pseudo-distributed mode: the daemons run in different JVMs on one local machine, much like a pseudo cluster, hence the name (not for production use).
- Fully distributed mode: the daemons run on a cluster of machines (for production use).

2. Cluster

A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node running its own instance of an operating system. Computer clusters emerged as a result of the convergence of a number of computing trends, including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing.

Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability. Computer clusters have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world, such as the K computer.

3. HDFS (Hadoop Distributed File System)

HDFS is a distributed, scalable, and portable filesystem written in Java for the Hadoop framework. Each node in a Hadoop instance typically has a single DataNode, and the DataNodes together form the HDFS cluster; this is the typical arrangement rather than a requirement, since a node can run without a DataNode. Each DataNode serves up blocks of data over the network using a block protocol specific to HDFS. The filesystem uses the TCP/IP layer for communication, and clients use RPC to talk to the NameNode and DataNodes.

HDFS stores large files (an ideal file size is a multiple of 64 MB) across multiple machines. It achieves reliability by replicating the data across multiple hosts, and hence does not require RAID storage on the hosts. With the default replication value of 3, data is stored on three nodes: two on the same rack and one on a different rack. DataNodes can talk to each other to rebalance data, to move copies around, and to keep the replication of data high. HDFS is not fully POSIX compliant, because the requirements for a POSIX filesystem differ from the target goals of a Hadoop application. The trade-off of not having a fully POSIX-compliant filesystem is increased performance for data throughput. HDFS was designed to handle very large files.

HDFS has recently added high-availability capabilities, allowing the main metadata server (the NameNode) to be manually failed over to a backup in the event of failure. Automatic failover is being developed as well. Additionally, the filesystem includes what is called a Secondary NameNode, which misleads some people into thinking that when the primary NameNode goes offline, the Secondary NameNode takes over. In fact, the Secondary NameNode regularly connects to the primary NameNode and builds snapshots of the primary NameNode's directory information, which are then saved to local or remote directories. These checkpointed images can be used to restart a failed primary NameNode without having to replay the entire journal of filesystem actions and then edit the log to create an up-to-date directory structure. Since the NameNode is the single point for storage and management of metadata, it can become a bottleneck when supporting a huge number of files, especially a large number of small files. HDFS Federation is a new addition which aims to tackle this problem to a certain extent by allowing multiple namespaces served by separate NameNodes.

An advantage of using HDFS is the data awareness between the JobTracker and TaskTrackers. The JobTracker schedules map/reduce jobs to TaskTrackers with an awareness of the data location. For example, if node A contains data (x,y,z) and node B contains data (a,b,c), the JobTracker will schedule node B to perform map/reduce tasks on (a,b,c) and node A to perform map/reduce tasks on (x,y,z). This reduces the amount of traffic that goes over the network and prevents unnecessary data transfer. When Hadoop is used with other filesystems, this advantage is not always available. This can have a significant impact on job completion times, which has been demonstrated when running data-intensive jobs.

Another limitation of HDFS is that it cannot be directly mounted by an existing operating system. Getting data into and out of the HDFS filesystem, an action that often needs to be performed before and after executing a job, can be inconvenient. A Filesystem in Userspace (FUSE) virtual filesystem has been developed to address this problem, at least for Linux and some other Unix systems. File access can be achieved through the native Java API, the Thrift API (which generates a client in the language of the user's choosing: C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, Smalltalk, or OCaml), the command-line interface, or the HDFS-UI web application over HTTP.
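As a concrete illustration of the native Java API mentioned above, the following is a minimal sketch of writing and then reading a file through org.apache.hadoop.fs.FileSystem. The path is a placeholder, the code assumes that fs.defaultFS in the loaded configuration points at the cluster's NameNode, and minor details may vary between Hadoop versions.

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAccessExample {
    public static void main(String[] args) throws Exception {
        // Loads core-site.xml/hdfs-site.xml from the classpath; fs.defaultFS
        // is assumed to point at the cluster's NameNode.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Write a small file into HDFS (path is illustrative).
        Path out = new Path("/user/hadoop/example/hello.txt");
        try (FSDataOutputStream os = fs.create(out, true)) {
            os.writeBytes("hello hdfs\n");
        }

        // Read the same file back, line by line.
        try (BufferedReader reader =
                 new BufferedReader(new InputStreamReader(fs.open(out)))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
        fs.close();
    }
}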

3.1 DataNode

A DataNode stores data in HDFS. A functional filesystem has more than one DataNode, with data replicated across them. On startup, a DataNode connects to the NameNode, spinning until that service comes up, and then responds to requests from the NameNode for filesystem operations. Client applications can talk directly to a DataNode once the NameNode has provided the location of the data. Similarly, MapReduce operations farmed out to TaskTracker instances near a DataNode talk directly to the DataNode to access the files. TaskTracker instances can, and indeed should, be deployed on the same servers that host DataNode instances, so that MapReduce operations are performed close to the data.

3.2 NameNode

The NameNode is the centerpiece of an HDFS filesystem. It keeps the directory tree of all files in the filesystem and tracks where across the cluster the file data is kept; it does not store the data of these files itself. Client applications talk to the NameNode whenever they wish to locate a file, or when they want to add, copy, move, or delete a file. The NameNode responds to successful requests by returning a list of the relevant DataNode servers where the data lives. The NameNode is a single point of failure for the HDFS cluster: HDFS is not currently a high-availability system, and when the NameNode goes down, the filesystem goes offline. There is an optional SecondaryNameNode that can be hosted on a separate machine. It only creates checkpoints of the namespace by merging the edits file into the fsimage file, and does not provide any real redundancy. Hadoop 0.21+ has a BackupNameNode that is part of a plan for a highly available name service, but it still needs active contributions from the people who want it to make it truly highly available.

3.3 TaskTracker

A TaskTracker is a node in the cluster that accepts tasks (Map, Reduce and Shuffle operations) from a JobTracker. Every TaskTracker is configured with a set of slots; these indicate the number of tasks that it can accept. When the JobTracker tries to find somewhere to schedule a task within the MapReduce operations, it first looks for an empty slot on the same server that hosts the DataNode containing the data, and failing that, it looks for an empty slot on a machine in the same rack. The TaskTracker spawns a separate JVM process to do the actual work; this ensures that a process failure does not take down the TaskTracker itself. The TaskTracker monitors these spawned processes, capturing their output and exit codes. When a process finishes, successfully or not, the TaskTracker notifies the JobTracker. TaskTrackers also send periodic heartbeat messages to the JobTracker to reassure it that they are still alive. These messages also inform the JobTracker of the number of available slots, so the JobTracker can stay up to date with where in the cluster work can be delegated.

3.4 JobTracker

The JobTracker is the service within Hadoop that farms out MapReduce tasks to specific nodes in the cluster, ideally the nodes that hold the data, or at least nodes in the same rack.
1. Client applications submit jobs to the JobTracker.
2. The JobTracker talks to the NameNode to determine the location of the data.
3. The JobTracker locates TaskTracker nodes with available slots at or near the data.
4. The JobTracker submits the work to the chosen TaskTracker nodes.

5. The TaskTracker nodes are monitored. If they do not submit heartbeat signals often enough, they are deemed to have failed and the work is rescheduled on a different TaskTracker.
6. A TaskTracker will notify the JobTracker when a task fails. The JobTracker decides what to do then: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, and it may even blacklist the TaskTracker as unreliable.
7. When the work is completed, the JobTracker updates its status.
8. Client applications can poll the JobTracker for information.

The JobTracker is a point of failure for the Hadoop MapReduce service. If it goes down, all running jobs are halted.

4. Hadoop Map/Reduce

4.1 Programming model and execution framework

Map/Reduce is a programming paradigm that expresses a large distributed computation as a sequence of distributed operations on data sets of key/value pairs. The Hadoop Map/Reduce framework harnesses a cluster of machines and executes user-defined Map/Reduce jobs across the nodes in the cluster. A Map/Reduce computation has two phases, a map phase and a reduce phase. The input to the computation is a data set of key/value pairs.

In the map phase, the framework splits the input data set into a large number of fragments and assigns each fragment to a map task. The framework also distributes the many map tasks across the cluster of nodes on which it operates. Each map task consumes key/value pairs from its assigned fragment and produces a set of intermediate key/value pairs. For each input key/value pair (K,V), the map task invokes a user-defined map function that transmutes the input into a different key/value pair (K',V'). Following the map phase, the framework sorts the intermediate data set by key and produces a set of (K',V'*) tuples, so that all the values associated with a particular key appear together. It also partitions the set of tuples into a number of fragments equal to the number of reduce tasks.

In the reduce phase, each reduce task consumes the fragment of (K',V'*) tuples assigned to it. For each such tuple it invokes a user-defined reduce function that transmutes the tuple into an output key/value pair (K,V). Once again, the framework distributes the many reduce tasks across the cluster of nodes and deals with shipping the appropriate fragment of intermediate data to each reduce task. Tasks in each phase are executed in a fault-tolerant manner: if nodes fail in the middle of a computation, the tasks assigned to them are redistributed among the remaining nodes. Having many map and reduce tasks enables good load balancing and allows failed tasks to be re-run with a small runtime overhead.
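To make the (K,V) to (K',V') and (K',V'*) to output transformations above concrete, the following is a minimal sketch of the classic word count map and reduce functions (the same example referred to in sections 5.2 and 5.5 below), written against the org.apache.hadoop.mapreduce API. The class names are illustrative and the details assume a reasonably recent Hadoop release.

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map phase: (offset, line) -> (word, 1) for every word in the line.
public class WordCountMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);   // intermediate (K',V') pair
            }
        }
    }
}

// Reduce phase: (word, [1, 1, ...]) -> (word, total count).
class WordCountReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
    }
}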
"Reduce" step: The master node then collects the answers to all the sub-problems and combines them in some way to form the output the answer to the problem it was originally trying to solve. MapReduce allows for distributed processing of the map and reduction operations. Provided each mapping operation is independent of the others, all maps can be performed in parallel though in practice it is limited by the number of independent data sources and/or the number of CPUs near each source. Similarly, a set of 'reducers' can perform the reduction phase - provided all outputs of the map operation that share the same key are presented to the same reducer at the same time. While this process can often appear inefficient compared to algorithms that are more sequential, MapReduce can be applied to significantly larger datasets than "commodity" servers can handle a large server farm can use MapReduce to sort a petabyte of data in only a few hours. The parallelism also offers some

The frozen part of the MapReduce framework is a large distributed sort. The hot spots, which the application defines, are:
- an input reader
- a map function
- a partition function
- a compare function
- a reduce function
- an output writer

5.1 Input Reader

The input reader divides the input into appropriately sized splits (in practice, typically 16 MB to 128 MB) and the framework assigns one split to each Map function. The input reader reads data from stable storage (typically a distributed file system) and generates key/value pairs. A common example reads a directory full of text files and returns each line as a record.

5.2 Map Function

Each Map function takes a series of key/value pairs, processes each, and generates zero or more output key/value pairs. The input and output types of the map can be (and often are) different from each other. If the application is doing a word count, the map function breaks the line into words and outputs a key/value pair for each word: each output pair contains the word as the key and "1" as the value.

5.3 Partition Function

Each Map function output is allocated to a particular reducer by the application's partition function, for sharding purposes. The partition function is given the key and the number of reducers, and returns the index of the desired reducer. A typical default is to hash the key and use the hash value modulo the number of reducers. It is important to pick a partition function that gives an approximately uniform distribution of data per shard, for load-balancing purposes; otherwise the MapReduce operation can be held up waiting for slow reducers to finish. Between the map and reduce stages, the data is shuffled (parallel-sorted and exchanged between nodes) in order to move it from the map node that produced it to the shard in which it will be reduced. The shuffle can sometimes take longer than the computation, depending on network bandwidth, CPU speeds, the amount of data produced, and the time taken by the map and reduce computations.

5.4 Comparison Function

The input for each Reduce is pulled from the machine where the Map ran and sorted using the application's comparison function.

5.5 Reduce Function

The framework calls the application's Reduce function once for each unique key, in sorted order. The Reduce can iterate through the values that are associated with that key and output zero or more values. In the word count example, the Reduce function takes the input values, sums them, and generates a single output of the word and the final sum.

5.6 Output Writer

The Output Writer writes the output of the Reduce to stable storage, usually a distributed file system.

6. CONCLUSION

Thus, we have implemented a sort algorithm and executed it using HADOOP.