Cloud Computing CS 15-319: Programming Models, Part III. Lecture 6, Feb 1, 2012. Majd F. Sakr and Mohammad Hammoud

Today: Last session covered Programming Models, Part II. Today's session: Programming Models, Part III. Announcement: the project update is due today.

Objectives: Discussion on programming models: MapReduce, then Pregel, Dryad, and GraphLab. Covered in the last two sessions: why parallelism, parallel computer architectures, traditional models of parallel programming, examples of parallel processing, and the Message Passing Interface (MPI).

MapReduce: In this part, the following concepts of MapReduce will be described: basics, a close look at MapReduce data flow, additional functionality, scheduling and fault tolerance in MapReduce, and a comparison with existing techniques and models.

Problem Scope: MapReduce is a programming model for data processing. The power of MapReduce lies in its ability to scale to hundreds or thousands of computers, each with several processor cores. How large an amount of work? Web-scale data on the order of hundreds of GBs to TBs or PBs. It is likely that the input data set will not fit on a single computer's hard drive; hence, a distributed file system (e.g., the Google File System, GFS) is typically required.

Commodity Clusters: MapReduce is designed to efficiently process large volumes of data by connecting many commodity computers together to work in parallel. A theoretical 1000-CPU machine would cost a very large amount of money, far more than 1000 single-CPU or 250 quad-core machines. MapReduce ties smaller and more reasonably priced machines together into a single cost-effective commodity cluster.

Isolated Tasks: MapReduce divides the workload into multiple independent tasks and schedules them across cluster nodes. The work performed by each task is done in isolation from the others. The amount of communication that tasks can perform is deliberately limited, mainly for scalability and fault-tolerance reasons: the communication overhead required to keep the data on the nodes synchronized at all times would prevent the model from performing reliably and efficiently at large scale.

Data Distribution: In a MapReduce cluster, data is distributed to all the nodes of the cluster as it is being loaded in. An underlying distributed file system (e.g., GFS) splits large data files into chunks that are managed by different nodes in the cluster, so each node holds a chunk of the input data. Even though the file chunks are distributed across several machines, they form a single namespace.

MapReduce: A Bird's-Eye View: In MapReduce, chunks are processed in isolation by tasks called Mappers. The outputs from the mappers are denoted as intermediate outputs (IOs) and are brought into a second set of tasks called Reducers; the process of bringing IOs together into a set of Reducers is known as shuffling. The Reducers produce the final outputs (FOs). Overall, MapReduce breaks the data flow into two phases: a map phase, in which chunks C0..C3 feed mappers M0..M3, which emit IO0..IO3, and a reduce phase, in which the shuffled IOs feed reducers R0 and R1, which write FO0 and FO1.

Keys and Values: The programmer in MapReduce has to specify two functions, the map function and the reduce function, which implement the Mapper and the Reducer of a MapReduce program. In MapReduce, data elements are always structured as key-value (i.e., (K, V)) pairs, and the map and reduce functions receive and emit (K, V) pairs: input splits are read as (K, V) pairs, the map function turns them into intermediate (K', V') pairs, and the reduce function turns those into final (K'', V'') pairs.
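To make the (K, V) flow concrete, here is a minimal sketch of the two user-supplied functions for the classic word-count job, written in plain Python rather than the Hadoop API so the data flow stays visible; the function names and the in-memory shuffle are illustrative, not Hadoop's actual interfaces.

```python
from collections import defaultdict

def map_fn(key, value):
    # key: byte offset of the line (unused here), value: the line contents
    for word in value.split():
        yield (word, 1)

def reduce_fn(key, values):
    # key: a word, values: every count emitted for that word
    yield (key, sum(values))

def run_job(lines):
    # Shuffle: group every intermediate (K, V) pair by key, as the
    # framework would do between the map phase and the reduce phase.
    groups = defaultdict(list)
    for offset, line in enumerate(lines):
        for k, v in map_fn(offset, line):
            groups[k].append(v)
    return dict(kv for k in sorted(groups) for kv in reduce_fn(k, groups[k]))

print(run_job(["the quick fox", "the lazy dog"]))
# {'dog': 1, 'fox': 1, 'lazy': 1, 'quick': 1, 'the': 2}
```

Note that the map and reduce functions never see the whole data set; each works on one (K, V) pair (or one key's group of values) at a time, which is what lets the framework spread them across machines.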

Partitions: In MapReduce, intermediate output values are not usually reduced together; rather, all values with the same key are presented to a single Reducer together. More specifically, a different subset of the intermediate key space is assigned to each Reducer. These subsets are known as partitions, and partitions are the input to Reducers. (In the original diagram, different colors represent different keys, potentially coming from different Mappers.)

Network Topology in MapReduce: MapReduce assumes a tree-style network topology, with nodes spread over different racks housed in one or more data centers. A salient point is that the bandwidth between two nodes depends on their relative locations in the network topology; for example, nodes on the same rack have higher bandwidth between them than nodes that are off-rack.

Hadoop: Since its debut on the computing stage, MapReduce has frequently been associated with Hadoop. Hadoop is an open-source implementation of MapReduce and is currently enjoying wide popularity. Hadoop presents MapReduce as an analytics engine and, under the hood, uses a distributed storage layer referred to as the Hadoop Distributed File System (HDFS). HDFS mimics the Google File System (GFS).

Hadoop MapReduce: A Closer Look. (Diagram: on each node, files are loaded from the local HDFS store; an InputFormat breaks each file into splits; RecordReaders (RR) turn each split into input (K, V) pairs; Map tasks emit intermediate (K, V) pairs; a Partitioner assigns those pairs to reducers, and the intermediate (K, V) pairs are exchanged by all nodes during the shuffling process; each reduce side sorts its pairs and runs Reduce; the final (K, V) pairs are written back to the local HDFS store through an OutputFormat.)

Input Files: Input files are where the data for a MapReduce task is initially stored. The input files typically reside in a distributed file system (e.g., HDFS). The format of input files is arbitrary: line-based log files, binary files, multi-line input records, or something else entirely.

InputFormat: How the input files are split up and read is defined by the InputFormat. InputFormat is a class that does the following: selects the files that should be used for input; defines the InputSplits that break up a file; and provides a factory for RecordReader objects that read the file.

InputFormat Types: Several InputFormats are provided with Hadoop:
- TextInputFormat: the default format; reads lines of text files. Key: the byte offset of the line. Value: the line contents.
- KeyValueInputFormat: parses lines into (K, V) pairs. Key: everything up to the first tab character. Value: the remainder of the line.
- SequenceFileInputFormat: a Hadoop-specific high-performance binary format. Key and value: user-defined.
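The first two formats above can be sketched in a few lines of Python; this is a simulation over an in-memory string standing in for an HDFS file, not Hadoop's actual record-reader code.

```python
def text_input_format(data):
    # TextInputFormat: key = byte offset of the line, value = the line contents
    offset = 0
    for line in data.split("\n"):
        yield (offset, line)
        offset += len(line) + 1  # +1 accounts for the newline delimiter

def key_value_input_format(data):
    # KeyValueInputFormat: key = text up to the first tab, value = the remainder
    for _, line in text_input_format(data):
        key, _, value = line.partition("\t")
        yield (key, value)

data = "apple\t3\nbanana\t7"
print(list(text_input_format(data)))       # [(0, 'apple\t3'), (8, 'banana\t7')]
print(list(key_value_input_format(data)))  # [('apple', '3'), ('banana', '7')]
```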

Input Splits: An input split describes a unit of work that comprises a single map task in a MapReduce program. By default, the InputFormat breaks a file up into 64 MB splits. By dividing the file into splits, we allow several map tasks to operate on a single file in parallel; if the file is very large, this can improve performance significantly through parallelism. Each map task corresponds to a single input split.
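A quick sketch of how a file is carved into fixed-size splits, assuming the default 64 MB split size described above (real InputFormats also snap split boundaries to record boundaries, which is omitted here):

```python
SPLIT_SIZE = 64 * 1024 * 1024  # the default 64 MB split size

def input_splits(file_size, split_size=SPLIT_SIZE):
    # Return (offset, length) pairs covering the file, one per map task.
    return [(off, min(split_size, file_size - off))
            for off in range(0, file_size, split_size)]

# A 200 MB file yields 4 splits, so 4 map tasks can run in parallel:
mb = 1024 * 1024
print(len(input_splits(200 * mb)))  # 4
```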

RecordReader: The input split defines a slice of work but does not describe how to access it. The RecordReader class actually loads data from its source and converts it into (K, V) pairs suitable for reading by Mappers. The RecordReader is invoked repeatedly on the input until the entire split is consumed; each invocation of the RecordReader leads to another call of the map function defined by the programmer.

Mapper and Reducer: The Mapper performs the user-defined work of the first phase of the MapReduce program; a new instance of Mapper is created for each split. The Reducer performs the user-defined work of the second phase of the MapReduce program; a new instance of Reducer is created for each partition. For each key in the partition assigned to a Reducer, the Reducer is called once.

Partitioner: Each mapper may emit (K, V) pairs to any partition; therefore, the map nodes must all agree on where to send different pieces of intermediate data. The Partitioner class determines which partition a given (K, V) pair will go to. The default partitioner computes a hash value for a given key and assigns the pair to a partition based on this result.
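The default hash-partitioning scheme can be sketched as below; Hadoop's HashPartitioner uses the Java hashCode of the key, so CRC32 here is just a stand-in deterministic hash for illustration.

```python
import zlib

def default_partition(key, num_reducers):
    # Mimic Hadoop's default HashPartitioner: hash the key, then take it
    # modulo the number of reducers. CRC32 stands in for Java's hashCode.
    return zlib.crc32(str(key).encode("utf-8")) % num_reducers

# Because the function is deterministic, every mapper on every node
# computes the same partition for the same key, so all values for that
# key arrive at a single reducer:
for key in ["apple", "banana", "apple"]:
    print(key, "->", default_partition(key, 2))
```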

Sort: Each Reducer is responsible for reducing the values associated with (several) intermediate keys. The set of intermediate keys on a single node is automatically sorted by MapReduce before the keys are presented to the Reducer.

OutputFormat: The OutputFormat class defines the way (K, V) pairs produced by Reducers are written to output files. The instances of OutputFormat provided by Hadoop write to files on the local disk or in HDFS. Several OutputFormats are provided by Hadoop:
- TextOutputFormat: the default; writes lines in "key \t value" format.
- SequenceFileOutputFormat: writes binary files suitable for reading into subsequent MapReduce jobs.
- NullOutputFormat: generates no output files.

Combiner Functions: MapReduce applications are limited by the bandwidth available on the cluster, so it pays to minimize the data shuffled between map and reduce tasks. Hadoop allows the user to specify a combiner function (just like the reduce function) to be run on a map output. For example, in a maximum-temperature job, a map task on one node that emits the (year, temperature) pairs (1950, 0), (1950, 20), and (1950, 10) can have a combiner collapse them to the single pair (1950, 20) before anything is sent across the rack to a reduce task.
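The maximum-temperature combiner from the example above can be sketched as follows (an illustrative simulation, not the Hadoop API; a combiner may safely stand in for the reducer here because max is associative and commutative):

```python
def combine_max(pairs):
    # Run the reduce-style aggregation (max) on one mapper's local output
    # before anything crosses the network to a reduce task.
    best = {}
    for year, temp in pairs:
        best[year] = max(temp, best.get(year, temp))
    return list(best.items())

map_output = [(1950, 0), (1950, 20), (1950, 10)]
print(combine_max(map_output))  # [(1950, 20)]
```

One pair is shuffled instead of three, which is exactly the bandwidth saving the slide describes.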

Task Scheduling in MapReduce: MapReduce adopts a master-slave architecture. The master node in MapReduce is referred to as the Job Tracker (JT); each slave node is referred to as a Task Tracker (TT). The JT maintains a queue of tasks (T0, T1, T2, ...), and each TT offers a number of task slots. MapReduce adopts a pull scheduling strategy rather than a push one; i.e., the JT does not push map and reduce tasks to TTs, but rather TTs pull them by making the pertinent requests.

Map and Reduce Task Scheduling: Every TT periodically sends a heartbeat message to the JT encompassing a request for a map or a reduce task to run. I. Map task scheduling: the JT satisfies requests for map tasks by attempting to schedule mappers in the vicinity of their input splits (i.e., it considers locality). II. Reduce task scheduling: by contrast, the JT simply assigns the next yet-to-run reduce task to a requesting TT regardless of the TT's network location and its implied effect on the reducer's shuffle time (i.e., it does not consider locality).

Job Scheduling in MapReduce: In MapReduce, an application is represented as a job; a job encompasses multiple map and reduce tasks. MapReduce in Hadoop comes with a choice of schedulers: the default is the FIFO scheduler, which schedules jobs in order of submission; there is also a multi-user scheduler called the Fair Scheduler, which aims to give every user a fair share of the cluster capacity over time.

Fault Tolerance in Hadoop: MapReduce can guide jobs toward successful completion even when they run on a large cluster, where the probability of failures increases. The primary way that MapReduce achieves fault tolerance is by restarting tasks. If a TT fails to communicate with the JT for a period of time (by default, 1 minute in Hadoop), the JT will assume that the TT in question has crashed. If the job is still in the map phase, the JT asks another TT to re-execute all Mappers that previously ran at the failed TT. If the job is in the reduce phase, the JT asks another TT to re-execute all Reducers that were in progress on the failed TT.

Speculative Execution: A MapReduce job is dominated by its slowest task. MapReduce therefore attempts to locate slow tasks (stragglers) and run redundant (speculative) copies that will, optimistically, commit before the corresponding stragglers; this process is known as speculative execution. Only one copy of a straggler is allowed to be speculated. Whichever of the two copies of a task commits first becomes the definitive copy, and the other copy is killed by the JT.

Locating Stragglers: How does Hadoop locate stragglers? Hadoop monitors each task's progress using a progress score between 0 and 1. If a task's progress score is less than the average score minus 0.2, and the task has run for at least 1 minute, the task is marked as a straggler. For example, a task T1 with progress score 2/3 is not a straggler, while a task T2 with progress score 1/12 is.
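The straggler rule above is simple enough to write down directly; this sketch reproduces the T1/T2 example from the slide (task tuples and the helper name are illustrative, not Hadoop internals):

```python
def find_stragglers(tasks, min_runtime=60):
    # tasks: list of (name, progress_score, runtime_seconds) tuples.
    # A task is a straggler if its progress score is below the average
    # score minus 0.2 AND it has run for at least one minute.
    avg = sum(score for _, score, _ in tasks) / len(tasks)
    return [name for name, score, runtime in tasks
            if score < avg - 0.2 and runtime >= min_runtime]

# T1 (progress 2/3) is not a straggler; T2 (progress 1/12) is:
tasks = [("T1", 2 / 3, 120), ("T2", 1 / 12, 120)]
print(find_stragglers(tasks))  # ['T2']
```

Here the average score is 0.375, so the cutoff is 0.175: T2's score of about 0.083 falls below it, while T1's 0.667 does not.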

Comparison with Existing Techniques, Condor (1): Performing computation on large volumes of data has been done before, usually in a distributed setting, for instance by using Condor. Condor is a specialized workload management system for compute-intensive jobs. Users submit their serial or parallel jobs to Condor, and Condor: places them into a queue; chooses when and where to run the jobs based upon a policy; carefully monitors their progress; and ultimately informs the user upon completion.

Comparison with Existing Techniques, Condor (2): Condor does not automatically distribute data; a separate storage area network (SAN) is typically incorporated in addition to the compute cluster. Furthermore, collaboration between multiple compute nodes must be managed with a message-passing system such as MPI. Condor is not easy to work with and can lead to the introduction of subtle errors. (Diagram: several compute nodes connected over a LAN to a file server and a database server sitting on a SAN.)

What Makes MapReduce Unique? MapReduce is characterized by: 1. its simplified programming model, which allows the user to quickly write and test distributed systems; 2. its efficient and automatic distribution of data and workload across machines; 3. its flat scalability curve; specifically, after a MapReduce program is written and functioning on 10 nodes, very little, if any, work is required to make that same program run on 1000 nodes; and 4. its fault-tolerance approach.

Comparison with Traditional Models:
- Communication: shared memory: implicit (via loads/stores); message passing: explicit messages; MapReduce: limited and implicit.
- Synchronization: shared memory: explicit; message passing: implicit (via messages); MapReduce: immutable (K, V) pairs.
- Hardware support: shared memory: typically required; message passing: none; MapReduce: none.
- Development effort: shared memory: lower; message passing: higher; MapReduce: lowest.
- Tuning effort: shared memory: higher; message passing: lower; MapReduce: lowest.

Next Class: Programming Models, Part IV, continuing the discussion on programming models: MapReduce, then Pregel, Dryad, and GraphLab. (Earlier items in the course outline: why parallelism, parallel computer architectures, traditional models of parallel programming, examples of parallel processing, and the Message Passing Interface (MPI).)