Distributed Computations: MapReduce (adapted from Jeff Dean's slides)

What we've learnt so far:
- Basic distributed systems concepts
- Consistency (sequential, eventual)
- Fault tolerance (recoverability, availability)
What are distributed systems good for?
- Better fault tolerance
- Better security?
- Increased storage/serving capacity (storage systems, email clusters)
- Parallel (distributed) computation (today's topic)

Why distributed computations?
- How long does it take to sort 1 TB on one computer? One computer can read ~30 MB/s from disk, so it takes roughly two days!
- Google indexes 20 billion+ web pages: 20 * 10^9 pages * 20 KB/page = 400 TB
- The Large Hadron Collider is expected to produce 15 PB every year!

Solution: use many nodes!
- Cluster computing: hundreds or thousands of PCs connected by high-speed LANs
- Grid computing: hundreds of supercomputers connected by a high-speed net
- 1000 nodes potentially give a 1000x speedup

Distributed computations are difficult to program:
- Sending data to/from nodes
- Coordinating among nodes
- Recovering from node failure
- Optimizing for locality
- Debugging
These concerns are the same for every problem.

MapReduce
- A programming model for large-scale computations: process large amounts of input, produce output
- No side effects or persistent state (unlike a file system)
- MapReduce is implemented as a runtime library providing:
  - automatic parallelization
  - load balancing
  - locality optimization
  - handling of machine failures

MapReduce design
- Input data is partitioned into M splits
- Map: extract information from each split; each Map produces R partitions
- Shuffle and sort: bring the M partitions to the same reducer
- Reduce: aggregate, summarize, filter, or transform
- Output is in R result files

More specifically, the programmer specifies two methods:
- map(k, v) -> <k', v'>*
- reduce(k', <v'>*) -> <k', v'>*
  All v' with the same k' are reduced together, in order.
The programmer usually also specifies:
- partition(k', total partitions) -> partition for k'
  Often a simple hash of the key; this allows reduce operations for different k' to be parallelized.
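As a concrete illustration of these signatures, the user-supplied pieces could be rendered in Python as below; the names, type hints, and generator style are assumptions made for this sketch, not part of any real MapReduce API.

    from typing import Iterator, List, Tuple

    # Hypothetical Python rendering of the user-supplied methods.

    def map_fn(k: str, v: str) -> Iterator[Tuple[str, str]]:
        """map(k, v) -> <k', v'>*: emit zero or more intermediate pairs."""
        ...

    def reduce_fn(k2: str, values: List[str]) -> Iterator[Tuple[str, str]]:
        """reduce(k', <v'>*) -> <k', v'>*: all values for one k' arrive together, in order."""
        ...

    def partition(k2: str, total_partitions: int) -> int:
        """Pick which of the R reduce partitions gets k'; often just a hash of the key."""
        return hash(k2) % total_partitions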

Example: count word frequencies in web pages
- Input is files with one document per record
- Map parses documents into words: key = document URL, value = document contents
- Output of map for <doc1, "to be or not to be">: <to, 1>, <be, 1>, <or, 1>, <not, 1>, <to, 1>, <be, 1>

Example: word frequencies
- Reduce computes the sum for each key:
  key = be, values = 1, 1 -> 2
  key = not, values = 1 -> 1
  key = or, values = 1 -> 1
  key = to, values = 1, 1 -> 2
- Output of reduce is saved: <be, 2>, <not, 1>, <or, 1>, <to, 2>

Example: pseudo-code

    Map(String input_key, String input_value):
        // input_key: document name
        // input_value: document contents
        for each word w in input_value:
            EmitIntermediate(w, "1");

    Reduce(String key, Iterator intermediate_values):
        // key: a word, same for input and output
        // intermediate_values: a list of counts
        int result = 0;
        for each v in intermediate_values:
            result += ParseInt(v);
        Emit(AsString(result));
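The same word-count logic can be written as runnable Python, filling in the generator-style map_fn/reduce_fn stubs sketched earlier; this is an illustration under those assumptions, not a real MapReduce API.

    def map_fn(input_key, input_value):
        # input_key: document name, input_value: document contents
        for word in input_value.split():
            yield (word, 1)

    def reduce_fn(key, intermediate_values):
        # key: a word; intermediate_values: all counts emitted for that word
        yield (key, sum(intermediate_values))

    # list(map_fn("doc1", "to be or not to be"))
    #   -> [('to', 1), ('be', 1), ('or', 1), ('not', 1), ('to', 1), ('be', 1)]
    # list(reduce_fn("to", [1, 1])) -> [('to', 2)]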

MapReduce is widely applicable:
- Distributed grep
- Document clustering
- Web link-graph reversal
- Detecting approximately duplicate web pages
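For instance, distributed grep fits the model with a map that emits only matching lines and an identity reduce. A minimal sketch follows; the pattern and function names are made up for illustration.

    import re

    PATTERN = re.compile(r"ERROR")          # hypothetical search pattern

    def grep_map(filename, contents):
        # Emit (filename, line) only for lines that match the pattern.
        for line in contents.splitlines():
            if PATTERN.search(line):
                yield (filename, line)

    def grep_reduce(filename, matching_lines):
        # Identity reduce: pass the matching lines through unchanged.
        for line in matching_lines:
            yield (filename, line)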

MapReduce implementation
- Input data is partitioned into M splits
- Map: extract information from each split; each Map produces R partitions
- Shuffle and sort: bring the M partitions to the same reducer
- Reduce: aggregate, summarize, filter, or transform
- Output is in R result files
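To make this data flow concrete, here is a toy single-process simulation of the execution model; it is only an illustration, not the real distributed implementation, and it assumes the generator-style map_fn, reduce_fn, and partition functions sketched earlier.

    from collections import defaultdict

    def run_mapreduce(splits, map_fn, reduce_fn, partition, R):
        # Map phase: each of the M input splits produces R local partitions.
        local_partitions = []
        for key, value in splits:
            parts = [defaultdict(list) for _ in range(R)]
            for k2, v2 in map_fn(key, value):
                parts[partition(k2, R)][k2].append(v2)
            local_partitions.append(parts)

        # Shuffle and sort: bring partition r from every map task to reducer r,
        # then sort by key.  Reduce: apply the user's reduce op per key.
        outputs = []                      # stands in for the R result files
        for r in range(R):
            merged = defaultdict(list)
            for parts in local_partitions:
                for k2, values in parts[r].items():
                    merged[k2].extend(values)
            outputs.append([pair for k2 in sorted(merged)
                            for pair in reduce_fn(k2, merged[k2])])
        return outputs

    # Example use, with the word-count map_fn/reduce_fn from the sketch above:
    # run_mapreduce([("doc1", "to be or not to be"),
    #                ("doc234", "do not be silly")],
    #               map_fn, reduce_fn, lambda k, R: hash(k) % R, R=2)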

MapReduce scheduling
- One master, many workers
- Input data is split into M map tasks (e.g. 64 MB each)
- R reduce tasks
- Tasks are assigned to workers dynamically
- Often: M = 200,000; R = 4,000; workers = 2,000

MapReduce scheduling
- The master assigns a map task to a free worker, preferring close-by workers
- The worker reads the task input (often from its local disk!) and produces R local files containing intermediate k/v pairs
- The master assigns a reduce task to a free worker
- The worker reads intermediate k/v pairs from the map workers, sorts them, and applies the user's Reduce op to produce the output

Parallel MapReduce (diagram): input data fans out to the Map workers, coordinated by the Master; their intermediate output is shuffled to the Reduce workers, which write the partitioned output.

WordCount internals
- Input data is split into M map jobs; each map job generates R local partitions
- Map on <doc1, "to be or not to be"> emits <to, 1>, <be, 1>, <or, 1>, <not, 1>, <to, 1>, ...; Hash("to") % R picks which of the R local partitions each pair is written to
- Map on <doc234, "do not be silly"> emits <do, 1>, <not, 1>, <be, 1>, <silly, 1> into its own R local partitions

WordCount internals
- The shuffle brings the corresponding local partition from every map job to the same reducer, so all pairs with the same key end up at a single reducer regardless of which map job produced them (e.g. one reducer receives <to, 1, 1> and <do, 1>, another <be, 1, 1>, another <not, 1, 1> and <or, 1>)

WordCount internals
- Reduce aggregates the sorted key/value pairs: <do, 1> -> <do, 1>; <to, 1, 1> -> <to, 2>; <be, 1, 1> -> <be, 2>; <not, 1, 1> -> <not, 2>; <or, 1> -> <or, 1>

The importance of the partition function
- partition(k', total partitions) -> partition for k', e.g. hash(k') % R
- What is the partition function for sort?
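One common answer, sketched below: route keys by range instead of by hash, so that each reducer's locally sorted output follows the previous one's and the R output files concatenate into one globally sorted result. The boundary keys here are made up; a real job would pick them by sampling the input.

    import bisect

    # Hypothetical range partitioner for a distributed sort.
    BOUNDARIES = ["g", "n", "t"]        # divides the key space into R = 4 ranges

    def sort_partition(key, total_partitions):
        # Keys <= "g" go to partition 0, ("g", "n"] to partition 1, and so on,
        # so concatenating the reducers' sorted outputs is globally sorted.
        assert total_partitions == len(BOUNDARIES) + 1
        return bisect.bisect_left(BOUNDARIES, key)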

Load balance and pipelining
- Fine-granularity tasks: many more map tasks than machines
  - Minimizes time for fault recovery
  - Can pipeline shuffling with map execution
  - Better dynamic load balancing
- Often use 200,000 map / 5,000 reduce tasks with 2,000 machines

Fault tolerance via re-execution
- On worker failure:
  - Re-execute completed and in-progress map tasks
  - Re-execute in-progress reduce tasks
  - Task completion is committed through the master
- On master failure:
  - State is checkpointed to GFS; a new master recovers and continues

Avoiding stragglers using backup tasks
- Slow workers significantly lengthen completion time:
  - Other jobs consuming resources on the machine
  - Bad disks with soft errors transfer data very slowly
  - Weird things: processor caches disabled (!!)
  - An unusually large reduce partition?
- Solution: near the end of a phase, spawn backup copies of the remaining tasks; whichever copy finishes first "wins"
- Effect: dramatically shortens job completion time
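The backup-task idea can be sketched in ordinary Python as a toy illustration (not the MapReduce master's actual logic): submit a second copy of each still-running task and keep whichever copy finishes first. The pool size and function names are assumptions for this sketch.

    from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

    pool = ThreadPoolExecutor(max_workers=4)   # stands in for spare workers

    def run_with_backup(task, *args):
        # Submit the same (idempotent) task twice and return whichever copy
        # finishes first; the straggling copy's result is simply ignored.
        futures = [pool.submit(task, *args) for _ in range(2)]
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        return next(iter(done)).result()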

MapReduce sort performance
- 1 TB of data (100-byte records) to be sorted
- 1,700 machines
- M = 15,000; R = 4,000

MapReduce sort performance
- When can the shuffle start?
- When can reduce start?