MapReduce & Spark. Some slides are adapted from those of Jeff Dean and Matei Zaharia.


What have we learnt so far?
- Distributed storage systems: consistency semantics, protocols for fault tolerance (Paxos, Raft, Viewstamped Replication)
- Transactional online processing: distributed transactions
Today: offline batch processing

Why distributed computations? How long does it take to sort 1 TB on one computer? One computer can read ~50 MB/s from disk, so 10^12 bytes / (50 × 10^6 B/s) = 20,000 s, about 5.5 hours! Google indexes 60 trillion web pages: 60 × 10^12 pages × 10 KB/page = 600 PB. The Large Hadron Collider is expected to produce 15 PB every year!

Solution: use many nodes!
- Data centers at Amazon/Facebook/Google: hundreds of thousands of PCs connected by high-speed LANs
- Cloud computing: any programmer can rent nodes in data centers cheaply
- The promise: 1000 nodes → 1000× speedup

Distributed computations are difficult to program:
- Sending data to/from nodes
- Coordinating among nodes
- Recovering from node failure
- Optimizing for locality
- Debugging
These concerns are the same for all problems, which is why it pays to solve them once in a common framework.

The world before MapReduce came along. The dominant philosophy in systems research was that programming many machines should feel the same as programming a single multi-core machine:
- distributed shared memory
- automatic parallelization of existing programs
- MPI for high-performance computing: a collection of communication/synchronization primitives to simplify message passing
None of these systems handled machine failures automatically.

MapReduce: a programming model for large-scale computations
- Process large amounts of input, produce output
- No side effects or persistent state (unlike a file system)
MapReduce is implemented as a runtime library providing:
- automatic parallelization
- load balancing
- locality optimization
- handling of machine failures

MapReduce design
- Input data is partitioned into M splits
- Map: extract information from each split; each Map task produces R partitions
- Shuffle and sort: bring the corresponding partition from each of the M map tasks to the same reducer
- Reduce: aggregate, summarize, filter, or transform
- Output is in R result files

More specifically, the programmer specifies two methods:
- map(k, v) → <k', v'>*
- reduce(k', <v'>*) → <k', v'>*
All v' with the same k' are reduced together. Usually the programmer also specifies partition(k', total partitions) → partition for k', often a simple hash of the key; this allows reduce operations for different k' to be parallelized. A minimal sketch of this contract follows.
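To make the interface concrete, here is a minimal single-machine sketch of the contract in Scala (the names runMapReduce, mapFn, and reduceFn are illustrative, not from the paper; a real implementation runs each phase in parallel across many workers):

    // Sketch of the MapReduce contract: map emits intermediate pairs,
    // the framework groups them by key, reduce folds each group.
    def runMapReduce[K, V, K2, V2, O](
        input: Seq[(K, V)],
        mapFn: (K, V) => Seq[(K2, V2)],
        reduceFn: (K2, Seq[V2]) => Seq[O]): Seq[O] = {
      val intermediate = input.flatMap { case (k, v) => mapFn(k, v) } // map phase
      val grouped = intermediate.groupBy(_._1)                        // shuffle & sort
      grouped.toSeq.flatMap { case (k2, kvs) => reduceFn(k2, kvs.map(_._2)) } // reduce phase
    }

    // Word count expressed against this contract:
    val counts = runMapReduce[String, String, String, Int, (String, Int)](
      Seq("doc1" -> "to be or not to be"),
      (doc, text) => text.split(" ").map(w => (w, 1)).toSeq,
      (word, ones) => Seq((word, ones.sum)))
    // counts contains (be, 2), (not, 1), (or, 1), (to, 2) in some order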

Example: count word frequencies in web pages
- Input is files with one document per record
- Map parses documents into words: key = document URL, value = document contents
- Output of map for (doc1, "to be or not to be"):
    (to, 1), (be, 1), (or, 1), (not, 1), (to, 1)

Example: word frequencies
- Reduce computes the sum for each key:
    key = be, values = [1, 1] → 2
    key = not, values = [1] → 1
    key = or, values = [1] → 1
    key = to, values = [1, 1] → 2
- Output of reduce is saved:
    (be, 2), (not, 1), (or, 1), (to, 2)

Example: pseudo-code

    Map(String input_key, String input_value):
      // input_key: document name
      // input_value: document contents
      for each word w in input_value:
        EmitIntermediate(w, "1");

    Reduce(String key, Iterator intermediate_values):
      // key: a word, same for input and output
      // intermediate_values: a list of counts
      int result = 0;
      for each v in intermediate_values:
        result += ParseInt(v);
      Emit(AsString(result));

MapReduce is widely applicable:
- Distributed grep
- Document clustering
- Web link-graph reversal
- Detecting duplicate web pages

MapReduce implementation
- Input data is partitioned into M splits
- Map: extract information from each split; each Map task produces R partitions
- Shuffle and sort: bring the corresponding partition from each of the M map tasks to the same reducer
- Reduce: aggregate, summarize, filter, or transform
- Output is in R result files, stored in a replicated, distributed file system (GFS)

MapReduce scheduling
- One master, many workers
- Input data is split into M map tasks
- R reduce tasks
- Tasks are assigned to workers dynamically

MapReduce scheduling
- Master assigns a map task to a free worker, preferring close-by workers when assigning a task
- Worker reads the task input (often from its local disk!) and produces R local files containing intermediate k/v pairs
- Master assigns a reduce task to a free worker
- Worker reads intermediate k/v pairs from the map workers, sorts them, and applies the user's Reduce operation to produce the output

[Diagram: parallel MapReduce. Input data fans out to parallel Map tasks coordinated by the master; their outputs are shuffled to Reduce tasks, which write the partitioned output.]

WordCount internals
- Input data is split into M map jobs, and each map job generates R local partitions. For example, (doc1, "to be or not to be") maps to (to, 1), (be, 1), (or, 1), (not, 1), (to, 1), and (doc234, "do not be silly") maps to (do, 1), (not, 1), (be, 1), (silly, 1); the partition function assigns each key to one of the R local partitions.
- Shuffle brings the same partition from every mapper to the same reducer, so all pairs for a given key end up together.
- Reduce aggregates the sorted key/value pairs: (be, [1, 1]) → (be, 2); (do, [1]) → (do, 1); (not, [1, 1]) → (not, 2); (or, [1]) → (or, 1); (to, [1, 1]) → (to, 2).

The importance of the partition function: partition(k', total partitions) → partition for k', e.g. hash(k') mod R. What is the right partition function for sort? (See the sketch below.)
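For sort, hashing would scatter adjacent keys across reducers, so the usual answer is a range partitioner: pick R-1 split points (e.g. by sampling the keys) and send each key to the partition whose range contains it; concatenating the reducer outputs in partition order then yields a globally sorted result. A minimal sketch, with illustrative names (rangePartition, splits):

    // splits holds R-1 sorted boundary keys: keys below splits(0) go to
    // partition 0, keys in [splits(i-1), splits(i)) go to partition i, and
    // keys at or above the last boundary go to partition R-1.
    def rangePartition(key: String, splits: IndexedSeq[String]): Int = {
      val i = splits.indexWhere(boundary => key.compareTo(boundary) < 0)
      if (i == -1) splits.length else i
    }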

Load balance and pipelining
- Fine-granularity tasks: many more map tasks than machines
  - Minimizes time for fault recovery
  - Shuffling can be pipelined with map execution
  - Better dynamic load balancing
- Often 200,000 map tasks and 5,000 reduce tasks on 2,000 machines

Fault tolerance: what are the potential failure cases?
- Lost packets
- Temporary network disconnection
- Servers crashing and rebooting
- Servers failing permanently (disk wiped)

Fault tolerance via re-execution
- On master failure: (Lab 3 does not require handling master failure)
- On worker failure: re-execute its in-progress map tasks and in-progress reduce tasks
- Task completion is committed through the master
Is it possible for a task to be executed twice?

How to handle stragglers
- What is the ideal speedup on N machines? Why is there no ideal speedup in practice?
- Straggler: slow workers drastically increase completion time
  - Other jobs consuming resources on the machine
  - Bad disks with soft errors transfer data very slowly
  - Weird things: processor caches disabled (!!)
  - An unusually large reduce partition
- Solution: near the end of a phase, spawn backup copies of tasks; whichever copy finishes first "wins" (see the sketch below)
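A tiny in-process sketch of the backup-task idea using Scala futures (runWithBackup is an illustrative name; the real system makes this decision in the master's scheduler across machines, and relies on tasks being deterministic so either copy's result is acceptable):

    import scala.concurrent.{Await, Future, Promise}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration.Duration

    // Launch a backup copy of a deterministic task and take whichever
    // copy completes first; the loser's result is simply ignored.
    def runWithBackup[T](task: () => T): T = {
      val winner = Promise[T]()
      Future(task()).onComplete(winner.tryComplete) // primary copy
      Future(task()).onComplete(winner.tryComplete) // backup copy
      Await.result(winner.future, Duration.Inf)     // first finisher "wins"
    }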

MapReduce sort performance
- 1 TB of data (100-byte records) to be sorted
- 1,700 machines
- M = 15,000, R = 4,000

MapReduce sort performance
- When can shuffle start?
- When can reduce start?

[Timeline of big data computation systems, 2005-2015: MapReduce, Hadoop, and Dryad; higher-level systems such as Hive, DryadLINQ, and Dremel; Pregel, GraphLab, and GraphX for graphs; Storm, Spark Streaming, and Naiad for streams; Spark and Shark.]

Spark's motivation
- More complex analytics: multi-stage processing, iterative machine learning, iterative graph processing
- Better performance: many applications' datasets can fit in the aggregate memory of many machines

What MapReduce lacks: an efficient data-sharing primitive for multi-stage processing. The output of the previous stage is stored on GFS, and the input of the current stage is read back from GFS.

Multi-stage MapReduce job [figure: each stage writes its output to the distributed file system, and the next stage reads it back]

Spark's goal: let multi-stage jobs share data through memory instead of stable storage.

Spark's solution: a restricted form of distributed shared memory
- Immutable, partitioned collections of records
- Can only be built through coarse-grained deterministic transformations (map, filter, join, ...)
Efficient fault recovery using lineage:
- Log one operation that applies to many elements
- Recompute lost partitions on failure

RDD recovery: re-run the logged coarse-grained transformations (the lineage) to recompute lost partitions.

Spark API: a DryadLINQ-like API in the Scala language.

Example: log mining
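A sketch of the classic log-mining example as it appears in the Spark paper and talks (the HDFS path is a placeholder; local master is set only so the sketch runs standalone):

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(
      new SparkConf().setAppName("LogMining").setMaster("local[*]"))

    val lines  = sc.textFile("hdfs://...")            // base RDD
    val errors = lines.filter(_.startsWith("ERROR"))  // transformed RDD (lazy)
    errors.cache()                                    // keep partitions in RAM

    // The first action materializes errors; later queries hit the in-memory copy.
    errors.filter(_.contains("MySQL")).count()
    errors.filter(_.contains("timeout")).count()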

Fault recovery: if a cached RDD partition is lost, Spark rebuilds just that partition by re-running its lineage, e.g. re-reading the corresponding block of the log and re-applying the filter.

Another example: PageRank
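A sketch in the style of the official Spark PageRank example (reusing sc from the log-mining sketch; the input path and the choice of 10 iterations are placeholders, and the input is assumed to hold one "src dst" edge per line):

    // links: (url, outlinks) pairs, cached because every iteration reuses them.
    val links = sc.textFile("hdfs://...")
      .map { line => val Array(src, dst) = line.split("\\s+"); (src, dst) }
      .groupByKey()
      .cache()

    var ranks = links.mapValues(_ => 1.0)
    for (_ <- 1 to 10) {
      // Each page divides its current rank among its outlinks...
      val contribs = links.join(ranks).flatMap {
        case (_, (outlinks, rank)) => outlinks.map(dst => (dst, rank / outlinks.size))
      }
      // ...and each page's new rank is a damped sum of incoming contributions.
      ranks = contribs.reduceByKey(_ + _).mapValues(0.15 + 0.85 * _)
    }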

Optimizing placement: links and ranks are joined on every iteration, so co-partitioning them with the same partitioner avoids re-shuffling links each time.

PageRank Optimization

Summary
- MapReduce: the Map + Reduce interface lets programmers write applications that are automatically parallelized and distributed; re-execution handles failures and stragglers
- Spark: enables multi-stage MapReduce-style jobs to pass data via memory; RDDs handle fault tolerance at a coarse granularity by tracking lineage