Data-Intensive Distributed Computing


Data-Intensive Distributed Computing
CS 451/651 431/631 (Winter 2018)
Part 1: MapReduce Algorithm Design (4/4)
January 16, 2018
Jimmy Lin
David R. Cheriton School of Computer Science, University of Waterloo
These slides are available at http://lintool.github.io/bigdata-2018w/
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. See http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details.



Perfect X: What's the point? More details: Lee et al. The Unified Logging Infrastructure for Data Analytics at Twitter. PVLDB, 5(12):1771-1780, 2012.

MapReduce Algorithm Design
How do you express everything in terms of m, r, c, p (map, reduce, combine, partition)?
Toward design patterns

MapReduce (figure source: Google)

MapReduce
The programmer specifies four functions:
map (k1, v1) → List[(k2, v2)]
reduce (k2, List[v2]) → List[(k3, v3)]
All values with the same key are sent to the same reducer.
partition (k', p) → 0 ... p-1
Often a simple hash of the key, e.g., hash(k') mod n; divides up the key space for parallel reduce operations.
combine (k2, List[v2]) → List[(k2, v2)]
Mini-reducers that run in memory after the map phase; used as an optimization to reduce network traffic.
The execution framework handles everything else.
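
As a rough sketch (not part of the slides), these four signatures can be written as Scala type aliases; the names below are illustrative and not the Hadoop API.

object MapReduceSignatures {
  // The four user-specified functions, expressed as plain function types
  type MapFn[K1, V1, K2, V2]    = (K1, V1) => List[(K2, V2)]
  type ReduceFn[K2, V2, K3, V3] = (K2, Iterable[V2]) => List[(K3, V3)]
  // (intermediate key, number of partitions) => partition index in [0, p)
  type PartitionFn[K2]          = (K2, Int) => Int
  type CombineFn[K2, V2]        = (K2, Iterable[V2]) => List[(K2, V2)]
}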

(Figure: logical view of MapReduce. Mappers emit intermediate key-value pairs; combiners aggregate values locally; the partitioner assigns each key to a reducer; the framework groups values by key; reducers produce the final output. Important detail: reducers process keys in sorted order.)

Everything Else
Handles scheduling: assigns workers to map and reduce tasks.
Handles data distribution: moves processes to data.
Handles synchronization: gathers, sorts, and shuffles intermediate data.
Handles errors and faults: detects worker failures and restarts.

But...
You have limited control over data and execution flow! All algorithms must be expressed in m, r, c, p.
You don't know:
Where mappers and reducers run
When a mapper or reducer begins or finishes
Which input a particular mapper is processing
Which intermediate key a particular reducer is processing

Tools for Synchronization
Preserving state in mappers and reducers: capture dependencies across multiple keys and values.
Cleverly-constructed data structures: bring partial results together.
Define a custom sort order of intermediate keys: control the order in which reducers process keys.

Two Practical Tips
Avoid object creation: it is a (relatively) costly operation and creates garbage-collection pressure.
Avoid buffering: heap size is limited; buffering works for small datasets, but won't scale!

Importance of Local Aggregation
Ideal scaling characteristics:
Twice the data, twice the running time
Twice the resources, half the running time
Why can't we achieve this? Synchronization requires communication, and communication kills performance.
Thus: avoid communication! Reduce intermediate data via local aggregation. Combiners can help.

Distributed Group By in MapReduce
(Figure: on the map side, emitted pairs are collected in an in-memory circular buffer, spilled to disk, optionally passed through the combiner, and merged into intermediate files on disk; reducers then fetch their partitions from all mappers, merge them, possibly combining again, before reduce is invoked.)

Word Count: Baseline

class Mapper {
  def map(key: Long, value: String) = {
    for (word <- tokenize(value)) {
      emit(word, 1)
    }
  }
}

class Reducer {
  def reduce(key: String, values: Iterable[Int]) = {
    var sum = 0
    for (value <- values) {
      sum += value
    }
    emit(key, sum)
  }
}

What's the impact of combiners?

Word Count: Mapper Histogram

class Mapper {
  def map(key: Long, value: String) = {
    val counts = new Map()
    for (word <- tokenize(value)) {
      counts(word) += 1
    }
    for ((k, v) <- counts) {
      emit(k, v)
    }
  }
}

Are combiners still needed?

Performance (word count on a 10% sample of Wikipedia, on Altiscale):

               Baseline    Histogram
Running Time   ~140s       ~140s
# Pairs        246m        203m

Can we do even better?

(Figure, repeated: logical view of MapReduce, as above. Important detail: reducers process keys in sorted order.)

MapReduce API*

Mapper<Kin, Vin, Kout, Vout>
void setup(Mapper.Context context): called once at the start of the task
void map(Kin key, Vin value, Mapper.Context context): called once for each key/value pair in the input split
void cleanup(Mapper.Context context): called once at the end of the task

Reducer<Kin, Vin, Kout, Vout> / Combiner<Kin, Vin, Kout, Vout>
void setup(Reducer.Context context): called once at the start of the task
void reduce(Kin key, Iterable<Vin> values, Reducer.Context context): called once for each key
void cleanup(Reducer.Context context): called once at the end of the task

*Note that there are two versions of the API!
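
As a concrete illustration of these hooks (a sketch, not from the slides), a word-count mapper written in Scala against the org.apache.hadoop.mapreduce API might look like the following; the class and field names are illustrative.

import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
import org.apache.hadoop.mapreduce.Mapper

class WordCountMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
  // Reuse Writable objects across calls to avoid per-record object creation
  private val one = new IntWritable(1)
  private val word = new Text()

  override def map(key: LongWritable, value: Text,
                   context: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit = {
    // Called once per input key-value pair in the split
    for (token <- value.toString.split("\\s+") if token.nonEmpty) {
      word.set(token)
      context.write(word, one)
    }
  }
}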

Preserving State
(Figure: one object per task. A Mapper object holds state across calls: setup is the API initialization hook, map is called once per input key-value pair, and cleanup is the API cleanup hook. A Reducer object likewise holds state: setup, then one reduce call per intermediate key, then close.)

Pseudo-Code

class Mapper {
  def setup() = { ... }
  def map(key: Long, value: String) = { ... }
  def cleanup() = { ... }
}

Word Count: Preserving State

class Mapper {
  val counts = new Map()
  def map(key: Long, value: String) = {
    for (word <- tokenize(value)) {
      counts(word) += 1
    }
  }
  def cleanup() = {
    for ((k, v) <- counts) {
      emit(k, v)
    }
  }
}

Are combiners still needed?

Design Pattern for Local Aggregation
In-mapper combining: fold the functionality of the combiner into the mapper by preserving state across multiple map calls.
Advantages: speed. Why is this faster than actual combiners?
Disadvantages: explicit memory management required; potential for order-dependent bugs.
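
One common way to deal with the memory-management disadvantage (a sketch, not from the slides) is to flush the in-mapper map whenever it grows past a threshold; the class name, the emit signature, and the threshold value below are all illustrative.

class BoundedInMapperCombiningMapper {
  // In-mapper combining with a cap on the number of buffered entries
  private val counts = scala.collection.mutable.Map[String, Int]().withDefaultValue(0)
  private val flushThreshold = 100000  // illustrative; tune to the available heap

  def map(key: Long, value: String, emit: (String, Int) => Unit): Unit = {
    for (word <- value.split("\\s+") if word.nonEmpty) {
      counts(word) += 1
    }
    // Spill partial counts before memory runs out
    if (counts.size >= flushThreshold) flush(emit)
  }

  def cleanup(emit: (String, Int) => Unit): Unit = flush(emit)

  private def flush(emit: (String, Int) => Unit): Unit = {
    for ((k, v) <- counts) emit(k, v)
    counts.clear()
  }
}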

Performance (word count on a 10% sample of Wikipedia, on Altiscale):

               Baseline    Histogram    IMC
Running Time   ~140s       ~140s        ~80s
# Pairs        246m        203m         5.5m

Combiner Design
Combiners and reducers share the same method signature: sometimes reducers can serve as combiners, but often they cannot.
Remember: combiners are optional optimizations. They should not affect algorithm correctness, and they may be run 0, 1, or multiple times.
Example: find the average of the integers associated with the same key.

Computing the Mean: Version 1

class Mapper {
  def map(key: String, value: Int) = {
    emit(key, value)
  }
}

class Reducer {
  def reduce(key: String, values: Iterable[Int]) = {
    var sum = 0
    var cnt = 0
    for (value <- values) {
      sum += value
      cnt += 1
    }
    emit(key, sum/cnt)
  }
}

Why can't we use the reducer as the combiner?

Computing the Mean: Version 2

class Mapper {
  def map(key: String, value: Int) = context.write(key, value)
}

class Combiner {
  def reduce(key: String, values: Iterable[Int]) = {
    var sum = 0
    var cnt = 0
    for (value <- values) {
      sum += value
      cnt += 1
    }
    emit(key, (sum, cnt))
  }
}

class Reducer {
  def reduce(key: String, values: Iterable[Pair]) = {
    var sum = 0
    var cnt = 0
    for ((s, c) <- values) {
      sum += s
      cnt += c
    }
    emit(key, sum/cnt)
  }
}

Why doesn't this work?

Computing the Mean: Version 3

class Mapper {
  def map(key: String, value: Int) = context.write(key, (value, 1))
}

class Combiner {
  def reduce(key: String, values: Iterable[Pair]) = {
    var sum = 0
    var cnt = 0
    for ((s, c) <- values) {
      sum += s
      cnt += c
    }
    emit(key, (sum, cnt))
  }
}

class Reducer {
  def reduce(key: String, values: Iterable[Pair]) = {
    var sum = 0
    var cnt = 0
    for ((s, c) <- values) {
      sum += s
      cnt += c
    }
    emit(key, sum/cnt)
  }
}

Fixed?

Computing the Mean: Version 4

class Mapper {
  val sums = new Map()
  val counts = new Map()
  def map(key: String, value: Int) = {
    sums(key) += value
    counts(key) += 1
  }
  def cleanup() = {
    for (key <- counts.keys) {
      context.write(key, (sums(key), counts(key)))
    }
  }
}

Are combiners still needed?

Performance (200m integers across three char keys):

      Java      Scala
V1    ~120s     ~120s
V3    ~90s      ~120s
V4    ~60s      ~90s (default HashMap), ~70s (optimized HashMap)

MapReduce API*

Mapper<Kin, Vin, Kout, Vout>
void setup(Mapper.Context context): called once at the start of the task
void map(Kin key, Vin value, Mapper.Context context): called once for each key/value pair in the input split
void cleanup(Mapper.Context context): called once at the end of the task

Reducer<Kin, Vin, Kout, Vout> / Combiner<Kin, Vin, Kout, Vout>
void setup(Reducer.Context context): called once at the start of the task
void reduce(Kin key, Iterable<Vin> values, Reducer.Context context): called once for each key
void cleanup(Reducer.Context context): called once at the end of the task

*Note that there are two versions of the API!

Algorithm Design: Running Example
Term co-occurrence matrix for a text collection:
M = N x N matrix (N = vocabulary size)
M_ij = number of times terms i and j co-occur in some context (for concreteness, let's say context = sentence)
Why? Distributional profiles are a way of measuring semantic distance, semantic distance is useful for many language processing tasks, and there are applications in lots of other domains.

MapReduce: Large Counting Problems
The term co-occurrence matrix for a text collection is a specific instance of a large counting problem:
A large event space (number of terms)
A large number of observations (the collection itself)
Goal: keep track of interesting statistics about the events.
Basic approach: mappers generate partial counts, reducers aggregate partial counts.
How do we aggregate partial counts efficiently?

First Try: Pairs
Each mapper takes a sentence:
Generate all co-occurring term pairs
For all pairs, emit (a, b) → count
Reducers sum up counts associated with these pairs.
Use combiners!

Pairs: Pseudo-Code

class Mapper {
  def map(key: Long, value: String) = {
    for (u <- tokenize(value)) {
      for (v <- neighbors(u)) {
        emit((u, v), 1)
      }
    }
  }
}

class Reducer {
  def reduce(key: Pair, values: Iterable[Int]) = {
    var sum = 0
    for (value <- values) {
      sum += value
    }
    emit(key, sum)
  }
}
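
The pseudo-code assumes tokenize and neighbors helpers. A minimal sketch under the slide's assumption that context = sentence (the names and the whitespace tokenizer are illustrative; unlike the pseudo-code's neighbors(u) shorthand, the token list is passed explicitly here):

object CooccurrenceHelpers {
  // Whitespace tokenization with lowercasing; a real tokenizer would also strip punctuation
  def tokenize(sentence: String): Seq[String] =
    sentence.toLowerCase.split("\\s+").filter(_.nonEmpty).toSeq

  // With context = sentence, every other token in the sentence is a neighbor of u
  def neighbors(u: String, tokens: Seq[String]): Seq[String] =
    tokens.filterNot(_ == u)
}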

Pairs: Pseudo-Code
One more thing...

class Partitioner {
  def getPartition(key: Pair, value: Int, numTasks: Int): Int = {
    return key.left % numTasks
  }
}

Pairs Analysis
Advantages: easy to implement, easy to understand.
Disadvantages: lots of pairs to sort and shuffle around (upper bound?); not many opportunities for combiners to work.

Another Try: Stripes
Idea: group together pairs into an associative array:
(a, b) → 1, (a, c) → 2, (a, d) → 5, (a, e) → 3, (a, f) → 2  becomes  a → { b: 1, c: 2, d: 5, e: 3, f: 2 }
Each mapper takes a sentence:
Generate all co-occurring term pairs
For each term, emit a → { b: count_b, c: count_c, d: count_d, ... }
Reducers perform an element-wise sum of associative arrays:
  a → { b: 1,       d: 5, e: 3 }
+ a → { b: 1, c: 2, d: 2,       f: 2 }
= a → { b: 2, c: 2, d: 7, e: 3, f: 2 }

Stripes: Pseudo-Code

class Mapper {
  def map(key: Long, value: String) = {
    for (u <- tokenize(value)) {
      val map = new Map()
      for (v <- neighbors(u)) {
        map(v) += 1
      }
      emit(u, map)     // e.g., a → { b: 1, c: 2, d: 5, e: 3, f: 2 }
    }
  }
}

class Reducer {
  def reduce(key: String, values: Iterable[Map]) = {
    val map = new Map()
    for (value <- values) {
      map += value     // element-wise sum of associative arrays
    }
    emit(key, map)
  }
}
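
The reducer's map += value denotes the element-wise sum illustrated above; a minimal sketch of that merge in Scala (the helper name is illustrative):

def elementWiseSum(acc: scala.collection.mutable.Map[String, Int],
                   stripe: collection.Map[String, Int]): Unit = {
  // Add each count from the incoming stripe into the accumulator; missing keys start from zero
  for ((term, count) <- stripe) {
    acc(term) = acc.getOrElse(term, 0) + count
  }
}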

Stripes Analysis
Advantages: far less sorting and shuffling of key-value pairs; can make better use of combiners.
Disadvantages: more difficult to implement; the underlying object is more heavyweight; overhead associated with data structure manipulations; fundamental limitation in terms of the size of the event space.

(Figure: running-time comparison of pairs vs. stripes. Cluster size: 38 cores. Data source: Associated Press Worldstream (APW) portion of the English Gigaword Corpus (v3), which contains 2.27 million documents; 1.8 GB compressed, 5.7 GB uncompressed.)

Stripes >> Pairs?
Important tradeoffs:
Developer code vs. framework
CPU vs. RAM vs. disk vs. network
Number of key-value pairs: sorting and shuffling data across the network
Size and complexity of each key-value pair: de/serialization overhead
Cache locality and the cost of manipulating data structures
Additional issues:
Opportunities for local aggregation (combining)
Load imbalance

Tradeoffs
Pairs: generates a lot more key-value pairs; fewer combining opportunities; more sorting and shuffling; simple aggregation at reduce.
Stripes: generates fewer key-value pairs; more opportunities for combining; less sorting and shuffling; more complex (slower) aggregation at reduce.

Relative Frequencies
How do we estimate relative frequencies from counts?

f(B | A) = N(A, B) / N(A) = N(A, B) / Σ_{B'} N(A, B')

Why do we want to do this? How do we do this with MapReduce?
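
For example, with illustrative counts N(A, B1) = 3 and a marginal of N(A) = Σ_{B'} N(A, B') = 32, the relative frequency is f(B1 | A) = 3 / 32 ≈ 0.09; these are the same numbers that appear in the pairs illustration below.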

f(B | A): Stripes
a → { b1: 3, b2: 12, b3: 7, b4: 1, ... }
Easy! One pass to compute (a, *), another pass to directly compute f(B | A).
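
One way to realize the final step (a sketch, not necessarily the slide's exact two-pass arrangement; names are illustrative): once the element-wise sum has produced the complete stripe for a, the marginal is just the sum of its counts, and each entry can be divided by it.

class StripesRelativeFrequencyReducer {
  def reduce(a: String, stripe: Map[String, Int],
             emit: ((String, String), Double) => Unit): Unit = {
    val marginal = stripe.values.sum.toDouble   // N(a) = sum over all co-occurring b
    for ((b, count) <- stripe) {
      emit((a, b), count / marginal)            // f(b | a) = N(a, b) / N(a)
    }
  }
}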

f(B | A): Pairs
What's the issue? Computing relative frequencies requires marginal counts, but the marginal cannot be computed until you see all counts, and buffering is a bad idea!
Solution: what if we could get the marginal count to arrive at the reducer first?

f(B | A): Pairs
Input to the reducer, in sorted order:
(a, *) → 32        the reducer holds this marginal in memory
(a, b1) → 3        emit (a, b1) → 3 / 32
(a, b2) → 12       emit (a, b2) → 12 / 32
(a, b3) → 7        emit (a, b3) → 7 / 32
(a, b4) → 1        emit (a, b4) → 1 / 32
For this to work:
Emit an extra (a, *) for every bn in the mapper
Make sure all pairs with the same a get sent to the same reducer (use the partitioner)
Make sure (a, *) comes first (define the sort order)
Hold state in the reducer across different key-value pairs
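
A minimal sketch of the reducer side under these assumptions (the special key is written "*" and is assumed to sort before any real b; the class name and emit signature are illustrative):

class PairsRelativeFrequencyReducer {
  // Relies on (a, *) sorting before every (a, b) and on partitioning by the left element only
  private var marginal = 0.0

  def reduce(key: (String, String), values: Iterable[Int],
             emit: ((String, String), Double) => Unit): Unit = {
    val sum = values.sum
    if (key._2 == "*") {
      marginal = sum.toDouble     // remember N(a) for the (a, b) pairs that follow
    } else {
      emit(key, sum / marginal)   // f(b | a) = N(a, b) / N(a)
    }
  }
}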

Order Inversion
Common design pattern: take advantage of the sorted key order at the reducer to sequence computations; get the marginal counts to arrive at the reducer before the joint counts.
Additional optimization: apply the in-memory combining pattern to accumulate marginal counts.

Synchronization: Pairs vs. Stripes
Approach 1: turn synchronization into an ordering problem.
Sort keys into the correct order of computation
Partition the key space so that each reducer receives the appropriate set of partial results
Hold state in the reducer across multiple key-value pairs to perform the computation
Illustrated by the pairs approach
Approach 2: construct data structures that bring partial results together.
Each reducer receives all the data it needs to complete the computation
Illustrated by the stripes approach

Secondary Sorting
MapReduce sorts input to reducers by key; values may be arbitrarily ordered.
What if we want to sort the values as well?
E.g., k → (v1, r), (v3, r), (v4, r), (v8, r), ...

Secondary Sorting: Solutions
Solution 1: buffer values in memory, then sort. Why is this a bad idea?
Solution 2: "value-to-key conversion". Form a composite intermediate key (k, v1), let the execution framework do the sorting, and preserve state across multiple key-value pairs to handle processing. Anything else we need to do?
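
A minimal sketch of the value-to-key conversion machinery (all names illustrative): fold the value into a composite key, compare on the natural key first, and partition on the natural key only; in Hadoop one would additionally group on the natural key so that all composites for k arrive in a single reduce call.

case class CompositeKey(natural: String, secondary: Long)

// Sort by the natural key first, then by the value component folded into the key
object CompositeKeyOrdering extends Ordering[CompositeKey] {
  def compare(x: CompositeKey, y: CompositeKey): Int = {
    val c = x.natural.compareTo(y.natural)
    if (c != 0) c else java.lang.Long.compare(x.secondary, y.secondary)
  }
}

// Partition on the natural key only, so all composites for the same k reach the same reducer
class NaturalKeyPartitioner {
  def getPartition(key: CompositeKey, numTasks: Int): Int =
    (key.natural.hashCode & Int.MaxValue) % numTasks
}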

Recap: Tools for Synchronization
Preserving state in mappers and reducers: capture dependencies across multiple keys and values.
Cleverly-constructed data structures: bring partial results together.
Define a custom sort order of intermediate keys: control the order in which reducers process keys.

Issues and Tradeoffs
Important tradeoffs:
Developer code vs. framework
CPU vs. RAM vs. disk vs. network
Number of key-value pairs: sorting and shuffling data across the network
Size and complexity of each key-value pair: de/serialization overhead
Cache locality and the cost of manipulating data structures
Additional issues:
Opportunities for local aggregation (combining)
Load imbalance

Debugging at Scale
Works on small datasets, won't scale... why?
Memory management issues (buffering and object creation)
Too much intermediate data
Mangled input records
Real-world data is messy!
There's no such thing as consistent data
Watch out for corner cases
Isolate unexpected behavior, bring it local

Questions?