Data-Intensive Text Processing with MapReduce
Jimmy Lin, The iSchool, University of Maryland
JHU Summer School on Human Language Technology, Wednesday, June 17, 2009
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. See http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details.

No data like more data! (s/knowledge/data/g) How do we get here if we're not Google? (Banko and Brill, ACL 2001; Brants et al., EMNLP 2007)

cheap commodity clusters (or utility computing) + simple, distributed programming models = data-intensive computing for the masses!

Why is this different?

Divide and Conquer: partition the work into pieces (w1, w2, w3), hand each piece to a worker, then combine the partial results (r1, r2, r3) into the final result.

It's a bit more complex. Fundamental issues: scheduling, data distribution, synchronization, inter-process communication, robustness, fault tolerance. Different programming models: message passing, shared memory. Architectural issues: Flynn's taxonomy (SIMD, MIMD, etc.), network topology, bisection bandwidth, UMA vs. NUMA, cache coherence. Common problems: livelock, deadlock, data starvation, priority inversion; dining philosophers, sleeping barbers, cigarette smokers. Different programming constructs: mutexes, condition variables, barriers, masters/slaves, producers/consumers, work queues. The reality: the programmer shoulders the burden of managing concurrency.

Source: Ricardo Guimarães Herrmann

Source: MIT Open Courseware

Source: Harper s (Feb, 2008)

Typical Problem: iterate over a large number of records, extract something of interest from each, shuffle and sort intermediate results, aggregate intermediate results, generate final output. Key idea: provide a functional abstraction for two of these operations, extracting something from each record (map) and aggregating intermediate results (reduce). (Dean and Ghemawat, OSDI 2004)

Functional programming roots: Map applies a function f to every input element independently; Fold (Reduce) aggregates the resulting values with a function g.

MapReduce Programmers specify two functions: map (k, v) → <k', v'>* and reduce (k', v') → <k', v'>*. All values with the same key are reduced together. Usually, programmers also specify: partition (k', number of partitions) → partition for k', often a simple hash of the key, e.g. hash(k') mod n, which allows reduce operations for different keys to run in parallel; and combine (k', v') → <k', v'>*, mini-reducers that run in memory after the map phase, used as an optimization to reduce network traffic. Implementations: Google has a proprietary implementation in C++; Hadoop is an open-source implementation in Java.
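As an illustration of the partitioner described above, here is a minimal Python sketch (not Hadoop's actual Partitioner API; the function name and the choice of crc32 are my own). It realizes hash(k') mod n with a deterministic hash so that every process sends the same key to the same reducer.

```python
import zlib

def default_partition(key: str, num_partitions: int) -> int:
    # A simple hash of the key, i.e. hash(k') mod n; crc32 is deterministic
    # across processes, so all values with the same key reach the same reducer.
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# All occurrences of a key land in the same partition:
print(default_partition("the", 4), default_partition("the", 4))
```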

Dataflow: mappers consume input pairs (k1, v1) … (k6, v6) and emit intermediate pairs (e.g. a→1, b→2, c→3, c→6, a→5, c→2, b→7, c→9). Shuffle and sort aggregates values by key (a→[1, 5], b→[2, 7], c→[2, 3, 6, 9]), and the reducers produce the final outputs (r1, s1), (r2, s2), (r3, s3).

MapReduce Runtime Handles scheduling: assigns workers to map and reduce tasks. Handles data distribution: moves the process to the data. Handles synchronization: gathers, sorts, and shuffles intermediate data. Handles faults: detects worker failures and restarts failed tasks. Everything happens on top of a distributed FS (later).

"Hello World": Word Count

Map(String input_key, String input_value):
  // input_key: document name
  // input_value: document contents
  for each word w in input_value:
    EmitIntermediate(w, "1");

Reduce(String key, Iterator intermediate_values):
  // key: a word, same for input and output
  // intermediate_values: a list of counts
  int result = 0;
  for each v in intermediate_values:
    result += ParseInt(v);
  Emit(AsString(result));
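The pseudocode above can be exercised with a small single-process simulation; the following Python sketch (illustrative only, not the Hadoop API) runs the map function over each document, groups intermediate values by key as a stand-in for shuffle and sort, and applies the reduce function per key.

```python
from collections import defaultdict

def map_fn(doc_name, doc_contents):
    # EmitIntermediate(w, 1) for each word w in the document
    for word in doc_contents.split():
        yield (word, 1)

def reduce_fn(word, counts):
    # Sum the partial counts for a single word
    return (word, sum(counts))

def run_job(documents):
    # Group intermediate values by key: a stand-in for shuffle and sort
    groups = defaultdict(list)
    for name, contents in documents.items():
        for key, value in map_fn(name, contents):
            groups[key].append(value)
    return dict(reduce_fn(k, vs) for k, vs in sorted(groups.items()))

docs = {"d1": "to be or not to be", "d2": "to do is to be"}
print(run_job(docs))  # {'be': 3, 'do': 1, 'is': 1, 'not': 1, 'or': 1, 'to': 4}
```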

Execution overview (redrawn from Dean and Ghemawat, OSDI 2004): (1) the user program forks the master and the workers; (2) the master assigns map and reduce tasks to idle workers; (3) map workers read their input splits (split 0 … split 4); (4) map output is written to local disk; (5) reduce workers remotely read the intermediate files; (6) reduce workers write the output files (output file 0, output file 1). The pipeline runs from input files through the map phase, intermediate files on local disk, and the reduce phase to the output files.

How do we get data to the workers? In a traditional cluster, compute nodes fetch data from shared storage (NAS/SAN). What's the problem here?

Distributed File System Don't move data to workers; move workers to the data! Store data on the local disks of nodes in the cluster and start up the workers on the node that has the data locally. Why? Not enough RAM to hold all the data in memory; disk access is slow, but disk throughput is good. A distributed file system is the answer: GFS (Google File System), and HDFS for Hadoop (= GFS clone).

GFS: Assumptions Commodity hardware over exotic hardware High component failure rates Inexpensive commodity components fail all the time Modest number of HUGE files Files are write-once, mostly appended to Perhaps concurrently Large streaming reads over random access High sustained throughput over low latency GFS slides adapted from material by (Ghemawat et al., SOSP 2003)

GFS: Design Decisions Files stored as chunks Fixed size (64MB) Reliability through replication Each chunk replicated across 3+ chunkservers Single master to coordinate access, keep metadata Simple centralized management No data caching Little benefit due to large data sets, streaming reads Simplify the API Push some of the issues onto the client

GFS architecture (redrawn from Ghemawat et al., SOSP 2003): the application issues requests through a GFS client. The client sends (file name, chunk index) to the GFS master and receives (chunk handle, chunk location); the master maintains the file namespace (e.g., /foo/bar → chunk 2ef0) and exchanges instructions and state with the chunkservers. The client then sends (chunk handle, byte range) directly to a GFS chunkserver, which returns the chunk data from its local Linux file system.

Master's Responsibilities: metadata storage; namespace management/locking; periodic communication with chunkservers; chunk creation, re-replication, and rebalancing; garbage collection.

Questions?

Case study: graph algorithms

Graph Algorithms: Topics Introduction to graph algorithms and graph representations Single Source Shortest Path (SSSP) problem PageRank

What's a graph? G = (V, E), where V represents the set of vertices (nodes) and E represents the set of edges (links). Both vertices and edges may contain additional information. Different types of graphs: directed vs. undirected edges, presence or absence of cycles, ...

Some Graph Problems Finding shortest paths Routing Internet traffic and UPS trucks Finding minimum spanning trees Telco laying down fiber Finding Max Flow Airline scheduling Identify special nodes and communities Breaking up terrorist cells, spread of avian flu Bipartite matching Monster.com, Match.com And of course... PageRank

Representing Graphs G = (V, E) Two common representations: adjacency matrix and adjacency list.

Adjacency Matrices: represent a graph as an n x n square matrix M, where n = |V| and M_ij = 1 means there is a link from node i to node j. Example:

   1 2 3 4
1  0 1 0 1
2  1 0 1 1
3  1 0 0 0
4  1 0 1 0

Adjacency Lists: take the adjacency matrix and throw away all the zeros. The matrix above becomes:

1: 2, 4
2: 1, 3, 4
3: 1
4: 1, 3
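To make the "throw away the zeros" step concrete, here is a tiny Python sketch (node numbering and variable names are illustrative) that converts the example adjacency matrix into the adjacency lists shown above.

```python
# Nodes are numbered from 1, as in the example above.
matrix = [
    [0, 1, 0, 1],  # node 1 links to 2 and 4
    [1, 0, 1, 1],  # node 2 links to 1, 3 and 4
    [1, 0, 0, 0],  # node 3 links to 1
    [1, 0, 1, 0],  # node 4 links to 1 and 3
]

adjacency_list = {
    i + 1: [j + 1 for j, m_ij in enumerate(row) if m_ij]
    for i, row in enumerate(matrix)
}
print(adjacency_list)  # {1: [2, 4], 2: [1, 3, 4], 3: [1], 4: [1, 3]}
```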

Single Source Shortest Path Problem: find the shortest path from a source node to one or more target nodes. Single-processor machine: Dijkstra's algorithm. MapReduce: parallel breadth-first search (BFS).

Finding the Shortest Path First, consider equal edge weights. The solution to the problem can be defined inductively. Here's the intuition: DistanceTo(startNode) = 0; for all nodes n directly reachable from startNode, DistanceTo(n) = 1; for all nodes n reachable from some other set of nodes S, DistanceTo(n) = 1 + min(DistanceTo(m), m in S).

From Intuition to Algorithm A map task receives Key: node n; Value: D (distance from start) and points-to (list of nodes reachable from n). For each p in points-to: emit (p, D+1). The reduce task gathers the possible distances to a given p and selects the minimum one.

Multiple Iterations Needed This MapReduce task advances the known frontier by one hop Subsequent iterations include more reachable nodes as frontier advances Multiple iterations are needed to explore entire graph Feed output back into the same MapReduce task Preserving graph structure: Problem: Where did the points-to list go? Solution: Mapper emits (n, points-to) as well
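The following Python sketch pulls the last three slides together for the small example graph from the adjacency-list slide: one map/reduce pass per iteration, with the mapper re-emitting both the node's adjacency list (so the graph structure survives) and its current distance (so the reducer's minimum never loses it). It is an in-memory illustration under those assumptions, not a Hadoop job.

```python
from collections import defaultdict

INF = float("inf")

def bfs_map(node, value):
    distance, points_to = value
    yield (node, ("GRAPH", points_to))   # preserve the graph structure
    yield (node, ("DIST", distance))     # keep the node's current distance
    if distance < INF:
        for p in points_to:
            yield (p, ("DIST", distance + 1))

def bfs_reduce(node, values):
    points_to, best = [], INF
    for tag, payload in values:
        if tag == "GRAPH":
            points_to = payload
        else:
            best = min(best, payload)    # select the minimum distance
    return node, (best, points_to)

def bfs_iteration(graph):
    groups = defaultdict(list)           # stand-in for shuffle and sort
    for node, value in graph.items():
        for k, v in bfs_map(node, value):
            groups[k].append(v)
    return dict(bfs_reduce(k, vs) for k, vs in sorted(groups.items()))

# Source = node 1; distance INF means "not yet reached".
graph = {1: (0, [2, 4]), 2: (INF, [1, 3, 4]), 3: (INF, [1]), 4: (INF, [1, 3])}
prev = None
while graph != prev:                     # iterate until the frontier stops moving
    prev, graph = graph, bfs_iteration(graph)
print(graph)  # {1: (0, [2, 4]), 2: (1, [1, 3, 4]), 3: (2, [1]), 4: (1, [1, 3])}
```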

Visualizing Parallel BFS: starting from the source, each iteration extends the set of nodes with known distances by one hop (distance 1, then 2, then 3, and so on).

Weighted Edges Now add positive weights to the edges. Simple change: the points-to list in the map task includes a weight w for each pointed-to node; emit (p, D + w_p) instead of (p, D + 1) for each node p.

Comparison to Dijkstra Dijkstra's algorithm is more efficient: at any step it only pursues edges from the minimum-cost path inside the frontier. MapReduce explores all paths in parallel.

Random Walks Over the Web Model: a user starts at a random Web page and randomly clicks on links, surfing from page to page. PageRank = the fraction of time that will be spent on any given page.

PageRank: Defined Given page x with in-bound links t_1 ... t_n, where C(t) is the out-degree of t, α is the probability of a random jump, and N is the total number of nodes in the graph:

PR(x) = α (1/N) + (1 − α) Σ_{i=1..n} PR(t_i) / C(t_i)

Computing PageRank Properties of PageRank: it can be computed iteratively, and the effects at each iteration are local. Sketch of algorithm: start with seed PR_i values; each page distributes its PR_i credit to all pages it links to; each target page adds up the credit from its multiple in-bound links to compute PR_{i+1}; iterate until the values converge.

PageRank in MapReduce Map: distribute PageRank credit to link targets Reduce: gather up PageRank credit from multiple sources to compute new PageRank value Iterate until convergence...
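A single PageRank iteration maps cleanly onto the same skeleton. The sketch below is illustrative only: α = 0.15 and N = 4 are assumed values, a fixed iteration count stands in for a convergence test, and dangling nodes are ignored (one of the issues raised on the next slide). The map step distributes each page's rank over its out-links; the reduce step applies PR(x) = α/N + (1 − α)·(gathered credit).

```python
from collections import defaultdict

ALPHA = 0.15   # assumed random-jump probability
N = 4          # number of nodes in the toy graph below

def pr_map(node, value):
    rank, out_links = value
    yield (node, ("GRAPH", out_links))            # pass the structure along
    for target in out_links:                      # distribute PageRank credit
        yield (target, ("CREDIT", rank / len(out_links)))

def pr_reduce(node, values):
    out_links, credit = [], 0.0
    for tag, payload in values:
        if tag == "GRAPH":
            out_links = payload
        else:
            credit += payload                     # gather credit from in-links
    return node, (ALPHA / N + (1 - ALPHA) * credit, out_links)

def pr_iteration(graph):
    groups = defaultdict(list)
    for node, value in graph.items():
        for k, v in pr_map(node, value):
            groups[k].append(v)
    return dict(pr_reduce(k, vs) for k, vs in sorted(groups.items()))

# Seed every page with rank 1/N; a fixed iteration count stands in for a
# convergence test.
graph = {1: (0.25, [2, 4]), 2: (0.25, [1, 3, 4]), 3: (0.25, [1]), 4: (0.25, [1, 3])}
for _ in range(30):
    graph = pr_iteration(graph)
print({node: round(rank, 3) for node, (rank, _) in graph.items()})
```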

PageRank: Issues Is PageRank guaranteed to converge? How quickly? What is the correct value of α, and how sensitive is the algorithm to it? What about dangling links? How do you know when to stop?

Graph Algorithms in MapReduce General approach: store graphs as adjacency lists. Each map task receives a node and its outlinks (adjacency list), computes some function of the link structure, and emits a value with the target as the key. The reduce task collects keys (target nodes) and aggregates. Iterate over multiple MapReduce cycles until some termination condition is met, and remember to pass the graph structure from one iteration to the next.

Questions?

MapReduce Algorithm Design Adapted from work reported in (Lin, EMNLP 2008)

Managing Dependencies Remember: Mappers run in isolation You have no idea in what order the mappers run You have no idea on what node the mappers run You have no idea when each mapper finishes Tools for synchronization: Ability to hold state in reducer across multiple key-value pairs Sorting function for keys Partitioner Cleverly-constructed data structures

Motivating Example Term co-occurrence matrix for a text collection: M = N x N matrix (N = vocabulary size), where M_ij is the number of times i and j co-occur in some context (for concreteness, let's say context = sentence). Why? Distributional profiles as a way of measuring semantic distance, and semantic distance is useful for many language processing tasks.

MapReduce: Large Counting Problems Term co-occurrence matrix for a text collection = specific instance of a large counting problem A large event space (number of terms) A large number of observations (the collection itself) Goal: keep track of interesting statistics about the events Basic approach Mappers generate partial counts Reducers aggregate partial counts How do we aggregate partial counts efficiently?

First Try: Pairs Each mapper takes a sentence: generate all co-occurring term pairs, and for all pairs, emit (a, b) → count. Reducers sum up the counts associated with these pairs. Use combiners!
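A minimal in-memory sketch of the pairs approach (names and the tiny two-sentence corpus are illustrative; the grouping step stands in for MapReduce's shuffle):

```python
from collections import defaultdict
from itertools import permutations

def pairs_map(sentence):
    terms = sentence.split()
    for a, b in permutations(terms, 2):   # every ordered co-occurring pair
        yield ((a, b), 1)

def pairs_reduce(pair, counts):
    return pair, sum(counts)              # sum the partial counts

def run_pairs(sentences):
    groups = defaultdict(list)            # stand-in for shuffle and sort
    for s in sentences:
        for key, value in pairs_map(s):
            groups[key].append(value)
    return dict(pairs_reduce(k, vs) for k, vs in groups.items())

counts = run_pairs(["a b c", "a b d"])
print(counts[("a", "b")])                 # 2
```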

Pairs Analysis Advantages Easy to implement, easy to understand Disadvantages Lots of pairs to sort and shuffle around (upper bound?)

Another Try: Stripes Idea: group pairs together into an associative array: instead of (a, b) → 1, (a, c) → 2, (a, d) → 5, (a, e) → 3, (a, f) → 2, emit a → { b: 1, c: 2, d: 5, e: 3, f: 2 }. Each mapper takes a sentence: generate all co-occurring term pairs, then for each term a emit a → { b: count_b, c: count_c, d: count_d, ... }. Reducers perform an element-wise sum of the associative arrays, e.g. a → { b: 1, d: 5, e: 3 } + a → { b: 1, c: 2, d: 2, f: 2 } = a → { b: 2, c: 2, d: 7, e: 3, f: 2 }.
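The same counting job in the stripes representation, again as an illustrative in-memory sketch: each mapper emits one associative array (a Python Counter) per term, and the reducer merges stripes by element-wise sum.

```python
from collections import Counter, defaultdict

def stripes_map(sentence):
    terms = sentence.split()
    for i, a in enumerate(terms):
        # One associative array per term: its co-occurring terms and counts
        yield (a, Counter(t for j, t in enumerate(terms) if j != i))

def stripes_reduce(term, stripes):
    total = Counter()
    for stripe in stripes:
        total += stripe                   # element-wise sum of associative arrays
    return term, dict(total)

def run_stripes(sentences):
    groups = defaultdict(list)
    for s in sentences:
        for key, value in stripes_map(s):
            groups[key].append(value)
    return dict(stripes_reduce(k, vs) for k, vs in groups.items())

print(run_stripes(["a b c", "a b d"])["a"])   # {'b': 2, 'c': 1, 'd': 1}
```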

Stripes Analysis Advantages Far less sorting and shuffling of key-value pairs Can make better use of combiners Disadvantages More difficult to implement Underlying object is more heavyweight Fundamental limitation in terms of size of event space

Cluster size: 38 cores Data Source: Associated Press Worldstream (APW) of the English Gigaword Corpus (v3), which contains 2.27 million documents (1.8 GB compressed, 5.7 GB uncompressed)

Conditional Probabilities How do we estimate conditional probabilities from counts?

P(B|A) = count(A, B) / count(A) = count(A, B) / Σ_{B'} count(A, B')

Why do we want to do this? How do we do this with MapReduce?

P(B|A): Stripes a → { b_1: 3, b_2: 12, b_3: 7, b_4: 1, ... } Easy! One pass to compute (a, *), another pass to directly compute P(B|A).

P(B|A): Pairs The reducer sees (a, *) → 32 first and holds this value in memory; then (a, b_1) → 3 becomes 3/32, (a, b_2) → 12 becomes 12/32, (a, b_3) → 7 becomes 7/32, (a, b_4) → 1 becomes 1/32. For this to work: must emit an extra (a, *) for every b_n in the mapper; must make sure all pairs with the same a get sent to the same reducer (use a partitioner); must make sure (a, *) comes first (define the sort order); must hold state in the reducer across different key-value pairs.
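The sketch below condenses this pairs approach to P(B|A) into a single-process illustration (the counts and names are made up): because the special key (a, *) sorts before the ordinary (a, b) keys, the "reducer" sees the marginal first, holds it in memory, and divides each subsequent count by it. A real Hadoop job would additionally need a partitioner on the left element and a custom sort order, as the slide notes.

```python
# Made-up counts for one left element "a"; the marginal travels under the
# special key ("a", "*").
pair_counts = {("a", "*"): 8, ("a", "b1"): 2, ("a", "b2"): 6}

def relative_frequencies(pair_counts):
    result, current_a, marginal = {}, None, None
    # Sorting puts ("a", "*") before ("a", "b1") because '*' < 'b' in ASCII,
    # mirroring the custom sort order a real job would define.
    for (a, b), count in sorted(pair_counts.items()):
        if a != current_a:
            current_a, marginal = a, None      # new left element: reset state
        if b == "*":
            marginal = count                   # hold the marginal in memory
        else:
            result[(a, b)] = count / marginal  # turn counts into P(b | a)
    return result

print(relative_frequencies(pair_counts))       # {('a', 'b1'): 0.25, ('a', 'b2'): 0.75}
```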

Synchronization in Hadoop Approach 1: turn synchronization into an ordering problem Sort keys into correct order of computation Partition key space so that each reducer gets the appropriate set of partial results Hold state in reducer across multiple key-value pairs to perform computation Illustrated by the pairs approach Approach 2: construct data structures that bring the pieces together Each reducer receives all the data it needs to complete the computation Illustrated by the stripes approach

Issues and Tradeoffs Number of key-value pairs Object creation overhead Time for sorting and shuffling pairs across the network Size of each key-value pair De/serialization overhead Combiners make a big difference! RAM vs. disk and network Arrange data to maximize opportunities to aggregate partial results

Questions?

Case study: statistical machine translation

Statistical Machine Translation Conceptually simple (translation from a foreign sentence f into an English sentence e):

ê = argmax_e P(f | e) P(e)

Difficult in practice! Phrase-Based Machine Translation (PBMT): break up the source sentence into little pieces (phrases) and translate each phrase individually. Dyer et al. (Third ACL Workshop on MT, 2008)

Translation as a Tiling Problem: the source sentence "Maria no dio una bofetada a la bruja verde" can be covered by many overlapping candidate phrase translations (e.g., "Mary", "did not", "no", "give", "a slap", "slap", "to the", "the witch", "green witch"), and decoding selects a tiling of the sentence. Example from Koehn (2006).

MT Architecture: training data in the form of parallel sentences (e.g., "vi la mesa pequeña" / "i saw the small table") feeds word alignment and phrase extraction, which produce the translation model (e.g., (vi, i saw), (la mesa pequeña, the small table)); target-language text (e.g., "he sat at the table", "the service was good") produces the language model. The decoder combines both to translate a foreign input sentence ("maria no daba una bofetada a la bruja verde") into an English output sentence ("mary did not slap the green witch").

The Data Bottleneck

MT Architecture, revisited: there are MapReduce implementations of two of these components, word alignment and phrase extraction; the rest of the pipeline (language model, decoder) is unchanged.

HMM Alignment: Giza Single-core commodity server

HMM Alignment: MapReduce Single-core commodity server 38 processor cluster

HMM Alignment: MapReduce 38 processor cluster 1/38 Single-core commodity server

What's the point? The optimally-parallelized version doesn't exist! It's all about the right level of abstraction (the Goldilocks argument).

Questions?

What's next? Web-scale text processing: from luxury to necessity. Fortunately, the technology is becoming more accessible. MapReduce is a nice hammer: whack it on everything in sight! But MapReduce is only the beginning: alternative programming models, fundamental breakthroughs in algorithm design.

The stack: Applications (NLP, IR, ML, etc.) on top of Programming Models (MapReduce, ...) on top of Systems (architecture, network, etc.).

Questions? Comments? Thanks to the organizations that support our work.