Introduction to MapReduce

Introduction to MapReduce Based on slides by Jimmy Lin, Jeff Ullman, Jerome Simeon, and Jure Leskovec

How much data? Google processes 20 PB a day (2008). Internet Archive Wayback Machine has 3 PB + 100 TB/month (3/2009) -- 455 billion web pages saved over time. Facebook has 2.5 PB of user data + 15 TB/day (4/2009). eBay has 6.5 PB of user data + 50 TB/day (5/2009). CERN's LHC will generate 15 PB a year (??). NYC Open Data: 1,200+ data sets. NYC Taxi Data: over 1 billion trips.

What to do with more data? "Machine learning algorithms really don't matter, all that matters is the amount of data you have" (Banko and Brill, 2001). More data → higher accuracy. More data → the accuracy of different algorithms converges. Create data-driven models, e.g., taxi ride sharing [Ota et al., IEEE Big Data 2015]. Understand interactions between the components of a city through their data exhaust.

What to do with more data? Answering factoid questions: pattern matching on the Web works amazingly well, e.g., "Who shot Abraham Lincoln?" becomes the pattern "X shot Abraham Lincoln". Learning relations: start with seed instances, search for patterns on the Web, then use the patterns to find more instances. For example, "Wolfgang Amadeus Mozart (1756-1791)" and "Einstein was born in 1879" yield Birthday-of(Mozart, 1756) and Birthday-of(Einstein, 1879) via the patterns PERSON (DATE-DATE) and PERSON was born in DATE.

Motivation: Google Example. 20+ billion web pages x 20 KB = 400+ TB. One computer reads 30-35 MB/sec from disk, so it takes ~4 months to read the web and ~1,000 hard drives to store it. It takes even more to do something useful with the data! A standard architecture for such problems has emerged: a cluster of commodity Linux nodes connected by a commodity network (Ethernet).
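
The figures above are easy to verify with a quick back-of-the-envelope calculation; a minimal Python sketch (the 400 GB drive size is our illustrative assumption, not stated on the slide):

```python
pages      = 20 * 10**9    # 20+ billion web pages
page_size  = 20 * 1024     # ~20 KB per page
disk_rate  = 35 * 10**6    # a single disk reads ~30-35 MB/sec
drive_size = 400 * 10**9   # assumed ~400 GB commodity drive (illustrative)

total_bytes = pages * page_size
print(total_bytes / 10**12, "TB")                       # ~410 TB
print(total_bytes / disk_rate / 86400, "days to read")  # ~135 days, i.e. about 4 months
print(total_bytes / drive_size, "drives to store")      # ~1,000 drives
```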

Cluster Architecture. Each rack contains 16-64 nodes (each with CPU, memory, and disk) connected by a switch, with 1 Gbps between any pair of nodes in a rack and a 2-10 Gbps backbone between racks. In 2011 it was guesstimated that Google had 1M machines, http://bit.ly/shh0ro

Source: J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets, http://www.mmds.org

Parallelize Work: Divide and Conquer. Partition the work into units w1, w2, w3; each unit is handled by a worker, producing partial results r1, r2, r3; the partial results are then combined into the final result.

Parallelization Challenges How do we assign work units to workers? What if we have more work units than workers? What if workers need to share partial results? How do we aggregate partial results? How do we know all the workers have finished? What if workers die? What if hardware fails? What is the common theme of all of these problems?

Common Theme? Parallelization problems arise from: Communication between workers (e.g., to exchange state) Access to shared resources (e.g., data) Thus, we need a synchronization mechanism

Managing Multiple Workers. Difficult because: we don't know the order in which workers run; we don't know when workers interrupt each other; we don't know the order in which workers access shared data. Thus, we need: semaphores (lock, unlock), condition variables (wait, notify, broadcast), barriers. Still, lots of problems: deadlock, livelock, race conditions... Dining philosophers, sleeping barbers, cigarette smokers... Moral of the story: be careful!

Current Tools. Programming models: shared memory (pthreads), message passing (MPI). Design patterns: master-slaves, producer-consumer flows, shared work queues.
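
As a tiny concrete illustration of the producer-consumer / shared-work-queue pattern, a minimal Python sketch (the function names and the squaring "work" are purely illustrative):

```python
import threading
import queue

def producer(q, items):
    # Push work items onto the shared queue.
    for item in items:
        q.put(item)

def consumer(q, results, lock):
    # Pull work items until the sentinel (None) is seen.
    while True:
        item = q.get()
        if item is None:
            break
        with lock:                 # protect the shared results list
            results.append(item * item)

if __name__ == "__main__":
    q = queue.Queue()
    results, lock = [], threading.Lock()
    workers = [threading.Thread(target=consumer, args=(q, results, lock))
               for _ in range(4)]
    for w in workers:
        w.start()
    producer(q, range(10))
    for _ in workers:              # one sentinel per worker
        q.put(None)
    for w in workers:
        w.join()
    print(sorted(results))         # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```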

Where the rubber meets the road. Concurrency is difficult to reason about. Concurrency is even more difficult to reason about at the scale of datacenters (even across datacenters), in the presence of failures, and in terms of multiple interacting services. Not to mention debugging... The reality: lots of one-off solutions and custom code. Write your own dedicated library, then program with it. The burden is on the programmer to explicitly manage everything.

What's the point? It's all about the right level of abstraction. The von Neumann architecture has served us well, but it is no longer appropriate for the multi-core/cluster environment. Hide system-level details from the developers: no more race conditions, lock contention, etc. Separate the what from the how: the developer specifies the computation that needs to be performed; the execution framework ("runtime") handles the actual execution.

Cloud (Utility) Computing. The data center is the computer. Computing resources as a metered service ("pay as you go"). Ability to dynamically provision virtual machines. Scalability: infinite capacity. Elasticity: scale up or down on demand. The MapReduce computational model abstracts many of the complexities. ("I think there is a world market for about five computers." Thomas J. Watson)

Big Ideas: Scale Out vs. Scale Up. Scale up: a small number of high-end servers, e.g., symmetric multi-processing (SMP) machines with large shared memory. Not cost-effective: the cost of machines does not scale linearly, and no single SMP machine is big enough. Scale out: a large number of commodity low-end servers is more effective for data-intensive applications, e.g., 8 128-core machines vs. 128 8-core machines. "A low-end server platform is about 4 times more cost efficient than a high-end shared memory platform from the same vendor" (Barroso and Hölzle, 2009).

Why commodity machines? Source: Barroso and Urs Hölzle (2009); performance figures from late 2007

Big Ideas: Failures are Common Suppose a cluster is built using machines with a mean-time between failures (MTBF) of 1000 days For a 10,000 server cluster, there are on average 10 failures per day! MapReduce implementation copes with failures Automatic task restarts Store files multiple times for reliability

Big Ideas: Move Processing to Data. Supercomputers often have separate processing nodes and storage nodes: for computationally expensive tasks, a high-capacity interconnect moves data around. Many data-intensive applications are not very processor-demanding, so data movement leads to a bottleneck in the network! New idea: move processing to where the data reside. In MapReduce, processors and storage are colocated: leverage locality.

Big Ideas: Avoid Random Access Disk seek times are determined by mechanical factors Read heads can only move so fast and platters can only spin so rapidly

Big Ideas: Avoid Random Access. Example: a 1 TB database containing 10^10 100-byte records. Random access: each update takes ~30 ms (seek, read, write), so updating 1% of the records takes ~35 days. Sequential access: at 100 MB/s throughput, reading the whole database and rewriting all the records takes 5.6 hours. MapReduce was designed for batch processing: organize computations into long streaming operations.
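
The arithmetic behind this example is worth spelling out; a minimal Python check using the constants quoted above:

```python
NUM_RECORDS = 10**10            # 10^10 records of 100 bytes = 1 TB
RECORD_SIZE = 100               # bytes
SEEK_TIME   = 0.030             # ~30 ms per random update (seek + read + write)
THROUGHPUT  = 100 * 10**6       # 100 MB/s sequential

updates = 0.01 * NUM_RECORDS                         # update 1% of the records
print(updates * SEEK_TIME / 86400, "days")           # ~35 days of random access

total_bytes = NUM_RECORDS * RECORD_SIZE
print(2 * total_bytes / THROUGHPUT / 3600, "hours")  # read + rewrite: ~5.6 hours
```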

Numbers Everyone Should Know*
L1 cache reference: 0.5 ns
Branch mispredict: 5 ns
L2 cache reference: 7 ns
Mutex lock/unlock: 25 ns
Main memory reference: 100 ns
Send 2K bytes over 1 Gbps network: 20,000 ns
Read 1 MB sequentially from memory: 250,000 ns
Round trip within same datacenter: 500,000 ns
Disk seek: 10,000,000 ns
Read 1 MB sequentially from disk: 20,000,000 ns
Send packet CA → Netherlands → CA: 150,000,000 ns
* Jeff Dean, http://static.googleusercontent.com/media/research.google.com/en//people/jeff/stanford-295-talk.pdf

Big Ideas: Abstract System-Level Details Developing distributed software is very challenging Manage multiple threads, processes, machines Concurrency control and synchronization Race conditions, deadlocks a nightmare to debug! MapReduce isolates developers from these details Programmer defines what computations are to be performed MapReduce execution framework takes care of how the computations are carried out

MapReduce

Typical Large-Data Problem. Iterate over a large number of records; extract something of interest from each (Map); shuffle and sort intermediate results; aggregate intermediate results (Reduce); generate final output. Key idea: provide a functional abstraction for these two operations (Dean and Ghemawat, OSDI 2004).

Map and Reduce. The idea of Map and Reduce is 40+ years old, present in all functional programming languages; see, e.g., APL, Lisp, and ML. An alternate name for Map is Apply-All. Higher-order functions take function definitions as arguments or return a function as output. Map and Reduce are higher-order functions.

Map Examples in Haskell
map (+1) [1,2,3,4,5] == [2,3,4,5,6]
map toLower "abcdefg12!@#" == "abcdefg12!@#"
map (`mod` 3) [1..10] == [1,2,0,1,2,0,1,2,0,1]
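
The same higher-order pattern exists in most languages; for comparison, a rough Python equivalent (reduce lives in functools in Python 3):

```python
from functools import reduce

print(list(map(lambda x: x + 1, [1, 2, 3, 4, 5])))         # [2, 3, 4, 5, 6]
print("".join(map(str.lower, "AbCdEfG12!@#")))             # abcdefg12!@#
print(list(map(lambda x: x % 3, range(1, 11))))            # [1, 2, 0, 1, 2, 0, 1, 2, 0, 1]
print(reduce(lambda acc, x: acc + x, [1, 2, 3, 4, 5], 0))  # 15
```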

MapReduce. Programmers specify two functions: map(k1, v1) → [(k2, v2)] and reduce(k2, [v2]) → [(k3, v3)]. All values with the same key are sent to the same reducer. The execution framework handles everything else.
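
To make these signatures concrete, here is a minimal in-memory sketch of the framework's contract in Python (the function names are ours, purely illustrative): apply the user's map function to every input pair, group the intermediate values by key, and hand each group to the user's reduce function.

```python
from collections import defaultdict

def run_mapreduce(inputs, map_fn, reduce_fn):
    """inputs: iterable of (k1, v1) pairs;
    map_fn: (k1, v1) -> [(k2, v2)];  reduce_fn: (k2, [v2]) -> [(k3, v3)]."""
    # Map phase: collect intermediate (k2, v2) pairs.
    intermediate = defaultdict(list)
    for k1, v1 in inputs:
        for k2, v2 in map_fn(k1, v1):
            intermediate[k2].append(v2)
    # "Shuffle and sort" is the grouping above; now reduce each key's values.
    output = []
    for k2 in sorted(intermediate):
        output.extend(reduce_fn(k2, intermediate[k2]))
    return output
```

The word count on the next slide is then just one particular choice of map_fn and reduce_fn.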

"Hello World": Word Count

Map(String docid, String text):
    for each word w in text:
        Emit(w, 1);

Reduce(String term, Iterator<Int> values):
    int sum = 0;
    for each v in values:
        sum += v;
    Emit(term, sum);

In the notation above: map(docid, text) → [(word, 1)]; reduce(word, [1, ..., 1]) → [(word, count)].
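
The same logic written as a pair of standalone Python scripts, in the style commonly used with Hadoop Streaming (a sketch under the assumption that the framework sorts the mapper output by key before it reaches the reducer; the file names are ours):

```python
# mapper.py -- emit "word<TAB>1" for every word read from stdin
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
# reducer.py -- sum the counts per word; input arrives sorted by word
import sys

current_word, current_sum = None, 0
for line in sys.stdin:
    word, count = line.rsplit("\t", 1)
    if word != current_word:
        if current_word is not None:
            print(f"{current_word}\t{current_sum}")
        current_word, current_sum = word, 0
    current_sum += int(count)
if current_word is not None:
    print(f"{current_word}\t{current_sum}")
```

Locally the whole pipeline can be simulated with `cat docs.txt | python mapper.py | sort | python reducer.py`; in a Hadoop Streaming job the two scripts are passed as the -mapper and -reducer.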

Word Counting with MapReduce: the map phase. Five documents (Doc1: Financial, IMF, Economics, Crisis; Doc2: Financial, IMF, Crisis; Doc3: Economics, Harry; Doc4: Financial, Harry, Potter, Film; Doc5: Crisis, Harry, Potter) are split across two mappers, M1 and M2, each emitting a (word, 1) pair for every word it sees. Kyuseok Shim (VLDB 2012 TUTORIAL)

Word Counting with MapReduce: grouping and reducing. Before the reduce functions are called, the list of values for each distinct key is generated (e.g., Financial → [1, 1, 1], IMF → [1, 1], Harry → [1, 1, 1]). The reducers then sum each list, producing Financial 3, IMF 2, Economics 2, Crisis 3, Harry 3, Film 1, Potter 2. Kyuseok Shim (VLDB 2012 TUTORIAL)

MapReduce dataflow. Input chunks (k, v) are read from the distributed file system; each mapper emits intermediate key-value pairs (e.g., a→1, b→2, c→3, ...). Shuffle and sort aggregates values by key (a group-by applied to the intermediate keys, e.g., a→[1,5], b→[2,7], c→[2,3,6,8]); each reducer processes one group of keys, and the output is written back to the DFS.

MapReduce Runtime Handles scheduling Assigns workers to map and reduce tasks Handles data distribution Moves processes to data Handles synchronization Gathers, sorts, and shuffles intermediate data Handles errors and faults Detects worker failures and restarts Everything happens on top of a distributed FS (later)

MapReduce. Programmers specify two functions: map(k1, v1) → [(k2, v2)] and reduce(k2, [v2]) → [(k3, v3)]. All values with the same key are reduced together. Mappers and reducers can specify arbitrary computations: be careful with access to external resources! The execution framework handles everything else... not quite: usually, programmers also specify partition(k2, number of partitions) → partition for k2, often a simple hash of the key, e.g., hash(k2) mod n, which divides up the key space for parallel reduce operations; and combine(k2, [v2]) → [(k2, v2)], mini-reducers that run in memory after the map phase, pushing some of the reducer work to the mapper as an optimization to reduce network traffic.
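
A sketch of what a default partitioner and a word-count combiner could look like (the names are ours; in a real Hadoop job the partitioner and combiner are classes plugged into the job configuration, and the hash must be deterministic across machines):

```python
from collections import defaultdict

def partition(key, num_partitions):
    # Simple hash of the key modulo the number of reducers.
    # (Illustrative only: Python's built-in hash is salted per process,
    # so a real framework would use a deterministic hash function.)
    return hash(key) % num_partitions

def combine(pairs):
    """Mini-reducer run on one mapper's output: (word, 1) pairs in,
    (word, partial_count) pairs out, cutting shuffle traffic."""
    partial = defaultdict(int)
    for word, count in pairs:
        partial[word] += count
    return list(partial.items())

# One mapper's raw output before it is shuffled to the reducers:
mapper_output = [("imf", 1), ("crisis", 1), ("imf", 1)]
for word, count in combine(mapper_output):
    print(word, count, "-> reducer", partition(word, 4))
```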

Word Counting with MapReduce, with combiners. Each mapper (M1, M2) locally aggregates its own (word, 1) pairs before they are shuffled to the reducers, so a word that appears several times in one mapper's input is sent across the network once with a partial count. Kyuseok Shim (VLDB 2012 TUTORIAL)

MapReduce: The Complete Picture. Mappers emit intermediate key-value pairs; a combiner locally aggregates each mapper's output (e.g., c→3 and c→6 become c→9); a partitioner assigns each key to a reducer; shuffle and sort then aggregates values by key across all mappers, and the reducers produce the final output.

Two more details Barrier between map and reduce phases But we can begin copying intermediate data earlier Keys arrive at each reducer in sorted order No enforced ordering across reducers

MapReduce can refer to: the programming model; the execution framework (aka "runtime"); the specific implementation. Usage is usually clear from context!

MapReduce Implementations Google has a proprietary implementation in C++ Bindings in Java, Python Hadoop is an open-source implementation in Java Development led by Yahoo, used in production Now an Apache project Rapidly expanding software ecosystem Lots of custom research implementations For GPUs, cell processors, etc.

MapReduce: Diving into Details. Execution flow: (1) the user program submits the job and forks master and worker processes; (2) the master schedules map tasks and reduce tasks onto workers; (3) map workers read their input splits (split 0 ... split 4); (4) map output is written to local disk as intermediate files; (5) reduce workers remote-read the intermediate files; (6) reduce workers write the output files (output file 0, output file 1). Adapted from (Dean and Ghemawat, OSDI 2004).

Distributed File System. Don't move data to workers; move workers to the data! Store data on the local disks of nodes in the cluster and start the workers on the node that holds the data locally. Why? There is not enough RAM to hold all the data in memory, and disk access is slow, but disk throughput is reasonable. A distributed file system is the answer: GFS (Google File System) for Google's MapReduce, HDFS (Hadoop Distributed File System) for Hadoop.

GFS: Assumptions. Commodity hardware over exotic hardware: scale out, not up. High component failure rates: inexpensive commodity components fail all the time. A modest number of huge files: multi-gigabyte files are common, if not encouraged. Files are write-once, mostly appended to (perhaps concurrently). Large streaming reads over random access: high sustained throughput over low latency.

GFS: Design Decisions (e.g., 10K nodes, 100 million files, 10 PB). Files are stored as fixed-size chunks (e.g., 64 MB). Reliability through replication: each chunk is replicated across 3+ chunkservers, on different racks. A single master coordinates access and keeps metadata: simple centralized management. Data locations are exposed so that computations can move to where the data resides. No data caching: little benefit due to large datasets and streaming reads. Simplify the API and push some of the issues onto the client (e.g., data layout). HDFS = GFS clone (same basic ideas).
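
To illustrate the fixed-size chunking, a tiny sketch of how a file's byte range maps onto 64 MB chunks (purely illustrative; GFS/HDFS do this server-side and additionally track replica locations per chunk):

```python
CHUNK_SIZE = 64 * 1024 * 1024   # the 64 MB chunk size quoted above

def chunks_for_file(file_size):
    """Return (chunk_index, start_offset, length) for every chunk of a file."""
    chunks, offset, index = [], 0, 0
    while offset < file_size:
        length = min(CHUNK_SIZE, file_size - offset)
        chunks.append((index, offset, length))
        offset += length
        index += 1
    return chunks

# A 200 MB file occupies three full 64 MB chunks plus one 8 MB chunk.
print(chunks_for_file(200 * 1024 * 1024))
```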

From GFS to HDFS. Terminology differences: GFS master = Hadoop namenode; GFS chunkservers = Hadoop datanodes. For the most part, we'll use the Hadoop terminology.

HDFS Architecture. The master (namenode) handles replication, deletion, and creation, and maintains the file namespace (e.g., /foo/bar → block 3df2). An application goes through the HDFS client: the client sends (file name, block id) to the namenode and gets back (block id, block location); it then sends (block id, byte range) to a datanode and receives the block data. The namenode sends instructions to the datanodes, and the datanodes report their state back. Each slave (datanode) handles data retrieval and stores blocks on its local Linux file system.

HDFS Architecture (continued). Files are stored in many blocks; each block has a block id, and a block id is associated with several nodes (hostname:port), depending on the level of replication.
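
A simplified sketch of the read interaction just described (all names and data structures here are ours; a real client speaks the HDFS RPC protocol and fails over between replicas):

```python
# Hypothetical in-memory stand-ins for the namenode and one datanode.
NAMENODE_BLOCK_MAP = {
    ("/foo/bar", 0): ("block-3df2", ["datanode1:50010", "datanode2:50010"]),
}
DATANODE_BLOCKS = {"block-3df2": b"...block contents..."}

def namenode_lookup(path, block_index):
    # Namenode: (file name, block index) -> (block id, replica locations).
    return NAMENODE_BLOCK_MAP[(path, block_index)]

def datanode_read(block_id, start, length):
    # Datanode: serve the requested byte range of a locally stored block.
    return DATANODE_BLOCKS[block_id][start:start + length]

block_id, locations = namenode_lookup("/foo/bar", 0)
data = datanode_read(block_id, 0, 8)   # the client reads from one replica
print(block_id, locations, data)
```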

Namenode Responsibilities Managing the file system namespace: Holds file/directory structure, metadata, file-to-block mapping, access permissions, etc. Coordinating file operations: Directs clients to datanodes for reads and writes No data is moved through the namenode Maintaining overall health: Periodic communication with the datanodes Block re-replication and rebalancing Garbage collection

NameNode Metadata. Metadata is kept in memory: the entire metadata is in main memory, with no demand paging of metadata. Types of metadata: list of files; list of blocks for each file; list of DataNodes for each block; file attributes, e.g., creation time and replication factor. A transaction log records file creations, file deletions, etc.
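
The metadata types listed above can be pictured as a handful of in-memory maps plus an append-only log; a hedged sketch (field names are ours, not HDFS internals):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FileAttributes:
    creation_time: float
    replication_factor: int = 3

@dataclass
class NameNodeMetadata:
    # The entire metadata lives in main memory: no demand paging.
    files: Dict[str, FileAttributes] = field(default_factory=dict)
    blocks_of_file: Dict[str, List[str]] = field(default_factory=dict)      # path -> block ids
    datanodes_of_block: Dict[str, List[str]] = field(default_factory=dict)  # block id -> nodes
    transaction_log: List[str] = field(default_factory=list)                # creations, deletions, ...

    def create_file(self, path, now):
        self.files[path] = FileAttributes(creation_time=now)
        self.blocks_of_file[path] = []
        self.transaction_log.append(f"CREATE {path}")
```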

HDFS: Some Notes Good for BIG files and streaming data access Bad for lots of small files and random access

MapReduce Architecture. The namenode runs the namenode daemon; the job submission node runs the jobtracker; each slave node runs a tasktracker and a datanode daemon on top of its local Linux file system.

Hadoop Master/Slave relationship - JobTracker handles all scheduling & data flow between TaskTrackers - TaskTracker handles all worker tasks on a node - Individual worker task runs map or reduce operation Integrates with HDFS for data locality

Fault Recovery. Workers are pinged by the master periodically: non-responsive workers are marked as failed, and all tasks in progress on, or completed by, a failed worker become eligible for rescheduling. The master could periodically checkpoint; current implementations abort on master failure.

Hadoop Supported File Systems. HDFS: Hadoop's own file system. Commercial blob stores: Azure Storage; the Amazon S3 file system, targeted at clusters hosted on the Amazon Elastic Compute Cloud server-on-demand infrastructure.

"Rack awareness" Optimization which takes into account the geographic clustering of servers Network traffic between servers in different geographic clusters is minimized.

References
http://hadoop.apache.org/
Jeffrey Dean and Sanjay Ghemawat, MapReduce: Simplified Data Processing on Large Clusters. USENIX OSDI '04, 2004. http://www.usenix.org/events/osdi04/tech/full_papers/dean/dean.pdf
David DeWitt and Michael Stonebraker, MapReduce: A major step backwards. craig-henderson.blogspot.com
http://scienceblogs.com/goodmath/2008/01/databases_are_hammers_mapreduc.php

Hadoop Wiki Resources
Introduction: http://wiki.apache.org/lucene-hadoop/
Getting Started: http://wiki.apache.org/lucene-hadoop/gettingstartedwithhadoop
Map/Reduce Overview: http://wiki.apache.org/lucene-hadoop/hadoopmapreduce and http://wiki.apache.org/lucene-hadoop/hadoopmapredclasses
Eclipse Environment: http://wiki.apache.org/lucene-hadoop/eclipseenvironment
Javadoc: http://lucene.apache.org/hadoop/docs/api/