
CS246: Mining Massive Datasets Jure Leskovec, Stanford University http://cs246.stanford.edu


Data contains value and knowledge

But to extract the knowledge, the data needs to be stored, managed, and ANALYZED (this class)
Data Mining ≈ Big Data ≈ Predictive Analytics ≈ Data Science


Given lots of data, discover patterns and make predictions that are:
- Valid: hold on new data with some certainty
- Useful: should be possible to act on the item
- Unexpected: non-obvious to the system
- Understandable: humans should be able to interpret the pattern

Descriptive methods: find human-interpretable patterns that describe the data. Example: clustering.
Predictive methods: use some variables to predict unknown or future values of other variables. Example: recommender systems.

A risk with data mining is that an analyst can discover patterns that are meaningless. Statisticians call it Bonferroni's principle: roughly, if you look in more places for interesting patterns than your amount of data will support, you are bound to find crap.

Usage, quality, context, streaming, scalability

Data mining overlaps with:
- Databases (DB): large-scale data, simple queries
- Machine Learning (ML): small data, complex models
- CS Theory: (randomized) algorithms
Different cultures:
- To a DB person, data mining is an extreme form of analytic processing: queries that examine large amounts of data. The result is the query answer.
- To a ML person, data mining is the inference of models. The result is the parameters of the model.
In this class we will do both!
(Venn diagram: Data Mining sits at the intersection of CS Theory, Machine Learning, and Database systems.)

This class overlaps with machine learning, statistics, artificial intelligence, and databases, but with more stress on:
- Scalability (big data)
- Algorithms
- Computing architectures
- Automation for handling large data
(Venn diagram: Data Mining at the intersection of Statistics, Machine Learning, and Database systems.)

We will learn to mine different types of data:
- Data is high dimensional
- Data is a graph
- Data is infinite/never-ending
- Data is labeled
We will learn to use different models of computation:
- MapReduce
- Streams and online algorithms
- Single machine in-memory

We will learn to solve real-world problems:
- Recommender systems
- Market basket analysis
- Spam detection
- Duplicate document detection
We will learn various tools:
- Linear algebra (SVD, recommender systems, communities)
- Optimization (stochastic gradient descent)
- Dynamic programming (frequent itemsets)
- Hashing (LSH, Bloom filters)

Course topics, by type of data:
- High dim. data: locality sensitive hashing, clustering, dimensionality reduction
- Graph data: PageRank, SimRank, network analysis, spam detection
- Infinite data: filtering data streams, web advertising, queries on streams
- Machine learning: SVM, decision trees, perceptron, kNN
- Apps: recommender systems, association rules, duplicate document detection

How do you want that data?

TAs: we have 9 great TAs! Hristo Paskov, Clifford Huang, Dima Brezhnev, James Liu, Negar Rahmati, Clement Ntwari Nshuti, Jean-Yves Stephan, Janice Lan, Nat Roth
Office hours:
- Jure: Wednesdays 9-10am, Gates 418
- See course website for TA office hours
- We start office hours next week
- For SCPD students we will use Google Hangout; we will post the Hangout link on Piazza 1 min before the OH

Course website: http://cs246.stanford.edu
- Lecture slides (at least 30 min before the lecture), homeworks, solutions, readings
Readings: Mining of Massive Datasets by A. Rajaraman, J. Ullman, and J. Leskovec
- Free online: http://mmds.org
- Note there are 2 editions: use the 2nd edition for class readings; Gradiance quizzes often refer to chapters of the 1st edition

Piazza Q&A website: https://piazza.com/class#winter2015/cs246
- Use Piazza for all questions and public communication
- Search the forum before asking a question
- Please tag your posts, and please no one-liners
- Jure will give extra credit for Piazza participation
For e-mailing course staff always use: cs246-win1415-staff@lists.stanford.edu
We will post course announcements to Piazza (make sure you check it regularly)
Auditors are welcome to sit in and audit the class

4 longer homeworks: 40%
- Theoretical and programming questions
- Hadoop tutorial has just been posted
- Assignments take lots of time (20+ hours). Start early!!
How to submit?
- Homework write-up: submit via Scoryst
- Everyone uploads code: put all the code for one question into one file and submit at http://snap.stanford.edu/submit/

Short weekly Gradiance quizzes: 20%
- Quizzes are posted on Fridays
- Due exactly 7 days later (Friday 23:59 PT). No late days!
- First quiz has already been posted
- To register on Gradiance please use your SUNetID or <legal first name>_<legal last name> as the username (we have to be able to match your Gradiance ID and SUNetID)
Final exam: 40%
- Friday, March 20, 12:15pm-3:15pm
- Alternate final: Monday, March 16, 7-10pm

Homework schedule (out/in at 5pm):
- 01/01, Tue: Tutorial out
- 01/08, Thu: HW1 out
- 01/13, Tue: Tutorial in
- 01/22, Thu: HW2 out, HW1 in
- 02/05, Thu: HW3 out, HW2 in
- 02/19, Thu: HW4 out, HW3 in
- 03/05, Thu: HW4 in
2 late days (late periods) for HWs for the quarter:
- A late period expires on Tuesday 5pm
- You can use at most 1 late period per assignment

Prerequisites:
- Algorithms (CS161): dynamic programming, basic data structures
- Probability (CS109 or Stat116): moments, typical distributions, MLE, ...
- Programming (CS107 or CS145): your choice, but Java will be very useful
We provide some background at recitation sessions, but the class will be fast paced

Recitation sessions:
- Review of probability & stats: Fri 1/9, at 4:15pm
- Review of linear algebra: Fri 1/16, at 4:15pm
- All sessions will be held in Gates B01
- Sessions will be video recorded

CS246H covers practical aspects of Hadoop and other distributed computing architectures: HDFS, combiners, partitioners, Hive, Pig, HBase, ...
- 1-unit course, optional homeworks
CS246H runs (somewhat) parallel to CS246: CS246 discusses theory and algorithms; CS246H tells you how to implement them
- Instructor: Daniel Templeton (Cloudera)
- First session of CS246H will give a Hadoop tutorial; think of it as the CS246 Hadoop recitation
- CS246H lectures are recorded (available via SCPD)

Data Science / Info Seminar (CS545): http://i.stanford.edu/infoseminar
- Great industrial & academic speakers; topics include data mining, ML, data science
- Open to SCPD students (videos recorded)
CS341: Project in Data Mining (Spring 2014)
- Research project on big data, in groups of 3 students
- We provide interesting data, computing resources (Amazon EC2), and mentoring
My group has RA positions open: see http://snap.stanford.edu/apply/

CS246 is fast paced! It requires programming maturity and strong math skills
- SCPD students tend to be rusty on math/theory
Course time commitment:
- Homeworks take about 20h
- Gradiance quizzes take about 1-2h
Form study groups
It's going to be fun and hard work.

4 to-do items for you:
- Register on Piazza
- Register on Scoryst
- Register on Gradiance and complete the first quiz
  Use your SUNet ID to register! (so we can match grading records)
  Complete the first quiz (it is already posted)
- Complete the Hadoop tutorial
  The tutorial should take you about 2 hours to complete
  (Note this is a toy homework to get you started. Real homeworks will be much more challenging and longer)
Additional details/instructions at http://cs246.stanford.edu

Much of the course will be devoted to large-scale computing for data mining
Challenges:
- How to distribute computation?
- Distributed/parallel programming is hard
MapReduce addresses all of the above
- Google's computational/data manipulation model
- An elegant way to work with big data

(Diagram: classical machine learning, statistics, and data mining run on a single machine with CPU, memory, and disk.)

20+ billion web pages × 20 KB = 400+ TB
One computer reads 30-35 MB/sec from disk: ~4 months to read the web
~1,000 hard drives just to store the web
It takes even more to do something useful with the data!
Recently a standard architecture for such problems emerged:
- Cluster of commodity Linux nodes
- Commodity network (ethernet) to connect them
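A quick back-of-the-envelope check of these numbers (a minimal sketch; the 20 KB/page, 35 MB/sec, and ~400 GB/drive figures are the slide's rough assumptions):

pages = 20e9                          # 20+ billion web pages
page_size = 20e3                      # ~20 KB per page
corpus = pages * page_size            # ~4e14 bytes = 400+ TB
read_rate = 35e6                      # ~30-35 MB/sec sequential read from one disk
seconds_to_read = corpus / read_rate
print("corpus size: %.0f TB" % (corpus / 1e12))                                    # ~400 TB
print("months to read on one machine: %.1f" % (seconds_to_read / (30 * 24 * 3600)))  # ~4.4 months
print("drives to store it (assuming ~400 GB drives): %d" % (corpus / 400e9))       # ~1,000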

Cluster architecture (diagram): each rack contains 16-64 commodity nodes (CPU, memory, disk) connected by a rack switch, with 1 Gbps between any pair of nodes in a rack and a 2-10 Gbps backbone between racks.
In 2011 it was guesstimated that Google had 1M machines (http://bit.ly/shh0ro)


Large-scale computing for data mining problems on commodity hardware
Challenges:
- How do you distribute computation?
- How can we make it easy to write distributed programs?
- Machines fail:
  One server may stay up 3 years (1,000 days)
  If you have 1,000 servers, expect to lose 1/day
  With 1M machines, 1,000 machines fail every day!

Issue: copying data over a network takes time
Idea:
- Bring computation to the data
- Store files multiple times for reliability
MapReduce addresses these problems:
- Storage infrastructure: a distributed file system (Google: GFS, Hadoop: HDFS)
- Programming model: MapReduce

Problem: if nodes fail, how do we store data persistently?
Answer: a Distributed File System, which provides a global file namespace
Typical usage pattern:
- Huge files (100s of GB to TB)
- Data is rarely updated in place
- Reads and appends are common

Chunk servers:
- File is split into contiguous chunks, typically 16-64 MB each
- Each chunk is replicated (usually 2x or 3x), ideally with replicas in different racks
Master node (a.k.a. Name Node in Hadoop's HDFS):
- Stores metadata about where files are stored
- Might be replicated
Client library for file access:
- Talks to the master to find chunk servers
- Connects directly to chunk servers to access data

Reliable distributed file system:
- Data kept in chunks spread across machines
- Each chunk replicated on different machines
- Seamless recovery from disk or machine failure
(Diagram: chunks C0-C5 and D0-D1 replicated across chunk servers 1..N.)
Bring computation directly to the data! Chunk servers also serve as compute servers

Warm-up task: we have a huge text document; count the number of times each distinct word appears in the file
Sample application: analyze web server logs to find popular URLs

Case 1: file too large for memory, but all <word, count> pairs fit in memory
Case 2: count occurrences of words with words(doc.txt) | sort | uniq -c, where words takes a file and outputs the words in it, one word per line
Case 2 captures the essence of MapReduce; the great thing is that it is naturally parallelizable
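For comparison, a minimal single-machine sketch of Case 1 (the function name and file path are illustrative): stream the file and keep all <word, count> pairs in memory.

from collections import Counter

def word_count(path):
    # Stream the file; keep all <word, count> pairs in an in-memory hash table
    counts = Counter()
    with open(path) as f:
        for line in f:
            counts.update(line.split())
    return counts

# counts = word_count("doc.txt")   # analogous to: words(doc.txt) | sort | uniq -c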

3 steps of MapReduce:
- Sequentially read a lot of data
- Map: extract something you care about
- Group by key: sort and shuffle
- Reduce: aggregate, summarize, filter, or transform
- Output the result
The outline stays the same; Map and Reduce change to fit the problem

(Diagram: map tasks turn input key-value pairs (k, v) into intermediate key-value pairs.)

(Diagram: group by key collects intermediate pairs into key-value groups (k, [v, v, ...]); reduce turns each group into output key-value pairs.)

Input: a set of key-value pairs
Programmer specifies two methods:
Map(k, v) → <k', v'>*
- Takes a key-value pair and outputs a set of key-value pairs
- E.g., key is the filename, value is a single line in the file
- There is one Map call for every (k, v) pair
Reduce(k', <v'>*) → <k', v''>*
- All values v' with the same key k' are reduced together and processed in v' order
- There is one Reduce function call per unique key k'

Map: reads input and produces a set of key-value pairs
Group by key: collect all pairs with the same key (hash merge, shuffle, sort, partition)
Reduce: collect all values belonging to the key and output

All phases are distributed, with many tasks doing the work

Word count in action (Map and Reduce are provided by the programmer; the big document is only read sequentially):
- Map reads the input and produces a set of key-value pairs. E.g., from "The crew of the space shuttle Endeavor recently returned to Earth as ambassadors, harbingers of a new era of space exploration. ..." it emits (The, 1), (crew, 1), (of, 1), (the, 1), (space, 1), (shuttle, 1), (Endeavor, 1), (recently, 1), ...
- Group by key collects all pairs with the same key: (crew, 1) (crew, 1); (space, 1); (the, 1) (the, 1) (the, 1); (shuttle, 1); (recently, 1); ...
- Reduce collects all values belonging to a key and outputs: (crew, 2), (space, 1), (the, 3), (shuttle, 1), (recently, 1), ...

map(key, value):
    // key: document name; value: text of the document
    for each word w in value:
        emit(w, 1)

reduce(key, values):
    // key: a word; values: an iterator over counts
    result = 0
    for each count v in values:
        result += v
    emit(key, result)
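The same pseudocode as a minimal, single-process Python simulation of the map / group-by-key / reduce pipeline (an illustrative sketch, not Hadoop code; the function names are mine):

from itertools import groupby
from operator import itemgetter

def map_fn(doc_name, text):
    # key: document name; value: text of the document
    for word in text.split():
        yield (word, 1)

def reduce_fn(word, counts):
    # key: a word; values: the counts collected for that word
    yield (word, sum(counts))

def map_reduce(inputs, map_fn, reduce_fn):
    # Map phase: one map call for every input (key, value) pair
    intermediate = [kv for k, v in inputs for kv in map_fn(k, v)]
    # Group by key: sort, then collect all values that share a key
    intermediate.sort(key=itemgetter(0))
    grouped = ((k, [v for _, v in kvs]) for k, kvs in groupby(intermediate, key=itemgetter(0)))
    # Reduce phase: one reduce call per unique key
    return [out for k, vs in grouped for out in reduce_fn(k, vs)]

print(map_reduce([("doc.txt", "the crew of the space shuttle")], map_fn, reduce_fn))
# [('crew', 1), ('of', 1), ('shuttle', 1), ('space', 1), ('the', 2)]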

The MapReduce environment takes care of:
- Partitioning the input data
- Scheduling the program's execution across a set of machines
- Performing the group-by-key step (in practice this is the bottleneck)
- Handling machine failures
- Managing required inter-machine communication

Input and final output are stored on a distributed file system (HDFS):
- Scheduler tries to schedule map tasks close to the physical storage location of the input data
Intermediate results are stored on the local file systems of Map and Reduce workers
Output is often the input to another MapReduce task

Master node takes care of coordination:
- Task status: idle, in-progress, or completed
- Idle tasks get scheduled as workers become available
- When a map task completes, it sends the master the locations and sizes of its intermediate files, one for each reducer; the master pushes this info to the reducers
- Master pings workers periodically to detect failures

Map worker failure:
- Map tasks completed or in-progress at the worker are reset to idle
- Reduce workers are notified when a task is rescheduled on another worker
Reduce worker failure:
- Only in-progress tasks are reset to idle; the reduce task is restarted
Master failure:
- The MapReduce task is aborted and the client is notified

M map tasks, R reduce tasks
Rule of thumb: make M much larger than the number of nodes in the cluster
- One DFS chunk per map is common
- Improves dynamic load balancing and speeds up recovery from worker failures
Usually R is smaller than M, because the final output is spread across R files

Fine-granularity tasks (map tasks >> machines):
- Minimizes time for fault recovery
- Can pipeline shuffling with map execution
- Better dynamic load balancing

Problem: slow workers significantly lengthen the job completion time (other jobs on the machine, bad disks, weird things)
Solution: near the end of a phase, spawn backup copies of tasks; whichever one finishes first wins
Effect: dramatically shortens job completion time

Back to our word counting example: a combiner combines the values of all keys of a single mapper (single machine), so much less data needs to be copied and shuffled!
Works if the reduce function is commutative and associative
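A minimal sketch of the effect, for word count (the function name and variables are illustrative): because addition is commutative and associative, the same summing logic can run once per mapper, before the shuffle.

from collections import Counter

def map_with_combiner(doc_text):
    # Local (per-mapper) aggregation: emit one (word, partial_count) pair per distinct word
    # seen on this machine, instead of one (word, 1) pair per occurrence.
    partial_counts = Counter(doc_text.split())
    return list(partial_counts.items())        # far less data to copy and shuffle

print(map_with_combiner("the crew of the space shuttle the"))
# [('the', 3), ('crew', 1), ('of', 1), ('space', 1), ('shuttle', 1)]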

Sometimes we want to control how keys get partitioned:
- Inputs to map tasks are created by contiguous splits of the input file
- Reduce needs to ensure that records with the same intermediate key end up at the same worker
The system uses a default partition function: hash(key) mod R
Sometimes it is useful to override the hash function:
- E.g., hash(hostname(url)) mod R ensures URLs from a host end up in the same output file
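A sketch of the default and the overridden partition function (R, the function names, and the hostname extraction via Python's urlparse are illustrative assumptions):

from urllib.parse import urlparse

R = 8  # number of reduce tasks (illustrative)

def default_partition(key):
    # Default behaviour: hash(key) mod R
    return hash(key) % R

def host_partition(url):
    # Override: hash(hostname(url)) mod R, so all URLs from one host
    # go to the same reducer and hence into the same output file
    return hash(urlparse(url).hostname) % R

print(host_partition("http://cs246.stanford.edu/slides.html") ==
      host_partition("http://cs246.stanford.edu/info.html"))   # True: same host, same reducer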

Example: suppose we have a large web corpus, and look at the metadata file with lines of the form (URL, size, date, ...)
For each host, find the total number of bytes, i.e. the sum of the page sizes for all URLs from that particular host
Other examples: link analysis and graph processing, machine learning algorithms
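A hedged sketch of the Map and Reduce functions for the host-bytes task, assuming each metadata line is whitespace-separated and starts with the URL and size fields (these plug into the small map_reduce driver sketched earlier):

from urllib.parse import urlparse

def map_fn(line_no, line):
    # Assumed line layout: "URL size date ..."; emit (host, page size in bytes)
    url, size = line.split()[:2]
    yield (urlparse(url).hostname, int(size))

def reduce_fn(host, sizes):
    # Total number of bytes for all URLs from this host
    yield (host, sum(sizes))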

Statistical machine translation: need to count the number of times every 5-word sequence occurs in a large corpus of documents
Very easy with MapReduce:
- Map: extract (5-word sequence, count) from each document
- Reduce: combine the counts
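A sketch of the corresponding Map function (the Reduce step is the same count-summing reducer as in word count; splitting on whitespace is an assumption):

def map_fn(doc_name, text):
    words = text.split()
    # Emit every 5-word sequence in the document with a count of 1
    for i in range(len(words) - 4):
        yield (tuple(words[i:i + 5]), 1)

def reduce_fn(ngram, counts):
    yield (ngram, sum(counts))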

Compute the natural join R(A,B) ⋈ S(B,C)
R and S are each stored in files; tuples are pairs (a, b) or (b, c)
Example: R = {(a1,b1), (a2,b1), (a3,b2), (a4,b3)} and S = {(b2,c1), (b2,c2), (b3,c3)}; the join matches a3 with c1 and c2 (through b2) and a4 with c3 (through b3)

Use a hash function h from B-values to 1...k
A Map process turns:
- Each input tuple R(a,b) into the key-value pair (b, (a, r))
- Each input tuple S(b,c) into (b, (c, s))
Map processes send each key-value pair with key b to Reduce process h(b)
- Hadoop does this automatically; just tell it what k is
Each Reduce process matches all the pairs (b, (a, r)) with all (b, (c, s)) and outputs (a, b, c)
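A minimal sketch of this join, with a tiny driver standing in for the shuffle (the 'r'/'s' tags follow the slide; sending key b to reducer h(b) is mimicked by grouping on b directly):

from collections import defaultdict

def map_join(tuples_R, tuples_S):
    # R(a, b) -> (b, ('r', a));  S(b, c) -> (b, ('s', c))
    for a, b in tuples_R:
        yield (b, ('r', a))
    for b, c in tuples_S:
        yield (b, ('s', c))

def reduce_join(b, values):
    # Match every ('r', a) with every ('s', c) sharing this b and output (a, b, c)
    As = [x for tag, x in values if tag == 'r']
    Cs = [x for tag, x in values if tag == 's']
    for a in As:
        for c in Cs:
            yield (a, b, c)

R = [('a1', 'b1'), ('a2', 'b1'), ('a3', 'b2'), ('a4', 'b3')]
S = [('b2', 'c1'), ('b2', 'c2'), ('b3', 'c3')]
groups = defaultdict(list)              # stands in for routing key b to reducer h(b)
for b, v in map_join(R, S):
    groups[b].append(v)
for b, vals in groups.items():
    print(list(reduce_join(b, vals)))
# (a3, b2, c1), (a3, b2, c2), (a4, b3, c3); b1 matches nothing in S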

MapReduce is great for:
- Problems that require sequential data access
- Large batch jobs (not interactive, real-time)
MapReduce is inefficient for problems where random (or irregular) access to data is required:
- Graphs
- Interdependent data
- Machine learning
- Comparisons of many pairs of items

In MapReduce we quantify the cost of an algorithm using:
1. Communication cost = total I/O of all processes
2. Elapsed communication cost = max of I/O along any path
3. (Elapsed) computation cost: analogous, but count only the running time of processes
Note that here big-O notation is not the most useful (adding more machines is always an option)

For a map-reduce algorithm:
- Communication cost = input file size + 2 × (sum of the sizes of all files passed from Map processes to Reduce processes) + the sum of the output sizes of the Reduce processes
- Elapsed communication cost is the sum of the largest input + output for any map process, plus the same for any reduce process

Either the I/O (communication) or the processing (computation) cost dominates, so ignore one or the other
Total cost tells what you pay in rent to your friendly neighborhood cloud
Elapsed cost is wall-clock time using parallelism

Total communication cost = O(|R| + |S| + |R ⋈ S|)
Elapsed communication cost = O(s)
- We put a limit s on the amount of input or output that any one process can have; s could be what fits in main memory, or what fits on local disk
- We are going to pick k and the number of Map processes so that the I/O limit s is respected
With proper indexes, computation cost is linear in the input + output size, so computation cost is like the communication cost
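A worked instance of these cost formulas, with made-up relation sizes (the numbers, the unit of I/O, and the simple way of choosing k are purely illustrative):

R_size, S_size = 1_000_000, 2_000_000      # |R|, |S| in tuples (our I/O unit)
join_size = 5_000_000                      # assumed |R join S|

# Communication cost = input + 2 * (Map-to-Reduce files) + Reduce output
total = (R_size + S_size) + 2 * (R_size + S_size) + join_size
print(total)                               # 14,000,000, i.e. O(|R| + |S| + |R join S|)

# Elapsed communication cost = O(s): choose k so that no reduce process exceeds s
s = 500_000                                # e.g. what fits in one worker's main memory
k = -(-(R_size + S_size) // s)             # ceiling division: k = 6 reducers here
print(k)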

Implementations:
- Google's MapReduce: not available outside Google
- Hadoop: an open-source implementation in Java; uses HDFS for stable storage; download: http://lucene.apache.org/hadoop/
- Aster Data: cluster-optimized SQL database that also implements MapReduce

Cloud computing:
- Ability to rent computing by the hour, plus additional services, e.g. persistent storage
- Amazon's Elastic Compute Cloud (EC2)
- Aster Data and Hadoop can both be run on EC2
- For CS341 (offered next quarter), Amazon will provide free access for the class

Jeffrey Dean and Sanjay Ghemawat: MapReduce: Simplified Data Processing on Large Clusters, http://labs.google.com/papers/mapreduce.html
Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung: The Google File System, http://labs.google.com/papers/gfs.html

Hadoop Wiki:
- Introduction: http://wiki.apache.org/lucene-hadoop/
- Getting Started: http://wiki.apache.org/lucenehadoop/gettingstartedwithhadoop
- Map/Reduce Overview: http://wiki.apache.org/lucene-hadoop/hadoopmapreduce and http://wiki.apache.org/lucenehadoop/hadoopmapredclasses
- Eclipse Environment: http://wiki.apache.org/lucene-hadoop/eclipseenvironment
- Javadoc: http://lucene.apache.org/hadoop/docs/api/

Releases from Apache download mirrors: http://www.apache.org/dyn/closer.cgi/lucene/hadoop/
Nightly builds of source: http://people.apache.org/dist/lucene/hadoop/nightly/
Source code from subversion: http://lucene.apache.org/hadoop/version_control.html

Programming model inspired by functional language primitives
Partitioning/shuffling similar to many large-scale sorting systems: NOW-Sort ['97]
Re-execution for fault tolerance: BAD-FS ['04] and TACC ['97]
Locality optimization has parallels with Active Disks/Diamond work: Active Disks ['01], Diamond ['04]
Backup tasks similar to Eager Scheduling in the Charlotte system: Charlotte ['96]
Dynamic load balancing solves a similar problem to River's distributed queues: River ['99]