Introduction to Data Management CSE 344. Lecture 24: MapReduce. CSE 344 - Winter 2015
HW8: MapReduce (Hadoop) with a declarative language (Pig). Due next Thursday evening. Will send out reimbursement codes later.
Parallel Data Processing @ 1990
Parallel Join. Data: R(K1, A, B), S(K2, B, C). Query: R(K1,A,B) ⋈ S(K2,B,C). Initially, both R and S are horizontally partitioned on K1 and K2: R1, S1 | R2, S2 | ... | RP, SP. Shuffle R on R.B and S on S.B; then each server computes the join locally.
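The shuffle-then-join-locally idea can be sketched as a small Python simulation. The server count, sample data, and function names here are illustrative, not from the slides; the point is that matching tuples of R and S land on the same server because both relations are re-partitioned on B with the same hash function.

```python
P = 2  # number of servers (illustrative)

def shuffle(tuples, b_index):
    """Send each tuple to server hash(t[b_index]) % P."""
    partitions = [[] for _ in range(P)]
    for t in tuples:
        partitions[hash(t[b_index]) % P].append(t)
    return partitions

def local_join(r_part, s_part):
    """Hash join on B within one server."""
    index = {}
    for (k1, a, b) in r_part:
        index.setdefault(b, []).append((k1, a))
    out = []
    for (k2, b, c) in s_part:
        for (k1, a) in index.get(b, []):
            out.append((k1, a, b, k2, c))
    return out

# R(K1, A, B) and S(K2, B, C), with made-up values
R = [(1, 'a', 2), (2, 'b', 5), (3, 'c', 2), (4, 'd', 2)]
S = [(11, 5, 'x'), (12, 5, 'y'), (21, 2, 'z'), (22, 5, 'w')]

r_parts = shuffle(R, 2)   # R.B is attribute index 2
s_parts = shuffle(S, 1)   # S.B is attribute index 1
result = [t for p in range(P) for t in local_join(r_parts[p], s_parts[p])]
```

Since both shuffles use the same hash on B, no tuple pair with equal B values is ever split across servers, so the union of the local joins equals the global join.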
Data: R(K1, A, B), S(K2, B, C). Query: R(K1,A,B) ⋈ S(K2,B,C).
Initial partition (showing K1/B and K2/B columns):
  M1: R1 = {(1, 2), (2, 5)}, S1 = {(11, 5), (12, 5)}
  M2: R2 = {(3, 2), (4, 2)}, S2 = {(21, 2), (22, 5)}
After shuffle on B, local join:
  M1 (B = 2): R1 = {(1, 2), (3, 2), (4, 2)}, S1 = {(21, 2)}
  M2 (B = 5): R2 = {(2, 5)}, S2 = {(11, 5), (12, 5), (22, 5)}
Parallel Data Processing @ 2000
Optional Reading: Parallel Data Processing at Massive Scale (MapReduce). Chapter 2 (Sections 1, 2, 3 only) of Mining of Massive Datasets, by Rajaraman and Ullman. http://i.stanford.edu/~ullman/mmds.html
Distributed File System (DFS). For very large files: TBs, PBs. Each file is partitioned into chunks, typically 64MB. Each chunk is replicated several times (typically 3), on different racks, for fault tolerance. Implementations: Google's DFS: GFS, proprietary. Hadoop's DFS: HDFS, open source.
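A quick back-of-the-envelope sketch of what chunking and replication imply for storage; the 1 TB file size is an illustrative assumption, not from the slides.

```python
CHUNK_MB = 64   # typical chunk size
REPLICAS = 3    # typical replication factor

def chunks_and_storage(file_gb):
    """Return (number of chunks, total GB stored across the cluster)."""
    chunks = -(-file_gb * 1024 // CHUNK_MB)          # ceiling division
    return chunks, chunks * CHUNK_MB * REPLICAS / 1024

n, gb = chunks_and_storage(1024)   # a 1 TB file
# 16,384 chunks; 3 TB of raw storage consumed cluster-wide
```

The 3x blow-up in raw storage is the price of tolerating disk, machine, and whole-rack failures.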
MapReduce. Google: paper published in 2004. Free variant: Hadoop. MapReduce = high-level programming model and implementation for large-scale parallel data processing.
Typical Problems Solved by MR: Read a lot of data. Map: extract something you care about from each record. Shuffle and Sort. Reduce: aggregate, summarize, filter, transform. Write the results. The paradigm stays the same; change the map and reduce functions for different problems. (slide source: Jeff Dean)
Data Model: Files! A file = a bag of (key, value) pairs. A MapReduce program: Input: a bag of (inputkey, value) pairs. Output: a bag of (outputkey, value) pairs.
Step 1: the MAP Phase. User provides the MAP function: Input: (input key, value). Output: bag of (intermediate key, value). The system applies the map function in parallel to all (input key, value) pairs in the input file.
Step 2: the REDUCE Phase. User provides the REDUCE function: Input: (intermediate key, bag of values). Output: bag of output values. The system groups all pairs with the same intermediate key, and passes the bag of values to the REDUCE function.
Example: counting the number of occurrences of each word in a large collection of documents. For each document: the key = document id (did), the value = set of words (word).

map(String key, String value):
  // key: document name
  // value: document contents
  for each word w in value:
    EmitIntermediate(w, "1");

reduce(String key, Iterator values):
  // key: a word
  // values: a list of counts
  int result = 0;
  for each v in values:
    result += ParseInt(v);
  Emit(AsString(result));
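The pseudocode above can be run as a small Python simulation, with the system's shuffle step (grouping by intermediate key) played by a dictionary; the helper names are mine, not part of any MapReduce API.

```python
from collections import defaultdict

def map_fn(did, contents):
    # Emit (word, 1) for every word in the document's contents
    for w in contents.split():
        yield (w, 1)

def reduce_fn(word, values):
    # Sum the counts for one word
    return (word, sum(values))

def map_reduce(docs):
    groups = defaultdict(list)           # shuffle: group by intermediate key
    for did, contents in docs.items():
        for k, v in map_fn(did, contents):
            groups[k].append(v)
    return dict(reduce_fn(k, vs) for k, vs in groups.items())

counts = map_reduce({"did1": "a b a", "did2": "b c"})
# counts == {"a": 2, "b": 2, "c": 1}
```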
MAP:     (did1,v1) → (w1,1), (w2,1), (w3,1), ...
         (did2,v2) → (w1,1), (w2,1), ...
         (did3,v3) → ...
Shuffle: (w1, (1,1,1,...,1)), (w2, (1,1,...)), (w3, (1,...)), ...
REDUCE:  (w1, 25), (w2, 77), (w3, 12), ...
Jobs vs. Tasks. A MapReduce Job: one single query, e.g., count the words in all docs. More complex queries may consist of multiple jobs. A Map Task, or a Reduce Task: a group of instantiations of the map- or reduce-function, which are scheduled on a single worker.
Workers. A worker is a process that executes one task at a time. Typically there is one worker per processor, hence 4 or 8 per node.
MAP Tasks:    (did1,v1) → (w1,1), (w2,1), (w3,1), ...
              (did2,v2) → (w1,1), (w2,1), ...
              (did3,v3) → ...
Shuffle:      (w1, (1,1,1,...,1)), (w2, (1,1,...)), (w3, (1,...)), ...
REDUCE Tasks: (w1, 25), (w2, 77), (w3, 12), ...
MapReduce Execution Details. Map Task: reads from the file system (GFS or HDFS); data not necessarily local. Intermediate data goes to local disk. (Shuffle.) Reduce Task: output written to disk, replicated in the cluster.
Implementation. There is one master node. Master partitions input file into M splits, by key. Master assigns workers (= servers) to the M map tasks, keeps track of their progress. Workers write their output to local disk, partitioned into R regions. Master assigns workers to the R reduce tasks. Reduce workers read regions from the map workers' local disks.
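The map-side partitioning can be sketched as follows; R = 3 and the helper names are illustrative, not Hadoop internals. Reducer r later reads region r from every map worker's local disk, so all values for a given key reach a single reducer.

```python
R = 3  # number of reduce tasks (illustrative)

def partition(map_output):
    """Split one map task's output into R regions by hashing the key."""
    regions = [[] for _ in range(R)]
    for key, value in map_output:
        regions[hash(key) % R].append((key, value))
    return regions

regions = partition([("w1", 1), ("w2", 1), ("w1", 1)])
# All pairs with the same key land in the same region, so a single
# reducer sees every value for that key.
```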
Interesting Implementation Details. Worker failure: Master pings workers periodically; if one is down, it reassigns the task to another worker.
Interesting Implementation Details. Backup tasks: Straggler = a machine that takes an unusually long time to complete one of the last tasks. E.g.: a bad disk forces frequent correctable errors (30 MB/s → 1 MB/s), or the cluster scheduler has scheduled other tasks on that machine. Stragglers are a main reason for slowdown. Solution: pre-emptive backup execution of the last few remaining in-progress tasks.
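A toy model of why backup tasks help (the numbers are invented for illustration): the job finishes only when its last task does, so one straggler dominates unless a backup copy on a healthy spare worker can finish first.

```python
def job_time(task_times, backup_speed=None):
    """A job finishes when its last task does. With backups, each
    still-running task is also launched on a spare worker that would
    take backup_speed; whichever copy finishes first wins."""
    if backup_speed is None:
        return max(task_times)
    return max(min(t, backup_speed) for t in task_times)

times = [10, 11, 12, 300]   # one straggler, e.g. a machine with a bad disk
no_backup = job_time(times)                     # 300: the straggler dominates
with_backup = job_time(times, backup_speed=15)  # 15: the backup copy wins
```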
Executing a Large MapReduce Job
Anatomy of a Query Execution. Running problem #4. 20 nodes = 1 master + 19 workers. Using PARALLEL 50.
March 2013. [JobTracker screenshot: Hadoop job (PigLatin:DefaultJobName). Status: Succeeded. Started Sat Mar 9 19:49:21 UTC 2013, finished in 3hrs, 43mins, 52sec. 798 map tasks (16 failed/killed attempts), 50 reduce tasks (8 failed/killed attempts). 1 black-listed TaskTracker.]
Some other time (March 2012). Let's see what happened.
1h 16min. [JobTracker screenshot: map 33.17% complete (15,816 tasks: 10,549 pending, 38 running, 5,229 complete); reduce 4.17% (50 tasks: 31 pending, 19 running). Map and reduce completion graphs shown.]
1h 16min. Only 19 reducers active, out of 50. Why? Copying by 19 reducers in parallel with mappers. When will the other 31 reducers be scheduled? [Same screenshot: map 33.17%, reduce 4.17% (31 pending, 19 running).]
1h 16min → 3h 50min. Only 19 reducers active, out of 50. Why? [JobTracker screenshots: job PigLatin:DefaultJobName, started Sun Mar 4 19:08:29 UTC 2012. At 1h 16min, map 33.17%, reduce 4.17% (31 pending, 19 running). At 3h 50min, map 100% complete (15,816 tasks; 18 failed/killed attempts), reduce 32.42% (still 31 pending, 19 running). The 19 running reducers copy map output in parallel with the mappers; the other 31 reducers have not yet been scheduled.]
3h 50min: all map tasks completed. Sorting, and the rest of Reduce, may proceed now. Speculative execution launched extra task attempts along the way. [Same screenshots, annotated: map 100%, reduce 32.42%; the 19 copying reducers can now sort and reduce, while the other 31 reducers still wait to be scheduled.]
3h 51min. [Screenshot: map 100% complete; reduce 37.72% (50 tasks: 19 pending, 22 running, 9 complete).]
3h 51min. Some of the first 19 reducers have finished; the next batch of reducers has started. [Screenshot: map 100%; reduce 37.72% (19 pending, 22 running, 9 complete).]
3h 51min → 3h 52min. [Screenshots one minute apart: reduce goes from 37.72% (9 complete) to 42.35% (50 tasks: 11 pending, 20 running, 19 complete); launched reduce tasks grow from 31 to 39. Some of the first 19 reducers have finished, and the next batch of 19 reducers has started.]
4h 18min. Several servers failed with fetch errors: their map tasks need to be rerun, and all reducers are waiting. [Screenshot: map back to 99.88% (15,816 tasks: 2,638 pending, 30 running, 13,148 complete; 3,337 failed/killed attempts); reduce 48.42% (15 pending, 16 running, 19 complete).]
4h 18min. Why did we lose some reducers? [Same screenshot: map 99.88% with reruns pending; reduce 48.42% (15 pending, 16 running, 19 complete).]
4h 18min → 7h 10min. [Screenshots: after the fetch errors at 4h 18min, map sits at 99.88% while all reducers wait. By 7h 10min the rerun mappers have finished (map 100%; 5,968 failed/killed attempts) and the reducers have resumed: reduce 94.15% (6 running, 44 complete). 3 black-listed TaskTrackers.]
7h 20min. Success! [Screenshot: Status: Succeeded. Started Sun Mar 4 19:08:29 UTC 2012, finished Mon Mar 5 02:28:39 UTC 2012; finished in 7hrs, 20mins, 10sec. 15,816 map tasks (5,968 failed/killed attempts), 50 reduce tasks (14 failed/killed attempts). 3 black-listed TaskTrackers.]
Parallel Data Processing @ 2010
Issues with MapReduce. Hides scheduling and parallelization details. However, very limited queries: difficult to write more complex queries; need multiple MapReduce jobs. Solution: declarative query language.
Declarative Languages on MR. Pig Latin (Yahoo!): new language, like Relational Algebra; open source. HiveQL (Facebook): SQL-like language; open source. SQL / Tenzing (Google): SQL on MR; proprietary.
Implementing Pig Latin. Over Hadoop! Parse query: everything between LOAD and STORE → one logical plan. Logical plan → graph of MapReduce ops. All statements between two (CO)GROUPs → one MapReduce job.
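The job-boundary rule can be sketched as follows; the operator names and the flat-list plan representation are illustrative, not Pig's actual logical-plan classes.

```python
def split_into_jobs(logical_plan):
    """Cut the logical plan into MapReduce jobs: each (CO)GROUP
    operator ends the current job, since grouping needs a shuffle."""
    jobs, current = [], []
    for op in logical_plan:
        current.append(op)
        if op in ("GROUP", "COGROUP"):
            jobs.append(current)
            current = []
    if current:
        jobs.append(current)
    return jobs

plan = ["LOAD", "FILTER", "GROUP", "FOREACH", "GROUP", "FOREACH", "STORE"]
jobs = split_into_jobs(plan)
# Three jobs: [LOAD, FILTER, GROUP], [FOREACH, GROUP], [FOREACH, STORE]
```

Intuitively, operators that only look at one tuple at a time (FILTER, FOREACH) can ride along in the map or reduce phase of an adjacent job, while each grouping forces a new shuffle and hence a new job.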
Implementation [Olston 2008]
Review: MapReduce. Data is typically a file in the Google File System (HDFS for Hadoop). The file system partitions the file into chunks; each chunk is replicated on k (typically 3) machines. Each machine can run a few map and reduce tasks simultaneously. Each map task consumes one chunk; you can adjust how much data goes into each map task using splits. The scheduler tries to schedule a map task where its input data is located. Map output is partitioned across reducers, and is also written locally to disk. The number of reduce tasks is configurable. The system shuffles data between map and reduce tasks; reducers sort-merge the data before consuming it.
MapReduce Phases. Local storage.