Data-Intensive Computing with MapReduce

Data-Intensive Computing with MapReduce
Session 2: Hadoop Nuts and Bolts
Jimmy Lin, University of Maryland
Thursday, January 31, 2013

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. See http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details.

Source: Wikipedia (The Scream)

Source: Wikipedia (Japanese rock garden)

Source: Wikipedia (Keychain)

How will I actually learn Hadoop?
- This class session
- Hadoop: The Definitive Guide
- RTFM
- RTFC(!)

Materials in the course
- The course homepage
- Hadoop: The Definitive Guide
- Data-Intensive Text Processing with MapReduce
- Cloud9
- Take advantage of GitHub! Clone, branch, send pull requests.

Source: Wikipedia (Mahout)

Basic Hadoop API*

Mapper
- void setup(Mapper.Context context): called once at the beginning of the task
- void map(K key, V value, Mapper.Context context): called once for each key/value pair in the input split
- void cleanup(Mapper.Context context): called once at the end of the task

Reducer/Combiner
- void setup(Reducer.Context context): called once at the start of the task
- void reduce(K key, Iterable<V> values, Reducer.Context context): called once for each key
- void cleanup(Reducer.Context context): called once at the end of the task

*Note that there are two versions of the API! (A skeleton showing these lifecycle methods follows.)
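A minimal sketch of how these lifecycle methods fit together in the new (org.apache.hadoop.mapreduce) API. The class name, the per-task counter, and the emitted key are made up for illustration; they are not part of the course code.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SkeletonMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
  private long recordsSeen;  // per-task state, initialized in setup()

  @Override
  protected void setup(Context context) throws IOException, InterruptedException {
    recordsSeen = 0;  // runs once, before any map() call
  }

  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    recordsSeen++;  // runs once per key/value pair in the input split
  }

  @Override
  protected void cleanup(Context context) throws IOException, InterruptedException {
    // runs once, after the last map() call: flush any per-task state
    context.write(new Text("records"), new LongWritable(recordsSeen));
  }
}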

Basic Hadoop API*

Partitioner
- int getPartition(K key, V value, int numPartitions): get the partition number given the total number of partitions

Job
- Represents a packaged Hadoop job for submission to the cluster
- Need to specify input and output paths
- Need to specify input and output formats
- Need to specify mapper, reducer, combiner, partitioner classes
- Need to specify intermediate/final key/value classes
- Need to specify number of reducers (but not mappers, why?)
- Don't depend on defaults! (A driver sketch follows.)

*Note that there are two versions of the API!
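A sketch of what such a driver might look like, assuming the Hadoop 2.x-style Job.getInstance factory and the MyMapper/MyReducer classes from the Word Count example later in this deck (in the actual course code they are nested inside the driver class); the paths and reducer count are placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCountDriver.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));    // input path
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output path

    job.setMapperClass(MyMapper.class);
    job.setCombinerClass(MyReducer.class);  // reusing the reducer as combiner works here because summing is associative
    job.setReducerClass(MyReducer.class);

    job.setMapOutputKeyClass(Text.class);        // intermediate key/value classes
    job.setMapOutputValueClass(IntWritable.class);
    job.setOutputKeyClass(Text.class);           // final key/value classes
    job.setOutputValueClass(IntWritable.class);

    job.setNumReduceTasks(1);  // number of reducers; the number of mappers follows from the input splits

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}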

A tale of two packages
- org.apache.hadoop.mapreduce (the new API)
- org.apache.hadoop.mapred (the old API)
Source: Wikipedia (Budapest)

Data Types in Hadoop: Keys and Values
- Writable: defines a de/serialization protocol. Every data type in Hadoop is a Writable.
- WritableComparable: defines a sort order. All keys must be of this type (but not values).
- IntWritable, LongWritable, Text, ...: concrete classes for different data types.
- SequenceFile: a binary encoding of a sequence of key/value pairs.

"Hello World": Word Count

Map(String docid, String text):
  for each word w in text:
    Emit(w, 1);

Reduce(String term, Iterator<Int> values):
  int sum = 0;
  for each v in values:
    sum += v;
  Emit(term, sum);

"Hello World": Word Count

private static class MyMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
  private final static IntWritable ONE = new IntWritable(1);
  private final static Text WORD = new Text();

  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    String line = ((Text) value).toString();
    StringTokenizer itr = new StringTokenizer(line);
    while (itr.hasMoreTokens()) {
      WORD.set(itr.nextToken());
      context.write(WORD, ONE);
    }
  }
}

"Hello World": Word Count

private static class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  private final static IntWritable SUM = new IntWritable();

  @Override
  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    Iterator<IntWritable> iter = values.iterator();
    int sum = 0;
    while (iter.hasNext()) {
      sum += iter.next().get();
    }
    SUM.set(sum);
    context.write(key, SUM);
  }
}

Three Gotchas
- Avoid object creation at all costs: reuse Writable objects and change the payload.
- The execution framework reuses the value object in the reducer, so copy what you need before buffering it (see the sketch below).
- Passing parameters via class statics doesn't work: mappers and reducers run in their own JVMs.
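A minimal illustration of the value-reuse gotcha (a hypothetical reducer, not from the course code): inside reduce(), Hadoop hands back the same IntWritable instance on every iteration, so buffering the objects themselves would leave you with a list of references to one reused object.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class BufferingReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  @Override
  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    // Wrong: List<IntWritable> kept; kept.add(v); -- every entry would point to the same reused object.
    List<Integer> buffered = new ArrayList<Integer>();
    for (IntWritable v : values) {
      buffered.add(v.get());  // copy the primitive payload out of the reused Writable
    }
    // ... use 'buffered' safely here ...
    context.write(key, new IntWritable(buffered.size()));
  }
}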

Getting Data to Mappers and Reducers

Configuration parameters
- Set directly on the Job's Configuration object (see the sketch below)

Side data
- DistributedCache
- Mappers/reducers read from HDFS in the setup method
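A sketch of the configuration-parameter route (the property name my.threshold and the mapper are made up for illustration): set the value on the Configuration before creating the Job, then read it back in setup().

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;

public class ParameterPassing {
  public static class ThresholdMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    private int threshold;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
      // read the parameter back out of the job configuration
      threshold = context.getConfiguration().getInt("my.threshold", 10);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setInt("my.threshold", 42);  // set before the Job is created
    Job job = Job.getInstance(conf, "parameter passing");
    job.setMapperClass(ThresholdMapper.class);
    // ... paths, formats, reducer, etc., as in the driver sketch above ...
  }
}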

Complex Data Types in Hadoop

How do you implement complex data types?

The easiest way:
- Encode them as Text, e.g., (a, b) = "a:b"
- Use regular expressions to parse and extract data
- Works, but pretty hack-ish

The hard way:
- Define a custom implementation of Writable(Comparable); see the sketch below
- Must implement: readFields, write, (compareTo)
- Computationally efficient, but slow for rapid prototyping
- Implement the WritableComparator hook for performance

Somewhere in the middle:
- Cloud9 offers JSON support and lots of useful Hadoop types
- Quick tour...
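A minimal sketch of the hard way for a pair-of-ints type. The class is illustrative (Cloud9 ships similar types); it is not the course implementation.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

public class PairOfInts implements WritableComparable<PairOfInts> {
  private int left;
  private int right;

  public PairOfInts() {}  // Hadoop instantiates Writables via the no-arg constructor

  public void set(int left, int right) {
    this.left = left;
    this.right = right;
  }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeInt(left);   // serialization: field order defines the wire format
    out.writeInt(right);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    left = in.readInt();  // deserialization must mirror write()
    right = in.readInt();
  }

  @Override
  public int compareTo(PairOfInts other) {  // sort order: by left, then right
    if (left != other.left) return left < other.left ? -1 : 1;
    if (right != other.right) return right < other.right ? -1 : 1;
    return 0;
  }

  // In practice, also override hashCode()/equals() so the default HashPartitioner behaves sensibly.
}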

Basic Cluster Components*

One of each:
- Namenode (NN): master node for HDFS
- Jobtracker (JT): master node for job submission

Set of each per slave machine:
- Tasktracker (TT): contains multiple task slots
- Datanode (DN): serves HDFS data blocks

*Not quite: leaving aside YARN for now

Putting everything together
[Cluster diagram: the namenode runs the namenode daemon; the job submission node runs the jobtracker; each slave node runs a tasktracker and a datanode daemon on top of its local Linux file system.]

Anatomy of a Job

A MapReduce program in Hadoop = a Hadoop job
- Jobs are divided into map and reduce tasks
- An instance of a running task is called a task attempt (and occupies a slot)
- Multiple jobs can be composed into a workflow

Job submission:
- The client (i.e., the driver program) creates a job, configures it, and submits it to the jobtracker
- That's it! The Hadoop cluster takes over

Anatomy of a Job

Behind the scenes:
- Input splits are computed (on the client end)
- Job data (jar, configuration XML) are sent to the JobTracker
- The JobTracker puts the job data in a shared location and enqueues tasks
- TaskTrackers poll for tasks
- Off to the races!

[Data flow diagram: the InputFormat divides the input files into InputSplits; each InputSplit is read by a RecordReader, which feeds key/value pairs to a Mapper; each Mapper produces a set of Intermediates. Source: redrawn from a slide by Cloudera, cc-licensed.]

[Diagram detail: on the client, the input records are grouped into InputSplits; one RecordReader per split delivers records to its Mapper.]

[Diagram: each Mapper's Intermediates pass through a Partitioner, which assigns every intermediate key/value pair to one of the Reducers (combiners omitted here). Source: redrawn from a slide by Cloudera, cc-licensed.]

[Diagram: each Reducer writes its output through the OutputFormat's RecordWriter to its own output file. Source: redrawn from a slide by Cloudera, cc-licensed.]

Input and Output

InputFormat:
- TextInputFormat
- KeyValueTextInputFormat
- SequenceFileInputFormat
- ...

OutputFormat:
- TextOutputFormat
- SequenceFileOutputFormat
- ...

(A fragment showing how formats are declared follows.)
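Formats are declared on the Job. A hedged fragment, assuming the same job object as in the driver sketch earlier and picking SequenceFile output purely for illustration:

import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

// ... inside the driver, after the Job has been created ...
job.setInputFormatClass(TextInputFormat.class);            // one record per line: (byte offset, line text)
job.setOutputFormatClass(SequenceFileOutputFormat.class);  // binary key/value output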

Shuffle and Sort in Hadoop

Probably the most complex aspect of MapReduce!

Map side:
- Map outputs are buffered in memory in a circular buffer
- When the buffer reaches a threshold, its contents are spilled to disk
- Spills are merged into a single, partitioned file (sorted within each partition); the combiner runs during the merges

Reduce side:
- First, map outputs are copied over to the reducer machine
- The sort is a multi-pass merge of map outputs (happens in memory and on disk); the combiner runs during the merges
- The final merge pass goes directly into the reducer

(The buffer and spill threshold are configurable; see the fragment below.)
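A hedged fragment using the Hadoop 1.x-era property names for the map-side buffer (io.sort.mb and io.sort.spill.percent; Hadoop 2.x renames these under mapreduce.*); treat the exact names and values as assumptions to check against your cluster's version.

import org.apache.hadoop.conf.Configuration;

// ... in the driver, before creating the Job ...
Configuration conf = new Configuration();
conf.setInt("io.sort.mb", 200);                 // size (MB) of the map-side circular buffer
conf.setFloat("io.sort.spill.percent", 0.80f);  // fill fraction that triggers a spill to disk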

[Shuffle and sort diagram: on the map side, records accumulate in the in-memory circular buffer and are spilled to disk (optionally through the Combiner); the spills are merged into partitioned intermediate files on disk. Each Reducer copies its partition from every mapper, merges the copies (again optionally through the Combiner), and feeds the merged stream into reduce. Other mappers and other reducers take part in the same exchange.]

Hadoop Workflow (between you and the Hadoop cluster)
1. Load data into HDFS
2. Develop code locally
3. Submit the MapReduce job (3a. go back to Step 2)
4. Retrieve data from HDFS

Recommended Workflow

Here's how I work:
- Develop code in Eclipse on the host machine
- Build the distribution on the host machine
- Check out a copy of the code on the VM
- Copy (i.e., scp) jars over to the VM (in the same directory structure)
- Run the job on the VM
- Iterate
- Commit code on the host machine and push
- Pull from inside the VM, verify

Avoid using the UI of the VM:
- ssh into the VM directly

Actually Running the Job

Set $HADOOP_CLASSPATH as needed, then:

hadoop jar MYJAR.jar -D k1=v1 ... -libjars foo.jar,bar.jar my.class.to.run arg1 arg2 arg3

Debugging Hadoop

First, take a deep breath.
- Start small, start locally
- Build incrementally

Code Execution Environments

Different ways to run code:
- Plain Java
- Local (standalone) mode
- Pseudo-distributed mode
- Fully-distributed mode

Learn what's good for what!

Hadoop Debugging Strategies

Good ol' System.out.println
- Learn to use the webapp to access logs
- Logging is preferred over System.out.println
- Be careful how much you log!

Fail on success
- Throw RuntimeExceptions and capture state

Programming is still programming
- Use Hadoop as the glue
- Implement core functionality outside mappers and reducers
- Independently test (e.g., unit testing); see the sketch below
- Compose (tested) components in mappers and reducers
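A sketch of the "programming is still programming" advice, using Word Count tokenization as the example; the class and method names are made up for illustration. Keep the logic in a plain Java method with no Hadoop types, test it on its own, and have the mapper simply call it.

import java.util.Arrays;
import java.util.List;

public class Tokenizer {
  // Core functionality: no Hadoop types anywhere, so it can be tested in isolation.
  public static List<String> tokenize(String line) {
    return Arrays.asList(line.toLowerCase().split("\\s+"));
  }

  // Stand-in for a unit test (e.g., a JUnit case) exercising the logic without a cluster.
  public static void main(String[] args) {
    assert tokenize("Hello Hadoop world").equals(Arrays.asList("hello", "hadoop", "world"));
    System.out.println("tokenize() behaves as expected");
  }
}

The mapper then shrinks to a thin wrapper: tokenize the incoming Text value and emit (word, 1) for each token.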

Questions? Assignment 2 due in two weeks Source: Wikipedia (Japanese rock garden)