
Data-Intensive Information Processing Applications
Session #1: Introduction to MapReduce
Jordan Boyd-Graber, University of Maryland
Thursday, February 3, 2011

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States license. See http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details.

What is this course about?
- Data-intensive information processing
- Large-data ("web-scale") problems
- Focus on applications
- MapReduce and beyond
  - HBase
  - Hive
  - Pig
  - (and possibly more)

What is MapReduce?
- A programming model for expressing distributed computations at massive scale
- An execution framework for organizing and performing such computations
- An open-source implementation called Hadoop

Why large data?

[Photo source: Wikipedia (Everest)]

How much data?
- Google processes 20 PB a day (2008)
- The Wayback Machine has 3 PB + 100 TB/month (3/2009)
- Facebook has 2.5 PB of user data + 15 TB/day (4/2009)
- eBay has 6.5 PB of user data + 50 TB/day (5/2009)
- CERN's LHC will generate 15 PB a year (??)

"640K ought to be enough for anybody."

[Photos: Maximilien Brice, CERN]

No data like more data!
- s/knowledge/data/g;
- How do we get here if we're not Google?
(Banko and Brill, ACL 2001; Brants et al., EMNLP 2007)

What to do with more data?
- Answering factoid questions
  - Pattern matching on the web: "Who shot Abraham Lincoln?" becomes a search for "X shot Abraham Lincoln"
  - Works amazingly well
  (Brill et al., TREC 2001; Lin, ACM TOIS 2007)
- Learning relations
  - Start with seed instances: "Wolfgang Amadeus Mozart (1756-1791)", "Einstein was born in 1879"
  - Search for patterns on the web: "PERSON (DATE", "PERSON was born in DATE"
  - Use the patterns to find more instances: Birthday-of(Mozart, 1756), Birthday-of(Einstein, 1879)
  (Agichtein and Gravano, DL 2000; Ravichandran and Hovy, ACL 2002; ...)

What is cloud computing?

The best thing since sliced bread?
- Before clouds:
  - Grids
  - Vector supercomputers
  - ...
- Cloud computing means many different things:
  - Large-data processing
  - Rebranding of "Web 2.0"
  - Utility computing
  - Everything as a service

Rebranding of Web 2.0
- Rich, interactive web applications
  - "Clouds" refer to the servers that run them
  - AJAX as the de facto standard (for better or worse)
  - Examples: Facebook, YouTube, Gmail, ...
- "The network is the computer": take two
  - User data is stored in the clouds
  - Rise of the netbook, smartphones, etc.
  - Browser is the OS

[Photo source: Wikipedia (Electricity meter)]

Utility Computing
- What?
  - Computing resources as a metered service ("pay as you go")
  - Ability to dynamically provision virtual machines
- Why?
  - Cost: capital vs. operating expenses
  - Scalability: "infinite" capacity
  - Elasticity: scale up or down on demand
- Does it make sense?
  - Benefits to cloud users
  - Business case for cloud providers

"I think there is a world market for about five computers."

Enabling Technology: Virtualization
[Diagram: a traditional stack runs applications on a single operating system on hardware; a virtualized stack runs multiple guest operating systems, each with its own applications, on a hypervisor on the same hardware]

Everything as a Service
- Utility computing = Infrastructure as a Service (IaaS)
  - Why buy machines when you can rent cycles?
  - Examples: Amazon's EC2, Rackspace
- Platform as a Service (PaaS)
  - Give me a nice API and take care of the maintenance, upgrades, ...
  - Example: Google App Engine
- Software as a Service (SaaS)
  - Just run it for me!
  - Examples: Gmail, Salesforce

Who cares?
- Ready-made large-data problems
  - Lots of user-generated content
  - Even more user behavior data
  - Examples: Facebook friend suggestions, Google ad placement, Netflix movie suggestions
  - Business intelligence: gather everything in a data warehouse and run analytics to generate insight
- Utility computing
  - Provision Hadoop clusters on demand in the cloud
  - Lower barrier to entry for tackling large-data problems
  - Commoditization and democratization of large-data capabilities

Course Administrivia

Course Pre-requisites
- Strong Java programming
  - But this course is not about programming: we'll expect you to pick up Hadoop (quickly) along the way
  - Focus on "thinking at scale" and algorithm design
- Solid knowledge of
  - Probability and statistics
  - Computer architecture
- No previous experience necessary in
  - MapReduce
  - Parallel and distributed programming
- If you're not in INFM, no problem (e-mail me)
- Audits: must do homework, no exams, project optional

What's in store
- Time and effort!
- New ways of thinking about computing
- Resources outside the class
- Access to cool resources
- Learning a hot, in-demand skill
- Interesting, big problems
- Uncertainty, unpredictability, etc. that come with bleeding-edge software

Course components
- Textbooks
- Components of the final grade:
  - Assignments
  - Midterm and final exams
  - Final project (of your choice, in groups of ~3)
  - Class participation
- Late policy
  - Everybody gets four free late days
  - This covers traditional excuses ("Too busy", "It took longer than I thought it would take", "It was harder than I initially thought", "My dog ate my homework") and modern variants thereof

Cloud Resources
- Hadoop on your local machine
- Hadoop in a virtual machine on your local machine
- Hadoop on the Google/IBM cluster

Important Aside
- Usage agreement for the Google/IBM cluster
- Stay tuned for more details over email

[Photo source: Wikipedia (Japanese rock garden)]

Hadoop Zen
- This is bleeding-edge technology (= immature!)
  - Bugs, undocumented features, inexplicable behavior
  - Data loss(!)
- Don't get frustrated (take a deep breath)
  - Those W$*#T@F! moments
- Be patient
  - We will inevitably encounter such situations along the way
- Be flexible
  - We will have to be creative in workarounds
- Be constructive
  - Tell me how I can make everyone's experience better

How do we scale up?

[Photo source: Wikipedia (IBM Roadrunner)]

Divide and Conquer
[Diagram: the work is partitioned into units w1, w2, w3; each unit goes to a worker, which produces a partial result r1, r2, r3; the partial results are combined into the final result]

Parallelization Challenges
- How do we assign work units to workers?
- What if we have more work units than workers?
- What if workers need to share partial results?
- How do we aggregate partial results?
- How do we know all the workers have finished?
- What if workers die?

What is the common theme of all of these problems?

Common Theme?
- Parallelization problems arise from:
  - Communication between workers (e.g., to exchange state)
  - Access to shared resources (e.g., data)
- Thus, we need a synchronization mechanism

[Photo source: Ricardo Guimarães Herrmann]

Managing Multiple Workers
- Difficult because
  - We don't know the order in which workers run
  - We don't know when workers interrupt each other
  - We don't know the order in which workers access shared data
- Thus, we need:
  - Semaphores (lock, unlock)
  - Condition variables (wait, notify, broadcast)
  - Barriers
- Still, lots of problems:
  - Deadlock, livelock, race conditions...
  - Dining philosophers, sleeping barbers, cigarette smokers...
- Moral of the story: be careful!

Current Tools
- Programming models
  - Shared memory (pthreads)
  - Message passing (MPI)
- Design patterns
  - Master-slaves
  - Producer-consumer flows
  - Shared work queues (a small sketch follows below)

[Diagram: in the shared-memory model, processes P1-P5 all access one memory; in the message-passing model, processes P1-P5 exchange messages directly; the design-pattern sketches show a master coordinating slaves, and producers feeding consumers through a shared work queue]
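To make the producer-consumer / shared-work-queue pattern concrete, here is a minimal Java sketch (not from the original slides). It relies only on the standard library; the class name WorkQueueDemo and the poison-pill shutdown convention are illustrative choices rather than course material.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class WorkQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Shared, bounded work queue: the synchronization lives inside BlockingQueue.
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 20; i++) {
                    queue.put(i);          // blocks when the queue is full
                }
                queue.put(-1);             // "poison pill": signals the consumer to stop
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    int item = queue.take();   // blocks when the queue is empty
                    if (item < 0) break;       // stop on the poison pill
                    System.out.println("processed work unit " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}

Even in this tiny example, the coordination logic (bounded queue, shutdown signal, interruption handling) is as large as the "real" work, which is exactly the burden MapReduce tries to lift from the programmer.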

Where the rubber meets the road
- Concurrency is difficult to reason about
- Concurrency is even more difficult to reason about
  - At the scale of datacenters (even across datacenters)
  - In the presence of failures
  - In terms of multiple interacting services
- Not to mention debugging!
- The reality:
  - Lots of one-off solutions, custom code
  - Write your own dedicated library, then program with it
  - Burden on the programmer to explicitly manage everything

[Photo source: Wikipedia (Flat Tire)]

[Image source: MIT OpenCourseWare]

[Image source: Harper's (Feb 2008)]

What's the point?
- It's all about the right level of abstraction
  - The von Neumann architecture has served us well, but it is no longer appropriate for the multi-core/cluster environment
- Hide system-level details from the developers
  - No more race conditions, lock contention, etc.
- Separating the what from the how
  - The developer specifies the computation that needs to be performed
  - The execution framework ("runtime") handles the actual execution

"The datacenter is the computer!"

Big Ideas
- Scale "out", not "up"
  - Limits of SMP and large shared-memory machines
- Move processing to the data
  - Even the best clusters have limited bandwidth
- Process data sequentially, avoid random access
  - Seeks are expensive, disk throughput is reasonable
- Seamless scalability
  - From the mythical man-month to the tradable machine-hour

MapReduce

Typical Large-Data Problem
- Iterate over a large number of records
- Map: extract something of interest from each
- Shuffle and sort intermediate results
- Reduce: aggregate intermediate results
- Generate final output

Key idea: provide a functional abstraction for these two operations. (Dean and Ghemawat, OSDI 2004)

Roots in Functional Programming
[Diagram: "map" applies a function f to each element of a list independently; "fold" combines the resulting values with a binary function g]
(A small Java illustration follows below.)
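As a small illustration of these functional-programming roots (not part of the original slides), the following Java sketch uses the standard streams library: map applies a function f to every element independently, and reduce plays the role of fold, combining the mapped values with a binary operator g and an identity element.

import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class MapFoldDemo {
    public static void main(String[] args) {
        List<Integer> records = List.of(1, 2, 3, 4, 5);

        // "Map": apply f to each element independently -- trivially parallelizable.
        Function<Integer, Integer> f = x -> x * x;
        List<Integer> mapped = records.stream().map(f).collect(Collectors.toList());

        // "Fold": combine the mapped values with a binary operator g and an identity.
        int folded = mapped.stream().reduce(0, Integer::sum);

        System.out.println(mapped + " -> " + folded);   // [1, 4, 9, 16, 25] -> 55
    }
}

MapReduce generalizes this picture: map runs over records distributed across machines, and the fold step is split into per-key reduces after a shuffle-and-sort.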

Roots in Functional Programming (continued)

MapReduce
- Programmers specify two functions:
    map (k, v) -> <k', v'>*
    reduce (k', v') -> <k', v'>*
  - All values with the same key are sent to the same reducer
- The execution framework handles everything else

[Diagram: input pairs (k1,v1) ... (k6,v6) flow through four mappers, producing intermediate pairs such as (a,1), (b,2), (c,3), (c,6), (a,5), (c,2), (b,7), (c,8); "Shuffle and Sort" aggregates values by key (a -> [1,5], b -> [2,7], c -> [2,3,6,8]); three reducers then produce the outputs (r1,s1), (r2,s2), (r3,s3)]

MapReduce
- Programmers specify two functions:
    map (k, v) -> <k', v'>*
    reduce (k', v') -> <k', v'>*
  - All values with the same key are sent to the same reducer
- The execution framework handles everything else

What's "everything else"?

MapReduce Runtime
- Handles scheduling
  - Assigns workers to map and reduce tasks
- Handles data distribution
  - Moves processes to data
- Handles synchronization
  - Gathers, sorts, and shuffles intermediate data
- Handles errors and faults
  - Detects worker failures and restarts
- Everything happens on top of a distributed file system (later)

MapReduce
- Programmers specify two functions:
    map (k, v) -> <k', v'>*
    reduce (k', v') -> <k', v'>*
  - All values with the same key are reduced together
- The execution framework handles everything else
- Not quite... usually, programmers also specify:
    partition (k', number of partitions) -> partition for k'
  - Often a simple hash of the key, e.g., hash(k') mod n
  - Divides up the key space for parallel reduce operations
    combine (k', v') -> <k', v'>*
  - "Mini-reducers" that run in memory after the map phase
  - Used as an optimization to reduce network traffic
  (A small partitioner sketch follows below.)
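As a concrete (hedged) illustration of the partitioner, here is a sketch in Hadoop's Java mapreduce API that mirrors the default hash(k') mod n behavior described above; the class name HashModPartitioner is an illustrative choice, not from the slides. A combiner, by contrast, is simply a reducer-like class registered on the job (in Hadoop, via job.setCombinerClass(...)).

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Assigns each intermediate key to a reduce partition: hash(k') mod n.
public class HashModPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        // Mask the sign bit so the result is non-negative, then take it modulo
        // the number of reducers; all values for a key go to the same reducer.
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}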

[Diagram: the same data flow as before, but each mapper's output first passes through a combiner (e.g., (c,3) and (c,6) from one mapper are pre-aggregated into (c,9)) and then a partitioner before "Shuffle and Sort" routes the intermediate pairs to the reducers]

Two more details
- Barrier between the map and reduce phases
  - But we can begin copying intermediate data earlier
- Keys arrive at each reducer in sorted order
  - No enforced ordering across reducers

"Hello World": Word Count

Map(String docid, String text):
    for each word w in text:
        Emit(w, 1)

Reduce(String term, Iterator<Int> values):
    int sum = 0
    for each v in values:
        sum += v
    Emit(term, sum)

(A runnable Hadoop version of this pseudocode appears below.)

"MapReduce" can refer to
- The programming model
- The execution framework (aka the "runtime")
- The specific implementation
Usage is usually clear from context!
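Below is a runnable Hadoop (Java, mapreduce API) version of the word-count pseudocode above, following the pattern of the standard Hadoop tutorial example. Treat it as a sketch: exact class names and job-setup details vary slightly across Hadoop versions, and it is not presented as the course's reference solution.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // map(docid, text) -> (word, 1) for every word in the input line
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);      // Emit(w, 1)
            }
        }
    }

    // reduce(word, [1, 1, ...]) -> (word, sum)
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);        // Emit(term, sum)
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // local aggregation before the shuffle
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}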

MapReduce Implementations
- Google has a proprietary implementation in C++
  - Bindings in Java, Python
- Hadoop is an open-source implementation in Java
  - Development led by Yahoo, used in production
  - Now an Apache project
  - Rapidly expanding software ecosystem
- Lots of custom research implementations
  - For GPUs, Cell processors, etc.

[Diagram, adapted from (Dean and Ghemawat, OSDI 2004): (1) the user program submits a job to the master; (2) the master schedules map and reduce tasks on workers; (3) map workers read their input splits; (4) they write intermediate files to local disk; (5) reduce workers remotely read the intermediate data; (6) they write the final output files]

How do we get data to the workers?
[Diagram: compute nodes pulling data over the network from NAS/SAN storage]
What's the problem here?

Distributed File System
- Don't move data to workers... move workers to the data!
  - Store data on the local disks of nodes in the cluster
  - Start up the workers on the node that has the data local
- Why?
  - Not enough RAM to hold all the data in memory
  - Disk access is slow, but disk throughput is reasonable
- A distributed file system is the answer
  - GFS (Google File System) for Google's MapReduce
  - HDFS (Hadoop Distributed File System) for Hadoop

GFS: Assumptions
- Commodity hardware over "exotic" hardware
  - Scale out, not up
- High component failure rates
  - Inexpensive commodity components fail all the time
- "Modest" number of huge files
  - Multi-gigabyte files are common, if not encouraged
- Files are write-once, mostly appended to
  - Perhaps concurrently
- Large streaming reads over random access
  - High sustained throughput over low latency
(GFS slides adapted from material by Ghemawat et al., SOSP 2003)

GFS: Design Decisions
- Files stored as chunks
  - Fixed size (64 MB): avoid little files!
- Reliability through replication
  - Each chunk replicated across 3+ chunkservers
- Single master to coordinate access, keep metadata
  - Simple centralized management
- No data caching
  - Little benefit due to large datasets, streaming reads
- Simplify the API
  - Push some of the issues onto the client (e.g., data layout)

HDFS = GFS clone (same basic ideas)

From GFS to HDFS
- Terminology differences:
  - GFS master = Hadoop namenode
  - GFS chunkservers = Hadoop datanodes
- Functional differences:
  - No file appends in HDFS (planned feature)
  - HDFS performance is (likely) slower
For the most part, we'll use the Hadoop terminology.

HDFS Architecture
[Diagram, adapted from (Ghemawat et al., SOSP 2003): the application's HDFS client sends (file name, block id) requests to the namenode, which maps the file namespace (e.g., /foo/bar -> block 3df2) and replies with (block id, block location); the client then requests (block id, byte range) directly from the datanodes, which store blocks on their local Linux file systems and return block data; the namenode sends instructions to the datanodes and receives datanode state]
(A small client-side read sketch follows below.)
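To illustrate this interaction from the client's side, here is a small sketch using the HDFS Java client API (FileSystem, FSDataInputStream); it assumes fs.defaultFS in the configuration points at the cluster's namenode, and the class name HdfsCat is an illustrative choice. The client talks to the namenode only for metadata; the file bytes stream directly from the datanodes.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCat {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // reads fs.defaultFS, e.g. an hdfs:// URI
        FileSystem fs = FileSystem.get(conf);       // client handle; metadata calls go to the namenode
        Path path = new Path(args[0]);              // e.g. /foo/bar

        // open() asks the namenode which datanodes hold each block of the file...
        try (FSDataInputStream in = fs.open(path);
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(in, StandardCharsets.UTF_8))) {
            String line;
            // ...but the block contents are read directly from the datanodes.
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}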

Namenode Responsibilities
- Managing the file system namespace:
  - Holds file/directory structure, metadata, file-to-block mapping, access permissions, etc.
- Coordinating file operations:
  - Directs clients to datanodes for reads and writes
  - No data is moved through the namenode
- Maintaining overall health:
  - Periodic communication with the datanodes
  - Block re-replication and rebalancing
  - Garbage collection

Putting everything together
[Diagram: the namenode daemon runs on the namenode, the jobtracker runs on the job submission node, and each slave node runs a tasktracker and a datanode daemon on top of its local Linux file system]

Recap
- Why large data?
- Cloud computing and MapReduce
- Large-data processing: big ideas
- What is MapReduce?
- Importance of the underlying distributed file system

Questions?

[Photo credit: Jimmy Lin]