Introduction to MapReduce

Introduction to MapReduce Some slides borrowed from Jimmy Lin, Jeff Ullman, Jerome Simeon, and Jure Leskovec

Single-node architecture: data analysis, machine learning, and statistics all run on one machine (CPU, memory, disk).

How much data? Google processes 20 PB a day (2008). The Wayback Machine has 3 PB + 100 TB/month (3/2009). Facebook has 2.5 PB of user data + 15 TB/day (4/2009). eBay has 6.5 PB of user data + 50 TB/day (5/2009). CERN's LHC will generate 15 PB a year (??)

What to do with more data? "Machine learning algorithms really don't matter; all that matters is the amount of data you have" (Banko and Brill, 2001). More data → higher accuracy. More data → the accuracy of different algorithms converges.

What to do with more data? Answering factoid questions: pattern matching on the Web works amazingly well. "Who shot Abraham Lincoln?" → search for "X shot Abraham Lincoln" (Brill et al., TREC 2001; Lin, ACM TOIS 2007). Learning relations: start with seed instances, search for patterns on the Web, then use the patterns to find more instances. Wolfgang Amadeus Mozart (1756-1791), Einstein was born in 1879 → Birthday-of(Mozart, 1756), Birthday-of(Einstein, 1879), via patterns such as "PERSON (DATE" and "PERSON was born in DATE" (Agichtein and Gravano, DL 2000; Ravichandran and Hovy, ACL 2002).

Motivation: Google Example 20+ billion web pages x 20 KB = 400+ TB. One computer reads 30-35 MB/sec from disk: ~4 months to read the web, and ~1,000 hard drives just to store it. It takes even more to do something useful with the data! A standard architecture for such problems is emerging: a cluster of commodity Linux nodes connected by a commodity network (Ethernet).
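
A quick back-of-the-envelope check of these figures, as a small Python sketch (the 20 KB/page and 30-35 MB/s numbers are the slide's estimates, not measured values):

# Back-of-the-envelope check of the slide's figures (all numbers are the slide's estimates).
pages = 20e9                 # 20+ billion web pages
bytes_per_page = 20e3        # ~20 KB per page
corpus_bytes = pages * bytes_per_page
print(corpus_bytes / 1e12, "TB")                                   # ~400 TB

read_rate = 32.5e6           # ~30-35 MB/s from a single disk
seconds = corpus_bytes / read_rate
print(seconds / (30 * 24 * 3600), "months to read it on one machine")  # roughly 4-5 months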

Cluster Architecture Each rack contains 16-64 commodity nodes (CPU, memory, disk) connected by a rack switch: 1 Gbps between any pair of nodes in a rack, with a 2-10 Gbps backbone between racks. In 2011 it was guesstimated that Google had 1M machines, http://bit.ly/shh0ro

Divide and Conquer The work is partitioned into pieces w1, w2, w3; each piece is handled by a worker, producing partial results r1, r2, r3; the partial results are combined into the final result.

Parallelization Challenges How do we assign work units to workers? What if we have more work units than workers? What if workers need to share partial results? How do we aggregate partial results? How do we know all the workers have finished? What if workers die? What is the common theme of all of these problems?

Common Theme? Parallelization problems arise from: Communication between workers (e.g., to exchange state) Access to shared resources (e.g., data) Thus, we need a synchronization mechanism

Managing Multiple Workers Difficult because We don't know the order in which workers run We don't know when workers interrupt each other We don't know the order in which workers access shared data Thus, we need: Semaphores (lock, unlock) Condition variables (wait, notify, broadcast) Barriers Still, lots of problems: Deadlock, livelock, race conditions... Dining philosophers, sleeping barbers, cigarette smokers... Moral of the story: be careful!

Current Tools Programming models: shared memory (pthreads), message passing (MPI). Design patterns: master-slaves, producer-consumer flows, shared work queues.

Where the rubber meets the road Concurrency is difficult to reason about Concurrency is even more difficult to reason about At the scale of datacenters (even across datacenters) In the presence of failures In terms of multiple interacting services Not to mention debugging The reality: Lots of one-off solutions, custom code Write your own dedicated library, then program with it Burden on the programmer to explicitly manage everything

What's the point? It's all about the right level of abstraction The von Neumann architecture has served us well, but is no longer appropriate for the multi-core/cluster environment Hide system-level details from the developers No more race conditions, lock contention, etc. Separating the what from the how Developer specifies the computation that needs to be performed Execution framework ("runtime") handles actual execution The datacenter is the computer!

Cloud (Utility) Computing What? Computing resources as a metered service ("pay as you go") Ability to dynamically provision virtual machines Why? Cost: capital vs. operating expenses Scalability: infinite capacity Elasticity: scale up or down on demand "I think there is a world market for about five computers." Does it make sense? Benefits to cloud users Business case for cloud providers The MapReduce computational model abstracts many of the complexities

Big Ideas: Scale Out vs. Scale Up Scale up: small number of high-end servers Symmetric multi-processing (SMP) machines, large shared memory Not cost-effective: the cost of machines does not scale linearly, and no single SMP machine is big enough Scale out: a large number of commodity low-end servers is more effective for data-intensive applications 8 128-core machines vs. 128 8-core machines "A low-end server platform is about 4 times more cost efficient than a high-end shared memory platform from the same vendor" (Barroso and Hölzle, 2009)

Why commodity machines? [comparison figure omitted] Source: Barroso and Hölzle (2009); performance figures from late 2007

Big Ideas: Failures are Common Suppose a cluster is built using machines with a mean time between failures (MTBF) of 1000 days For a 10,000-server cluster, there are on average 10 failures per day! MapReduce implementations cope with failures: automatic task restarts
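
The failure arithmetic is just a ratio; a one-line sketch using the slide's numbers:

# MTBF of 1000 days per machine, 10,000 machines:
servers, mtbf_days = 10_000, 1_000
print(servers / mtbf_days, "expected machine failures per day")   # 10.0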

Big Ideas: Move Processing to Data Supercomputers often have processing nodes and storage nodes Computationally expensive tasks High-capacity interconnect to move data around Many data-intensive applications are not very processor-demanding Data movement leads to a bottleneck in the network! New idea: move processing to where the data reside In MapReduce, processors and storage are colocated Leverage locality

Big Ideas: Avoid Random Access Disk seek times are determined by mechanical factors Read heads can only move so fast and platters can only spin so rapidly

Big Ideas: Avoid Random Access Example: a 1 TB database containing 10^10 100-byte records Random access: each update takes ~30 ms (seek, read, write); updating 1% of the records takes ~35 days Sequential access: 100 MB/s throughput; reading the whole database and rewriting all the records takes 5.6 hours MapReduce was designed for batch processing: organize computations into long streaming operations
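
Spelling out the arithmetic behind this example, as a small Python sketch (all numbers are the slide's assumptions):

# 1 TB database = 10^10 records of 100 bytes each (the slide's assumptions).
records, record_size = 10**10, 100
db_bytes = records * record_size                  # 1e12 bytes = 1 TB

# Random access: ~30 ms per update, updating 1% of the records.
updates = 0.01 * records                          # 1e8 updates
random_days = updates * 0.030 / 86_400
print(round(random_days, 1), "days with random access")        # ~35 days

# Sequential access: read and rewrite the whole database at 100 MB/s.
sequential_hours = 2 * db_bytes / 100e6 / 3_600
print(round(sequential_hours, 1), "hours with sequential access")  # ~5.6 hours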

Big Ideas: Abstract System-Level Details Developing distributed software is very challenging Manage multiple threads, processes, machines Concurrency control and synchronization Race conditions, deadlocks a nightmare to debug! MapReduce isolates developers from these details Programmer defines what computations are to be performed MapReduce execution framework takes care of how the computations are carried out

MapReduce

Typical Large-Data Problem Iterate over a large number of records and extract something of interest from each (Map); shuffle and sort intermediate results; aggregate intermediate results and generate final output (Reduce). Key idea: provide a functional abstraction for these two operations (Dean and Ghemawat, OSDI 2004)

Map and Reduce The ideas of Map and Reduce are 40+ years old - present in all functional programming languages - see, e.g., APL, Lisp, and ML. An alternate name for Map: Apply-All. Higher-order functions - take function definitions as arguments, or - return a function as output. Map and Reduce are higher-order functions.

Map Examples in Haskell - map (+1) [1,2,3,4,5] == [2, 3, 4, 5, 6] - map toLower "abcdefg12!@#" == "abcdefg12!@#" (toLower is from Data.Char) - map (`mod` 3) [1..10] == [1, 2, 0, 1, 2, 0, 1, 2, 0, 1]

Fold-Left in Haskell - Definition: foldl f z [] = z; foldl f z (x:xs) = foldl f (f z x) xs - Examples: foldl (+) 0 [1..5] == 15; foldl (+) 10 [1..5] == 25; foldl div 7 [34,56,12,4,23] == 0

MapReduce Programmers specify two functions: map (k, v) → <k', v'>* reduce (k', v') → <k', v''>* All values with the same key are sent to the same reducer The execution framework handles everything else
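
The whole contract fits in a few lines of code. Below is a minimal single-process Python sketch of the model (illustrative only, not Hadoop's API): apply map to every input pair, group the intermediate values by key, then apply reduce to each group.

from collections import defaultdict

def map_reduce(inputs, map_fn, reduce_fn):
    """Toy, in-memory MapReduce: map phase, group by key, reduce phase."""
    groups = defaultdict(list)
    for k, v in inputs:                            # map phase
        for k2, v2 in map_fn(k, v):
            groups[k2].append(v2)
    output = []
    for k2 in sorted(groups):                      # stand-in for shuffle and sort
        output.extend(reduce_fn(k2, groups[k2]))   # reduce phase
    return output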

"Hello World": Word Count Map(String docid, String text): for each word w in text: Emit(w, 1); Reduce(String term, Iterator<Int> values): int sum = 0; for each v in values: sum += v; Emit(term, sum);
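
The same word count, written as Python generators and run through the toy map_reduce sketch above (a hypothetical helper, not Hadoop code):

def wc_map(docid, text):
    for word in text.split():
        yield (word, 1)                 # emit (word, 1) for every occurrence

def wc_reduce(term, values):
    yield (term, sum(values))           # sum the 1s for each distinct word

docs = [("Doc1", "Financial IMF Economics Crisis"),
        ("Doc2", "Financial IMF Crisis")]
print(map_reduce(docs, wc_map, wc_reduce))
# [('Crisis', 2), ('Economics', 1), ('Financial', 2), ('IMF', 2)]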

Word Counting with MapReduce (example from Kyuseok Shim, VLDB 2012 tutorial). Input: Doc1 = {Financial, IMF, Economics, Crisis}, Doc2 = {Financial, IMF, Crisis}, Doc3 = {Economics, Harry}, Doc4 = {Financial, Harry, Potter, Film}, Doc5 = {Crisis, Harry, Potter}. The documents are split between two mappers, M1 and M2; each mapper emits a (word, 1) pair for every word occurrence in its documents.

Word Counting with MapReduce (continued). Before the reduce functions are called, the list of values for each distinct key is generated: Financial → [1, 1, 1], IMF → [1, 1], Economics → [1, 1], Crisis → [1, 1, 1], Harry → [1, 1, 1], Film → [1], Potter → [1, 1]. The reducers then sum each list, producing Financial 3, IMF 2, Economics 2, Crisis 3, Harry 3, Film 1, Potter 2. (Kyuseok Shim, VLDB 2012 tutorial)

MapReduce dataflow: input pairs (k1, v1) ... (k6, v6) are processed by mappers, which emit intermediate key-value pairs, e.g., a→1, b→2, c→3, c→6, a→5, c→2, b→7, c→8. Shuffle and sort aggregates values by key (a group-by on the intermediate keys): a→[1, 5], b→[2, 7], c→[2, 3, 6, 8]. Reducers then produce the final outputs (r1, s1), (r2, s2), (r3, s3), which are written to the distributed file system.

MapReduce Runtime Handles scheduling Assigns workers to map and reduce tasks Handles data distribution Moves processes to data Handles synchronization Gathers, sorts, and shuffles intermediate data Handles errors and faults Detects worker failures and restarts Everything happens on top of a distributed FS (later)

MapReduce Programmers specify two functions: map (k, v) → <k', v'>* reduce (k', v') → <k', v''>* All values with the same key are reduced together Mappers and reducers can specify arbitrary computations Be careful with access to external resources! The execution framework handles everything else Not quite... usually, programmers also specify: partition (k', number of partitions) → partition for k' Often a simple hash of the key, e.g., hash(k') mod n Divides up the key space for parallel reduce operations combine (k', v') → <k', v''>* Mini-reducers that run in memory after the map phase Used as an optimization to reduce network traffic
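
In the toy model sketched earlier, these two extra hooks would look roughly like this (again a sketch, not Hadoop's actual interfaces): the partitioner hashes the key modulo the number of reducers, and the combiner applies a reduce-like aggregation to one mapper's local output before anything crosses the network.

def partition(key, num_reducers):
    # Default-style partitioner: hash of the key, modulo the number of reducers.
    return hash(key) % num_reducers

def combine(term, values):
    # Combiner for word count: pre-sum the 1s emitted by a single mapper, so at
    # most one (word, partial_count) pair per word leaves that mapper.
    yield (term, sum(values))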

MapReduce: The Complete Picture The same dataflow as before, with two extra steps on the map side: each mapper's output first passes through a combiner (e.g., one mapper's pairs c→3 and c→6 are combined into c→9) and then a partitioner, which assigns each intermediate key to a reducer. Shuffle and sort aggregates values by key (a→[1, 5], b→[2, 7], c→[2, 9, 8]), and the reducers produce the final outputs (r1, s1), (r2, s2), (r3, s3).

Two more details Barrier between map and reduce phases But we can begin copying intermediate data earlier Keys arrive at each reducer in sorted order No enforced ordering across reducers

MapReduce can refer to The programming model The execution framework (aka "runtime") The specific implementation Usage is usually clear from context!

MapReduce Implementations Google has a proprietary implementation in C++ Bindings in Java, Python Hadoop is an open-source implementation in Java Development led by Yahoo, used in production Now an Apache project Rapidly expanding software ecosystem Lots of custom research implementations For GPUs, cell processors, etc.

Execution overview (adapted from Dean and Ghemawat, OSDI 2004): (1) the user program submits the job to the master; (2) the master schedules map and reduce tasks onto workers; (3) map workers read their input splits; (4) map output is written to local disk; (5) reduce workers remotely read the intermediate files; (6) reduce workers write the output files. Input files → Map phase → intermediate files (on local disk) → Reduce phase → output files.

Distributed File System Don't move data to workers, move workers to the data! Store data on the local disks of nodes in the cluster Start up the workers on the node that holds the data Why? Not enough RAM to hold all the data in memory Disk access is slow, but disk throughput is reasonable A distributed file system is the answer GFS (Google File System) for Google's MapReduce HDFS (Hadoop Distributed File System) for Hadoop

GFS: Assumptions Commodity hardware over exotic hardware Scale out, not up High component failure rates Inexpensive commodity components fail all the time Modest number of huge files Multi-gigabyte files are common, if not encouraged Files are write-once, mostly appended to Perhaps concurrently Large streaming reads over random access High sustained throughput over low latency GFS slides adapted from material by Ghemawat et al. (SOSP 2003)

GFS: Design Decisions Files stored as chunks of fixed size (e.g., 64 MB) Reliability through replication Each chunk replicated across 3+ chunkservers Single master to coordinate access and keep metadata Simple centralized management No data caching Little benefit due to large datasets, streaming reads Simplify the API Push some of the issues onto the client (e.g., data layout) HDFS = GFS clone (same basic ideas)

From GFS to HDFS Terminology differences: GFS master = Hadoop namenode GFS chunkservers = Hadoop datanodes Functional differences: No file appends in HDFS (planned feature) HDFS performance is (likely) slower For the most part, we'll use the Hadoop terminology

HDFS Architecture An application goes through the HDFS client, which sends (file name, block id) requests to the HDFS namenode and gets back (block id, block location). The namenode keeps the file namespace (e.g., /foo/bar → block 3df2), sends instructions to the datanodes, and receives datanode state in return. The client then requests (block id, byte range) directly from an HDFS datanode and receives the block data; each datanode stores blocks in its local Linux file system.

Namenode Responsibilities Managing the file system namespace: Holds file/directory structure, metadata, file-to-block mapping, access permissions, etc. Coordinating file operations: Directs clients to datanodes for reads and writes No data is moved through the namenode Maintaining overall health: Periodic communication with the datanodes Block re-replication and rebalancing Garbage collection

MapReduce Architecture The namenode runs the namenode daemon; the job submission node runs the jobtracker. Each slave node runs a tasktracker and a datanode daemon on top of its local Linux file system.

Fault Recovery Workers are pinged by master periodically - Non-responsive workers are marked as failed - All tasks in-progress or completed by failed worker become eligible for rescheduling Master could periodically checkpoint - Current implementations abort on master failure

Component Overview

http://hadoop.apache.org/ Open source, implemented in Java. Scales to thousands of nodes and petabytes of data.

Hadoop MapReduce and Distributed File System framework for large commodity clusters Master/Slave relationship - JobTracker handles all scheduling & data flow between TaskTrackers - TaskTracker handles all worker tasks on a node - Individual worker task runs map or reduce operation Integrates with HDFS for data locality

Hadoop Supported File Systems HDFS: Hadoop's own file system. Amazon S3 file system - Targeted at clusters hosted on the Amazon Elastic Compute Cloud server-on-demand infrastructure - Not rack-aware CloudStore - previously Kosmos Distributed File System - like HDFS, this is rack-aware. FTP Filesystem - stored on remote FTP servers. Read-only HTTP and HTTPS file systems.

"Rack awareness" Optimization which takes into account the geographic clustering of servers Network traffic between servers in different geographic clusters is minimized.

Goals of HDFS Very Large Distributed File System 10K nodes, 100 million files, 10 PB Assumes Commodity Hardware Files are replicated to handle hardware failure Detects failures and recovers from them Optimized for Batch Processing Data locations exposed so that computations can move to where the data resides Provides very high aggregate bandwidth User Space, runs on heterogeneous OS

HDFS: Hadoop DFS Designed to scale to petabytes of storage, and to run on top of the file systems of the underlying OS. Master ("NameNode") handles replication, deletion, creation Slave ("DataNode") handles data retrieval Files stored in many blocks - Each block has a block Id - Block Id associated with several nodes hostname:port (depending on level of replication)

Distributed File System Single Namespace for entire cluster Data Coherency Write-once-read-many access model Client can only append to existing files Files are broken up into blocks Typically 128 MB block size Each block replicated on multiple DataNodes Intelligent Client Client can find location of blocks Client accesses data directly from DataNode

NameNode Metadata Metadata in Memory The entire metadata is in main memory No demand paging of metadata Types of Metadata List of files List of blocks for each file List of DataNodes for each block File attributes, e.g., creation time, replication factor A Transaction Log Records file creations, file deletions, etc.
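
Conceptually, this metadata is a map from file names to block lists plus attributes, and a map from blocks to DataNodes (rebuilt from block reports). A rough sketch of the shape, with illustrative names only (not HDFS's real data structures):

# Illustrative shape of the NameNode's in-memory metadata (names are hypothetical).
namespace = {
    "/user/alice/logs/part-0000": {
        "blocks": ["blk_1001", "blk_1002"],       # blocks that make up the file
        "replication": 3,                         # file attribute: replication factor
        "ctime": "2014-03-01T12:00:00Z",          # file attribute: creation time
    },
}
block_locations = {                               # rebuilt from DataNode block reports
    "blk_1001": ["datanode07", "datanode12", "datanode31"],
    "blk_1002": ["datanode03", "datanode12", "datanode44"],
}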

DataNode A Block Server Stores data in the local file system (e.g. ext3) Stores meta-data of a block (e.g. CRC) Serves data and meta-data to Clients Block Report Periodically sends a report of all existing blocks to the NameNode Facilitates Pipelining of Data Forwards data to other specified DataNodes

Block Placement Current Strategy -- One replica on the local node -- Other replicas on remote racks -- Additional replicas are randomly placed Clients read from the nearest replica Would like to make this policy pluggable
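
A toy version of the placement strategy described on this slide (illustrative, not the actual HDFS implementation): first replica on the writer's local node, remaining replicas on randomly chosen nodes in other racks.

import random

def place_replicas(local_node, nodes_by_rack, replication=3):
    """Pick one replica on the local node, the rest on randomly chosen remote-rack nodes."""
    local_rack = next(r for r, nodes in nodes_by_rack.items() if local_node in nodes)
    remote = [n for r, nodes in nodes_by_rack.items() if r != local_rack for n in nodes]
    random.shuffle(remote)
    return [local_node] + remote[:replication - 1]

racks = {"rack1": ["n1", "n2"], "rack2": ["n3", "n4"], "rack3": ["n5", "n6"]}
print(place_replicas("n1", racks))    # e.g. ['n1', 'n5', 'n4']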

Data Correctness Use checksums to validate data Use CRC32 File creation Client computes a checksum per 512 bytes DataNode stores the checksums File access Client retrieves the data and checksums from the DataNode If validation fails, the Client tries other replicas
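
A minimal sketch of per-chunk checksumming as described on this slide, using Python's zlib for CRC32 (the 512-byte chunk size is the slide's figure):

import zlib

CHUNK = 512   # bytes covered by each checksum, per the slide

def checksums(data: bytes):
    """What the client computes at file creation: one CRC32 per 512-byte chunk."""
    return [zlib.crc32(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]

def verify(data: bytes, stored_checksums):
    """What the client checks on read; on a mismatch it would try another replica."""
    return checksums(data) == stored_checksums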

NameNode Failure A single point of failure Transaction Log stored in multiple directories A directory on the local file system A directory on a remote file system (NFS/CIFS)

Data Pipelining Client retrieves a list of DataNodes on which to place replicas of a block Client writes the block to the first DataNode The first DataNode forwards the data to the next DataNode in the pipeline When all replicas are written, the Client moves on to write the next block in the file

Rebalancer Goal: % disk full on DataNodes should be similar Usually run when new DataNodes are added Cluster is online when Rebalancer is active Rebalancer is throttled to avoid network congestion Command line tool

Hadoop vs. MapReduce MapReduce is also the name of a framework developed by Google. Hadoop was initially developed by Yahoo and is now part of the Apache group. Hadoop was inspired by Google's MapReduce and Google File System (GFS) papers.

MapReduce v. Hadoop
                          MapReduce    Hadoop
Organization              Google       Yahoo/Apache
Implementation            C++          Java
Distributed file system   GFS          HDFS
Database                  Bigtable     HBase
Distributed lock manager  Chubby       ZooKeeper

References
- http://hadoop.apache.org/
- Jeffrey Dean and Sanjay Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters," OSDI '04, 2004. http://www.usenix.org/events/osdi04/tech/full_papers/dean/dean.pdf
- David DeWitt and Michael Stonebraker, "MapReduce: A major step backwards," craighenderson.blogspot.com
- http://scienceblogs.com/goodmath/2008/01/databases_are_hammers_mapreduc.php