CLOUD-SCALE FILE SYSTEMS (THANKS TO M. GROSSNIKLAUS)


Data Management in the Cloud: Cloud-Scale File Systems (thanks to M. Grossniklaus). "While production systems are well disciplined and controlled, users sometimes are not." (Ghemawat, Gobioff & Leung)

Google File System (GFS)
- Designing a file system for the Cloud: design assumptions, design choices
- Architecture: GFS master, GFS chunkservers, GFS clients
- System operations and interactions
- Replication, fault tolerance, high availability

Key Application & Environment Observations
- Component failure is the norm: application bugs, OS bugs, human errors, plus hardware failures => need constant monitoring, error detection & recovery
- Files are huge: files contain many small documents; managing many small (KB) files is unwieldy => I/O operations and block size need to be revisited
- Append-only workload: write-once, read-many (usually sequentially), e.g. repositories for analysis, data streams, archive data, intermediate results => appending is the focus of performance optimization and atomicity guarantees; caching data blocks at the client loses appeal

Kinds of Data & Computations
- Lists of searches and their results
- Web crawl results
- Building an inverted index (list of pages where a word appears)
- Finding all pages that link to a given page
- Intermediate results of PageRank
- Spam pages for training

Design Assumptions
- System is built from many inexpensive commodity components: component failures happen on a routine basis; the system must monitor itself to detect, tolerate, and recover from failures
- System stores a modest number of large files: a few million files, typically 100 MB or larger; multi-GB files are common and need to be managed efficiently; small files are to be supported but not optimized for
- System workload:
  - large streaming reads: successive reads from one client read a contiguous region, commonly 1 MB or more
  - small random reads: typically a few KB at some arbitrary offset (may be batched)
  - large sequential writes: append data to files; operation sizes similar to streaming reads
  - small random writes: supported, but not necessarily efficient

Design Assumptions (continued)
- Support concurrent appends to the same file: efficient implementation with well-defined (but non-standard) semantics
- Use case: producer-consumer queues or many-way merging, with hundreds of producers (one per machine) concurrently appending to a file; atomicity with minimal synchronization overhead is essential; the file might be read later or simultaneously
- High sustained bandwidth is more important than low latency

Design Decisions: Interface
- GFS does not implement a standard API such as POSIX
- Supports standard file operations: create/delete, open/close, read/write
- Supports additional operations:
  - snapshot: creates a copy of a file or a directory tree at low cost, using copy-on-write
  - record append: allows multiple clients to append data to the same file concurrently, while guaranteeing the atomicity of each individual client's append
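To make the interface concrete, here is a minimal sketch in Python of what such a client-facing API could look like; the class name, method signatures, and FileHandle type are illustrative assumptions, not the actual GFS client library.

    from abc import ABC, abstractmethod

    class FileHandle:
        """Opaque handle returned by open(); contents are illustrative."""
        def __init__(self, path: str):
            self.path = path

    class CloudFSClient(ABC):
        """Hypothetical GFS-like client interface matching the operations above."""

        @abstractmethod
        def create(self, path: str) -> None: ...

        @abstractmethod
        def delete(self, path: str) -> None: ...

        @abstractmethod
        def open(self, path: str) -> FileHandle: ...

        @abstractmethod
        def close(self, handle: FileHandle) -> None: ...

        @abstractmethod
        def read(self, handle: FileHandle, offset: int, length: int) -> bytes: ...

        @abstractmethod
        def write(self, handle: FileHandle, offset: int, data: bytes) -> None: ...

        @abstractmethod
        def snapshot(self, src_path: str, dst_path: str) -> None:
            """Copy a file or directory tree cheaply via copy-on-write."""

        @abstractmethod
        def record_append(self, handle: FileHandle, data: bytes) -> int:
            """Append data atomically at least once; return the offset GFS chose."""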

Architecture
[Figure: GFS architecture, showing data messages and control messages. Figure credit: "The Google File System" by S. Ghemawat, H. Gobioff, and S.-T. Leung, 2003]

Design Decisions: Architecture
- GFS cluster: single master and multiple chunkservers, accessed by multiple clients
  - components are typically commodity Linux machines
  - chunkserver and client can run on the same machine (lower reliability due to flaky application code)
  - GFS server processes run in user mode
- Chunks: files are divided into fixed-size chunks
  - identified by a globally unique chunk handle (64 bit), assigned by the master
  - read/write based on chunk handle and byte range (no POSIX API)
  - chunks are replicated for reliability; typically the replication factor is 3

Design Decisions: Architecture (continued)
- Multiple chunkservers
  - store chunks on local disk as Linux files
  - accept and handle data requests
  - no special caching; rely on Linux's buffer cache
- Master
  - maintains all file system metadata (namespace, ACLs, file-to-chunk mapping, ...)
  - sends periodic heartbeat messages to each chunkserver (monitoring)
  - clients communicate with the master for metadata, but data requests go to chunkservers
- Caching
  - neither chunkserver nor client caches file data
  - clients stream through files or have large working sets
  - eliminates complications of cache coherence

Design Decisions: Single Master
- A single master simplifies the overall design
  - enables more sophisticated chunk placement and replication, but is a single point of failure
  - maintains file system metadata: namespace, access control information, file-to-chunk mapping, current chunk locations
  - performs management activities: chunk leases, garbage collection, orphaned chunks, chunk migration
  - heartbeats: periodic messages sent to chunkservers to give instructions, collect state, detect failures

Design Decisions: Single Master (continued)
- Simple read example
  - client translates file name and byte offset -> chunk index (fixed chunk size)
  - master responds with the chunk handle and replica locations
  - data requests go to chunkservers (often the closest one)
  - client can request multiple chunks at once, sidestepping client-master communication at minimal cost (one round-trip is the overhead)
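A rough sketch of the client-side translation and caching described above, assuming the 64 MB chunk size; the cache structure and function names are assumptions:

    CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, the fixed GFS chunk size

    # Hypothetical client-side cache: (file_name, chunk_index) -> (chunk_handle, replica_locations)
    chunk_cache = {}

    def locate_chunk(file_name, byte_offset, ask_master):
        """Translate a byte offset into a chunk index and resolve it to replicas."""
        chunk_index = byte_offset // CHUNK_SIZE       # fixed chunk size makes this a pure computation
        offset_in_chunk = byte_offset % CHUNK_SIZE
        key = (file_name, chunk_index)
        if key not in chunk_cache:                    # only contact the master on a cache miss
            chunk_cache[key] = ask_master(file_name, chunk_index)
        chunk_handle, replicas = chunk_cache[key]
        return chunk_handle, replicas, offset_in_chunk

    # Example usage with a stubbed master lookup:
    handle, replicas, off = locate_chunk(
        "/logs/crawl-2003-10.dat", 200 * 1024 * 1024,
        ask_master=lambda f, i: ("handle-%d" % i, ["cs17", "cs42", "cs93"]))
    print(handle, replicas, off)   # handle-3 ['cs17', 'cs42', 'cs93'] 8388608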

Design Decisions: Chunk Size
- One of the key parameters: set to a large value, i.e. 64 MB (much larger than typical file system block sizes)
- Chunks are stored as plain Linux files; to avoid fragmentation, chunkservers use lazy space allocation, i.e. files are only extended as needed
- Advantages
  - reduces interaction between client and master: one initial chunk info request (a client can cache the info for a multi-TB working set)
  - reduces network overhead by using a persistent TCP connection to do many operations on one chunk
  - reduces the size of metadata stored on the master
- Disadvantages
  - small files consist of one or a few chunks: risk of hot spots
  - mitigations: increase the replication factor for small files, stagger the start times of reads

Design Decisions: Metadata
- All metadata is kept in the master's main memory
  - file and chunk namespaces: lookup table with prefix compression
  - file-to-chunk mapping
  - locations of chunk replicas
- Operation log
  - does not log chunk locations (these are queried from the chunkservers)
  - stored on the master's local disk and replicated on remote machines
  - used to recover the master in the event of a crash
- Discussion
  - the size of the master's main memory limits the number of possible files
  - the master maintains less than 64 bytes of metadata per chunk
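A back-of-envelope illustration of that memory limit; the 1 PB figure is an assumed example, while the 64 MB chunk size and the <64 bytes of metadata per chunk come from the slides:

    CHUNK_SIZE = 64 * 1024 ** 2          # 64 MB per chunk
    BYTES_PER_CHUNK_META = 64            # upper bound on master metadata per chunk

    stored_data = 1 * 1024 ** 5          # assume 1 PB of (pre-replication) file data
    num_chunks = stored_data // CHUNK_SIZE
    master_memory = num_chunks * BYTES_PER_CHUNK_META

    print(num_chunks)                    # 16777216 chunks
    print(master_memory / 1024 ** 3)     # ~1 GiB of chunk metadata on the master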

Design Decisions: Consistency Model
- Relaxed consistency model
  - tailored to Google's highly distributed applications
  - simple and efficient to implement
- File namespace mutations (modifications) are atomic
  - handled exclusively by the master
  - namespace locking guarantees atomicity and correctness
  - the master's operation log defines a global total order of operations
- State of a file region after a data mutation
  - consistent: all clients always see the same data, regardless of the replica they read from
  - defined: consistent, plus all clients see the entire data mutation
  - undefined but consistent: result of concurrent successful mutations; all clients see the same data, but it may not reflect any one mutation
  - inconsistent: result of a failed mutation

Design Decisions: Consistency Model (continued)
- Write data mutation: data is written at an application-specific file offset
- Record append data mutation
  - the data ("the record") is appended atomically at least once, even in the presence of concurrent mutations
  - GFS chooses the offset and returns it to the client
  - GFS may insert padding or record duplicates in between
- Stale replicas
  - not involved in mutations and not given to clients requesting chunk info
  - garbage collected as soon as possible
  - clients may still read stale data, limited by the cache entry timeout and the next open of the file (which purges chunk info from the cache)
  - append-only means readers get incomplete rather than outdated data
  - data corruption due to hardware/software failure is also handled (via checksums)

Design Decisions: Concurrency Model
- Implications for applications
  - rely on appends rather than overwrites
  - checkpoints inside a file indicate completeness
  - write self-validating, self-identifying records
- Typical use cases (or: hacking around relaxed consistency)
  - a writer generates a file from beginning to end and then atomically renames it to a permanent name under which it is accessed
  - a writer inserts periodic checkpoints; readers only read up to the last checkpoint (like punctuation in stream systems)
  - many writers concurrently append to a file to merge results; readers skip occasional padding and repetition using checksums
  - applications can use unique identifiers to deal with duplicates
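To illustrate the self-validating, self-identifying record idea, here is a minimal sketch; the framing format, magic value, and function names are assumptions, not the record format Google used:

    import struct, uuid, zlib

    MAGIC = 0x5245      # marker distinguishing real records from padding (assumed value)

    def encode_record(payload: bytes) -> bytes:
        """Frame a payload as [magic][length][record id][crc32][payload]."""
        record_id = uuid.uuid4().bytes                   # unique id lets readers drop duplicates
        header = struct.pack(">HI16sI", MAGIC, len(payload), record_id, zlib.crc32(payload))
        return header + payload

    def decode_records(chunk_bytes: bytes):
        """Yield valid payloads, skipping padding, garbage, and repeated record ids."""
        seen, pos = set(), 0
        header_size = struct.calcsize(">HI16sI")
        while pos + header_size <= len(chunk_bytes):
            magic, length, record_id, crc = struct.unpack_from(">HI16sI", chunk_bytes, pos)
            payload = chunk_bytes[pos + header_size : pos + header_size + length]
            if magic != MAGIC or len(payload) < length or zlib.crc32(payload) != crc:
                pos += 1                                 # not a valid record here: skip over padding
                continue
            if record_id not in seen:                    # duplicate appends are delivered only once
                seen.add(record_id)
                yield payload
            pos += header_size + length

    data = encode_record(b"url=example.com rank=0.7") + b"\x00" * 32 + encode_record(b"url=example.org rank=0.3")
    print(list(decode_records(data)))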

Operations: Leases and Mutation Order
- A chunk lease is granted to one replica, the primary
  - the primary picks a serial order for mutations
  - the other replicas follow this order
  - global mutation order: lease grant order, then the serial order assigned by the primary
- Designed to minimize management overhead
  - lease timeout is 60 seconds, but extensions can be requested and are typically approved
  - extension requests and grants are piggybacked on heartbeat messages
  - if the master loses communication with the primary, it grants a new lease to another replica only after the old lease expires (consistency over availability)
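A sketch of master-side lease bookkeeping that captures the 60-second timeout and the rule of never granting a second lease before the old one expires; the class and method names are assumptions:

    import time

    LEASE_SECONDS = 60

    class LeaseTable:
        """Hypothetical master-side bookkeeping of chunk leases (sketch only)."""

        def __init__(self):
            self.leases = {}   # chunk_handle -> (primary replica, lease expiry time)

        def grant(self, chunk_handle, replicas, now=None):
            if now is None:
                now = time.monotonic()
            primary, expiry = self.leases.get(chunk_handle, (None, 0.0))
            if primary is not None and expiry > now:
                return primary                           # never hand out a second lease early
            primary = replicas[0]                        # real placement policy elided
            self.leases[chunk_handle] = (primary, now + LEASE_SECONDS)
            return primary

        def extend(self, chunk_handle, primary, now=None):
            """Handle an extension request piggybacked on a heartbeat message."""
            if now is None:
                now = time.monotonic()
            holder, expiry = self.leases.get(chunk_handle, (None, 0.0))
            if holder == primary and expiry > now:
                self.leases[chunk_handle] = (primary, now + LEASE_SECONDS)
                return True
            return False                                 # expired or not the holder: must re-grant

    # If the master loses contact with the primary, it simply waits: grant() refuses to
    # pick a new primary until the 60-second lease has expired.
    table = LeaseTable()
    print(table.grant("chunk-17", ["cs1", "cs2", "cs3"], now=0.0))    # cs1
    print(table.grant("chunk-17", ["cs2", "cs3"], now=30.0))          # still cs1
    print(table.grant("chunk-17", ["cs2", "cs3"], now=61.0))          # cs2 (old lease expired)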

Operations: Writing Files
- Principle: ship the data first, then insert it
  1. client -> master: which chunkserver holds the chunk lease (the primary), and where are the replicas?
  2. master -> client: identity of the primary and the locations of the secondary replicas
  3. client -> chunkservers: push the data to all replicas
  4. client -> primary: write request
  5. primary -> secondaries: forward the write request (in the primary's serial order)
  6. secondaries -> primary: operation status
  7. primary -> client: operation status
[Figure credit: "The Google File System" by S. Ghemawat, H. Gobioff, and S.-T. Leung, 2003]

Operations: Data Flow
- Data flow is decoupled from control flow
  - control flows client -> primary -> secondaries
  - data is pushed linearly along a selected chain of chunkservers (pipelined)
- Utilize each machine's network bandwidth: a chain of chunkservers (not a tree) uses each machine's full outbound bandwidth
- Avoid network bottlenecks and high-latency links: each machine forwards to the closest machine
- Minimize latency: data is pipelined over TCP connections (each machine starts forwarding as soon as it receives data)
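A toy sketch of the chained, pipelined push; the network is faked with in-memory objects, and all names are assumptions:

    CHUNK_PIECE = 64 * 1024   # forward in 64 KB pieces (assumed pipelining granularity)

    class ToyChunkserver:
        """Toy chunkserver that buffers a piece and immediately forwards it downstream."""
        def __init__(self, name, next_hop=None):
            self.name, self.next_hop, self.buffer = name, next_hop, bytearray()

        def receive(self, piece: bytes):
            self.buffer.extend(piece)          # store locally ...
            if self.next_hop is not None:
                self.next_hop.receive(piece)   # ... and forward to the next-closest server

    def push_pipelined(data: bytes, first_hop: ToyChunkserver):
        """The client sends to the closest chunkserver only; the chain does the rest."""
        for start in range(0, len(data), CHUNK_PIECE):
            first_hop.receive(data[start:start + CHUNK_PIECE])

    # Build a chain client -> cs_a -> cs_b -> cs_c (closest machine first)
    cs_c = ToyChunkserver("cs_c")
    cs_b = ToyChunkserver("cs_b", next_hop=cs_c)
    cs_a = ToyChunkserver("cs_a", next_hop=cs_b)
    push_pipelined(b"x" * 300_000, cs_a)
    print([len(s.buffer) for s in (cs_a, cs_b, cs_c)])   # [300000, 300000, 300000]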

Operations: Atomic Record Append
- Traditional append: the client specifies the offset; concurrent writes to the same region are not serializable
- Record append (atomic append)
  - the client specifies only the data
  - GFS appends it at least once, atomically
  - GFS chooses the offset
- Avoids complicated and expensive synchronization, e.g. a distributed lock manager
- Typical workload: multiple producers / single consumer

Operations: Atomic Record Append (continued)
- Follows the previous control flow with only a little extra logic
  - the client pushes the data to all replicas of the last chunk of the file (step 3)
  - the client sends its request to the primary replica (step 4)
- Additionally, the primary checks whether appending the record to the chunk would exceed the maximum chunk size (64 MB)
  - yes: primary and secondaries pad the chunk to the maximum size (fragmentation) and the append is retried on the next chunk; record appends are restricted to 1/4 of the maximum chunk size to keep worst-case fragmentation low
  - no: the primary appends the data to its replica and instructs the secondaries to write the data at the exact same offset
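A sketch of the primary-side decision described above; names are assumptions and the replica RPCs are omitted:

    CHUNK_SIZE = 64 * 1024 * 1024
    MAX_RECORD = CHUNK_SIZE // 4          # appends limited to 1/4 chunk to bound fragmentation

    def primary_record_append(chunk_used: int, record: bytes):
        """Decide where (or whether) a record append lands in the current chunk.

        Returns ('padded', None) when the chunk is padded and the client must retry
        on the next chunk, or ('appended', offset) with the offset GFS chose.
        """
        if len(record) > MAX_RECORD:
            raise ValueError("record larger than 1/4 of the chunk size")
        if chunk_used + len(record) > CHUNK_SIZE:
            # pad this chunk on primary and secondaries, tell the client to retry on the next chunk
            return "padded", None
        offset = chunk_used                # primary picks the offset ...
        # ... appends locally and instructs secondaries to write at exactly this offset
        return "appended", offset

    print(primary_record_append(10 * 1024 * 1024, b"rec" * 100))   # ('appended', 10485760)
    print(primary_record_append(CHUNK_SIZE - 10, b"rec" * 100))    # ('padded', None)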

Operations: Append Failures
- If a record append fails at any replica, the client retries the operation => replicas of the same chunk may contain different data, possibly including duplicates (whole or in part)
- GFS does not guarantee that all replicas are bytewise identical; it guarantees the data is written at least once as an atomic unit
- The operation reports success only if the data is written at the same offset on all replicas
- Afterwards, all replicas are at least as long as the end of the record; later appends get higher offsets
- State of the file region after a mutation:

                          Write                     Record Append
    Serial success        defined                   defined interspersed with inconsistent
    Concurrent successes  consistent but undefined  defined interspersed with inconsistent
    Failure               inconsistent              inconsistent

Operations: Snapshot
- Copies a file or directory (almost) instantaneously and with minimal interruption to ongoing mutations
  - quickly create branch copies of huge data sets
  - checkpoint the current state before experimenting
- Lazy copy-on-write approach: share chunks across files
  - upon a snapshot request, the master first revokes any outstanding leases on the chunks of the files it is about to snapshot
  - subsequent writes then require interaction with the master, giving it a chance to create a new copy of the chunk first
  - after the leases are revoked or have expired, the operation is logged to disk
  - the in-memory state is updated by duplicating the metadata of the source file or directory tree
  - the reference counts of all chunks in the snapshot are incremented by one

Operations: Snapshot (continued)
- Write request after a snapshot
  1. client sends a request to the master to write chunk C
  2. master notices that the reference count on chunk C is greater than 1 and defers its reply to the client
  3. master picks a new chunk handle C' and asks each chunkserver with a current replica of C to create a copy C'
  4. C' is created on the chunkservers that already have C, so the data is copied locally
  5. after the creation of C', the write proceeds as normal
- There can be delays when writing to a snapshotted file: consider a snapshot immediately followed by an append
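A small sketch of the copy-on-write bookkeeping on the master; the data structures and names are assumptions:

    import itertools

    _handle_counter = itertools.count(1000)

    refcount = {}         # chunk_handle -> number of files referencing it
    file_chunks = {}      # file_name -> list of chunk handles

    def snapshot(src, dst):
        """Duplicate metadata only; every shared chunk just gains a reference."""
        file_chunks[dst] = list(file_chunks[src])
        for handle in file_chunks[dst]:
            refcount[handle] = refcount.get(handle, 1) + 1

    def prepare_write(file_name, chunk_index, copy_on_chunkservers):
        """Before a write, break sharing if the chunk is referenced more than once."""
        handle = file_chunks[file_name][chunk_index]
        if refcount.get(handle, 1) > 1:                 # shared because of a snapshot
            new_handle = next(_handle_counter)
            copy_on_chunkservers(handle, new_handle)    # each replica copies the chunk locally
            refcount[handle] -= 1
            refcount[new_handle] = 1
            file_chunks[file_name][chunk_index] = new_handle
            handle = new_handle
        return handle

    # Example
    file_chunks["/data/base"] = [1, 2]
    refcount.update({1: 1, 2: 1})
    snapshot("/data/base", "/data/base.snap")
    h = prepare_write("/data/base", 0, copy_on_chunkservers=lambda old, new: None)
    print(h, refcount)   # 1000 {1: 1, 2: 2, 1000: 1} -- chunk 1 now belongs only to the snapshot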

Master Operation
- Executes all namespace operations
- Manages chunk replicas: makes placement decisions, creates new chunks and replicas
- Coordinates to keep chunks fully replicated, balance load, and reclaim unused storage

Namespace Management and Locking
- Differences to traditional file systems
  - no per-directory structures that list the files in a directory
  - no support for file or directory aliases, e.g. soft and hard links in Unix
- Namespace implemented as a flat lookup table: full path name -> metadata
  - prefix compression for an efficient in-memory representation
  - each node in the namespace tree (absolute file or directory path) is associated with a read/write lock
- Each master operation acquires locks before it can run
  - read locks on all parent nodes, and a read or write lock on the node itself
  - file creation does not require a write lock on the parent directory, as there is no such structure
  - note: metadata records have locks, whereas data chunks have leases
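A sketch of the lock set such an operation would acquire over the flat namespace table; illustrative only:

    def ancestors(path):
        """Yield all proper ancestor paths of an absolute path, e.g. /a/b/c -> /a, /a/b."""
        parts = path.strip("/").split("/")
        for i in range(1, len(parts)):
            yield "/" + "/".join(parts[:i])

    def lock_set(target_path, write=True):
        """Read locks on every ancestor, read or write lock on the target itself."""
        locks = {p: "read" for p in ancestors(target_path)}
        locks[target_path] = "write" if write else "read"
        return locks

    # Creating /home/user/foo takes read locks on /home and /home/user and a
    # write lock on /home/user/foo; no lock on a per-directory "file list" is needed.
    print(lock_set("/home/user/foo", write=True))
    # A concurrent operation that needs a write lock on /home/user (e.g. snapshotting
    # that directory) conflicts with the read lock above and is serialized.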

Replica Placement
- Goals of the placement policy
  - maximize data reliability and availability
  - maximize network bandwidth utilization
- Background: GFS clusters are highly distributed
  - 100s of chunkservers across many racks, accessed by 100s of clients on the same or different racks
  - traffic between machines on different racks may cross many switches
  - in/out bandwidth of a rack is typically lower than the aggregate bandwidth within the rack
- Solution: spread chunks across machines and racks
  - spreading across machines is not enough; chunks must also be spread across racks
  - protects against failure of shared rack resources (network, power switch)
  - reads exploit the aggregate bandwidth of multiple racks, while writes flow through multiple racks

Replica Placement: Replica Creation
- Replicas are created for three reasons: chunk creation, re-replication, rebalancing
- Chunk creation: selecting a chunkserver
  - place chunks on servers with below-average disk space utilization (the goal is to equalize disk utilization)
  - place chunks on servers with a low number of recent creations (a creation implies imminent heavy write traffic)
  - spread chunks across racks (see above)
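A sketch of how such a selection heuristic could score candidate chunkservers; the weights and data layout are assumptions, not Google's actual policy:

    REPLICATION_FACTOR = 3

    def choose_chunkservers(servers):
        """Pick target chunkservers for a new chunk.

        `servers` is a list of dicts: {"name", "rack", "disk_used_frac", "recent_creations"}.
        Prefers below-average disk utilization and few recent creations, and never
        places two replicas of the chunk on the same rack.
        """
        avg_used = sum(s["disk_used_frac"] for s in servers) / len(servers)
        # lower score is better: penalize above-average utilization and creation bursts
        ranked = sorted(servers, key=lambda s: (s["disk_used_frac"] - avg_used, s["recent_creations"]))
        chosen, used_racks = [], set()
        for s in ranked:
            if s["rack"] in used_racks:
                continue                     # spread replicas across racks
            chosen.append(s["name"])
            used_racks.add(s["rack"])
            if len(chosen) == REPLICATION_FACTOR:
                break
        return chosen

    servers = [
        {"name": "cs1", "rack": "r1", "disk_used_frac": 0.60, "recent_creations": 2},
        {"name": "cs2", "rack": "r1", "disk_used_frac": 0.30, "recent_creations": 0},
        {"name": "cs3", "rack": "r2", "disk_used_frac": 0.45, "recent_creations": 5},
        {"name": "cs4", "rack": "r3", "disk_used_frac": 0.50, "recent_creations": 1},
    ]
    print(choose_chunkservers(servers))   # ['cs2', 'cs3', 'cs4']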

Re-replication and Rebalancing
- The master triggers re-replication when the replication factor drops below a user-specified goal
  - a chunkserver becomes unavailable
  - a replica is reported corrupted
  - a faulty disk is disabled
  - the replication goal is increased
- Re-replication prioritizes chunks by
  - how far a chunk is from its replication goal (i.e. lost two replicas vs. lost one)
  - chunks of live files (vs. deleted files not yet removed)
  - any chunk that is blocking client progress
- The number of active clone operations is limited (per cluster and per chunkserver)
- The master also rebalances replicas periodically
  - better disk space utilization and load balancing
  - gradually fills up new chunkservers (and removes replicas from chunkservers with below-average free space)

Garbage Collection
- GFS does not immediately reclaim physical storage after a file is deleted
- Lazy garbage collection mechanism
  - the master logs the deletion immediately by renaming the file to a hidden name
  - the master removes any such hidden files during its regular file system scan (3-day window to undelete)
- Orphaned chunks: chunks that are not reachable through any file
  - the master identifies them during the regular scan and deletes their metadata
  - heartbeat messages inform chunkservers about deletions (chunkservers send the master their list of chunks, the master replies with the ones to delete)

Garbage Collection (continued)
- Discussion
  - references are easy to identify (file-to-chunk mappings on the master)
  - replicas are easy to identify (Linux files in designated directories; anything not known to the master is garbage)
- Advantages over eager deletion
  - simple and reliable: e.g. if chunk creation succeeds on only some servers, or a replica deletion message is lost, GC just cleans up later
  - merges storage reclamation into the master's normal background activities (namespace scans and handshakes with chunkservers): batched, amortized cost, done when the master is relatively free
- Disadvantages
  - hinders user efforts to fine-tune usage when storage is tight
  - clients with lots of temporary files can double-delete to reclaim space sooner
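A toy sketch of the rename-then-scan scheme; the hidden-name convention and data layout are simplified assumptions:

    import time

    UNDELETE_WINDOW = 3 * 24 * 3600      # hidden files kept for 3 days

    namespace = {}                        # path -> {"chunks": [...], "deleted_at": float or None}

    def delete(path, now=None):
        """Deletion just renames: the entry becomes hidden and timestamped."""
        if now is None:
            now = time.time()
        entry = namespace.pop(path)
        entry["deleted_at"] = now
        namespace[".deleted~" + path] = entry   # hidden-name convention is illustrative

    def scan(now=None):
        """Regular scan: reclaim hidden files older than the undelete window."""
        if now is None:
            now = time.time()
        reclaimed_chunks = []
        for path in list(namespace):
            entry = namespace[path]
            if entry["deleted_at"] is not None and now - entry["deleted_at"] > UNDELETE_WINDOW:
                reclaimed_chunks.extend(entry["chunks"])
                del namespace[path]             # chunks become orphaned, then GC'd via heartbeats
        return reclaimed_chunks

    namespace["/tmp/job-42"] = {"chunks": [7, 8, 9], "deleted_at": None}
    delete("/tmp/job-42", now=0)
    print(scan(now=1 * 24 * 3600))        # [] -- still inside the undelete window
    print(scan(now=4 * 24 * 3600))        # [7, 8, 9] -- reclaimed after 3 days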

Stale Replicas
- Stale replicas are detected based on a chunk version number
  - the chunk version number is increased whenever the master grants a lease
  - the master and the replicas record the new version number (in persistent state) before the client is notified
  - stale replicas are detected when a chunkserver restarts and sends its set of chunks to the master
  - stale replicas are removed during regular garbage collection
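A minimal sketch of the version check applied to a restarting chunkserver's report; names are assumptions:

    master_version = {"chunk-17": 5, "chunk-23": 9}   # master's persistent chunk versions

    def check_report(chunkserver, reported):
        """Classify chunks reported by a restarting chunkserver.

        `reported` maps chunk handle -> version stored on that chunkserver.
        Replicas with an older version than the master's are stale and scheduled for GC.
        """
        stale = [h for h, v in reported.items()
                 if h in master_version and v < master_version[h]]
        return stale

    # cs42 missed a lease grant (and the version bump) while it was down:
    print(check_report("cs42", {"chunk-17": 5, "chunk-23": 8}))   # ['chunk-23']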

Fault Tolerance
- High availability
  - fast recovery: master and chunkservers are designed to recover in seconds; no distinction between normal and abnormal termination
  - chunk replication: different parts of the namespace can have different replication factors
  - master replication: the operation log is replicated for reliability; a mutation is considered committed only once all replicas have written the update; shadow masters provide read-only access
- Data integrity
  - chunkservers use checksums to detect data corruption
  - idle chunkservers scan and verify inactive chunks and report to the master
  - each 64 KB block of a chunk has a corresponding 32-bit checksum
  - if a block does not match its checksum, the client is instructed to read from a different replica
  - checksums are optimized for appends, not overwrites
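A sketch of the per-block checksumming described above, using CRC-32 as an assumed checksum function (the slide does not specify the actual algorithm):

    import zlib

    BLOCK_SIZE = 64 * 1024                # each 64 KB block gets its own 32-bit checksum

    def block_checksums(chunk_data: bytes):
        """Compute one 32-bit checksum per 64 KB block of a chunk."""
        return [zlib.crc32(chunk_data[i:i + BLOCK_SIZE])
                for i in range(0, len(chunk_data), BLOCK_SIZE)]

    def verify_read(chunk_data: bytes, checksums, block_index: int) -> bytes:
        """Verify a block before returning it; on mismatch the client must try another replica."""
        block = chunk_data[block_index * BLOCK_SIZE:(block_index + 1) * BLOCK_SIZE]
        if zlib.crc32(block) != checksums[block_index]:
            raise IOError("checksum mismatch: read from a different replica")
        return block

    data = bytes(200 * 1024)              # a 200 KB chunk -> 4 blocks (64+64+64+8 KB)
    sums = block_checksums(data)
    print(len(sums), len(verify_read(data, sums, 3)))   # 4 8192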

GFS Performance
[Performance figures not transcribed; see the measurements in Ghemawat et al., SOSP 2003.]

Hadoop Distributed File System (HDFS)
- Open-source clone of GFS: similar assumptions, very similar design and architecture
- Differences
  - no support for random writes; append only
  - emphasizes platform independence (implemented in Java)
  - possibly, HDFS does not use a lookup table to manage the namespace
  - terminology (see the mapping below)
- Terminology mapping
    namenode  <-> master
    datanode  <-> chunkserver
    block     <-> chunk
    edit log  <-> operation log

HDFS Architecture
[Figure: HDFS architecture. Figure credit: "HDFS Architecture Guide" by D. Borthakur, 2008]

Example Cluster Sizes
- GFS (2003)
  - 227 chunkservers
  - 180 TB available space, 155 TB used space
  - 737k files, 232k dead files, 1550k chunks
  - 60 MB metadata on the master, 21 GB metadata on the chunkservers
- HDFS (2010)
  - 3500 nodes
  - 60 million files, 63 million blocks, 2 million new files per day
  - 54k block replicas per datanode
  - all 25k nodes in HDFS clusters at Yahoo! provide 25 PB of storage
  - large Hadoop clusters now: Yahoo! 5000-node cluster (?), with 42,000 nodes running Hadoop overall; also Facebook, LinkedIn, NetSeer, Quantcast

References
- S. Ghemawat, H. Gobioff, and S.-T. Leung: The Google File System. Proc. Symposium on Operating Systems Principles (SOSP), pp. 29-43, 2003.
- D. Borthakur: HDFS Architecture Guide. 2008.
- K. Shvachko, H. Kuang, S. Radia, and R. Chansler: The Hadoop Distributed File System. Proc. IEEE Symposium on Mass Storage Systems and Technologies (MSST), pp. 1-10, 2010.