
Storage Systems NPTEL Course Jan 2012 (Lecture 41) K. Gopinath Indian Institute of Science

Lease Management
- designed to minimize management overhead at the master
- a lease initially times out at 60 seconds
- the primary can request and typically receives extensions indefinitely; extension requests and grants are piggybacked on heartbeat messages
- the master may need to revoke a lease before it expires, e.g. to disable mutations on a file that is being renamed
- even if the master loses communication with a primary, it is safe to grant a new lease to another replica after the old lease expires
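A minimal Python sketch of this bookkeeping, under assumed names (LeaseTable, grant, renew_on_heartbeat, revoke are illustrative, not GFS code):

    import time

    LEASE_SECS = 60  # initial lease duration from the slide

    class LeaseTable:
        def __init__(self):
            # chunk handle -> (primary chunkserver, expiry time)
            self.leases = {}

        def grant(self, chunk, chunkserver, now=None):
            now = now if now is not None else time.time()
            current = self.leases.get(chunk)
            # Safe to grant to another replica only after the old lease has expired.
            if current and current[1] > now and current[0] != chunkserver:
                return None
            self.leases[chunk] = (chunkserver, now + LEASE_SECS)
            return self.leases[chunk]

        def renew_on_heartbeat(self, chunkserver, chunks, now=None):
            # Extension requests and grants ride on heartbeat messages.
            now = now if now is not None else time.time()
            for chunk in chunks:
                holder = self.leases.get(chunk)
                if holder and holder[0] == chunkserver and holder[1] > now:
                    self.leases[chunk] = (chunkserver, now + LEASE_SECS)

        def revoke(self, chunk):
            # e.g. before renaming a file, to disable further mutations on it
            self.leases.pop(chunk, None)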

Record Appends
- the client pushes the data to all replicas of the last chunk of the file
- the client then sends the append request to the primary
- the primary checks whether the append would exceed the maximum chunk size (64 MB)
- if so, it pads the chunk to the maximum size, tells the secondaries to do the same, and replies to the client to retry on the next chunk
- if the record fits within the maximum size (the common case), the primary appends the data to its replica, tells the secondaries to write the data at the exact offset where it appended, and reports success to the client
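A rough sketch of the primary's decision, using an illustrative in-memory ChunkReplica stand-in (all names here are assumptions made for the example):

    CHUNK_MAX = 64 * 1024 * 1024  # 64 MB maximum chunk size

    class ChunkReplica:
        """Illustrative in-memory stand-in for one replica of a chunk."""
        def __init__(self):
            self.data = bytearray()

        def write_at(self, offset, record):
            end = offset + len(record)
            if len(self.data) < end:
                self.data.extend(b"\0" * (end - len(self.data)))
            self.data[offset:end] = record

        def pad_to(self, size):
            if len(self.data) < size:
                self.data.extend(b"\0" * (size - len(self.data)))

    def primary_record_append(primary, secondaries, record):
        # Overflow case: pad every replica and ask the client to retry on the next chunk.
        if len(primary.data) + len(record) > CHUNK_MAX:
            primary.pad_to(CHUNK_MAX)
            for s in secondaries:
                s.pad_to(CHUNK_MAX)
            return ("retry_on_next_chunk", None)
        # Common case: append at the primary, then replay at the same offset everywhere.
        offset = len(primary.data)
        primary.write_at(offset, record)
        for s in secondaries:
            s.write_at(offset, record)
        return ("success", offset)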

Record Appends (contd)
- if a record append fails at any replica, the client retries the operation
- replicas of the same chunk may therefore contain different data, possibly including duplicates of the same record, in whole or in part
- GFS does not guarantee that all replicas are bytewise identical; it only guarantees that the data is written at least once as an atomic unit
- for the operation to report success, the data must have been written at the same offset on all replicas of some chunk
- afterwards, all replicas are at least as long as the end of the record, and any future record will be assigned a higher offset or a different chunk, even if a different replica later becomes the primary
- regions in which successful record append operations have written their data are defined (hence consistent), whereas intervening regions are inconsistent (hence undefined)
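Because appends are at-least-once, applications deal with duplicates themselves. The sketch below assumes an application-level convention of tagging each record with a unique id so readers can filter duplicates; append_rpc is a placeholder for the real client-library call:

    import uuid

    def append_with_retry(append_rpc, record_body, max_tries=5):
        """Client-side at-least-once append: tag the record with a unique id
        so readers can discard duplicates created by retries."""
        record = uuid.uuid4().bytes + record_body
        for _ in range(max_tries):
            status, offset = append_rpc(record)
            if status == "success":
                return offset
        raise IOError("record append failed after retries")

    def deduplicate(records):
        """Reader-side filter: keep only the first copy of each record id."""
        seen, out = set(), []
        for rec in records:
            rid, body = rec[:16], rec[16:]
            if rid not in seen:
                seen.add(rid)
                out.append(body)
        return out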

Snapshots
- make a copy of a file or a directory tree almost instantly, while minimizing interruptions of ongoing mutations
- standard copy-on-write: when the master receives a snapshot request, it revokes any outstanding leases on the chunks in the files to be snapshotted
- any later write to these chunks must first ask the master for the lease holder, which gives the master a chance to create a new copy of the chunk
- the master logs the snapshot operation to disk, then applies the log record to its in-memory state by duplicating the metadata of the source file or directory tree
- the newly created snapshot files point to the same chunks as the source files
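A sketch of this flow under assumed data structures: plain dicts for the file-to-chunk mapping and reference counts, with callables standing in for lease revocation and the operation log:

    def handle_snapshot(files, refcount, revoke_lease, oplog_append, src, dst):
        """'files' maps path -> list of chunk handles, 'refcount' maps chunk
        handle -> count; 'revoke_lease' and 'oplog_append' are assumed
        callables for the two side effects the slide mentions."""
        # 1. Revoke outstanding leases so any later write must ask the master.
        for chunk in files[src]:
            revoke_lease(chunk)
        # 2. Log the operation durably before touching in-memory state.
        oplog_append(("snapshot", src, dst))
        # 3. Duplicate metadata only: the snapshot points at the same chunks,
        #    so reference counts go up but no chunk data is copied yet.
        files[dst] = list(files[src])
        for chunk in files[src]:
            refcount[chunk] = refcount.get(chunk, 1) + 1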

Snapshots (contd)
- the first client write to a chunk C after the snapshot sends a request to the master to find the current lease holder
- the master notices that the reference count of C is greater than 1
- it defers replying to the client request and instead picks a new chunk handle C'
- it asks each chunkserver that holds a current replica of C to create a new chunk called C'
- the data is copied locally, not over the network (disks are roughly 3x faster than 100 Mb Ethernet links)
- request handling is now the same as usual: the master grants one replica a lease on C' and replies to the client, which then writes the chunk normally
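Continuing the sketch, an illustrative handler for the first write to a chunk after a snapshot (the handle generator and the clone_local / grant_lease callables are assumptions):

    import itertools

    _handles = itertools.count(1000)   # illustrative chunk-handle generator

    def handle_first_write(refcount, replicas, clone_local, grant_lease, c):
        """'refcount' maps handle -> count, 'replicas' maps handle -> list of
        chunkserver ids; 'clone_local(cs, src, dst)' and 'grant_lease(handle)'
        stand in for the real chunkserver RPC and lease grant."""
        if refcount.get(c, 1) <= 1:
            return grant_lease(c)              # nothing special to do
        # Defer the client: pick a new handle C' and have every chunkserver
        # holding C clone it locally, avoiding a copy over the network.
        c_prime = next(_handles)
        for cs in replicas[c]:
            clone_local(cs, c, c_prime)
        replicas[c_prime] = list(replicas[c])
        refcount[c] -= 1
        refcount[c_prime] = 1
        # Handling is now the usual path: grant one replica a lease on C' and
        # let the client write normally.
        return grant_lease(c_prime)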

Master Operation
- executes all namespace operations
- manages chunk replicas: makes placement decisions, creates new chunks and hence replicas
- coordinates various system-wide activities to keep chunks fully replicated, to balance load across all chunkservers, and to reclaim unused storage
- many operations take a long time: e.g. a snapshot operation has to revoke chunkserver leases on all the chunks it covers
- so multiple operations are allowed to be active, with locks over regions of the namespace to ensure proper serialization

GFS Directory Operations
- GFS does not have a per-directory data structure that lists all the files in that directory
- instead: a lookup table mapping full pathnames to metadata; with prefix compression it is efficient in memory
- no hard or symbolic links
- each node in the namespace tree (an absolute file or directory name) has an associated read-write lock
- each master operation acquires a set of locks before it runs
- an operation on /d1/d2/.../dn/leaf acquires read locks on the directory names /d1, /d1/d2, ..., /d1/d2/.../dn, and either a read lock or a write lock on the full pathname /d1/d2/.../dn/leaf
- this allows concurrent mutations in the same directory: multiple file creations can proceed concurrently in the same directory, since each acquires only a read lock on the directory name and a write lock on the file name (a locking sketch follows the GFS Locks slide below)

GFS Directory Operations (contd)
- Example: prevent a file /home/user/foo from being created while /home/user is being snapshotted to /save/user
- locks acquired:

                     snapshot   create
    /home            R          R
    /save            R
    /home/user       W          R
    /save/user       W
    /home/user/foo              W

- since the snapshot takes a write lock on /home/user and the create takes a read lock on it, the two conflict there and only one of them can proceed at a time
- related HSM problem: a file is being migrated in by a "perpetrator" (which locks the file's inode, say inode a); a "victim" looking up the file keeps the parent directory locked while the lookup goes on, but the file is locked by the perpetrator, so the victim hangs for the duration of the migration; the next victim then blocks merely looking up the parent directory!

GFS Locks
- read-write lock objects are allocated lazily, to handle a large namespace with many nodes, and are deleted once no longer in use
- locks are taken in a consistent total order to prevent deadlock: ordered first by level in the namespace tree, and then lexicographically within the same level
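A small sketch of computing and ordering the lock set as described on these two slides; the function and structures are illustrative, not from GFS code:

    def lock_plan(paths_with_modes):
        """paths_with_modes: list of (pathname, 'R' or 'W') for the leaf lock.
        Returns every lock the operation must take: read locks on all ancestor
        directories plus the requested lock on each full name, sorted first by
        namespace-tree level, then lexicographically within a level."""
        locks = {}
        for path, mode in paths_with_modes:
            parts = path.strip("/").split("/")
            for i in range(1, len(parts)):
                ancestor = "/" + "/".join(parts[:i])
                locks.setdefault(ancestor, "R")
            # a write request on the full name overrides a read already recorded
            locks[path] = "W" if mode == "W" else locks.get(path, "R")
        return sorted(locks.items(), key=lambda kv: (kv[0].count("/"), kv[0]))

    # Snapshot of /home/user to /save/user: W on both user dirs, R on parents.
    print(lock_plan([("/home/user", "W"), ("/save/user", "W")]))
    # Creating /home/user/foo: R on /home and /home/user, W on the new file.
    print(lock_plan([("/home/user/foo", "W")]))

The two example calls reproduce the conflict from the previous slide: both plans contain a lock on /home/user (write for the snapshot, read for the create), so the operations are serialized there.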

Replica Placement
- hundreds of chunkservers and clients are spread across many machine racks, crossing many network switches
- bandwidth into or out of a rack may be less than the aggregate bandwidth of all machines within the rack
- goals: maximize data reliability and availability, and make good use of network bandwidth
- so chunk replicas are spread across racks: data survives even if an entire rack is damaged or offline
- read traffic for a chunk can exploit the aggregate bandwidth of multiple racks; write traffic, however, has to flow to multiple racks, a tradeoff

Chunk Creation, Re-replication, Rebalancing
- when the master creates a chunk, it chooses locations for the initial empty replicas:
  - place new replicas on chunkservers with below-average disk space utilization
  - limit the number of recent creations on each chunkserver (creation is cheap, but heavy write traffic follows soon; in an append-once-read-many workload, chunks typically become read-only once they have been completely written)
  - spread the replicas of a chunk across racks
- the master re-replicates a chunk when the number of available replicas falls below the required number
  - some policies: boost the priority of any chunk that is blocking client progress or has lost too many replicas; limit the number of active replications in the cluster and on each chunkserver; each chunkserver limits replication bandwidth by throttling its read requests to the source chunkserver
- the master also rebalances replicas
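An illustrative placement policy in the spirit of these rules (the chunkserver dictionary keys 'id', 'rack', 'disk_util', 'recent_creations' are assumptions, not GFS data structures):

    def pick_creation_targets(chunkservers, num_replicas=3, max_recent=2):
        """Prefer below-average disk utilization, skip servers with too many
        recent creations, and spread the chosen replicas across racks.
        Returns up to num_replicas chunkserver ids (fewer if the candidate
        set is too small; a real policy would relax constraints instead)."""
        avg_util = sum(cs["disk_util"] for cs in chunkservers) / len(chunkservers)
        candidates = [cs for cs in chunkservers
                      if cs["disk_util"] <= avg_util
                      and cs["recent_creations"] < max_recent]
        # Least-loaded first, then greedily take at most one server per rack.
        candidates.sort(key=lambda cs: cs["disk_util"])
        chosen, racks_used = [], set()
        for cs in candidates:
            if cs["rack"] not in racks_used:
                chosen.append(cs["id"])
                racks_used.add(cs["rack"])
            if len(chosen) == num_replicas:
                break
        return chosen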

Garbage Collection
- deleted files are garbage-collected later, during regular file system scans; until then they remain available under a hidden name
- eager deletion is problematic (e.g. delete messages may be lost, and write errors create garbage chunks unknown to the master)
- a similar regular scan of the chunk namespace: the master identifies orphaned chunks (not reachable from any file) and erases their metadata
- distributed garbage collection is a hard problem in general, but here the file-to-chunk mappings are kept only by the master
- every chunk replica is a Linux file in a designated directory on each chunkserver; any such replica not known to the master is garbage
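A sketch of the two scans, assuming the master's state is available as plain dictionaries and sets (all parameter names are illustrative):

    def gc_scan(file_to_chunks, chunk_namespace, chunkserver_reports):
        """'file_to_chunks' maps pathname -> set of chunk handles,
        'chunk_namespace' is the set of chunk handles the master keeps
        metadata for, and 'chunkserver_reports' maps chunkserver id -> set
        of handles it stores."""
        reachable = set()
        for chunks in file_to_chunks.values():
            reachable |= chunks
        # Chunks with master metadata but reachable from no file are orphaned;
        # the master erases their metadata.
        orphaned = chunk_namespace - reachable
        # Replicas a chunkserver holds that the master does not know about are
        # garbage; the chunkserver may delete them (e.g. told in a heartbeat reply).
        garbage_per_server = {cs: handles - chunk_namespace
                              for cs, handles in chunkserver_reports.items()}
        return orphaned, garbage_per_server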

Stale Replica Detection
- a replica may become stale if its chunkserver fails and misses mutations
- the master maintains a chunk version number to distinguish between up-to-date and stale replicas
- when the master grants a new lease on a chunk, it increments the chunk version number and informs the up-to-date replicas
- the master and these replicas all log the new version number before any client is notified and can write to the chunk
- if a replica is unavailable at that point, its chunk version number is not incremented
- the master detects stale replicas when a chunkserver restarts and reports its set of chunks and their associated version numbers
- if the master sees a version number greater than its own, it assumes that it failed while granting the lease, and so takes the higher version to be up-to-date
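A sketch of the version-number bookkeeping, with illustrative structure and method names:

    class ChunkVersions:
        def __init__(self):
            self.master_version = {}       # chunk handle -> version at master

        def on_grant_lease(self, chunk, reachable_replica_versions):
            """Bump the version and record it for every reachable replica;
            an unreachable replica keeps its old number and becomes stale.
            'reachable_replica_versions' maps chunkserver id -> {chunk: version}."""
            new_v = self.master_version.get(chunk, 0) + 1
            self.master_version[chunk] = new_v
            for versions in reachable_replica_versions.values():
                versions[chunk] = new_v
            return new_v

        def on_report(self, chunk, reported_version):
            """Called when a restarting chunkserver reports one of its chunks."""
            master_v = self.master_version.get(chunk, 0)
            if reported_version < master_v:
                return "stale"             # this replica missed mutations
            if reported_version > master_v:
                # The master must have failed while granting a lease; take the
                # higher version to be up-to-date.
                self.master_version[chunk] = reported_version
            return "up-to-date"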

Fault Tolerance and Diagnosis
High Availability
- fast recovery: no distinction between normal and abnormal termination; servers are routinely shut down just by killing the process
- clients and other servers that time out on their requests simply reconnect to the restarted server and retry
- chunk replication
- master replication: the master's operation log and checkpoints are replicated on multiple machines
  - a mutation to state is committed only after its log record has been flushed to disk locally and on all master replicas
  - but one master process remains in charge of all mutations, as well as background activities such as garbage collection that change the system internally
  - when the master fails, restart is almost instant; if its machine or disk fails, monitoring infrastructure outside GFS starts a new master process elsewhere with the replicated operation log
  - shadow masters are read-only servers that try to track the master by applying the same log of operations
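A sketch of the commit rule only, assuming the operation logs are open binary files (in reality the flushes on the master replicas would happen via RPC):

    import os

    def commit_mutation(record, local_log, replica_logs):
        """A mutation counts as committed only after its log record has been
        flushed to stable storage locally and on all master replicas."""
        for log in [local_log] + list(replica_logs):
            log.write(record + b"\n")
            log.flush()
            os.fsync(log.fileno())     # force the record to stable storage
        return True                    # only now may the mutation be applied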

Fault Tolerance and Diagnosis (contd)
Data Integrity
- 32-bit checksums are kept in memory and logged persistently, separate from user data
- it is impractical to detect corruption by comparing replicas across chunkservers; divergent replicas are legal (especially with atomic record append)
- so each chunkserver must independently verify the integrity of its own copy by maintaining checksums
- during idle periods, chunkservers scan and verify the contents of inactive chunks
Diagnostic Tools
- detailed diagnostic (asynchronous) logging
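A sketch of per-block checksumming and read-time verification; the GFS paper checksums 64 KB blocks within each chunk, and CRC32 here is only a stand-in for whatever checksum function was actually used:

    import zlib

    BLOCK = 64 * 1024   # 64 KB blocks, each with its own 32-bit checksum

    def block_checksums(chunk_bytes):
        """Compute one 32-bit checksum per block of a chunk replica."""
        return [zlib.crc32(chunk_bytes[i:i + BLOCK]) & 0xFFFFFFFF
                for i in range(0, len(chunk_bytes), BLOCK)]

    def verify_read(chunk_bytes, stored_checksums, offset, length):
        """Before returning data for a read, verify every block the read range
        overlaps; a mismatch means this replica is corrupt and the requester
        should be redirected to another replica."""
        first, last = offset // BLOCK, (offset + length - 1) // BLOCK
        for b in range(first, last + 1):
            block = chunk_bytes[b * BLOCK:(b + 1) * BLOCK]
            if (zlib.crc32(block) & 0xFFFFFFFF) != stored_checksums[b]:
                return False
        return True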

Summary
- the first really large file system
- reliable, scalable, available
- no POSIX...