ECE 7650 Scalable and Secure Internet Services and Architecture ---- A Systems Perspective


ECE 7650 Scalable and Secure Internet Services and Architecture ---- A Systems Perspective
Part II: Data Center Software Architecture
Topic 1: Distributed File Systems: GFS (The Google File System)

Filesystems Overview
- Permanently stores data
- Usually layered on top of a lower-level physical storage medium
- Divided into logical units called files
  - Addressable by a filename (foo.txt)
  - Usually supports hierarchical nesting (directories)
- A file path = relative (or absolute) directory + file name: /dir1/dir2/foo.txt

Distributed Filesystems
- Support access to files on remote servers
- Must support concurrency
  - Make varying guarantees about locking, who wins with concurrent writes, etc.
- Must gracefully handle dropped connections
- Can offer support for replication and local caching
- Different implementations sit in different places on the complexity/feature scale

Motivation
- Google needed a good distributed file system
  - Redundant storage of massive amounts of data on cheap and unreliable computers
- Why not use an existing file system?
  - Google's problems are different from anyone else's
  - Different workload and design priorities
- GFS is designed for Google apps and workloads; Google apps are designed for GFS

Assumptions
- High component failure rates
  - Inexpensive commodity components fail all the time
- Modest number of HUGE files
  - Just a few million
  - Each is 100 MB or larger; multi-GB files are typical
- Files are write-once, mostly appended to
  - Perhaps concurrently
- Large streaming reads
- High sustained throughput favored over low latency

GFS Design Decisions
- Files stored as chunks
  - Fixed size (64 MB)
- Reliability through replication
  - Each chunk replicated across 3+ chunkservers
- Single master to coordinate access and keep metadata
  - Simple centralized management
- No data caching
  - Little benefit due to large data sets, streaming reads
- Familiar interface, but customize the API
  - Simplify the problem; focus on Google apps
  - Add snapshot and record append operations
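
To make the fixed chunk size concrete, a client maps a byte offset within a file to a chunk index before asking the master for that chunk's handle and replica locations. The sketch below is illustrative Python, not GFS code; the helper names are invented here.

    CHUNK_SIZE = 64 * 1024 * 1024  # the fixed 64 MB GFS chunk size

    def chunk_index(byte_offset: int) -> int:
        """Translate a byte offset within a file into a chunk index."""
        return byte_offset // CHUNK_SIZE

    def chunk_range(byte_offset: int, length: int) -> range:
        """All chunk indices touched by a read of `length` bytes starting at `byte_offset`."""
        first = chunk_index(byte_offset)
        last = chunk_index(byte_offset + length - 1)
        return range(first, last + 1)

    # Example: a 100 MB read starting at offset 200 MB touches chunks 3 and 4.
    print(list(chunk_range(200 * 1024 * 1024, 100 * 1024 * 1024)))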

GFS Architecture
Can anyone see a potential weakness in this design?

Single Master
- Problem:
  - Single point of failure
  - Scalability bottleneck
- GFS solutions:
  - Shadow masters
  - Minimize master involvement
    - Never move data through it; use it only for metadata, and cache metadata at clients
    - Large chunk size
    - Master delegates authority to primary replicas in data mutations (chunk leases)
- Simple, and good enough for Google's concerns

Metadata
- Global metadata is stored on the master
  - File and chunk namespaces
  - Mapping from files to chunks
  - Locations of each chunk's replicas
- All in memory (64 bytes / chunk)
  - Fast
  - Easily accessible
- Master has an operation log for persistent logging of critical metadata updates
  - Persistent on local disk
  - Replicated
  - Checkpoints for faster recovery
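
To make the three kinds of metadata concrete, here is a minimal sketch of the in-memory structures a GFS-like master would keep; the class and field names are illustrative, not taken from the paper.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ChunkInfo:
        handle: int                                          # globally unique chunk handle
        version: int = 0                                     # chunk version number
        replicas: List[str] = field(default_factory=list)    # chunkserver addresses (not logged;
                                                             # rebuilt from chunkserver reports)

    @dataclass
    class MasterMetadata:
        files: Dict[str, List[int]] = field(default_factory=dict)    # file path -> ordered chunk handles
        chunks: Dict[int, ChunkInfo] = field(default_factory=dict)   # chunk handle -> chunk info

Namespaces and the file-to-chunk mapping are made persistent through the operation log; replica locations are volatile and re-learned from chunkservers at startup and via heartbeats.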

Master's Responsibilities
- Metadata storage
- Namespace management/locking
- Periodic communication with chunkservers
  - Give instructions, collect state, track cluster health
- Chunk creation, re-replication, rebalancing
  - Balance space utilization and access speed
  - Spread replicas across racks to reduce correlated failures
  - Re-replicate data if redundancy falls below threshold
  - Rebalance data to smooth out storage and request load
- Garbage collection
  - Simpler, more reliable than traditional file delete
  - Master logs the deletion, renames the file to a hidden name
  - Lazily garbage collects hidden files
- Stale replica deletion
  - Detect stale replicas using chunk version numbers
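
Stale replica detection rests on the chunk version number: the master bumps it whenever it grants a new lease on the chunk, so a replica that was down during a mutation reports an older version and is scheduled for removal. A minimal, illustrative sketch:

    # Master-side view: chunk handle -> latest version number the master has recorded.
    chunk_versions: dict = {}

    def grant_lease(handle: int) -> int:
        """Bump the version before granting a new lease; up-to-date replicas record the new version."""
        chunk_versions[handle] = chunk_versions.get(handle, 0) + 1
        return chunk_versions[handle]

    def is_stale(handle: int, reported_version: int) -> bool:
        """A replica reporting an older version missed a mutation while it was unavailable."""
        return reported_version < chunk_versions.get(handle, 0)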

Mutations
- Mutation = write or record append
- Must be done for all replicas
- Goal: minimize master involvement
- Lease mechanism:
  - Master picks one replica as primary of a chunk; gives it a lease for mutations
  - Data flow decoupled from control flow
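
A lease is a time-limited grant of primary status for one chunk; the paper's initial timeout is 60 seconds, and the primary can request extensions. A minimal sketch of what the master might track, with illustrative names:

    import time

    LEASE_TIMEOUT_S = 60  # initial lease duration in the GFS paper

    class LeaseTable:
        def __init__(self) -> None:
            self._leases = {}  # chunk handle -> (primary address, expiry time)

        def grant(self, handle: int, primary: str) -> None:
            self._leases[handle] = (primary, time.time() + LEASE_TIMEOUT_S)

        def current_primary(self, handle: int):
            """Return the primary if its lease is still valid, else None (a new lease must be granted)."""
            entry = self._leases.get(handle)
            if entry is None or entry[1] < time.time():
                return None
            return entry[0]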

Read Algorithm
1. Application originates the read request
2. GFS client translates the request and sends it to the master
3. Master responds with chunk handle and replica locations

Read Algorithm
4. Client picks a location and sends the request
5. Chunkserver sends the requested data to the client
6. Client forwards the data to the application
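
Put together, the six read steps amount to the client-side logic sketched below. The master_rpc and chunkserver_rpc helpers are hypothetical stand-ins for the real RPC layer, and the sketch assumes the read does not cross a chunk boundary.

    CHUNK_SIZE = 64 * 1024 * 1024

    def gfs_read(path: str, offset: int, length: int, master_rpc, chunkserver_rpc) -> bytes:
        """Client-side read path: metadata from the master, data directly from a chunkserver."""
        index = offset // CHUNK_SIZE                            # step 2: translate (path, offset) to a chunk index
        handle, replicas = master_rpc.find_chunk(path, index)   # step 3: chunk handle + replica locations
        replica = replicas[0]                                   # step 4: pick a location (e.g., the closest)
        data = chunkserver_rpc.read(replica, handle,
                                    offset % CHUNK_SIZE, length)  # step 5: chunkserver returns the bytes
        return data                                             # step 6: hand the data to the application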

Write Algorithm
1. Application originates the request
2. GFS client translates the request and sends it to the master
3. Master responds with chunk handle and replica locations, which are cached by the client

Write Algorithm
4. Client pushes write data to all locations; the data is stored in the chunkservers' internal buffers

Write Algorithm
5. Client sends the write command to the primary
6. Primary determines a serial order for the data instances in its buffer and writes the instances in that order to the chunk
7. Primary sends the serial order to the secondaries and tells them to perform the write

Write Algorithm
8. Secondaries respond back to the primary
9. Primary responds back to the client
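
The nine write steps condense into the client/primary interaction sketched below. As before, master_rpc and chunkserver_rpc are hypothetical stand-ins, and retries, lease expiry, and error handling are omitted.

    CHUNK_SIZE = 64 * 1024 * 1024

    def gfs_write(path: str, offset: int, data: bytes, master_rpc, chunkserver_rpc) -> None:
        """Client-side write path: push data to every replica, then ask the primary to commit it."""
        index = offset // CHUNK_SIZE
        handle, primary, secondaries = master_rpc.find_lease_holder(path, index)  # steps 2-3
        for server in [primary] + secondaries:               # step 4: data flow only; replicas buffer the bytes
            chunkserver_rpc.push_data(server, handle, data)
        # Steps 5-9: control flow goes through the primary, which picks a serial order,
        # applies the write, forwards that order to the secondaries, and acknowledges
        # the client only after every secondary has replied.
        chunkserver_rpc.write_commit(primary, handle, offset % CHUNK_SIZE, secondaries)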

Atomic Record Append
- The client specifies only the data; GFS appends it to the file atomically, at least once
- GFS picks the offset
- Works for concurrent writers
- Used heavily by Google apps
  - e.g., for files that serve as multiple-producer/single-consumer queues
  - Merge results from multiple machines into one file

Record Append Algorithm
Same as write, but with no offset, and:
1. Client pushes the write data to all locations
2. Primary checks if the record fits in the specified chunk
3. If the record does not fit, the primary:
   1. Pads the chunk
   2. Tells the secondaries to do the same
   3. Informs the client and has the client retry
4. If the record fits, then the primary:
   1. Appends the record
   2. Tells the secondaries to do the same
   3. Receives responses and responds to the client
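
The primary-side decision in steps 2-4 can be sketched as follows, assuming an in-memory chunk buffer; real chunkservers store chunks as local Linux files, and the names here are illustrative.

    CHUNK_SIZE = 64 * 1024 * 1024
    PAD_BYTE = b"\x00"

    def primary_record_append(chunk: bytearray, record: bytes):
        """Return ('ok', offset) if the record was appended, or ('retry_next_chunk', None)."""
        if len(chunk) + len(record) > CHUNK_SIZE:
            # Record does not fit: pad this chunk to its full size, have the secondaries
            # pad as well, and make the client retry on the next chunk.
            chunk.extend(PAD_BYTE * (CHUNK_SIZE - len(chunk)))
            return "retry_next_chunk", None
        offset = len(chunk)       # GFS, not the client, picks the offset
        chunk.extend(record)      # append locally; the same offset is forwarded to the secondaries
        return "ok", offset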

Relaxed Consistency Model
- Consistent = all replicas have the same value
- Defined = the region reflects the mutation in its entirety, and is consistent
- Some properties:
  - Concurrent writes leave the region consistent, but possibly undefined
  - Failed writes leave the region inconsistent
- Some work has moved into the applications:
  - e.g., self-validating, self-identifying records
- Simple and efficient; Google apps can live with it
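
Because record append may leave padding and duplicate records, applications typically frame their payloads as self-validating, self-identifying records: a checksum lets a reader skip padding and partial writes, and a record identifier lets it drop duplicates. The framing below is only one possible format, not the one Google used.

    import struct
    import zlib

    def encode_record(record_id: bytes, payload: bytes) -> bytes:
        """Frame a record so a reader can validate it (checksum) and deduplicate it (record id)."""
        body = struct.pack(">I", len(record_id)) + record_id + payload
        return struct.pack(">II", zlib.crc32(body), len(body)) + body

    def decode_record(buf: bytes, offset: int):
        """Return (record_id, payload, next_offset), or None if the bytes at offset are padding/garbage."""
        if offset + 8 > len(buf):
            return None
        checksum, length = struct.unpack_from(">II", buf, offset)
        body = buf[offset + 8 : offset + 8 + length]
        if len(body) != length or zlib.crc32(body) != checksum:
            return None   # padding or a partially written record: caller should resynchronize
        id_len = struct.unpack_from(">I", body)[0]
        return body[4 : 4 + id_len], body[4 + id_len :], offset + 8 + length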

Garbage Collection
GFS uses garbage collection instead of immediate reclamation when a file is deleted:
- The master only records that the file has been removed; its chunks are not reclaimed right away.
- Through the regular HeartBeat message exchange, chunkservers learn which of their chunks are orphaned and can be removed.
Why garbage collection and not eager deletion?
- Simple and reliable, as the master doesn't have to send and manage deletion messages.
- Batched operation gives lower amortized cost and better timing.
- The delay provides a safety net against accidental, irreversible deletion.
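
A minimal sketch of the two halves of this scheme on the master, assuming a flat path-to-chunks namespace; the hidden-name prefix is invented here, and the grace period mirrors the paper's default of three days.

    import time

    HIDDEN_PREFIX = ".deleted."                   # illustrative hidden-name prefix
    RETENTION_S = 3 * 24 * 3600                   # grace period before reclamation
    namespace = {"/logs/app.log": [101, 102]}     # file path -> chunk handles (illustrative)

    def delete_file(path: str) -> None:
        """Logical delete: rename the file to a hidden, timestamped name; no chunk is touched."""
        namespace[f"{HIDDEN_PREFIX}{int(time.time())}.{path}"] = namespace.pop(path)

    def namespace_scan() -> None:
        """Periodic scan: drop hidden files past the grace period. Their chunks become orphaned and
        are reclaimed when chunkservers next report them in HeartBeat messages."""
        now = time.time()
        for name in list(namespace):
            if name.startswith(HIDDEN_PREFIX):
                deleted_at = int(name[len(HIDDEN_PREFIX):].split(".", 1)[0])
                if now - deleted_at > RETENTION_S:
                    del namespace[name]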

Fault Tolerance
- High availability
  - Fast recovery: master and chunkservers restartable in a few seconds
  - Chunk replication: default of 3 replicas
  - Shadow masters
- Data integrity
  - Checksum every 64 KB block in each chunk
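
The data-integrity mechanism can be sketched as one 32-bit checksum per 64 KB block, verified by the chunkserver before any data is returned to a requester. CRC32 and the helper names are illustrative; the paper only specifies a 32-bit checksum per block.

    import zlib

    BLOCK_SIZE = 64 * 1024   # each chunk is checksummed in 64 KB blocks

    def block_checksums(chunk_data: bytes) -> list:
        """Compute one 32-bit checksum per 64 KB block of a chunk."""
        return [zlib.crc32(chunk_data[i:i + BLOCK_SIZE])
                for i in range(0, len(chunk_data), BLOCK_SIZE)]

    def verified_read(chunk_data: bytes, checksums: list, offset: int, length: int) -> bytes:
        """Verify every block overlapping the requested range before returning data."""
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        for b in range(first, last + 1):
            block = chunk_data[b * BLOCK_SIZE:(b + 1) * BLOCK_SIZE]
            if zlib.crc32(block) != checksums[b]:
                raise IOError(f"checksum mismatch in block {b}; corruption is reported to the master")
        return chunk_data[offset:offset + length]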

Conclusion
GFS demonstrates how to support large-scale processing workloads on commodity hardware:
- Design to tolerate frequent component failures
- Optimize for huge files that are mostly appended to and then read
- Feel free to relax and extend the FS interface as required
- Go for simple solutions (e.g., single master)
GFS has met Google's storage needs, and is therefore good enough for them.
Acknowledgement: The slides are adapted from slides made by Chris Hill and Sussman.

Questions
- What workload and technology characteristics did GFS assume in its design, and what are the corresponding design choices? (Sections 1 and 2.1)
- What design choices does GFS adopt to keep the master from being a bottleneck? (Sections 2.4, 2.5, 3.1)
- Why does GFS use garbage collection, rather than eager deletion, for its file deletions? (Section 4.4.1)