The Google File System. Alexandru Costan



Actions on Big Data 2 [diagram: acquisition, storage, and analysis of structured, unstructured, and semi-structured data; handling the data stream; transactions; results]

Outline File systems overview GFS (Google File System) Motivations Architecture Algorithms HDFS (Hadoop File System) 3

File systems overview Permanently stores data Usually layered on top of a lower-level physical storage medium Divided into logical units called files Addressable by a filename (e.g., foo.txt) Usually supports hierarchical nesting (directories) A file path = relative (or absolute) directory + file name, e.g., /dir1/dir2/foo.txt 4

Distributed file systems Support access to files on remote servers Must support concurrency Make varying guarantees about locking, who wins with concurrent writes, etc. Must gracefully handle dropped connections Can offer support for replication and local caching 5

Motivating application: search Crawl the whole web Store it all on one big disk Process user searches on one big CPU Doesn't scale! 6

Motivation Google needed a good distributed file system Redundant storage of massive amounts of data on cheap and unreliable computers Why not use an existing file system? Google's problems are different from anyone else's Different workload and design priorities GFS is designed for Google apps and workloads Google apps are designed for GFS 7

Assumptions: environment Commodity hardware Inexpensive High component failure rates Inexpensive commodity components fail all the time The norm rather than the exception Huge storage needs Must support TBs of space 8

Assumptions: applications Modest number of HUGE files Just a few million Each is 100 MB or larger; multi-GB files typical Files are write-once, mostly appended to Large, sequential writes that append Multiple clients concurrently append to one file (e.g., producer-consumer queues) Want atomicity for appends without synchronization overhead among clients Large streaming reads High sustained throughput favored over low latency 9

GFS design principles Files stored as chunks Fixed size (64MB) Replication Each chunk replicated across 3+ chunkservers No data caching: little benefit due to large data sets 10
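To make the fixed-size chunk abstraction concrete, here is a minimal Python sketch, not taken from any GFS code, of how a client library might map a byte offset in a file to a chunk index plus an offset within that 64 MB chunk; the function name is illustrative.

```python
# Minimal sketch: mapping a file offset to (chunk index, offset within chunk),
# assuming the fixed 64 MB chunk size described above. Names are illustrative.
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB

def locate(offset: int) -> tuple[int, int]:
    """Return (chunk_index, offset_within_chunk) for a byte offset in a file."""
    return offset // CHUNK_SIZE, offset % CHUNK_SIZE

# Example: byte 200,000,000 lands in chunk index 2, at offset 65,782,272.
print(locate(200_000_000))
```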

GFS Architecture Single master to coordinate access, keep metadata Simple centralized management Multiple chunkservers Grouped into racks Connected through switches Master/chunkserver coordination: HeartBeat messages Multiple clients 11

GFS Architecture 12 Can anyone see a potential weakness in this design?

Single master Problem: single point of failure, scalability bottleneck. GFS solutions: shadow masters; minimize master involvement (never move data through it, use it only for metadata, and cache metadata at clients; large chunk size; the master delegates authority to primary replicas in data mutations via chunk leases). Simple, and good enough for Google's concerns 13

Metadata Global metadata is stored on the master File and chunk namespaces Mapping from files to chunks Locations of each chunk s replicas All in memory (64 bytes / chunk) Fast Easily accessible Master has an operation log for persistent logging of critical metadata updates Persistent on local disk Replicated Checkpoints for faster recovery 14
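As a rough illustration of the three kinds of metadata listed above, the following Python sketch keeps the file namespace, the chunk-to-replica mapping, and an operation log; the class and field names are assumptions, not GFS internals, and the real log is persisted on local disk and replicated rather than kept as an in-memory list.

```python
# Illustrative sketch of master metadata; names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class MasterMetadata:
    # File and chunk namespaces: path -> ordered list of chunk handles
    files: dict[str, list[int]] = field(default_factory=dict)
    # Locations of each chunk's replicas: chunk handle -> chunkserver addresses
    replicas: dict[int, list[str]] = field(default_factory=dict)
    # Operation log of critical metadata updates (persisted and replicated
    # in the real system; a plain list here for illustration)
    op_log: list[str] = field(default_factory=list)

    def create_file(self, path: str) -> None:
        self.op_log.append(f"CREATE {path}")  # log before applying the change
        self.files[path] = []

md = MasterMetadata()
md.create_file("/logs/crawl-0001")
```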

Master's responsibilities Metadata storage Holds all metadata in RAM Very fast operations on file system metadata Namespace management / locking Periodic communication with chunkservers Give instructions, collect state, track cluster health Chunk creation, re-replication, rebalancing Balance space utilization and access speed Spread replicas across racks to reduce correlated failures Re-replicate data if redundancy falls below threshold Rebalance data to smooth out storage and request load 15

Master's responsibilities Garbage collection: simpler and more reliable than traditional file delete; the master logs the deletion, renames the file to a hidden name, and lazily garbage collects hidden files (see the sketch below). Stale replica deletion: detect stale replicas using chunk version numbers 16
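A hypothetical sketch of that delete-as-rename scheme: deletion only renames the file to a hidden name and records a timestamp, and a background pass reclaims hidden files after a grace period. All names and the retention window below are illustrative assumptions, not GFS constants.

```python
# Sketch of lazy garbage collection via rename-to-hidden; names are assumed.
import time

HIDDEN_PREFIX = ".deleted."
GRACE_SECONDS = 3 * 24 * 3600  # assumed retention window

namespace = {"/logs/old": [101, 102]}   # path -> chunk handles
deleted_at: dict[str, float] = {}

def delete(path: str) -> None:
    hidden = HIDDEN_PREFIX + path
    namespace[hidden] = namespace.pop(path)   # log the deletion, rename to hidden
    deleted_at[hidden] = time.time()

def garbage_collect(now: float) -> None:
    for hidden in [p for p in namespace if p.startswith(HIDDEN_PREFIX)]:
        if now - deleted_at[hidden] > GRACE_SECONDS:
            namespace.pop(hidden)             # reclaim the chunks lazily here
```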

Chunkserver's responsibilities Stores 64 MB file chunks on local disk Using standard Linux filesystem Each with version number and checksum Read/write requests specify chunk handle and byte range Chunks replicated on configurable number of chunkservers (default: 3) No caching of file data (beyond standard Linux buffer cache) 17

Client Issues control (metadata) requests to master server Issues data requests directly to chunkservers Caches metadata Does no caching of data No consistency difficulties among clients Streaming reads (read once) and append writes (write once) don't benefit much from caching at the client 18

Mutations 19 Two types of mutations Writes Cause data to be written at an application-specified file offset Record appends Operations that append data to a file Cause data to be appended atomically at least once Offset chosen by GFS, not by the client Must be done for all replicas Lease mechanism: Master picks one replica as primary; Gives it a lease for mutations Goal: minimize master involvement Data flow decoupled from control flow

Client API Not a file system in traditional sense Not POSIX compliant Library that apps can link in for storage access API: open, delete, read, write (as expected) snapshot: quickly create copy of file append: at least once, possibly with gaps and/or inconsistencies among clients 20
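The operations listed above can be summarized as a client-library interface; the Python signatures below are illustrative guesses at what such a library could expose, not the actual GFS API.

```python
# Sketch of the client library surface described above; signatures are assumed.
from abc import ABC, abstractmethod

class GFSClient(ABC):
    @abstractmethod
    def open(self, path: str) -> int: ...                 # returns a file handle
    @abstractmethod
    def delete(self, path: str) -> None: ...
    @abstractmethod
    def read(self, handle: int, offset: int, length: int) -> bytes: ...
    @abstractmethod
    def write(self, handle: int, offset: int, data: bytes) -> None: ...
    @abstractmethod
    def snapshot(self, src: str, dst: str) -> None: ...   # quickly copy a file
    @abstractmethod
    def record_append(self, handle: int, data: bytes) -> int: ...  # returns offset chosen by GFS
```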

Read algorithm 1. Application originates the read request 2. GFS client translates request and sends it to master 3. Master responds with chunk handle and replica locations 21

Read algorithm 4. Client picks a location and sends the request 5. Chunkserver sends requested data to the client 6. Client forwards the data to the application 22
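Putting the six read steps together, here is a Python control-flow sketch from the client's point of view; master_lookup and chunkserver_read are hypothetical stand-ins for the RPCs, and a real client would also cache the master's reply.

```python
# Control-flow sketch of the read path; the RPC helpers are hypothetical.
import random

CHUNK_SIZE = 64 * 1024 * 1024

def master_lookup(path: str, chunk_index: int) -> tuple[int, list[str]]:
    """Steps 2-3: master returns (chunk handle, replica locations)."""
    raise NotImplementedError

def chunkserver_read(server: str, handle: int, offset: int, length: int) -> bytes:
    """Step 5: chunkserver returns the requested byte range."""
    raise NotImplementedError

def gfs_read(path: str, offset: int, length: int) -> bytes:
    chunk_index = offset // CHUNK_SIZE                   # step 2: translate offset
    handle, replicas = master_lookup(path, chunk_index)  # steps 2-3
    server = random.choice(replicas)                     # step 4: pick a replica
    return chunkserver_read(server, handle,              # steps 5-6
                            offset % CHUNK_SIZE, length)
```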

Write algorithm 1. Application originates the request 2. GFS client translates request and sends it to master 3. Master responds with chunk handle and replica locations 23

Write algorithm 4. Client pushes write data to all locations. Data is stored in the chunkservers' internal buffers 24

Write algorithm 5. Client sends write command to primary 6. Primary determines serial order for data instances in its buffer and writes the instances in that order to the chunk 7. Primary sends the serial order to the secondaries and tells them to perform the write 25

Write algorithm 8. Secondaries respond back to primary 9. Primary responds back to the client 26
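The write path can be sketched the same way: data is pushed to every replica first, then a single control message to the primary triggers steps 5-9. The helper names below are placeholders, not real GFS calls.

```python
# Control-flow sketch of steps 4-9 above, from the client's point of view.
def push_data(replicas: list[str], data: bytes) -> None:
    """Step 4: push data into the replicas' buffers (pipelined in practice)."""
    raise NotImplementedError

def primary_write(primary: str, handle: int, offset: int) -> bool:
    """Steps 5-9: the primary serializes the mutation, forwards the order to
    the secondaries, collects their replies, and answers the client."""
    raise NotImplementedError

def gfs_write(handle: int, offset: int, data: bytes,
              primary: str, secondaries: list[str]) -> None:
    push_data([primary] + secondaries, data)        # data flow (step 4)
    if not primary_write(primary, handle, offset):  # control flow (steps 5-9)
        raise IOError("write failed; the client retries the mutation")
```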

Atomic record append GFS appends it to the file atomically at least once GFS picks the offset Works for concurrent writers Used heavily by Google apps e.g., for files that serve as multiple-producer/single-consumer queues Merge results from multiple machines (search bots) into one file 27

Record Append algorithm 28 Same as write, but no offset and... 1. Client pushes write data to all locations 2. Primary checks if the record fits in the specified chunk (the last chunk of the file) 3. If the record does not fit, the primary: pads the chunk, tells the secondaries to do the same, and informs the client to retry on the next chunk 4. If the record fits, the primary: appends the record, tells the secondaries to do the same, receives their responses, and responds to the client
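The client side of this algorithm is essentially a retry loop: if the primary reports that the record did not fit and padded the chunk, the client retries against the new last chunk. The sketch below assumes a hypothetical try_append_to_last_chunk RPC that returns the chosen offset, or None when the chunk was padded.

```python
# Sketch of the retry loop implied by steps 2-4 above; the RPC is hypothetical.
def try_append_to_last_chunk(path: str, record: bytes) -> int | None:
    raise NotImplementedError  # returns the offset, or None if the chunk was padded

def record_append(path: str, record: bytes, max_retries: int = 8) -> int:
    for _ in range(max_retries):
        offset = try_append_to_last_chunk(path, record)
        if offset is not None:
            return offset      # appended at least once, at an offset chosen by GFS
        # the last chunk was padded, so a new last chunk exists: retry there
    raise IOError("record append failed after retries")
```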

Relaxed consistency model States of a file region after a mutation: Consistent: all clients see the same data, regardless of which replica they read from. Defined: consistent, and all clients see what the mutation wrote in its entirety. Undefined: consistent, but it may not reflect what any one mutation has written. Inconsistent: clients see different data at different times. Some properties: concurrent writes leave the region consistent, but possibly undefined; failed writes leave the region inconsistent. Some work has moved into the applications, e.g., self-validating, self-identifying records. Simple and efficient: Google apps can live with it 29
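One way applications cope with padding and duplicate appends is the self-validating, self-identifying records mentioned above: each record carries a checksum and a unique ID so readers can skip garbage and discard duplicates. The record layout in this Python sketch is an assumption for illustration only, not a format used by GFS.

```python
# Sketch of self-validating (CRC) and self-identifying (unique ID) records.
import struct, uuid, zlib

def encode_record(payload: bytes) -> bytes:
    rid = uuid.uuid4().bytes                    # 16-byte ID: self-identifying
    body = rid + payload
    header = struct.pack(">II", len(body), zlib.crc32(body))  # self-validating
    return header + body

def decode_records(blob: bytes):
    seen, pos = set(), 0
    while pos + 8 <= len(blob):
        length, crc = struct.unpack(">II", blob[pos:pos + 8])
        body = blob[pos + 8:pos + 8 + length]
        pos += 8 + length
        if length < 16 or len(body) != length or zlib.crc32(body) != crc:
            continue                            # padding or a partial write: skip
        rid, payload = body[:16], body[16:]
        if rid not in seen:                     # duplicate from a retried append: skip
            seen.add(rid)
            yield payload
```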

Fault tolerance 30 High availability: fast recovery (master and chunkservers restartable in a few seconds), chunk replication (default: 3 replicas), shadow masters. Data integrity: checksum every 64 KB block in each chunk. Limitations: security assumes a trusted environment and trusted users, which doesn't stop users from interfering with each other; checksums do not mask all forms of data corruption, so application-level checksums are still required

Deployment at Google Many GFS clusters Hundreds/thousands of storage nodes each Managing petabytes of data GFS is under BigTable, etc. GFS demonstrates how to support large-scale processing workloads on commodity hardware Design to tolerate frequent component failures Optimize for huge files that are mostly appended and read Feel free to relax and extend the FS interface as required Go for simple solutions (e.g., single master) GFS has met Google's storage needs, therefore good enough for them 31

32 Hadoop File System (open-source GFS)

HDFS design assumptions Single machines tend to fail Hard disk, power supply More machines = increased failure probability Data doesn't fit on a single node Desired: Commodity hardware Built-in backup and failover... Does this look familiar? 33

Namenode and Datanodes Namenode (Master) Holds metadata: Where file blocks are stored (namespace image) Edit (Operation) log Secondary namenode (Shadow master) Datanode (Chunkserver) Stores and retrieves blocks, as directed by clients or the namenode Reports to the namenode with the list of blocks it is storing 34

HDFS Architecture 35

Fault tolerance in HDFS NameNode uses heartbeats to detect DataNode failures: Once every 3 seconds Chooses new DataNodes for new replicas Balances disk usage Balances communication traffic to DataNodes Multiple copies of a block are stored: Default replication: 3 Copy #1 on another node on the same rack Copy #2 on another node on a different rack 36

Data Correctness Use checksums to validate data CRC32 File creation Client computes a checksum per 512 bytes DataNode stores the checksum File access Client retrieves the data and checksum from DataNode If validation fails, client tries other replicas 37
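A minimal sketch of that per-512-byte CRC32 scheme, assuming zlib's CRC-32 and ignoring where HDFS actually stores the checksum metadata; the function names are illustrative.

```python
# Sketch of block checksumming with one CRC32 per 512 bytes, as described above.
import zlib

BYTES_PER_CHECKSUM = 512

def compute_checksums(block: bytes) -> list[int]:
    return [zlib.crc32(block[i:i + BYTES_PER_CHECKSUM])
            for i in range(0, len(block), BYTES_PER_CHECKSUM)]

def verify(block: bytes, stored: list[int]) -> bool:
    # On a mismatch the client would fall back to another replica.
    return compute_checksums(block) == stored
```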

NameNode failures 38 A single point of failure. The Transaction Log is stored in multiple directories: a directory on the local file system and a directory on a remote file system (NFS/CIFS). The Secondary NameNode holds a backup of the NameNode data, but on the same machine. A truly highly available solution still needs to be developed!

Data pipelining Client retrieves a list of DataNodes on which to place replicas of a block Client writes block to the first DataNode The first DataNode forwards the data to the second The second DataNode forwards the data to the third DataNode in the pipeline When all replicas are written, the client moves on to write the next block in file 39
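In code form, the pipeline looks roughly like the sketch below: the client sends the block only to the first DataNode along with the rest of the pipeline list, and each node stores its copy and forwards to the next. The network and storage helpers are hypothetical placeholders.

```python
# Sketch of pipelined block replication; send_to and store_locally are stand-ins.
def send_to(node: str, block: bytes, downstream: list[str]) -> None:
    raise NotImplementedError  # transmit the block plus the remaining pipeline

def store_locally(node: str, block: bytes) -> None:
    raise NotImplementedError  # write the replica to this node's local disk

def datanode_receive(node: str, block: bytes, downstream: list[str]) -> None:
    store_locally(node, block)                 # keep a local replica
    if downstream:                             # forward along the pipeline
        send_to(downstream[0], block, downstream[1:])

def client_write_block(block: bytes, pipeline: list[str]) -> None:
    send_to(pipeline[0], block, pipeline[1:])  # client talks to the first node only
```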

User interface Commands for HDFS users: hdfs dfs -put /localsrc /dest, hdfs dfs -mkdir /foodir, hdfs dfs -cat /foodir/myfile.txt, hdfs dfs -rm /foodir/myfile.txt, hdfs dfs -cp /src /dest. Commands for the HDFS administrator: hdfs dfsadmin -report, hdfs dfsadmin -decommission datanodename. Web interface: http://host:port/dfshealth.jsp 40

Noticeable differences from GFS Only a single writer per file No record append operation Chunks of 128 MB or more Open source Provides many interfaces and libraries for different file systems S3, KFS, etc. Thrift (C++, Python,...), libhdfs (C), FUSE 41

Anatomy of a file read 42

Anatomy of a file write 43

Bibliography Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung. 2003. The Google File System. In Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles (SOSP '03). Konstantin Shvachko, Hairong Kuang, Sanjay Radia, and Robert Chansler. 2010. The Hadoop Distributed File System. In Proceedings of the 2010 IEEE 26th Symposium on Mass Storage Systems and Technologies (MSST '10). IEEE Computer Society. https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html 44