ECE 7650 Scalable and Secure Internet Services and Architecture ---- A Systems Perspective


ECE 7650 Scalable and Secure Internet Services and Architecture ---- A Systems Perspective
Part II: Software Infrastructure in Data Centers: Distributed File Systems

Filesystems Overview
A filesystem permanently stores data and is usually layered on top of a lower-level physical storage medium. Storage is divided into logical units called files, each addressable by a filename ("foo.txt"). Filesystems usually support hierarchical nesting (directories). A file path = relative (or absolute) directory + file name: /dir1/dir2/foo.txt

Distributed Filesystems
- Support access to files on remote servers
- Must support concurrency: make varying guarantees about locking, who wins with concurrent writes, etc.
- Must gracefully handle dropped connections
- Can offer support for replication and local caching
- Different implementations sit in different places on the complexity/feature scale

A Case Study: Evolution of Distributed File Systems
- Example: NFS (Network File System). The file is the allocation unit; files are not evenly distributed, so load may be unbalanced. The control path (data location service) is coupled with the data path, limiting throughput.
- Next step: separate the metadata server from the data (chunk) servers to decouple the two paths. The chunk is the allocation unit, and replication is used to tolerate faults. Could the metadata server become the bottleneck?

A Case Study: Evolution of Distributed File Systems (cont'd)
- Typical files are several hundreds of MBs each, so the chunk size is set accordingly to 64 MB, limiting memory demand (why is this important?)
- Clients cache metadata to reduce the master's load
- Three-copy replication of each chunk; the master is in charge of detecting and repairing failed chunk servers
- Lesson: know your customers (workloads) before you design and implement.
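The memory arithmetic behind the 64 MB chunk size can be checked directly. A back-of-envelope sketch, using the roughly 64 bytes of master metadata per chunk quoted later in these slides:

```python
CHUNK_SIZE = 64 * 2**20          # 64 MB chunks
META_PER_CHUNK = 64              # ~64 bytes of master metadata per chunk

def master_memory_needed(total_data_bytes):
    # Larger chunks mean fewer chunks, so the master can keep all
    # chunk metadata in RAM even for a very large filesystem.
    chunks = total_data_bytes // CHUNK_SIZE
    return chunks * META_PER_CHUNK

# One petabyte of file data needs under 1 GB of master memory.
print(master_memory_needed(10**15))  # 953674304 bytes, i.e. < 1 GB
```

With 1 MB chunks instead, the same petabyte would need 64x more master memory, which is why small chunk sizes do not scale for a single in-memory master.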

Google's GFS: Motivation
- Google needed a good distributed file system: redundant storage of massive amounts of data on cheap and unreliable computers
- Why not use an existing file system? Google's problems are different from anyone else's: different workload and design priorities
- GFS is designed for Google apps and workloads; Google apps are designed for GFS

Assumptions
- High component failure rates: inexpensive commodity components fail all the time
- Modest number of HUGE files: just a few million, each 100 MB or larger; multi-GB files are typical
- Files are write-once, mostly appended to (perhaps concurrently)
- Large streaming reads
- High sustained throughput favored over low latency

GFS Design Decisions
- Files stored as chunks of fixed size (64 MB)
- Reliability through replication: each chunk replicated across 3+ chunkservers
- Single master to coordinate access and keep metadata: simple, centralized management
- No data caching: little benefit due to large data sets and streaming reads
- Familiar interface, but a customized API: simplify the problem and focus on Google apps; add snapshot and record append operations

GFS Architecture
Each chunk is identified by an immutable and globally unique 64-bit chunk handle assigned by the master at the time of chunk creation. Can anyone see a potential weakness in this design?

Single Master
- Problems: single point of failure; scalability bottleneck
- GFS solutions: shadow masters; minimize master involvement (never move data through it, use it only for metadata, and cache metadata at clients); large chunk size; master delegates authority to primary replicas in data mutations (chunk leases)
- Simple, and good enough for Google's concerns

Metadata
- Global metadata is stored on the master: the file and chunk namespaces, the mapping from files to chunks, and the locations of each chunk's replicas
- All in memory (64 bytes per chunk): fast and easily accessible
- The master keeps an operation log for persistent logging of critical metadata updates: persistent on local disk, replicated, with checkpoints for faster recovery
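A minimal sketch of the master's in-memory metadata, assuming hypothetical names (`ChunkInfo`, `MasterMetadata`); the real structures are internal to Google's implementation, but the two mappings below are the ones the slide lists:

```python
from dataclasses import dataclass, field

@dataclass
class ChunkInfo:
    handle: int                # immutable 64-bit chunk handle
    version: int = 1           # used later to detect stale replicas
    replicas: list = field(default_factory=list)  # chunkserver addresses

@dataclass
class MasterMetadata:
    # file path -> ordered list of chunk handles
    file_to_chunks: dict = field(default_factory=dict)
    # chunk handle -> ChunkInfo (replica locations, version)
    chunks: dict = field(default_factory=dict)

    def create_chunk(self, path, handle):
        self.file_to_chunks.setdefault(path, []).append(handle)
        self.chunks[handle] = ChunkInfo(handle)
        return self.chunks[handle]

md = MasterMetadata()
c = md.create_chunk("/logs/app.log", 0xDEADBEEF)
c.replicas = ["cs1:7000", "cs2:7000", "cs3:7000"]
```

Note that the namespaces and file-to-chunk mapping are also persisted in the operation log, while replica locations are not: the master simply asks chunkservers what they hold at startup.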

Master's Responsibilities
- Metadata storage
- Namespace management/locking
- Periodic communication with chunkservers: give instructions, collect state, track cluster health
- Chunk creation, re-replication, rebalancing: balance space utilization and access speed; spread replicas across racks to reduce correlated failures; re-replicate data if redundancy falls below threshold; rebalance data to smooth out storage and request load
- Garbage collection: simpler and more reliable than traditional file delete; the master logs the deletion and renames the file to a hidden name, then lazily garbage collects hidden files
- Stale replica deletion: detect stale replicas using chunk version numbers
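The last responsibility fits in a few lines. A sketch of version-based stale replica detection (the function name and dict layout are illustrative):

```python
def stale_replicas(master_version, replica_versions):
    # The master bumps a chunk's version number whenever it grants a
    # new lease; any replica reporting an older version missed a
    # mutation (e.g., its chunkserver was down) and is stale.
    return sorted(cs for cs, v in replica_versions.items()
                  if v < master_version)

print(stale_replicas(3, {"cs1": 3, "cs2": 2, "cs3": 3}))  # ['cs2']
```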

Mutations
- Mutation = write or record append; must be done on all replicas
- Goal: minimize master involvement
- Lease mechanism: the master picks one replica as the primary for a chunk and gives it a lease for mutations
- Data flow is decoupled from control flow

Read Algorithm
1. Application originates the read request
2. GFS client translates the request and sends it to the master
3. Master responds with the chunk handle and replica locations

Read Algorithm (cont'd)
4. Client picks a location and sends the request
5. Chunkserver sends the requested data to the client
6. Client forwards the data to the application
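Step 2's translation is simple because chunks have a fixed size. A sketch (the function name is illustrative):

```python
CHUNK_SIZE = 64 * 2**20  # fixed 64 MB chunks

def read_request(path, byte_offset):
    # The client converts (file name, byte offset) into
    # (file name, chunk index); the master maps that pair to a chunk
    # handle and the locations of its replicas.
    return (path, byte_offset // CHUNK_SIZE)
```

Because the mapping is deterministic, clients can also cache the master's reply and reuse it for subsequent reads of the same chunk.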

Write Algorithm
1. Application originates the request
2. GFS client translates the request and sends it to the master
3. Master responds with the chunk handle and replica locations, which are cached by the client

Write Algorithm (cont'd)
4. Client pushes write data to all locations; the data is stored in the chunkservers' internal buffers

Write Algorithm (cont'd)
5. Client sends the write command to the primary
6. Primary determines a serial order for the data instances in its buffer and writes the instances in that order to the chunk
7. Primary sends the serial order to the secondaries and tells them to perform the write

Write Algorithm (cont'd)
8. Secondaries respond back to the primary
9. Primary responds back to the client
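The steps above can be sketched as a toy replication flow. The `Replica` class and method names are hypothetical stand-ins; real GFS pushes data along a chain of chunkservers and uses RPCs, but the key idea survives: data is buffered everywhere first, then applied everywhere in the single serial order the primary chooses.

```python
class Replica:
    # Stand-in for a chunkserver: data is first pushed into an
    # internal buffer (step 4), then applied in the serial order
    # chosen by the primary (steps 6-7).
    def __init__(self):
        self.buffer = []
        self.chunk = []

    def push(self, data):
        self.buffer.append(data)

    def apply(self, order):
        for i in order:
            self.chunk.append(self.buffer[i])
        self.buffer.clear()

def gfs_write(mutations, primary, secondaries):
    # Step 4: client pushes data to all replicas (any order/topology).
    for m in mutations:
        for r in [primary] + secondaries:
            r.push(m)
    # Steps 5-7: the primary picks one serial order and forwards it,
    # so every replica applies concurrent mutations identically.
    order = list(range(len(mutations)))
    for r in [primary] + secondaries:
        r.apply(order)
    # Steps 8-9: acks flow back; all replicas now hold the same bytes.
    return primary.chunk
```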

Atomic Record Append
- GFS appends the record to the file atomically at least once; GFS picks the offset
- Works for concurrent writers
- Used heavily by Google apps, e.g., for files that serve as multiple-producer/single-consumer queues, or to merge results from multiple machines into one file

Record Append Algorithm
Same as write, but with no offset, and:
1. Client pushes write data to all locations
2. Primary checks whether the record fits in the specified chunk
3. If the record does not fit, the primary: pads the chunk, tells the secondaries to do the same, then informs the client and has the client retry
4. If the record fits, the primary: appends the record, tells the secondaries to do the same, receives their responses, and responds to the client
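The primary's fit-or-pad decision (steps 2-4) can be sketched as a small function; the return convention is illustrative:

```python
CHUNK_SIZE = 64 * 2**20  # 64 MB chunks

def primary_append(chunk_used, record_len):
    """Decide what the primary does for a record append.
    Returns ('append', offset) or ('pad_and_retry', None)."""
    if chunk_used + record_len > CHUNK_SIZE:
        # Record does not fit: pad out the rest of the chunk and make
        # the client retry on the next chunk.
        return ("pad_and_retry", None)
    # Record fits: the PRIMARY picks the offset, which is what makes
    # the append atomic "at least once" even with concurrent writers.
    return ("append", chunk_used)
```

A failed or retried append can leave duplicate records in some replicas, which is why applications use self-identifying records under the relaxed consistency model described next.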

Relaxed Consistency Model
- Consistent = all replicas have the same value
- Defined = the replica reflects the mutation in its entirety, and is consistent
- Some properties: concurrent writes leave a region consistent, but possibly undefined; failed writes leave the region inconsistent
- Some work has moved into the applications, e.g., self-validating, self-identifying records
- Simple and efficient; Google apps can live with it

Garbage Collection
Use garbage collection instead of immediate reclamation when a file is deleted:
- The master only records which chunks (files) have been removed
- Via HeartBeat messages, chunkservers learn which of their chunks are orphaned and can be removed
Why garbage collection and not eager deletion?
- Simple and reliable, since the master doesn't have to send and manage deletion messages
- Batched operation for lower amortized cost and better timing
- The delay provides a safety net against accidental, irreversible deletion
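A sketch of the lazy-deletion scheme, assuming a plain dict stands in for the master's namespace and using an illustrative hidden-name convention (GFS's actual grace period is a configurable few days):

```python
import time

def delete_file(namespace, path, now=None):
    # Deletion only renames the file to a hidden name stamped with
    # the deletion time; no space is reclaimed yet, and the file can
    # still be undeleted by renaming it back.
    now = time.time() if now is None else now
    hidden = f".deleted.{int(now)}.{path.strip('/').replace('/', '_')}"
    namespace[hidden] = namespace.pop(path)
    return hidden

def garbage_collect(namespace, grace_seconds, now):
    # During its regular namespace scan, the master removes hidden
    # files older than the grace period; chunkservers then learn via
    # HeartBeat replies that those chunks are orphaned.
    for name in list(namespace):
        if name.startswith(".deleted."):
            stamp = int(name.split(".")[2])
            if now - stamp > grace_seconds:
                del namespace[name]
```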

Fault Tolerance
- High availability: fast recovery (master and chunkservers restartable in a few seconds); chunk replication (default: 3 replicas); shadow masters
- Data integrity: checksum every 64 KB block in each chunk

Conclusion
GFS demonstrates how to support large-scale processing workloads on commodity hardware:
- design to tolerate frequent component failures
- optimize for huge files that are mostly appended to and read
- feel free to relax and extend the FS interface as required
- go for simple solutions (e.g., single master)
GFS has met Google's storage needs, and is therefore good enough for them.

Facebook's Photo Storage: Motivation
- Facebook stores over 260 billion images (20 PB of data)
- Users upload one billion new images each week (60 TB of data)
- Facebook serves over one million images per second at peak
- Two types of workloads for image serving: profile pictures (heavy access, smaller size) and photo albums (intermittent access, higher at the beginning and decreasing over time: the long tail)
- Data is written once, read often, never modified, and rarely deleted

Long Tail Issue
[Figure: long-tail access plot] What can you read from the plot?

Problem Description
Four main goals for the photo serving method:
- High throughput and low latency: conventional file systems need multiple disk accesses for a read; Haystack needs only one data access per read, with metadata reduced to fit in memory
- Fault tolerance: Haystack replicates each photo in geographically distinct locations
- Cost-effectiveness: save money over traditional approaches (reduce reliance on CDNs!)
- Simplicity: make it easy to implement and maintain

[Figure: Typical Design]

[Figure: Facebook's Old Design]

Old Design
The old photo infrastructure consisted of several tiers:
- Upload tier: receives users' photo uploads, scales the original images, and saves them on the NFS storage tier
- Photo serving tier: receives HTTP requests for photo images and serves them from the NFS storage tier
- NFS storage tier: built on top of commercial storage appliances

Features of the Old Design
- Since each image is stored in its own file, an enormous amount of metadata is generated on the storage tier for the namespace directories and file inodes
- The amount of metadata far exceeds the caching abilities of the NFS storage tier, resulting in multiple I/O operations per photo upload or read request (even after optimization, there are three disk accesses: one to read the directory metadata into memory, a second to load the inode, and a third to read the file contents)
- High degree of reliance on CDNs = expensive

[Figure: Design of Haystack]

Step-Through of Operation
- User visits a page; the web server receives the request
- The web server uses the Haystack Directory to construct a URL for each photo: http://<CDN>/<Cache>/<Machine id>/<Logical volume, Photo>
- The CDN portion tells the browser which CDN to request the photo from; it may be omitted if the photo is available directly from the Cache
- If the CDN is unsuccessful, it contacts the Cache
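The URL construction can be sketched directly from the shape on the slide. The function name, parameter names, and exact separators are illustrative; only the component order comes from the slide:

```python
def photo_url(cdn, cache, machine_id, logical_volume, photo_id,
              use_cdn=True):
    # Mirrors http://<CDN>/<Cache>/<Machine id>/<Logical volume, Photo>.
    # The Directory decides per-photo whether the CDN hop is needed;
    # when it is not, the URL points straight at the Cache.
    path = f"{cache}/{machine_id}/{logical_volume},{photo_id}"
    return f"http://{cdn}/{path}" if use_cdn else f"http://{path}"
```

Because the URL embeds the logical volume and machine, each step of the pipeline can strip its own prefix and forward the remainder without any extra lookup.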

[Figure: Haystack architecture]

Haystack Directory
Four main functions:
- Provides a mapping from logical volumes to physical volumes
- Load-balances writes across logical volumes
- Determines whether a photo request should be handled by the CDN or by the Haystack Cache
- Identifies logical volumes that are read-only (for operational reasons, or because they have reached storage capacity)


Haystack Cache
- A distributed hash table that uses the photo's id to locate cached data
- Receives HTTP requests from CDNs and browsers: if the photo is in the Cache, return it; if not, fetch it from the Haystack Store and return it
- Adds a photo to the Cache only if two conditions are met: the request comes directly from a browser (not the CDN), and the photo is fetched from a write-enabled Store machine

[Figure: Cache Hit Rate]


[Figure: Layout of Haystack Store File]

A Closer Look at the Needles
A needle is uniquely identified by its <Offset, Key, Alternate Key, Cookie> tuple, where the offset is the needle's offset in the Haystack store file.

[Figure: Haystack Index File]

Haystack Index File
- The index file provides the minimal metadata required to locate a particular needle in the store
- Main purpose: allow quick loading of the needle metadata into memory without traversing the larger Haystack store file upon restart
- The index is usually less than 1% the size of the store file

Haystack Store
- Each Store machine manages multiple physical volumes
- It can access a photo quickly using only the id of the corresponding logical volume and the file offset of the photo
- Handles three types of requests: read, write, and delete

Haystack Store: Read
- The Cache machine supplies the logical volume id, key, alternate key, and cookie to the Store machine (what is the purpose of the cookie?)
- The Store machine looks up the relevant metadata in its in-memory mappings
- It seeks to the appropriate offset in the volume file and reads the entire needle
- It verifies the cookie and the integrity of the data
- It returns the data to the Cache machine
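A sketch of the read path and the cookie check, with a plain dict standing in for the volume file and in-memory mapping (the needle field names are illustrative). The cookie is a random value assigned at upload and embedded in the URL, so a request with a guessed or stale URL fails the check:

```python
def store_read(volume, offset, key, alt_key, cookie):
    # Locate the needle by offset, then confirm it is the needle the
    # caller asked for and that the caller knows its cookie.
    needle = volume.get(offset)
    if needle is None or needle["key"] != key \
            or needle["alt_key"] != alt_key:
        return None
    if needle["cookie"] != cookie:
        return None   # reject requests that merely guessed a URL
    return needle["data"]
```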

Haystack Store: Write
- The web server provides the logical volume id, key, alternate key, cookie, and data to the Store machines
- The Store machines synchronously append needle images to their physical volume files
- In-memory mappings are updated as needed

Haystack Store: Delete
- The Store machine sets the delete flag in both the in-memory mapping and the volume file
- The space occupied by deleted needles is lost! How to reclaim it? Compaction! This is important because 25% of photos get deleted in a given year.

Haystack Advantages
- Reduced disk I/O: 10 TB of photos per node requires only 10 GB of metadata, an amount that is easily cacheable!
- Simplified metadata: no directory structures or file names, just a 64-bit ID, which makes lookups easier
- Single photo serving and storage layer: a direct I/O path between client and storage, resulting in higher bandwidth

Microsoft's Flat Storage System
Flat Datacenter Storage (FDS) is a high-performance, fault-tolerant, large-scale, locality-oblivious blob store.
E. B. Nightingale et al., "Flat Datacenter Storage," in OSDI '12.

Context: why do we need a locality-oblivious blob store?
Here is what we have in a single machine or computer:
- a bunch of processors
- a bunch of disks
- a controller


Reading
- We get full performance out of the disks because all the disks stay busy, even if some processes consume data slowly and others quickly
- Programmers can pretend there's just one disk
- To attack a large problem in parallel, the input doesn't need to be partitioned in advance
- Another benefit: it is easy to adjust the ratio of processors to disks

How much can this architecture scale? The challenges:
- Metadata management
- Physically routing data

Blobs and Tracts
- In FDS, data is stored in blobs; a blob is named with a GUID
- Reads from and writes to a blob are done in units called tracts
- Each tract within a blob is numbered sequentially starting from 0

Design
The metadata server coordinates the cluster and helps clients meet with tractservers. How does the client know which tractserver should be used to read or write a tract?


The tract locator table only contains disks, not tracts. Clients can retrieve it from the metadata server once, then never contact the metadata server again; the only time the table changes is when a disk fails or is added. Clients pick a table row by hashing the blob's GUID, and FDS uses SHA-1 for this hash.
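The lookup can be sketched in a few lines, following the row-index rule described in the FDS paper (hash of the blob GUID plus the tract number, modulo the table length); the function name and table representation here are illustrative:

```python
import hashlib

def tract_locator(table, blob_guid, tract_number):
    # Every client computes the same row index from
    # (SHA-1(blob GUID) + tract number) mod table length, so tract
    # reads and writes need no metadata-server round trip, and
    # consecutive tracts of one blob spread across different disks.
    h = int.from_bytes(hashlib.sha1(blob_guid).digest(), "big")
    return table[(h + tract_number) % len(table)]
```

Because the table maps rows to disks rather than tracts to disks, it stays small and only changes when disks fail or are added.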

Failure Recovery
- The simple method of replication (rebuilding a dead disk onto a single fresh disk) is very slow
- In FDS, when a disk dies, the goal isn't to reconstruct an exact duplicate of the disk that died; FDS only wants to make sure that, somewhere in the system, extra copies of the lost data get made. It doesn't matter where.
- When a disk dies, all the other disks collectively hold backup copies of that disk's data; every disk sends (in parallel) a copy of its small part of the lost data to some other disk that has free space
- Recovery speed grows linearly with N

[Figure: FDS Recovery Solution]

[Figure: Constructing the Table for Quick Recovery]


Ceph: A Scalable, High-Performance Distributed File System
- Scalability: storage capacity, throughput, client performance; emphasis on HPC
- Reliability: failures are the norm rather than the exception
- Performance: dynamic workloads
S. A. Weil et al., "Ceph: A Scalable, High-Performance Distributed File System," in OSDI '06.



[Figure: System Overview]

Key Features
- Decoupled data and metadata (CRUSH): files are striped onto predictably named objects, and CRUSH maps objects to storage devices
- Dynamic distributed metadata management: dynamic subtree partitioning distributes metadata amongst MDSs
- Object-based storage: OSDs handle migration, replication, failure detection, and recovery

Client Operation
- Ceph interface: nearly POSIX
- Decoupled data and metadata operation
- User-space implementation: FUSE or directly linked

Client Access Example
1. Client sends an open request to the MDS
2. MDS returns the capability, file inode, file size, and stripe information (mapping file data into objects)
3. Client reads/writes directly from/to the OSDs
4. MDS manages the capability
5. Client sends a close request, relinquishes the capability, and provides the MDS with the new file size

Distributed Object Storage
- Files are split across objects
- Objects are members of placement groups
- Placement groups are distributed across OSDs

Distributed Object Storage (cont'd)
CRUSH takes the placement group and an OSD cluster map: a compact, hierarchical description of the devices comprising the storage cluster.

CRUSH
CRUSH(x) -> (osd_n1, osd_n2, osd_n3)
- Inputs: x (the placement group), the hierarchical cluster map, and placement rules
- Output: a list of OSDs
- Advantages: anyone can calculate an object's location, and the cluster map is infrequently updated
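The key property is that placement is a pure function of the placement group and the cluster map, so any client computes the same OSD list without asking a central server. The sketch below is NOT the real CRUSH algorithm (which walks the hierarchical map using placement rules); it is a toy rendezvous-style stand-in that illustrates only that property:

```python
import hashlib

def crush_like_placement(pg_id, osds, num_replicas=3):
    # Rank every OSD by a hash of (placement group, OSD) and take the
    # top num_replicas. Deterministic given the same inputs, and
    # adding/removing one OSD only perturbs placements involving it.
    ranked = sorted(
        osds,
        key=lambda osd: hashlib.sha1(f"{pg_id}:{osd}".encode()).digest(),
    )
    return ranked[:num_replicas]
```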

Replication
- Objects are replicated on OSDs in terms of placement groups, each of which is mapped to an ordered list of n OSDs (for n-way replication)
- The client is oblivious to replication

Acronyms
- CRUSH: Controlled Replication Under Scalable Hashing
- EBOFS: Extent and B-tree based Object File System
- HPC: High Performance Computing
- MDS: MetaData Server
- OSD: Object Storage Device
- PG: Placement Group
- POSIX: Portable Operating System Interface (for Unix)
- RADOS: Reliable Autonomic Distributed Object Store

Summary of Ceph
- Scalability, reliability, performance
- Separation of data and metadata (the CRUSH data distribution function)
- Object-based storage