Lecture 2 Distributed Filesystems


Lecture 2: Distributed Filesystems
922EU3870 Cloud Computing and Mobile Platforms, Autumn 2009
2009/9/21, Ping Yeh (葉平), Google, Inc.

Outline
- Get to know the numbers
- Filesystems overview
- Distributed file systems
  - Basic (example: NFS)
  - Shared storage (example: Global FS)
  - Wide-area (example: AFS)
  - Fault-tolerant (example: Coda)
  - Parallel (example: Lustre)
  - Fault-tolerant and parallel (example: dCache)
- The Google File System
- Homework 2

Numbers real world engineers should know
- L1 cache reference: 0.5 ns
- Branch mispredict: 5 ns
- L2 cache reference: 7 ns
- Mutex lock/unlock: 100 ns
- Main memory reference: 100 ns
- Compress 1 KB with Zippy: 10,000 ns
- Send 2 KB through 1 Gbps network: 20,000 ns
- Read 1 MB sequentially from memory: 250,000 ns
- Round trip within the same data center: 500,000 ns
- Disk seek: 10,000,000 ns
- Read 1 MB sequentially from network: 10,000,000 ns
- Read 1 MB sequentially from disk: 30,000,000 ns
- Round trip between California and Netherlands: 150,000,000 ns
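
A quick back-of-the-envelope sketch in Python using the latency numbers above; the 1 GB and 1000-seek scenarios are illustrative assumptions added here, not part of the original slide.

```python
# Latency numbers from the slide, in nanoseconds.
READ_1MB_MEMORY_NS = 250_000
DISK_SEEK_NS       = 10_000_000
READ_1MB_DISK_NS   = 30_000_000

def seconds(ns):
    return ns / 1e9

# Sequential reads: 1 GB (1024 MB) from memory vs. from disk.
print("1 GB from memory: %.2f s" % seconds(1024 * READ_1MB_MEMORY_NS))  # ~0.26 s
print("1 GB from disk:   %.1f s" % seconds(1024 * READ_1MB_DISK_NS))    # ~30.7 s

# Random access is dominated by seeks: 1000 tiny random reads cost far
# more than one sequential 1 MB scan.
print("1000 random seeks: %.0f s" % seconds(1000 * DISK_SEEK_NS))       # ~10 s
print("one 1 MB scan:     %.2f s" % seconds(READ_1MB_DISK_NS))          # 0.03 s
```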

The Joys of Real Hardware
Typical first year for a new cluster:
- ~0.5 overheatings (power down most machines in <5 mins, ~1-2 days to recover)
- ~1 PDU failure (~500-1000 machines suddenly disappear, ~6 hours to come back)
- ~1 rack-move (plenty of warning, ~500-1000 machines powered down, ~6 hours)
- ~1 network rewiring (rolling ~5% of machines down over a 2-day span)
- ~20 rack failures (40-80 machines instantly disappear, 1-6 hours to get back)
- ~5 racks go wonky (40-80 machines see 50% packet loss)
- ~8 network maintenances (4 might cause ~30-minute random connectivity losses)
- ~12 router reloads (takes out DNS and external VIPs for a couple of minutes)
- ~3 router failures (have to immediately pull traffic for an hour)
- ~dozens of minor 30-second blips for DNS
- ~1000 individual machine failures
- ~thousands of hard drive failures
- slow disks, bad memory, misconfigured machines, flaky machines, etc.

File Systems Overview
- System that permanently stores data
- Usually layered on top of a lower-level physical storage medium
- Divided into logical units called files
  - Addressable by a filename ("foo.txt")
- Files are often organized into directories
  - Usually supports hierarchical nesting (directories)
- A path is the expression that joins directories and a filename to form a unique full name for a file
- Directories may further belong to a volume
- The set of valid paths forms the namespace of the file system

What Gets Stored
- User data itself is the bulk of the file system's contents
- Also includes metadata on a volume-wide and per-file basis:
  - Volume-wide: available space, formatting info, character set, ...
  - Per-file: name, owner, modification date, physical layout, ...

High-Level Organization
- Files are typically organized in a tree structure made of nested directories
  - One directory acts as the root
  - Links (symlinks, shortcuts, etc.) provide a simple means of giving one file multiple access paths
  - Other file systems can be mounted and dropped in as sub-hierarchies (other drives, network shares)
- Typical operations on a file: create, delete, rename, open, close, read, write, append (see the sketch below)
  - also lock for multi-user systems
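
As a concrete illustration of those basic operations, here is a minimal sketch using Python's standard library as a stand-in for the underlying system calls (the file names are made up):

```python
import os

path = "demo.txt"

with open(path, "w") as f:        # create + open for writing
    f.write("first line\n")       # write

with open(path, "a") as f:        # open for appending
    f.write("second line\n")      # append

with open(path) as f:             # open for reading
    print(f.read())               # read

os.rename(path, "renamed.txt")    # rename
os.remove("renamed.txt")          # delete
```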

Low-Level Organization (1/2)
- File data and metadata stored separately
- File descriptors + metadata stored in inodes (Un*x)
  - Large tree or table at a designated location on disk
  - Tells how to look up file contents
- Metadata may be replicated to increase system reliability

Low-Level Organization (2/2)
- Standard read-write medium is a hard drive (other media: CD-ROM, tape, ...)
  - Viewed as a sequential array of blocks
  - Usually addressed ~1 KB chunk at a time
- Tree structure is flattened into blocks
- Overlapping writes/deletes can cause fragmentation: files are often not stored with a linear layout
  - inodes store all block numbers related to a file
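
A toy sketch of how an inode's block list lets the filesystem find a byte in a file whose blocks are scattered on disk; the class, the direct-pointers-only layout, and the block numbers are assumptions for illustration.

```python
BLOCK_SIZE = 1024  # ~1 KB blocks, as on the slide

class Inode:
    """Per-file metadata plus the block numbers holding the file's data
    (toy model: direct pointers only, no indirect blocks)."""
    def __init__(self, owner, size, blocks):
        self.owner = owner
        self.size = size          # file size in bytes
        self.blocks = blocks      # e.g. [17, 93, 4] -- need not be contiguous

    def block_for_offset(self, offset):
        """Map a byte offset in the file to (disk block, offset within block)."""
        index = offset // BLOCK_SIZE
        return self.blocks[index], offset % BLOCK_SIZE

inode = Inode(owner="alice", size=2500, blocks=[17, 93, 4])
print(inode.block_for_offset(2048))   # -> (4, 0): the third, non-adjacent block
```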

Fragmentation (figure): blocks for files A, B, C laid out in order; after deletions open gaps in the middle, a new file D ends up split across non-adjacent free blocks.

Filesystem Design Considerations
- Namespace: physical, logical
- Consistency: what to do when more than one user reads/writes the same file?
- Security: who can do what to a file? Authentication/ACL
- Reliability: can files survive power outages or other hardware failures?

Local Filesystems on Unix-like Systems
- Many different designs
- Namespace: root directory /, followed by directories and files
- Consistency: sequential consistency; newly written data are immediately visible to open reads (if...)
- Security: uid/gid, mode of files; Kerberos: tickets
- Reliability: superblocks, journaling, snapshots
  - more reliable filesystem on top of existing filesystems: RAID

Namespace
- Physical mapping: a directory and all of its subdirectories are stored on the same physical media
  - /mnt/cdrom
  - /mnt/disk1, /mnt/disk2, ... when you have multiple disks
- Logical volume: a logical namespace that can contain multiple physical media or a partition of a physical medium
  - still mounted like /mnt/vol1
  - dynamic resizing by adding/removing disks without reboot
  - splitting/merging volumes as long as no data spans the split

Journaling
- Changes to the filesystem are logged in a journal before they are committed
  - useful if a supposedly atomic action needs two or more writes, e.g., appending to a file (update metadata + allocate space + write the data)
  - can play back the journal to recover data quickly in case of hardware failure (old-fashioned scan-the-whole-volume fsck, anyone?)
- What to log?
  - changes to file content: heavy overhead
  - changes to metadata: fast, but data corruption may occur
- Implementations: ext3, ReiserFS, IBM's JFS, etc.
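
A minimal write-ahead-journaling sketch of the idea above (an in-memory toy, not any real filesystem's on-disk format; all names are invented): an append that requires several metadata updates is logged and committed first, and recovery only replays transactions whose commit record made it to the journal.

```python
journal = []     # ordered log records ("on disk" before the real update)
metadata = {}    # the metadata we are protecting

def log_begin(txid, updates):
    journal.append(("begin", txid, updates))

def log_commit(txid):
    journal.append(("commit", txid))

def replay():
    """Crash recovery: apply only committed transactions; incomplete ones
    (no commit record) are simply discarded."""
    committed = {tx for op, tx, *_ in journal if op == "commit"}
    for op, tx, *rest in journal:
        if op == "begin" and tx in committed:
            metadata.update(rest[0])

# Appending to a file = several metadata changes that must appear atomic.
log_begin(1, {"file.size": 2048, "file.blocks": (17, 93)})
log_commit(1)
replay()
print(metadata)   # {'file.size': 2048, 'file.blocks': (17, 93)}
```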

RAID: Redundant Array of Inexpensive Disks
Different RAID levels:
- RAID 0 (striped disks): data is distributed among disks for performance
- RAID 1 (mirror): data is mirrored 1:1 on another disk for reliability
- RAID 5 (striped disks with distributed parity): similar to RAID 0, but with a parity block for data recovery in case one disk is lost; the parity is rotated among disks to maximize throughput
- RAID 6 (striped disks with dual distributed parity): similar to RAID 5 but with 2 parity blocks; can lose 2 disks without losing data
- RAID 10 (1+0, striped mirrors)
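
The RAID 5 recovery idea in a few lines: the parity block is the XOR of the data blocks in a stripe, so any single lost block can be rebuilt from the survivors. A toy byte-level sketch, not a real RAID implementation.

```python
def xor_blocks(blocks):
    """XOR same-sized byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

d0 = b"hello world "               # data block on disk 0
d1 = b"from disk 1!"               # data block on disk 1
d2 = b"and disk two"               # data block on disk 2
parity = xor_blocks([d0, d1, d2])  # parity block on disk 3

# Disk 1 fails: rebuild its block from the surviving data + parity.
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1
```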

Snapshot
- A snapshot = a copy of a set of files and directories at a point in time
  - read-only snapshots, read-write snapshots
  - usually done by the filesystem itself, sometimes by LVMs
  - backing up data can be done on a read-only snapshot without worrying about consistency
- Copy-on-write is a simple and fast way to create snapshots (see the sketch below)
  - the current data becomes the snapshot; a request to write to a file creates a new copy, and subsequent work proceeds on that copy
- Implementations: UFS, Sun's ZFS, etc.
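
A copy-on-write snapshot sketch (an in-memory toy; the class and method names are made up): taking a snapshot copies nothing, and a later write creates a new version while the snapshot keeps pointing at the old one.

```python
class CowStore:
    def __init__(self):
        self.current = {}      # filename -> contents (the live view)
        self.snapshots = []    # frozen views taken at snapshot time

    def write(self, name, data):
        # Copy-on-write: build a new mapping instead of mutating the one
        # that existing snapshots still reference.
        self.current = dict(self.current)
        self.current[name] = data

    def snapshot(self):
        self.snapshots.append(self.current)   # cheap: no data copied
        return len(self.snapshots) - 1

store = CowStore()
store.write("a.txt", "v1")
snap = store.snapshot()
store.write("a.txt", "v2")               # happens after the snapshot
print(store.snapshots[snap]["a.txt"])    # -> "v1": the snapshot is stable
print(store.current["a.txt"])            # -> "v2"
```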

Should the file system be faster or more reliable?
- But faster at what: Large files? Small files? Lots of reading? Frequent writers, occasional readers?
- Block size
  - Smaller block size reduces the amount of wasted space
  - Larger block size increases the speed of sequential reads (may not help random access)
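
A rough illustration of the block-size trade-off, assuming a small 2 KB file: larger blocks waste more space in the file's last, partially filled block. The block sizes chosen below are arbitrary examples.

```python
def wasted_bytes(file_size, block_size):
    """Internal fragmentation: unused bytes in the last allocated block."""
    remainder = file_size % block_size
    return 0 if remainder == 0 else block_size - remainder

file_size = 2_000   # a ~2 KB file
for block_size in (1_024, 4_096, 65_536, 1_048_576):
    print(f"{block_size:>9} B blocks -> {wasted_bytes(file_size, block_size)} B wasted")
```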

Distributed File Systems

Distributed Filesystems
- Support access to files on remote servers
- Must support concurrency
  - Make varying guarantees about locking, who wins with concurrent writes, etc.
- Must gracefully handle dropped connections
- Can offer support for replication and local caching
- Different implementations sit in different places on the complexity/feature scale

DFS is useful when...
- Multiple users want to share files
- Users move from one computer to another
- Users want a uniform namespace for shared files on every computer
- Users want a centralized storage system for backup and management
- Note that a user of a DFS may actually be a program

Features of Distributed File Systems
Different systems have different designs and behaviors on the following features:
- Interface: file system, block I/O, custom-made
- Security: various authentication/authorization schemes
- Reliability (fault-tolerance): can continue to function when some hardware fails (disks, nodes, power, etc.)
- Namespace (virtualization): can provide a logical namespace that spans physical boundaries
- Consistency: all clients get the same data all the time; related to locking, caching, and synchronization
- Parallelism: multiple clients can access multiple disks at the same time
- Scope: local area network vs. wide area network

NFS
- First developed in the 1980s by Sun
- Most widely known distributed filesystem
- Presented with a standard POSIX FS interface
- Network drives are mounted into the local directory hierarchy
  - Your home directory at CINC is probably NFS-driven
  - Type 'mount' some time at the prompt if curious
(figure: one NFS server serving multiple clients)

NFS Protocol
- Initially completely stateless
  - Operated over UDP; did not use TCP streams
  - File locking, etc., implemented in higher-level protocols
- Implication: all client requests have enough info to complete the op (see the sketch below)
  - example: client specifies the offset in the file to write to
  - one advantage: server state does not grow with more clients
- Modern implementations use TCP/IP & stateful protocols
- RFCs: RFC 1094 for v2 (3/1989), RFC 1813 for v3 (6/1995), RFC 3530 for v4 (4/2003)
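
A sketch of what "stateless" means for an NFS-style write: every request carries the file handle and the absolute offset, so the server keeps no per-client cursor. This is an illustrative message format, not the actual NFS wire protocol.

```python
from dataclasses import dataclass

@dataclass
class WriteRequest:
    file_handle: bytes   # opaque handle identifying the file
    offset: int          # absolute position in the file, supplied by the client
    data: bytes

class StatelessServer:
    def __init__(self):
        self.files = {}   # handle -> bytearray (stands in for stable storage)

    def write(self, req: WriteRequest):
        buf = self.files.setdefault(req.file_handle, bytearray())
        if len(buf) < req.offset:                     # extend with a hole if needed
            buf.extend(b"\0" * (req.offset - len(buf)))
        buf[req.offset:req.offset + len(req.data)] = req.data
        # No session state is kept: every request is self-contained.

server = StatelessServer()
server.write(WriteRequest(b"fh1", 0, b"hello "))
server.write(WriteRequest(b"fh1", 6, b"world"))
print(server.files[b"fh1"].decode())   # -> "hello world"
```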

NFS Overview
- Remote Procedure Calls (RPC) for communication between client and server
- Client implementation
  - Provides transparent access to the NFS file system
  - UNIX contains a Virtual File System layer (VFS)
  - Vnode: interface for procedures on an individual file
  - Translates vnode operations to NFS RPCs
- Server implementation
  - Stateless: must not keep anything only in memory
  - Implication: all modified data written to stable storage before returning control to the client
  - Servers often add NVRAM to improve performance

Server-side Implementation
- NFS defines a virtual file system
  - Does not actually manage local disk layout on the server
- Server instantiates an NFS volume on top of its local file system
  - Local hard drives managed by concrete file systems (ext*, ReiserFS, ...)
  - Exports local disks to clients
(figure: user-visible filesystem on the NFS client layered over the server's local filesystems, e.g. ext2/ext3/ReiserFS on the server's hard drives)

NFS Locking
- NFS v4 supports stateful locking of files
  - Clients inform the server of intent to lock
  - Server can notify clients of outstanding lock requests
- Locking is lease-based: clients must continually renew locks before a timeout (see the sketch below)
  - Loss of contact with the server abandons locks
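
A toy lease-based lock in the spirit of the description above: the client must keep renewing before the timeout, and a silent client loses the lock. The 30-second lease length is an assumed parameter.

```python
import time

LEASE_SECONDS = 30   # assumed lease length

class Lease:
    def __init__(self):
        self.expires_at = time.time() + LEASE_SECONDS

    def renew(self):
        """Client heartbeat: push the expiry forward."""
        self.expires_at = time.time() + LEASE_SECONDS

    def held(self):
        """Server-side check: the lock is abandoned once the lease lapses."""
        return time.time() < self.expires_at

lock = Lease()
lock.renew()
print(lock.held())   # True while renewals keep arriving; False once the
                     # client goes silent for longer than LEASE_SECONDS
```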

NFS Client Caching
- NFS clients are allowed to cache copies of remote files for subsequent accesses
- Supports close-to-open cache consistency (sketched below)
  - When client A closes a file, its contents are synchronized with the master, and the timestamp is changed
  - When client B opens the file, it checks that the local timestamp agrees with the server timestamp; if not, it discards the local copy
  - Concurrent readers/writers must use flags to disable caching
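
A minimal model of close-to-open consistency as described above: the client flushes on close and records the server's modification time, then revalidates its cached copy against the server's current time on the next open. The class names and integer mtime are illustrative assumptions.

```python
class Server:
    def __init__(self):
        self.data, self.mtime = b"", 0

    def write(self, data):
        self.data, self.mtime = data, self.mtime + 1

class Client:
    def __init__(self, server):
        self.server = server
        self.cache = None              # (data, mtime) or None

    def open(self):
        if self.cache and self.cache[1] == self.server.mtime:
            return self.cache[0]       # timestamps agree: cache is valid
        self.cache = (self.server.data, self.server.mtime)
        return self.cache[0]           # otherwise discard and refetch

    def close(self, data):
        self.server.write(data)        # synchronize with the server on close
        self.cache = (data, self.server.mtime)

srv = Server()
a, b = Client(srv), Client(srv)
a.close(b"v1")
print(b.open())   # -> b"v1": B's open sees A's closed write
```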

Cache Consistency
- Problem: consistency across multiple copies (server and multiple clients)
- How to keep data consistent between client and server?
  - If a file is changed on the server, will the client see the update?
  - Determining factor: read policy on clients
- How to keep data consistent across clients?
  - If we write a file on client A and read it on client B, will B see the update?
  - Determining factor: write and read policy on clients

NFS: Summary
- Very popular
  - Full POSIX interface means it drops in very easily
- No cluster-wide uniform namespace: each client decides how to name a volume mounted from an NFS server
- No location transparency: data movement means remounting the NFS volume
- NFS volume managed by a single server
  - Higher load on the central server, bottleneck on scalability
  - Simplifies coherency protocols
- Challenges: fault tolerance, scalable performance, and consistency

AFS (The Andrew File System)
- Developed at Carnegie Mellon University since 1983
- Strong security: Kerberos authentication
  - richer set of access control bits than UNIX: separate administer/delete bits, application-specific bits
- Higher scalability compared with distributed file systems at the time: supports 50,000+ clients at enterprise level
- Global namespace: /afs/cmu.edu/vol1/..., WAN scope
- Location transparency: moving files just works
- Heavy use of client-side caching
- Supports read-only replication
(figure: AFS server with a read-only replica serving multiple clients)

Morgan Stanley's comparison: AFS vs. NFS
- Better client/server ratio
  - NFS (circa 1993) topped out at 25:1
  - AFS: 100s
- Robust volume replication
  - NFS servers go down, and take their clients with them
  - AFS servers go down, nobody notices (OK, RO data only)
- WAN file sharing
  - NFS couldn't do it reliably
  - AFS worked like a charm
- Security was never an issue (Kerberos 4)
http://www-conf.slac.stanford.edu/afsbestpractices/slides/morganstanley.pdf

Local Caching
- File reads/writes operate on a locally cached copy
- The modified local copy is sent back to the server when the file is closed
- Open local copies are notified of external updates through callbacks, but not updated
- Trade-offs:
  - Shared database files do not work well on this system
  - Does not support write-through to the shared medium

Replication
- AFS allows read-only copies of filesystem volumes
- Copies are guaranteed to be atomic checkpoints of the entire FS at the time of read-only copy generation ("snapshot")
- Modifying data requires access to the sole r/w volume
  - Changes do not propagate to existing read-only copies

AFS: Summary
- Not quite POSIX
  - Stronger security/permissions
  - No file write-through
- High availability through replicas and local caching
- Scalability by reducing load on servers
- Not appropriate for all file types
ACM Trans. on Computer Systems, 6(1):51-81, Feb 1988.

Global File System

Lustre Overview
- Developed by Peter Braam at Carnegie Mellon starting in 1999, then at Cluster File Systems from 2001; acquired by Sun in 2007
- Goal: removing bottlenecks to achieve scalability
- Object-based: separates metadata and file data
  - metadata server(s): MDS (namespace, ACL, attributes)
  - object storage server(s): OSS
  - clients read/write directly with the OSS
- Consistency: Lustre distributed lock manager
- Performance: data can be striped (see the sketch below)
- POSIX interface
(figure: clients contact MDS nodes for metadata and OSS nodes, each with OSTs, for file data)
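
A sketch of how striping spreads a file across object storage targets: a byte offset maps to an (OST index, offset within that OST's object) pair given a stripe size and stripe count. The parameters below are assumptions for illustration, not Lustre's actual defaults.

```python
STRIPE_SIZE = 1 << 20      # 1 MiB stripes (assumed)
STRIPE_COUNT = 4           # file striped across 4 OSTs (assumed)

def locate(offset):
    """Map a file byte offset to (OST index, offset within that OST's object)."""
    stripe_index = offset // STRIPE_SIZE
    ost = stripe_index % STRIPE_COUNT
    offset_in_object = (stripe_index // STRIPE_COUNT) * STRIPE_SIZE + offset % STRIPE_SIZE
    return ost, offset_in_object

print(locate(0))                  # -> (0, 0): first stripe lands on OST 0
print(locate(5 * (1 << 20)))      # -> (1, 1048576): sixth stripe, second pass over the OSTs
```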

Key Design Issue: Scalability
- I/O throughput: how to avoid bottlenecks
- Metadata scalability: how can 10,000s of nodes work on files in the same folder?
- Cluster recovery: if something fails, how can transparent recovery happen?
- Management: adding, removing, replacing systems; data migration & backup

The Lustre Architecture (diagram)

Metadata Service (MDS)
- All access to a file is governed by the MDS, which directly or indirectly authorizes access
- Controls the namespace and manages inodes
- Load-balanced cluster service for scalability (a well-balanced API, a stackable framework for logical MDSs, replicated MDSs)
- Journaled, batched metadata updates

Object Storage Targets (OST)
- Keep file data objects
- File I/O service: access to the objects
- Block allocation for data objects happens at the OSTs, making allocation distributed and scalable
- OST software modules: OBD server, lock server, object storage driver, OBD filter, Portal API

Distributed Lock Manager
- Provides a generic and rich lock service
- Lock resources: resource database
  - Organizes resources in trees
- High performance: the node that acquires a resource manages its tree
- Intent locking: utilization-dependent locking modes
  - write-back lock for low-utilization files
  - no lock for high-contention files; writes go to the OSS via the DLM

Metadata (diagram)

File I/O (diagram)

Striping (diagram)

Lustre Summary
- Good scalability and high performance
  - Lawrence Livermore National Lab's BlueGene/L: 200,000 client processes access a 2.5 PB Lustre filesystem through 1024 I/O nodes over a 1 Gbit Ethernet network at 35 GB/sec
- Reliability
  - Journaled metadata in the metadata cluster
  - OSTs are responsible for their own data
- Consistency

Other file systems worth mentioning
- Coda: post-AFS, pre-Lustre, from CMU
- Global File System: shared-disk file systems
- PVFS by IBM
- ZFS by Sun