Google File System
Overview

Google File System is a scalable, distributed file system built on inexpensive commodity hardware that provides:

Fault Tolerance: The file system runs on hundreds or thousands of storage machines assembled from inexpensive commodity parts. An example deployment is 1000 storage nodes holding over 300 TB.

High Aggregate Performance: Fully utilizes bandwidth to deliver data to many clients, achieving high system throughput.
Design 1: Observations and Assumptions

Reliability: Component failures are the norm rather than the exception, so constant monitoring, error detection, fault tolerance, and automatic recovery must be integral to the system. Normally, systems assume a working environment and handle failures as worst-case scenarios.
Design 2: Observations and Assumptions

Files: Files are huge (multi-GB), with data sets in the range of TBs containing billions of objects, so the usual assumptions about I/O operations and block sizes must be revisited. The system must store a modest number of large files. Because the focus is on processing large amounts of data in bulk, high sustained bandwidth is more important than low latency. Normally, file systems are composed of many small files and a few large ones, and thus block sizes are kept small.
Design 3: Observations and Assumptions

I/O: Data is appended rather than overwritten; random writes are rare. Once written, files are only read (usually sequentially), so optimization focuses on appends, which must be atomic with minimal synchronization. There are two types of reads: large streaming reads and small random reads. Caching is not important because most applications stream through huge files or have working sets too large to cache. Normally, files are updated in place, synchronization requires locking, and caching is important for performance.
These observations and assumptions are uncharacteristic of normal systems and environments; they are particular to Google's specific applications and workloads.

[Figure: typical workload — multiple writers concurrently append records to a file while multiple readers stream through it.]
Architecture
Chunks

Chunks: files are split into fixed-size chunks, each of which is given a globally unique chunk handle:

  FILE: [  ][  ][  ][  ]
          ^
        Chunk 1

Properties:
» Chunks are replicated on multiple chunkservers (default is 3) for reliability.
» Chunk size is 64 MB, much larger than typical file system blocks.
» Lazy space allocation avoids wasting space.

Advantages:
» Reduces interaction with the master.
» Reduces metadata stored on the master.

Disadvantages:
» Small files may become hotspots.
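Because chunks are fixed-size, the client can locate data with simple arithmetic. A minimal sketch of the offset-to-chunk translation, with invented function names (not the actual client library):

    CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB fixed chunk size

    def chunk_index(offset: int) -> int:
        # Which chunk of the file holds this byte offset.
        return offset // CHUNK_SIZE

    def chunk_offset(offset: int) -> int:
        # Byte offset within that chunk.
        return offset % CHUNK_SIZE

    # Example: byte 200,000,000 falls in the third chunk (index 2).
    assert chunk_index(200_000_000) == 2

The client sends the file name and chunk index (not the raw byte offset) to the master, as in step 1 of the read flow shown later.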
Master

Master: a single node maintains all of the metadata, such as the namespace, ACLs, the mapping from files to chunks, and the current locations of chunks. It also sets policies for chunk management (garbage collection, migration, etc.).

Properties:
» Metadata kept in memory: the file and chunk namespaces, the mapping from files to chunks, and the locations of each chunk's replicas.
» An operation log persistently stores metadata operations and records the order of concurrent operations. The master recovers the file system by replaying this log; checkpoints minimize startup time. The log is replicated to local disk and to remote machines.
» Periodic scans enable garbage collection, re-replication, and chunk migration.
» The single master ensures that file namespace mutations are atomic.
» Shadow masters provide read-only access to the file system when the master is down.
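A minimal sketch of master recovery by checkpoint plus log replay; the record format here is invented for illustration (the actual log format is not specified at this level):

    import json

    def recover(checkpoint_path: str, log_path: str) -> dict:
        # Load the most recent checkpoint, then replay subsequent
        # logged mutations in the order they were committed.
        with open(checkpoint_path) as f:
            metadata = json.load(f)  # e.g. {"files": {path: [chunk handles]}}
        with open(log_path) as f:
            for line in f:
                op = json.loads(line)
                if op["type"] == "create":
                    metadata["files"][op["path"]] = []
                elif op["type"] == "add_chunk":
                    metadata["files"][op["path"]].append(op["handle"])
                elif op["type"] == "delete":
                    del metadata["files"][op["path"]]
        return metadata

Checkpointing bounds the log length, so replay (and thus master startup time) stays short.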
Chunkservers

Chunkservers: multiple storage nodes that store chunks on local disks as Linux files and read/write chunk data specified by a chunk handle.

Properties:
» Store chunk location information and send it to the master on startup (and in heartbeat messages thereafter).

Architecture:

              ----- [Chunkserver] --- local storage: [Chunk][Chunk][Chunk]
             /
  [Master] --- HB --- [Chunkserver] --- local storage: [Chunk][Chunk][Chunk]
             \
              ----- [Chunkserver] --- local storage: [Chunk][Chunk][Chunk]

  (HB = heartbeat messages between the master and each chunkserver)
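Because the master does not persist chunk locations, each chunkserver is the authority on what it stores. A rough sketch of the startup report, with an invented directory layout and naming convention:

    import os

    CHUNK_DIR = "/var/gfs/chunks"  # hypothetical local chunk directory

    def startup_report() -> list[str]:
        # Scan local storage and report the chunk handles held here.
        # The master learns chunk locations only from these reports
        # and from subsequent heartbeats, never from its own disk.
        return [name for name in os.listdir(CHUNK_DIR)
                if name.endswith(".chunk")]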
Clients do not cache file data, but do cache metadata. Chunkservers need not explicitly cache data because Linux's buffer cache already does so.

Read:
1. Client sends (file name, chunk index) to the GFS master.
2. Master replies with the chunk handle and replica locations from its namespace and metadata.
3. Client sends (chunk handle, byte range) to a GFS chunkserver.
4. Chunkserver returns the chunk data, read from its Linux file system.

Write:
1. Client sends (file name, chunk index) to the GFS master.
2. Master replies with the chunk handle and replica locations, identifying the primary.
3. Client pushes (chunk handle, data) to all replicas.
4. Client sends the write request to the primary chunkserver.
5. Primary forwards the serialized mutations to the secondaries.
6. Secondaries acknowledge the mutations to the primary.
7. Primary replies to the client with success, failure, or errors.
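A minimal sketch of the client read path following the numbered steps above; ask_master and read_from are hypothetical stand-ins for the client library's RPCs:

    CHUNK_SIZE = 64 * 1024 * 1024

    # Hypothetical RPC stubs; a real client talks to the master and
    # chunkservers over the network.
    def ask_master(path: str, index: int) -> tuple[str, list[str]]:
        raise NotImplementedError("RPC to the GFS master")

    def read_from(server: str, handle: str, start: int, length: int) -> bytes:
        raise NotImplementedError("RPC to a GFS chunkserver")

    def gfs_read(path: str, offset: int, length: int) -> bytes:
        # Steps 1-2: translate the offset to a chunk index and ask the
        # master for the chunk handle and replica locations (metadata
        # the client may cache and reuse).
        index = offset // CHUNK_SIZE
        handle, replicas = ask_master(path, index)
        # Steps 3-4: request the byte range from one replica, falling
        # back to another on failure. (A read spanning a chunk boundary
        # would need a second lookup, omitted here.)
        start = offset % CHUNK_SIZE
        for server in replicas:
            try:
                return read_from(server, handle, start, length)
            except ConnectionError:
                continue
        raise IOError("all replicas unavailable")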
Interface

Provides the familiar operations create, delete, open, close, read, and write through a client library, rather than POSIX. Adds:

snapshot: creates a copy of a file or directory tree at low cost, using the standard copy-on-write technique (as in AFS).

record append: allows multiple clients to append data to the same file concurrently. The operation guarantees that the data is appended atomically at least once; it is up to the client to handle duplicates.
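Because record append is at-least-once, a failed-then-retried append can leave the same record in the file twice. A sketch of one common client-side convention for coping with this, using writer-assigned record IDs (an application-level scheme, not part of the GFS API):

    def dedup(records):
        # records: iterable of (record_id, payload) pairs parsed from
        # the file; IDs are chosen by writers, e.g. per-record UUIDs.
        seen = set()
        for record_id, payload in records:
            if record_id not in seen:
                seen.add(record_id)
                yield payload

    # A retried append left record "r2" twice; the reader drops the copy.
    parsed = [("r1", b"a"), ("r2", b"b"), ("r2", b"b"), ("r3", b"c")]
    assert list(dedup(parsed)) == [b"a", b"b", b"c"]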
Measurements

Read micro-benchmark: one client reaches about 10 MB/s, or 80% of the 12.5 MB/s physical limit. Aggregate reads reach 94 MB/s, which is 75% of the 125 MB/s physical limit; the drop is due to multiple readers sometimes reading from the same chunkserver.

Write micro-benchmark: one client reaches 6.3 MB/s, or about half of the physical limit. Aggregate writes reach 35 MB/s, about half of the 67 MB/s physical limit (because each write must go to 3 chunkservers).

The read/write micro-benchmarks show that the system scales as the number of readers increases: total system throughput increases.
Fault-tolerance Results

Took down servers and measured the time to recover.

Master operations: Open and FindLocation are the most requested operations. FindLocation traffic could potentially be reduced with caching.
Comparison to Other Systems

» Provides a location-independent namespace, which enables data to be moved transparently for load balancing and fault tolerance (as in AFS).
» Spreads data across storage servers, unlike AFS.
» Uses simple file replication, unlike RAID.
» Does not provide caching below the file system interface.
» Uses a single master, rather than a distributed one.
» Provides a POSIX-like interface, but not full POSIX support.

HDFS (Hadoop) is an open-source implementation of Google File System written in Java. It follows the same overall design but differs in supported features and implementation details:
» Does not support random writes.
» Does not support appending to existing files.
» Does not support multiple concurrent writers.
Questions

What are the advantages of Google File System over AFS and NFS? The disadvantages?
What workloads/applications would perform well on GFS? Which would perform poorly?
What constraints does having a single master impose? What are its advantages?
Can you put a POSIX interface on the file system? Why or why not?