INF3190: Distributed Systems - Examples
Thomas Plagemann & Roman Vitenberg

Outline
Last week:
- Definitions
- Transparencies
- Challenges & pitfalls
- Architectural styles
Today:
- Google File System (Thomas)
- MIDAS Data Space (Thomas)
- Publish-subscribe (Roman)
- Summary (Roman)
Example: Google File System
(Figures: Google's platform in the early days vs. today)
Challenges:
- Scalability
- Fault-tolerance
- Auto recovery

Google Platform Characteristics
- 100s to 1000s of PCs in a cluster
- Many modes of failure for each PC:
  - App bugs, OS bugs
  - Human error
  - Disk failure, memory failure, net failure, power supply failure
  - Connector failure
- Monitoring, fault tolerance, and auto-recovery are essential
Google File System: Design Criteria
- Detect, tolerate, and recover from failures automatically
- Large files, >= 100 MB in size
- Large, streaming reads (>= 1 MB in size)
  - Read once
- Large, sequential writes that append
  - Write once
- Concurrent appends by multiple clients (e.g., producer-consumer queues)
  - Want atomicity for appends without synchronization overhead among clients
GFS: Architecture
- One master server (state replicated on backups)
- Many chunkservers (100s to 1000s)
  - Spread across racks; intra-rack bandwidth greater than inter-rack
  - Chunk: 64 MB portion of a file, identified by a 64-bit, globally unique ID
- Many clients accessing the same and different files stored on the same cluster

Master Server
- Holds all metadata:
  - Namespace (directory hierarchy)
  - Access control information (per file)
  - Mapping from files to chunks
  - Current locations of chunks (chunkservers)
- Holds all metadata in RAM; very fast operations on file system metadata
- Delegates consistency management
- Garbage collects orphaned chunks
- Migrates chunks between chunkservers
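To make the master's role concrete, here is a minimal sketch, in Python for illustration only, of the kind of metadata the master keeps in RAM. The names (ChunkInfo, FileMeta, chunk_table) are assumptions for this sketch, not taken from GFS itself, which is proprietary C++:

```python
from dataclasses import dataclass, field

CHUNK_SIZE = 64 * 2**20  # 64 MB per chunk

@dataclass
class ChunkInfo:
    chunk_id: int    # 64-bit, globally unique ID
    version: int     # chunk version number
    locations: list  # chunkservers currently holding a replica

@dataclass
class FileMeta:
    acl: set = field(default_factory=set)       # per-file access control
    chunks: list = field(default_factory=list)  # ordered list of chunk IDs

# Everything below lives in the master's RAM, which is why metadata
# operations are fast; chunk *data* never passes through the master.
namespace = {}    # path (directory hierarchy) -> FileMeta
chunk_table = {}  # chunk ID -> ChunkInfo
```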
Chunkserver
- Stores 64 MB file chunks on local disk using a standard Linux filesystem, each with a version number and checksum
- Read/write requests specify a chunk handle and a byte range
- Chunks are replicated on a configurable number of chunkservers (default: 3)
- No caching of file data (beyond the standard Linux buffer cache)

Client
- Issues control (metadata) requests to the master server
- Issues data requests directly to chunkservers
- Caches metadata
- Does no caching of data
  - No consistency difficulties among clients
  - Streaming reads (read once) and append writes (write once) don't benefit much from caching at the client
GFS: Architecture (2)
(Figure: overall GFS architecture)

Client API
- Not a filesystem in the traditional sense
  - Not POSIX compliant
  - Does not use the kernel VFS interface
- Library that apps can link in for storage access
- API: open, delete, read, write (as expected)
  - snapshot: quickly create a copy of a file
  - append: at least once, possibly with gaps and/or inconsistencies among clients
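As a rough illustration of that library surface, the Python stubs below mirror the listed operations. The signatures and the Handle type are assumptions for this sketch, not the real (C++) GFS client interface:

```python
class Handle:
    """Hypothetical opaque handle returned by open()."""

class GFSClient:
    """Illustrative client library surface; not the actual GFS API."""

    def open(self, path: str) -> Handle: ...
    def delete(self, path: str) -> None: ...
    def read(self, handle: Handle, offset: int, length: int) -> bytes: ...
    def write(self, handle: Handle, offset: int, data: bytes) -> None: ...

    def snapshot(self, src_path: str, dst_path: str) -> None:
        """Quickly create a copy of a file."""

    def append(self, handle: Handle, record: bytes) -> int:
        """Append a record at least once; return the offset GFS chose.
        Duplicates, padding, or gaps may appear, so applications make
        records self-describing (lengths, checksums, record IDs)."""
```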
Client Read
- Client sends the master: read(file name, chunk index)
- Master's reply: chunk ID, chunk version number, locations of replicas
- Client sends the closest chunkserver with a replica: read(chunk ID, byte range)
  - "Closest" is determined by IP address on a simple rack-based network topology
- Chunkserver replies with data

Client Write
- Some chunkserver is the primary for each chunk
  - Master grants a lease to the primary (typically for 60 sec.)
  - Leases are renewed using periodic heartbeat messages between master and chunkservers
- Client asks the master for the primary and secondary replicas of each chunk
- Client sends data to the replicas in a daisy chain
  - Pipelined: each replica forwards as it receives
  - Takes advantage of full-duplex Ethernet links
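Before continuing with the write protocol, here is the read path just described, sketched end to end in illustrative Python. The master.lookup and server.read calls and network_distance are hypothetical stand-ins, not real GFS interfaces; note that the master is only involved in step 1:

```python
CHUNK_SIZE = 64 * 2**20  # 64 MB chunks

def network_distance(replica) -> int:
    """Stub: real clients estimate rack distance from IP addresses."""
    return 0

def gfs_read(master, path: str, offset: int, length: int) -> bytes:
    chunk_index = offset // CHUNK_SIZE
    # 1. Control (metadata) request to the master; real clients cache
    #    the reply, keeping the master off the data path afterwards.
    chunk_id, version, replicas = master.lookup(path, chunk_index)
    # 2. Pick the closest chunkserver holding a replica.
    server = min(replicas, key=network_distance)
    # 3. Data request goes directly to that chunkserver.
    return server.read(chunk_id, offset % CHUNK_SIZE, length)
```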
Client Write (2)
- All replicas acknowledge the data write to the client
- Client sends the write request to the primary
- Primary assigns a serial number to the write request, providing ordering
- Primary forwards the write request with the same serial number to the secondaries
- Secondaries all reply to the primary after completing the write
- Primary replies to the client

Client Write (3)
(Figure: write control and data flow)
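The serial numbers are what turn concurrent client writes into one agreed order at every replica. A minimal sketch of that ordering step follows (hypothetical classes; the real protocol also deals with lease expiry, retries, and partial failures):

```python
import itertools

class PrimaryReplica:
    """Sketch: the lease-holding chunkserver serializes mutations."""

    def __init__(self, secondaries):
        self.secondaries = secondaries
        self.serial = itertools.count(1)  # total order for this chunk

    def write(self, chunk_id, mutation) -> bool:
        sn = next(self.serial)                   # assign a serial number
        ok = self.apply(chunk_id, sn, mutation)  # apply locally in order
        acks = [s.apply(chunk_id, sn, mutation)  # forward the same sn
                for s in self.secondaries]
        return ok and all(acks)                  # then reply to the client

    def apply(self, chunk_id, sn, mutation) -> bool:
        return True  # stub: apply the mutation to the local chunk
```

Because every replica applies mutations in serial-number order, all replicas of a chunk converge to the same contents even under concurrent writers.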
Example Distributed Shared Data Space: MIDAS Data Space
- Information sharing through a database-like distributed system called MIDAS Data Space
- Applications access it with database-style operations (Select, Insert, ...)
- Implementation challenges:
  - Availability
  - Fault-tolerance
  - Scalability
  - Consistency
  - Efficiency
- Setting: an emergency area without communication infrastructure

Emergency and Rescue Operations
- Collaborative work of rescue personnel
- Information access and sharing is mission critical
  - Increased efficiency through improved information flow
- Important information:
  - Medical records of injured persons
  - Layout of buildings, installations, dangerous goods
  - Collected medical data
  - Collected evidence
  - Status reports for coordination
- Information sources: mobile end-user devices, stationary devices, Internet
- Operation duration: some hours to a few days
(Image source: applica.no)
Communication Network Characteristics
- Need fast deployment
- Cannot rely on available infrastructure
- Mobile ad-hoc networks (MANETs)
  - Formed by wireless devices brought into the disaster area
- Limited bandwidth
- Frequent and/or long-term network partitions and remerges

Where is Emergency & Rescue in M. Ammar's Mobility/Density Space?
(Figure: scenarios plotted on axes of relative mobility (low to high) and node density (low to high), e.g., "gather all people and treat them", "search for people in the woods", "accident in a train station, two groups at the exits of the tunnel")
- An operation could be anywhere in this space and can change position during the operation
Information Sharing
Requirements:
- High availability of data
- Continuous operation in the presence of network partitions
- All data retained for analysis and replay purposes
Problems addressed in the MIDAS Data Space (MDS):
- Which kinds of shareable data exist
- A common sense of time
- Consistency management in disrupted networks
- Lifetime of data

Which Shareable Data Exists Where?
- Understanding semantics: a predefined schema of Virtual Tables and Table Instances
- High availability is achieved through replication
- The Global Metadata Manager (GMDM) keeps track of where each table is (see the sketch below)
(Figure: the logical MDS exposes virtual tables VT1..VTn through the MDS API; the physical MDS stores table instances TI on nodes A, B, ..., X. VTx: virtual table x; TIx: instance of virtual table x)
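A minimal sketch of that virtual-table/instance mapping, in illustrative Python (the names gmdm and replicas_of are assumptions, not the MIDAS implementation):

```python
# Virtual tables (VTs) are the logical, schema-defined units of sharing;
# table instances (TIs) are their physical replicas on concrete nodes.
gmdm = {
    "patient_records": {"nodeA", "nodeB"},  # VT1: instances on A and B
    "status_reports":  {"nodeB", "nodeX"},  # VT2: instances on B and X
}

def replicas_of(virtual_table: str) -> set:
    """Which nodes hold an instance of this virtual table?"""
    return gmdm.get(virtual_table, set())
```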
Common Sense of Time
- Cannot maintain a global clock
- Cannot maintain perfect synchronization of all local clocks
- Prior to the event: synchronize using an external timing device
- During the event:
  - Apply existing synchronization protocols within a network partition
  - Use GPS, if available, to synchronize across partitions
- Live with small clock drift
- Store a local timestamp for each write operation

Replication System Design Choices [Saito and Shapiro]
- Pessimistic vs. optimistic replica coordination
- Single- vs. multi-master systems
- State vs. operation transfer
- Propagation strategy: lazy/eager/hybrid
- Consistency guarantees: eventual consistency
- Conflict resolution: none/manual/application-specific
MIDAS Data Space (MDS)
Core ideas:
- Events are short (hours to days), so storage space is no issue
- Never update in place
  - Versioned data, i.e., append-only local storage
- Replication
  - Replicas on selected nodes
  - All replicas are read & write; there is no primary replica

User-defined and MDS Schema Definitions
- n_id: submitting node's ID
- t_nid: submission time at n_id
- op: type of operation (insert, update, delete)
- t_lc: time of insertion in the local replica
Invoked and Executed Write and Read Operations
Operations invoked by applications:
- Insert_record(tableName, key, values)
- Update_record(tableName, key, values)
- Delete_record(tableName, key)
Operations executed within the Data Space (see the sketch below):
- Insert_record(tableName, key, n_id, t_nid, t_lc, insert, values)
- Insert_record(tableName, key, n_id, t_nid, t_lc, update, values)
- Insert_record(tableName, key, n_id, t_nid, t_lc, delete, tombstone values)
- Read_record(tableName, key)
- Read_record(tableName, key, readpolicy_appl)

MDS Optimistic Replication
Consistency management:
- Replica coordination: exchange missing tuples (records)
- Application-specific conflict resolution
Can at most support eventual consistency:
- If a network partition remains stable sufficiently long, and there are no further update operations, then the update propagation algorithm guarantees that eventually all replicas within the network partition become consistent
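The sketch below shows, in illustrative Python, how this translation can work: every invoked write, including update and delete, is executed as an append of a new version stamped with the n_id, t_nid, and t_lc fields from the schema above (the list-based store stands in for the local table instance in the 3rd-party RDBMS):

```python
import time

NODE_ID = "node42"  # n_id of this MDS node (made up for the example)
store = []          # stand-in for the local table instance (replica)

def _execute(table, key, op, values):
    """Every write is executed as an append-only insert of a new version."""
    store.append({
        "table": table, "key": key,
        "n_id": NODE_ID,       # submitting node
        "t_nid": time.time(),  # submission time at n_id (local clock)
        "t_lc": time.time(),   # time of insertion in this local replica
        "op": op,              # insert / update / delete
        "values": values,
    })

def insert_record(table, key, values):
    _execute(table, key, "insert", values)

def update_record(table, key, values):
    _execute(table, key, "update", values)  # never updates in place

def delete_record(table, key):
    _execute(table, key, "delete", None)    # leaves a tombstone record
```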
Conflict Resolution
- Applications have different availability requirements and different consistency model requirements (at the tuple/record level)
- Middleware must support these different requirements:
  - High availability, push-style propagation
  - Consistency models, to the extent that they do not conflict with high availability
  - Assist the application in enforcing its consistency model

Update Propagation
- Eager update to all replicas known to the updater
- Lazy update to reach replicas not known at the time of the update, and to (re-)obtain consistency at a network merge
  - How and when?
- How many replicas: availability vs. bandwidth
- Where to place replicas: availability even during network partitions
- Update propagation algorithms (see the sketch below)
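A minimal sketch of the lazy, merge-time part: two replicas exchange the tuples the other is missing, identifying each tuple by its (n_id, t_nid) pair from the schema above. This is a simplification in illustrative Python; the actual MDS algorithms also decide how many replicas to keep and where to place them:

```python
def tuple_id(record):
    """(submitting node, submission time) uniquely identifies a tuple."""
    return (record["n_id"], record["t_nid"])

def merge(replica_a, replica_b):
    """At a network merge, exchange missing tuples so both replicas
    converge; with no further updates, this yields eventual consistency
    within the (now merged) partition."""
    ids_a = {tuple_id(r) for r in replica_a}
    ids_b = {tuple_id(r) for r in replica_b}
    missing_in_a = [r for r in replica_b if tuple_id(r) not in ids_a]
    missing_in_b = [r for r in replica_a if tuple_id(r) not in ids_b]
    replica_a.extend(missing_in_a)
    replica_b.extend(missing_in_b)
```

Because records are append-only and never modified in place, exchanging missing tuples is enough for replicas to converge; conflicting versions of the same key simply coexist until application-specific conflict resolution picks one.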
MIDAS Data Space Architecture
- Local storage: a 3rd-party RDBMS
- Global Metadata Manager (global data dictionary)
- Data allocator
- Data synchronizer
- Subscription manager
- Query analyzer
- Access control

Query MDS
(Figure: querying the MIDAS Data Space)
Modify a Virtual Table
(Figure: modifying a virtual table)