
Distributed Systems
16. Distributed Lookup
Paul Krzyzanowski, Rutgers University, Fall 2017

Distributed Lookup
- Look up (key, value) pairs
- Cooperating set of nodes
- Ideally:
  - No central coordinator
  - Some nodes can be down

Approaches
1. Central coordinator: Napster
2. Flooding: Gnutella
3. Distributed hash tables: CAN, Chord, Amazon Dynamo, Tapestry, ...

1. Central Coordinator
Example: Napster
- Central directory identifies content (names) and the servers that host it
- lookup(name) → {list of servers}
- Download from any of the available servers
  - Pick the best one by pinging and comparing response times
Another example: GFS
- Controlled environment compared to Napster
- Content for a given key is broken into chunks
- Master handles all queries but not the data
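The central-directory idea fits in a few lines. Below is a minimal, hypothetical sketch (the class and method names are invented; this is not Napster's actual protocol): the directory maps a name to the servers that registered it, and lookup(name) returns that list so the client can pick the fastest server itself.

```python
# Hypothetical central directory: index maps a name to the servers hosting it.
class CentralDirectory:
    def __init__(self):
        self.index = {}                      # name -> set of server addresses

    def register(self, name, server):
        # A peer tells the directory which content it hosts.
        self.index.setdefault(name, set()).add(server)

    def lookup(self, name):
        # lookup(name) -> {list of servers}; the client picks the best one
        # (e.g., by pinging each and comparing response times).
        return sorted(self.index.get(name, set()))

d = CentralDirectory()
d.register("song.mp3", "peer-a.example.com")
d.register("song.mp3", "peer-b.example.com")
print(d.lookup("song.mp3"))   # ['peer-a.example.com', 'peer-b.example.com']
```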

1. Central Coordinator: Napster
Pros
- Super simple
- Search is handled by a single server (master)
- The directory server is a single point of control
- Provides definitive answers to a query
Cons
- Master has to maintain state of all peers
- Server gets all the queries
- The directory server is a single point of failure: no directory, no service!

2. Query Flooding
Example: Gnutella distributed file sharing
- Well-known nodes act as anchors
- Nodes with files inform an anchor about their existence
- Nodes select other nodes as peers

2. Query Flooding
- Send a query to peers if a file is not present locally
- Each request contains:
  - Query key
  - Unique request ID
  - Time to Live (TTL, maximum hop count)
- A peer either responds or routes the query to its neighbors
- Repeat until TTL = 0 or the request ID has already been processed
- If found, send a response (node address) to the requestor
  - Back propagation: the response hops back along the query path to reach the originator
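To make the forwarding rules concrete, here is a small illustrative sketch of TTL-limited flooding with duplicate suppression. The class and field names are invented; this is not Gnutella's actual message format.

```python
# A minimal sketch of TTL-limited query flooding with duplicate suppression.
class Peer:
    def __init__(self, name, files=()):
        self.name = name
        self.files = set(files)
        self.neighbors = []          # overlay links to other Peer objects
        self.seen = set()            # request IDs already processed

    def query(self, key, request_id, ttl):
        if request_id in self.seen or ttl < 0:
            return None              # already handled, or hop count exhausted
        self.seen.add(request_id)
        if key in self.files:
            return self.name         # found: the response back-propagates
        for peer in self.neighbors:  # otherwise flood to all neighbors
            hit = peer.query(key, request_id, ttl - 1)
            if hit is not None:
                return hit           # reply retraces the query path
        return None

a, b, c, d = Peer("A"), Peer("B"), Peer("C"), Peer("D", files={"song.mp3"})
a.neighbors, b.neighbors, c.neighbors = [b, c], [d], [d]
print(a.query("song.mp3", request_id=1, ttl=2))   # -> 'D'
```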

Overlay network
An overlay network is a virtual network formed by peer connections
- Any node might know about only a small set of machines
- Neighbors may not be physically close to you
(Figure: the underlying IP network)

Overlay network
An overlay network is a virtual network formed by peer connections
- Any node might know about only a small set of machines
- Neighbors may not be physically close to you
(Figure: the overlay network built on top of the IP network)

Flooding Example: Overlay Network
(Figure: a sample overlay network of peers)

Flooding Example: Query Flood
(Figure: a query floods outward through the overlay; the TTL (Time to Live, hop count) is decremented at each hop until it reaches 0 or the file is found)

Flooding Example: Query Response
(Figure: the "Found!" response travels back to the originator via back propagation)

Flooding Example: Download

What's wrong with flooding?
- Some nodes are not always up, and some are slower than others
  - Gnutella & Kazaa dealt with this by classifying some nodes as special ("ultrapeers" in Gnutella, "supernodes" in Kazaa)
- Poor use of network resources
- Potentially high latency
  - Requests get forwarded from one machine to another
  - Back propagation (e.g., in Gnutella's design), where the replies go through the same chain of machines used in the query, increases latency even more

3. Distributed Hash Tables

Locating content
How do we locate distributed content? A central server is the easiest.
- Napster: central server
- Gnutella & Kazaa: network flooding (optimized to flood supernodes, but it's still flooding)
- BitTorrent: nothing! It's somebody else's problem
Can we do better?

Hash tables
Remember hash functions & hash tables?
- Linear search: O(N)
- Tree: O(log N)
- Hash table: O(1)

What's a hash function? (refresher)
Hash function
- A function that takes a variable-length input (e.g., a string) and generates a (usually smaller) fixed-length result (e.g., an integer)
- Example: hash strings to the range 0-7:
  - hash("Newark") → 1
  - hash("Jersey City") → 6
  - hash("Paterson") → 2
Hash table
- Table of (key, value) tuples
- Look up a key: the hash function maps keys to a range 0 … N-1 for a table of N elements
  - i = hash(key)
  - table[i] contains the item
- No need to search through the table!
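As a quick refresher in code, here is a toy bucketed hash table. The bucket count of 8 mirrors the 0-7 range above, but the bucket each string lands in depends on the hash function used, so it will not reproduce the example values.

```python
# Toy bucketed hash table: i = hash(key), table[i] holds the item.
import hashlib

N = 8  # number of buckets (range 0..7)

def h(key: str) -> int:
    # Map a variable-length string to a fixed-range integer 0..N-1.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % N

table = [[] for _ in range(N)]       # each slot is a bucket, so collisions chain

def put(key, value):
    table[h(key)].append((key, value))

def get(key):
    for k, v in table[h(key)]:       # only one bucket is examined: O(1) expected
        if k == key:
            return v
    return None

put("Newark", "value-1")
put("Paterson", "value-2")
print(get("Newark"))                 # -> value-1
```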

Considerations with hash tables (refresher)
Picking a good hash function
- We want a uniform distribution of all values of the key over the space 0 … N-1
Collisions
- Multiple keys may hash to the same value
  - hash("Paterson") → 2, hash("Edison") → 2
- table[i] is a bucket (slot) for all such (key, value) sets
  - Within table[i], use a linked list or another layer of hashing
Think about a hash table that grows or shrinks
- If we add or remove buckets, we need to rehash keys and move items

Distributed Hash Tables (DHT)
Create a peer-to-peer version of a (key, value) data store
How we want it to work:
1. A peer (A) queries the data store with a key
2. The data store finds the peer (B) that has the value
3. That peer (B) returns the (key, value) pair to the querying peer (A)
Make it efficient! A query should not generate a flood.

Consistent hashing
Conventional hashing
- Practically all keys have to be remapped if the table size changes
Consistent hashing
- Most keys will hash to the same value as before
- On average, K/n keys will need to be remapped (K = # keys, n = # of buckets)
Example: splitting a bucket
- Only the keys in slot c get remapped:
  slot a | slot b | slot c | slot d | slot e
  becomes
  slot a | slot b | slot c1 | slot c2 | slot d | slot e
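A small experiment makes the difference visible. The sketch below (made-up node names and key counts) counts how many keys change owner when one bucket is added, first with mod-N hashing and then with a simple consistent-hash ring.

```python
# Counting remapped keys: mod-N hashing vs. a simple consistent-hash ring.
import hashlib, bisect

def h(s: str) -> int:
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)
        self.positions = [p for p, _ in self.points]
    def owner(self, key):
        i = bisect.bisect_right(self.positions, h(key)) % len(self.points)
        return self.points[i][1]     # first node clockwise from hash(key)

keys = [f"key-{i}" for i in range(10000)]

# Conventional hashing: going from 4 to 5 buckets remaps almost every key.
print("mod-N remapped:", sum(h(k) % 4 != h(k) % 5 for k in keys))

# Consistent hashing: on average only about K/n keys change owner.
r4 = Ring(["n0", "n1", "n2", "n3"])
r5 = Ring(["n0", "n1", "n2", "n3", "n4"])
print("ring remapped:", sum(r4.owner(k) != r5.owner(k) for k in keys))
```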

3. Distributed hashing
- Spread the hash table across multiple nodes
- Each node stores a portion of the key space
  - lookup(key) → node ID that holds (key, value)
  - lookup(node_id, key) → value
Questions
- How do we partition the data & do the lookup?
- … & keep the system decentralized?
- … & make the system scalable (lots of nodes with dynamic changes)?
- … & make it fault tolerant (replicated data)?

Distributed Hashing Case Study
CAN: Content Addressable Network

CAN design
- Create a logical grid
  - x-y in 2-D, but not limited to 2-D
- Separate hash function per dimension: h_x(key), h_y(key)
- A node
  - Is responsible for a range of values in each dimension
  - Knows its neighboring nodes

CAN key → node mapping: 2 nodes
- x = hash_x(key), y = hash_y(key)
- If x < x_max/2, n1 has (key, value)
- If x ≥ x_max/2, n2 has (key, value)
- n2 is responsible for the zone x = (x_max/2 .. x_max), y = (0 .. y_max)
(Figure: the coordinate space from (0, 0) to (x_max, y_max), split between n1 and n2)

CAN partitioning
- Any node can be split in two, either horizontally or vertically
(Figure: the space now split among three nodes n0, n1, n2)

CAN key → node mapping
- x = hash_x(key), y = hash_y(key)
- If x < x_max/2:
  - if y < y_max/2, n0 has (key, value)
  - else n1 has (key, value)
- If x ≥ x_max/2, n2 has (key, value)
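A sketch of this zone-based mapping, using the three-node layout above. The per-dimension hash functions and the zone coordinates are illustrative choices, not taken from the CAN paper.

```python
# CAN-style key -> zone mapping in 2-D (three-node example layout).
import hashlib

X_MAX = Y_MAX = 2**16

def hash_dim(key: str, dim: str) -> int:
    # Separate hash function per dimension, derived here by salting the key.
    return int(hashlib.md5(f"{dim}:{key}".encode()).hexdigest(), 16) % X_MAX

# Each node owns a rectangular zone: (x_lo, x_hi, y_lo, y_hi), upper bounds exclusive.
zones = {
    "n0": (0,          X_MAX // 2, 0,          Y_MAX // 2),
    "n1": (0,          X_MAX // 2, Y_MAX // 2, Y_MAX),
    "n2": (X_MAX // 2, X_MAX,      0,          Y_MAX),
}

def owner(key: str) -> str:
    x, y = hash_dim(key, "x"), hash_dim(key, "y")
    for node, (xl, xh, yl, yh) in zones.items():
        if xl <= x < xh and yl <= y < yh:
            return node

print(owner("some-key"))
```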

CAN partitioning
- Any node can be split in two, either horizontally or vertically
- Associated data has to be moved to the new node based on hash(key)
- Neighbors need to be made aware of the new node
- A node knows only of its neighbors
(Figure: the space partitioned among twelve nodes, n0 through n11)

CAN neighbors
- Neighbors refer to nodes that share adjacent zones in the overlay network
- In the figure, n4 only needs to keep track of n5, n7, or n8 as its right neighbor

CAN routing
- lookup(key) on a node that does not own the value:
  - Compute hash_x(key), hash_y(key) and route the request to a neighboring node
- Ideally: route to minimize the distance to the destination
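One way to picture greedy routing toward the destination: keep forwarding to whichever neighbor's zone center is closest to the target point. The sketch below reuses the three-node layout with made-up neighbor lists; it is a simplification, not the routing rule from the CAN paper.

```python
# Greedy CAN-style routing sketch: always forward to the neighbor whose zone
# center is closest to the target point (x, y). Zones and neighbor lists are
# made up for illustration.
X_MAX = Y_MAX = 2**16
zones = {                                   # (x_lo, x_hi, y_lo, y_hi)
    "n0": (0,          X_MAX // 2, 0,          Y_MAX // 2),
    "n1": (0,          X_MAX // 2, Y_MAX // 2, Y_MAX),
    "n2": (X_MAX // 2, X_MAX,      0,          Y_MAX),
}
neighbors = {"n0": ["n1", "n2"], "n1": ["n0", "n2"], "n2": ["n0", "n1"]}

def center(z):
    xl, xh, yl, yh = z
    return ((xl + xh) / 2, (yl + yh) / 2)

def route(start, x, y):
    path, node = [start], start
    while True:
        xl, xh, yl, yh = zones[node]
        if xl <= x < xh and yl <= y < yh:
            return path                      # this node owns the point
        node = min(neighbors[node],
                   key=lambda n: (center(zones[n])[0] - x) ** 2
                               + (center(zones[n])[1] - y) ** 2)
        path.append(node)

print(route("n0", 3 * X_MAX // 4, Y_MAX // 4))   # -> ['n0', 'n2']
```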

CAN Performance
- For n nodes in d dimensions: # of neighbors = 2d
- Average route for 2 dimensions = O(√n) hops
- To handle failures:
  - Share knowledge of neighbors' neighbors
  - One of the node's neighbors takes over the failed zone

Distributed Hashing Case Study
Chord

Chord & consistent hashing
- A key is hashed to an m-bit value: 0 … (2^m - 1)
- A logical ring is constructed for the values 0 … (2^m - 1)
- Nodes are placed on the ring at hash(IP address)
(Figure: a 16-position ring, 0-15, with a node at hash(IP address) = 3)
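In code, placing nodes on the ring is just hashing each node's address into the m-bit identifier space. Here m = 4 and the addresses are arbitrary illustrative choices.

```python
# Placing Chord nodes on the identifier ring: hash each node's IP address to
# an m-bit value. m = 4 gives a 16-position ring like the example.
import hashlib

m = 4
RING = 2 ** m

def chord_hash(s: str) -> int:
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % RING

# Collisions are possible on such a tiny ring; real deployments use a much
# larger identifier space (e.g., 160 bits with SHA-1).
for ip in ["192.168.0.1", "192.168.0.2", "192.168.0.3", "192.168.0.4"]:
    print(ip, "-> ring position", chord_hash(ip))
```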

Key assignment
- Example: n = 16; a system with 4 nodes (so far)
- (key, value) data is stored at a successor: a node whose value is ≥ hash(key)
  - No nodes sit at the other (empty) ring positions
- Node 3 is responsible for keys 15, 0, 1, 2, 3
- Node 8 is responsible for keys 4, 5, 6, 7, 8
- Node 10 is responsible for keys 9, 10
- Node 14 is responsible for keys 11, 12, 13, 14

Handling query requests
- Any peer can get a request (insert or query); if hash(key) is not in its range of keys, it forwards the request to a successor
- The process continues until the responsible node is found
- Worst case: with N nodes, traverse N-1 nodes; that's O(N) (yuck!)
- Average case: traverse N/2 nodes (still yuck!)
- Example: Query(hash(key) = 9) is forwarded from successor to successor until it reaches node 10, which is responsible for keys 9 and 10 and can process the request
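A sketch of this O(N) successor walk on the example ring (node IDs 3, 8, 10, 14; the helper names are invented).

```python
# O(N) Chord lookup sketch: walk successors until we find the node whose ID
# range covers hash(key). Node IDs match the slide's 4-node example; m = 4.
nodes = sorted([3, 8, 10, 14])        # node IDs on the 16-position ring

def successor(node_id):
    i = nodes.index(node_id)
    return nodes[(i + 1) % len(nodes)]

def responsible(node_id, k):
    # node_id is responsible for keys in (predecessor, node_id], wrapping.
    pred = nodes[(nodes.index(node_id) - 1) % len(nodes)]
    if pred < node_id:
        return pred < k <= node_id
    return k > pred or k <= node_id   # wrapped interval, e.g. (14, 3]

def lookup(start, k, hops=0):
    if responsible(start, k):
        return start, hops
    return lookup(successor(start), k, hops + 1)

print(lookup(3, 9))    # -> (10, 2): the query is forwarded 3 -> 8 -> 10
```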

Let's figure out three more things
1. Adding/removing nodes
2. Improving lookup time
3. Fault tolerance

Adding a node
- Some keys that were assigned to the new node's successor now get assigned to the new node
- Data for those (key, value) pairs must be moved to the new node
Example: a new node is added with ID = 6
- Node 6 is now responsible for keys 4, 5, 6
- Node 8 was responsible for keys 4, 5, 6, 7, 8; now it's responsible for keys 7, 8
- Node 3 (keys 15, 0, 1, 2, 3), node 10 (keys 9, 10), and node 14 (keys 11, 12, 13, 14) are unaffected

Removing a node
- Keys are reassigned to the node's successor
- Data for those (key, value) pairs must be moved to the successor
Example: node 10 is removed
- Node 10 was responsible for keys 9, 10; its (key, value) data is moved to node 14
- Node 14 was responsible for keys 11, 12, 13, 14; it is now responsible for keys 9, 10, 11, 12, 13, 14
- Node 3 (keys 15, 0, 1, 2, 3), node 6 (keys 4, 5, 6), and node 8 (keys 7, 8) are unaffected

Fault tolerance
- Nodes might die: (key, value) data should be replicated
  - Create R replicas, storing them at the R-1 successor nodes in the ring
- A node needs to know multiple successors
  - It needs to be able to find its successor's successor (or more)
  - Easy if it knows all nodes!
- When a node comes back up, it needs to check with its successors for updates
- Any changes need to be propagated to all replicas

Performance
- We're not thrilled about O(N) lookup
- Simple approach for great performance: have all nodes know about each other
  - When a peer gets a request, it searches its table of nodes for the node that owns those values
  - Gives us O(1) performance
  - Add/remove node operations must inform everyone
  - Maybe not a good solution if we have millions of peers (huge tables)

Finger tables
- A compromise to avoid large tables at each node
  - Use finger tables to place an upper bound on the table size
- Finger table = partial list of nodes, progressively more distant
- At each node, the i-th entry of the finger table (counting from 0) identifies the node that succeeds it by at least 2^i positions on the circle
  - finger_table[0]: immediate (1st) successor
  - finger_table[1]: the successor after that (2nd)
  - finger_table[2]: 4th successor
  - finger_table[3]: 8th successor
  - ...
- O(log N) nodes need to be contacted to find the node that owns a key
  - Not as cool as O(1), but way better than O(N)
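The sketch below builds finger tables for the example ring and uses them for lookup by jumping to the farthest finger that still precedes the key. It is a condensed illustration; real Chord maintains finger tables incrementally and handles joins and failures, which this ignores.

```python
# Finger tables on the 4-bit example ring.
# finger_table[i] of node n = first node at or after (n + 2**i) mod 2**m.
m = 4
RING = 2 ** m
nodes = sorted([3, 8, 10, 14])

def successor_of(x):
    # First node whose ID is >= x, wrapping around the ring.
    for n in nodes:
        if n >= x:
            return n
    return nodes[0]

def finger_table(n):
    return [successor_of((n + 2**i) % RING) for i in range(m)]

def in_interval(x, a, b):
    # True if x lies in the half-open ring interval (a, b].
    return (a < x <= b) if a < b else (x > a or x <= b)

def lookup(n, key_hash, hops=0):
    succ = finger_table(n)[0]
    if in_interval(key_hash, n, succ):
        return succ, hops                 # the successor owns the key
    for f in reversed(finger_table(n)):   # farthest finger preceding the key
        if in_interval(f, n, key_hash):
            return lookup(f, key_hash, hops + 1)
    return lookup(succ, key_hash, hops + 1)

print(finger_table(3))      # [8, 8, 8, 14]
print(lookup(3, 9))         # key 9 is owned by node 10
```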

Improving performance even more
Let's revisit O(1) lookup
- Each node keeps track of all current nodes in the group
  - Is that really so bad? We might have thousands of nodes; so what?
- Any node will now know which node holds a (key, value) pair
- Add or remove a node: send updates to all other nodes

Distributed Hashing Case Study
Amazon Dynamo

Amazon Dynamo
- Not exposed as a web service; used to power parts of Amazon Web Services (such as S3)
- Highly available, key-value storage system
  - In an infrastructure with millions of components, something is always failing!
  - Failure is the normal case
- A lot of services within Amazon only need primary-key access to data
  - Best-seller lists, shopping carts, preferences, session management, sales rank, product catalog
- No need for the complex querying or management offered by an RDBMS
  - A full relational database is overkill: it limits scale and availability
  - It is still not efficient to scale or load-balance an RDBMS on a large scale

Core Assumptions & Design Decisions
- Two operations: get(key) and put(key, data)
  - Binary objects (data) identified by a unique key
  - Objects tend to be small (< 1 MB)
- ACID gives poor availability
  - Use weaker consistency (C) for higher availability
- Apps should be able to configure Dynamo for their desired latency & throughput
  - Balance performance, cost, availability, and durability guarantees
- At least 99.9% of read/write operations must be performed within a few hundred milliseconds
  - Avoid routing requests through multiple nodes
  - Dynamo can be thought of as a zero-hop DHT

Core Assumptions & Design Decisions
- Incremental scalability
  - The system should be able to grow by adding one storage host (node) at a time
- Symmetry
  - Every node has the same set of responsibilities
- Decentralization
  - Favor decentralized techniques over central coordinators
- Heterogeneity (mix of slow and fast systems)
  - Workload partitioning should be proportional to the capabilities of the servers

Consistency & Availability
- Strong consistency & high availability cannot be achieved simultaneously
- Optimistic replication techniques: eventually consistent model
  - Propagate changes to replicas in the background
  - Can lead to conflicting changes that have to be detected & resolved
When do you resolve conflicts?
- During writes: the traditional approach
  - Reject the write if you cannot reach all (or a majority) of the replicas, but don't deal with conflicts
- Resolve conflicts during reads: the Dynamo approach
  - Design for an "always writable" data store: highly available
  - Read/write operations can continue even during network partitions
  - Rejecting customer updates won't be a good experience
    - A customer should always be able to add or remove items in a shopping cart

Consistency & Availability
Who resolves conflicts? Choices: the data store system or the application?
- Data store
  - Application-unaware, so choices are limited
  - Simple policy, such as "last write wins"
- Application
  - The app is aware of the meaning of the data
  - Can do application-aware conflict resolution
    - E.g., merge shopping cart versions to get a unified shopping cart
  - Fall back on "last write wins" if the app doesn't want to bother

Reads & Writes
Two operations:
- get(key) returns:
  1. The object, or a list of objects with conflicting versions
  2. A context (the resultant version per object)
- put(key, context, value) stores replicas
  - context: ignored by the application, but includes the version of the object
- The key is hashed with MD5 to create a 128-bit identifier that is used to determine the storage nodes that serve the key
  - hash(key) identifies the node

Partitioning the data
- Break up the database into chunks distributed over all nodes
  - Key to scalability
- Relies on consistent hashing
  - Regular hashing: a change in the # of slots requires all keys to be remapped
  - Consistent hashing: K/n keys need to be remapped (K = # keys, n = # of slots)
- Logical ring of nodes: just like Chord
  - Each node is assigned a random value in the hash space: its position in the ring
  - A node is responsible for all hash values between its value and its predecessor's value
  - Compute hash(key), then walk the ring clockwise to find the first node with position > hash
  - Adding/removing nodes affects only the immediate neighbors

Partitioning: virtual nodes
- A node is assigned to multiple points in the ring
- Each point is a virtual node

Dynamo virtual nodes
- A physical node holds the contents of multiple virtual nodes
- In this example: 2 physical nodes (A and B), 5 virtual nodes
  - Node 1: keys 15, 0, 1
  - Node 3: keys 2, 3
  - Node 8: keys 4, 5, 6, 7, 8
  - Node 10: keys 9, 10
  - Node 14: keys 11, 12, 13, 14
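A sketch of the virtual-node idea: each physical host places several tokens on the ring, and a key's owner is the host behind the first token encountered clockwise. Host names and token counts here are made up; Dynamo sizes the number of virtual nodes to each host's capacity.

```python
# Virtual nodes: each physical host owns several ring positions (tokens).
import hashlib, bisect

def h(s: str) -> int:
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class VirtualNodeRing:
    def __init__(self, hosts, tokens_per_host=4):
        self.ring = sorted(
            (h(f"{host}#{i}"), host)
            for host in hosts
            for i in range(tokens_per_host)   # more tokens = a bigger share
        )
        self.positions = [p for p, _ in self.ring]

    def owner(self, key: str) -> str:
        # First token clockwise from hash(key), mapped back to its host.
        i = bisect.bisect_right(self.positions, h(key)) % len(self.ring)
        return self.ring[i][1]

ring = VirtualNodeRing(["node-A", "node-B"], tokens_per_host=4)
print(ring.owner("shopping-cart:alice"))
```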

Partitioning: virtual nodes
Advantage: balanced load distribution
- If a node becomes unavailable, its load is evenly dispersed among the available nodes
- If a node is added, it accepts an equivalent amount of load from the other available nodes
- The # of virtual nodes per system can be based on the capacity of that node
  - Makes it easy to support changing technology and the addition of new, faster systems

Replication
- Data is replicated on N hosts (N is configurable)
- A key is assigned a coordinator node (via hashing) = the main node
  - The coordinator is in charge of replication
  - The coordinator replicates keys at the N-1 clockwise successor nodes in the ring

Dynamo Replication
- The coordinator replicates keys at the N-1 clockwise successor nodes in the ring
- Example with N = 3:
  - Node 8's keys are also replicated at nodes 10 and 14
  - Node 10's keys are also replicated at nodes 14 and 1
  - Node 14's keys are also replicated at nodes 1 and 3
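A sketch of building the replica set (the preference list) as the coordinator plus its N-1 clockwise successors on the example ring. In the real system, ring positions that map to the same physical host would be skipped; this sketch ignores that detail.

```python
# Preference-list sketch: coordinator plus its N-1 clockwise successors.
nodes = [1, 3, 8, 10, 14]        # ring positions in clockwise order

def preference_list(coordinator, n_replicas=3):
    i = nodes.index(coordinator)
    return [nodes[(i + k) % len(nodes)] for k in range(n_replicas)]

for c in [8, 10, 14]:
    print(c, "->", preference_list(c))
# 8 -> [8, 10, 14]; 10 -> [10, 14, 1]; 14 -> [14, 1, 3]
```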

Versioning
- Not all updates may arrive at all replicas
  - Clients may modify or read stale data
  - Application-based reconciliation
- Each modification of the data is treated as a new version
- Vector clocks are used for versioning
  - They capture causality between different versions of the same object
  - A vector clock is a set of (node, counter) pairs
  - It is returned as a context from a get() operation
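A minimal sketch of vector-clock comparison, representing the set of (node, counter) pairs as a dictionary: one version supersedes another only if it is at least as up to date for every node; otherwise the versions are concurrent and both are returned for reconciliation.

```python
# Vector clocks as {node: counter} maps (a set of (node, counter) pairs).
def descends(a: dict, b: dict) -> bool:
    # Version `a` descends from `b` if it has seen at least as many updates
    # from every node mentioned in `b`.
    return all(a.get(node, 0) >= counter for node, counter in b.items())

def compare(a: dict, b: dict) -> str:
    if descends(a, b):
        return "a supersedes b"
    if descends(b, a):
        return "b supersedes a"
    return "conflict: both versions are returned for reconciliation"

v1 = {"nodeA": 2, "nodeB": 1}
v2 = {"nodeA": 2, "nodeB": 2}   # v1 plus one more update handled by nodeB
v3 = {"nodeA": 3, "nodeB": 1}   # v1 plus a concurrent update handled by nodeA
print(compare(v2, v1))          # a supersedes b
print(compare(v2, v3))          # conflict: both versions are returned ...
```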

Availability
Configurable values
- R: minimum # of nodes that must participate in a successful read operation
- W: minimum # of nodes that must participate in a successful write operation
Metadata hints to remember the original destination
- If a node was unreachable, the replica is sent to another node in the ring
- Metadata sent with the data contains a hint stating the original desired destination
- Periodically, a node checks if the originally targeted node is alive
  - If so, it will transfer the object and may delete it locally to keep the # of replicas in the system consistent
Data center failure
- The system must handle the failure of a data center
- Each object is replicated across multiple data centers

Storage Nodes
Each node has three components
1. Request coordination
  - The coordinator executes read/write requests on behalf of requesting clients
  - A state machine contains all the logic for identifying the nodes responsible for a key, sending requests, waiting for responses, performing retries, processing replies, and packaging the response
  - Each state machine instance handles one request
2. Membership and failure detection
3. Local persistent storage
  - Different storage engines may be used depending on application needs:
    - Berkeley Database (BDB) Transactional Data Store (the most popular)
    - BDB Java Edition
    - MySQL (for large objects)
    - In-memory buffer with a persistent backing store

Amazon S3 (Simple Storage Service)
- Commercial service that implements many of Dynamo's features
- Storage via web services interfaces (REST, SOAP, BitTorrent)
- Stores more than 449 billion objects
- 99.9% uptime guarantee (43 minutes of downtime per month)
- Proprietary design
- Stores arbitrary objects up to 5 TB in size
- Objects are organized into buckets; within a bucket, each object is identified by a unique user-assigned key
- Buckets & objects can be created, listed, and retrieved via REST or SOAP
  - http://s3.amazonaws.com/bucket/key
- Objects can be downloaded via HTTP GET or the BitTorrent protocol
  - S3 acts as a seed host and any BitTorrent client can retrieve the file
  - Reduces bandwidth costs
- S3 can also host static websites

The end