TCPeer: Rate Control in P2P over IP Networks


Kolja Eger and Ulrich Killat
Institute of Communication Networks, Hamburg University of Technology (TUHH), 21071 Hamburg, Germany
{eger, killat}@tu-harburg.de

Abstract. The prevalent mechanism to avoid congestion in IP networks is the control of the sending rate with TCP. Dynamic routing strategies at the IP layer are not deployed because of problems like route oscillations and out-of-order packet deliveries. With the adoption of P2P technology, routing is also done in these overlay networks: with multi-source download protocols, peers upload to and download from other peers in parallel. Based on congestion pricing for IP networks, this paper proposes a rate control algorithm for P2P over IP networks. A peer adopts the functionality of TCP and extends the congestion window mechanism with information from the overlay network. Thus, a sending peer is able to shift traffic from a congested route to an uncongested one. This change in the rate allocation is balanced by other peers in the overlay; hence, the receiving peers experience no degradation of their total download rate.

Key words: Congestion pricing, Cross-layer optimisation, Rate control, P2P networks

1 Introduction

Peer-to-peer (P2P) networks have gained in popularity in recent years. In particular, file-sharing applications are used in these overlay networks. The advantages are obvious: while in the client-server architecture the total load must be carried by the server(s), in a P2P network it is distributed among the users, because each peer acts as a client and a server at the same time. A very interesting approach is the multi-source download (or swarming) principle, which is normally applied to large files like movies or music. The file of interest is fragmented into blocks. When a peer completes the download of a single block, it offers it to other peers that have not yet downloaded this block.
Thus, a file request of a user results in several requests to different peers for different parts of the file, and consequently parts of the file are downloaded from multiple peers in parallel. Most of the research in this area studies the efficiency and fairness of these P2P protocols at the overlay network. For example, [1] and [2] study the popular BitTorrent application [3] analytically and by simulation, respectively, discussing the order in which blocks are downloaded at different peers and the choice of peers to which data is uploaded.

In this paper we study the interaction between the P2P overlay and (in the case of the Internet) the underlying IP network. With multi-source download protocols, peers upload to and download from multiple peers in parallel. Thus, by controlling the sending rates between the peers we may achieve two desirable properties: load balancing at the uploading peers, and the avoidance of congested links in the IP network. Considering load balancing, P2P technology has already found its way into content distribution networks [4], but most of that work assumes that a file request is routed (in a sophisticated way) to a single server. Multi-source download protocols adapt faster to changes at the servers or in the network because data is requested from multiple peers in parallel. Hence, the total bandwidth of all serving peers can be allocated in a fair manner to the client peers. Furthermore, congestion can also be avoided: when a connection between two peers becomes overloaded, the uploading peer can favour another peer on an uncongested route and thus use its upload bandwidth more efficiently. The downloading peer, in turn, can ask other providers of the file for a higher download rate, and can thus receive the same total download rate as in an uncongested network. In this paper we model the allocation of resources for P2P over IP networks as a cross-layer optimisation problem. Our approach extends congestion pricing for IP networks as proposed in [5-7]. In IP networks the TCP sender controls the sending rate. The corresponding TCP flow consumes bandwidth at each hop along its path to the sink, and the routers charge a price for the usage of bandwidth; these prices are summed up to a path price for each TCP flow and control the future sending rate of the flow. Besides single-path routing, multi-path routing has also been studied with congestion pricing [8,9].
In these papers traffic is sent between a specific source-destination pair over multiple paths; to support this, [9] proposes the usage of an overlay network. In our approach we take advantage of the overlay formed by a P2P application. Furthermore, with multi-source downloads, routing decisions in the overlay are made with respect to blocks of the file rather than specific source-destination pairs. We denote our approach as TCPeer, because a peer adopts the functionality of TCP and extends it with information from the overlay network. The service-requesting peers in the overlay make price offers depending on their download rates. The upload bandwidths of the serving peers are allocated based on the price offers of the remote peers minus the path prices charged by the links along the routes of these flows. Flows with high price offers and low path prices receive higher rates than others. This work builds on previous work by the authors [10-12], in which only the overlay network is taken into account. To the best of our knowledge this paper is the first to study the allocation of resources of P2P over IP networks in a single model.

The paper is structured as follows. In Section 2 we model the resource allocation problem of P2P over IP networks as an optimisation problem. A distributed implementation is discussed in Section 3. First simulation results are presented and discussed in Section 4. Section 5 concludes the paper.

2 Network Model

We model the allocation of resources in the network as a cross-layer optimisation problem. In general, the network consists of three different entities: the servers^1 or service providers, the clients or service customers, and the links or service carriers. We associate a client with a user who requests a file for download. To differentiate between the entities in a mathematical model we introduce the set of servers S, the set of clients C, and the set of links L. The capacities of the servers and the links are finite and are denoted by C_s and C_l, respectively. In a P2P application that supports multi-source download, a client uses several flows to different servers to download a file. We denote the set of servers that can be used by a client c as S(c); conversely, C(s) is the set of clients which can be served by server s. The client/server architecture can be interpreted as the special case of a P2P application where the set S(c) consists of a single server. Furthermore, we define a flow as a rate allocation x_{sc} from a server s \in S to a client c \in C over a route, which is a non-empty subset of all links, denoted as L(s,c). To identify the flows that use link l we introduce the indicator \delta^l_{sc}, where \delta^l_{sc} = 1 if l \in L(s,c) and \delta^l_{sc} = 0 otherwise. Suppose the utility of a client c is defined by a concave, strictly increasing utility function U_c, which depends on the total download rate y_c. Thereby, y_c is the sum of the rates x_{sc} from all servers which are known to c and have parts of the file in which c is interested.
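The notation can be made concrete with a toy instance. The following sketch is our own illustration (the peer and link identifiers, routes and rates are invented, not taken from the paper): it builds the sets S and C, the routes L(s, c) as non-empty link sets, and evaluates the indicator \delta^l_{sc} and a total download rate y_c.

```python
# A toy instance of the notation (our own illustration; peers, links and
# routes are invented): servers S, clients C, routes L(s, c) as non-empty
# link sets, and the indicator delta_sc^l from the model.
S = {"s1", "s2"}                                  # servers
C = {"c1", "c2"}                                  # clients
L_route = {("s1", "c1"): {"l1"},                  # L(s, c): links on route s -> c
           ("s1", "c2"): {"l1", "l2"},
           ("s2", "c1"): {"l3"},
           ("s2", "c2"): {"l2", "l3"}}

def delta(l, s, c):
    """delta_sc^l = 1 iff link l lies on the route from server s to client c."""
    return 1 if l in L_route.get((s, c), set()) else 0

# Total download rate y_c of client c1: sum of x_sc over the servers in S(c1)
x = {("s1", "c1"): 3.0, ("s2", "c1"): 2.0}        # example rates x_sc
y_c1 = sum(x[s, "c1"] for s in S)
print(delta("l1", "s1", "c1"), y_c1)              # 1 5.0
```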
Similar to the rate control algorithm for IP networks [5], we model the resource allocation for an IP network with a P2P overlay as the global optimisation problem

SYSTEM:
  maximise \sum_{c \in C} U_c(y_c)    (1)
  subject to
    \sum_{s \in S(c)} x_{sc} = y_c,  c \in C    (2)
    \sum_{c \in C(s)} x_{sc} \le C_s,  s \in S    (3)
    \sum_{s \in S} \sum_{c \in C(s)} \delta^l_{sc} x_{sc} \le C_l,  l \in L    (4)
  over x_{sc} \ge 0    (5)

Hereby, maximising the aggregated utility of the service rates y_c over all service customers is the objective of the whole system. The problem is constrained by the capacities of the servers and the links. Although servers and links face the same kind of constraint, we differentiate between both in the model because of their different functionalities: a server can freely allocate its bandwidth over its clients, whereas a link only forwards or discards packets. With a concave utility function this optimisation problem has a unique optimum with respect to y_c. The rates x_{sc} are not necessarily unique at the optimum, because different rate allocations may sum up to the same total download rate y_c. Thus, many possible rate allocations may exist with respect to x_{sc}.

The Lagrangian of (1)-(5) is

L(x, y; \lambda, \mu, \nu; m, n) = \sum_{c \in C} (U_c(y_c) - \lambda_c y_c) + \sum_{s \in S} \sum_{c \in C(s)} x_{sc} (\lambda_c - \mu_s - \sum_{l \in L(s,c)} \nu_l) + \sum_{s \in S} \mu_s (C_s - m_s) + \sum_{l \in L} \nu_l (C_l - n_l),    (6)

where \lambda_c, \mu_s and \nu_l are Lagrange multipliers and m_s \ge 0 and n_l \ge 0 are slack variables. Like in [6], the Lagrange multipliers can be interpreted as prices for the usage of resources. Hence, we say \lambda_c is the price per unit offered by the customer c, and \mu_s and \nu_l are the prices per unit charged by the server s and the link l, respectively. By looking at the Lagrangian in (6) we see that the global optimisation problem (1)-(5) is separable into sub-problems for the clients, the servers and the links. Since maximising a sum is equivalent to maximising each summand, we can decompose the Lagrangian into the sub-problems

CLIENT_c:
  maximise U_c(y_c) - \lambda_c y_c    (7)
  over y_c \ge 0    (8)

SERVER_s:
  maximise \sum_{c \in C(s)} (\lambda_c - \sum_{l \in L(s,c)} \nu_l) x_{sc}    (9)
  subject to \sum_{c \in C(s)} x_{sc} \le C_s    (10)
  over x_{sc} \ge 0    (11)

LINK_l:
  maximise \nu_l (C_l - n_l)    (12)
  over \nu_l \ge 0    (13)

^1 We use the terms server and client also in the context of P2P networks: a server is a peer which offers a service and a client is a peer which requests a service. A peer can be a server and a client at the same time.

Each sub-problem needs only locally available information; a server merely has to be informed about the price offers \lambda_c of all connected clients minus the prices charged by the links along the used routes. The economic interpretation of the sub-problems is as follows. A client is selfish and tries to maximise its own utility, which depends on the rate y_c. However, the client has to pay a price for using bandwidth. Since \lambda_c is a price per unit bandwidth, the product \lambda_c y_c reflects the total price that client c pays.
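The CLIENT sub-problem has a simple closed form for the logarithmic utility used later in the paper (Eq. (14)). The following sketch (our own check; the numeric values are illustrative, not from the paper) verifies it by brute force.

```python
# The CLIENT_c sub-problem (7)-(8) in closed form for the logarithmic
# utility U_c(y_c) = w_c*ln(y_c) used later in the paper (Eq. (14)):
# maximising w_c*ln(y) - lambda_c*y over y >= 0 yields y* = w_c/lambda_c.
# The values below are illustrative, not taken from the paper.
import math

w_c, lam_c = 1000.0, 2.0
y_star = w_c / lam_c                       # analytic maximiser: 500.0

# brute-force check on a coarse grid
grid = [0.1 * k for k in range(1, 20001)]  # y in (0, 2000]
best = max(grid, key=lambda y: w_c * math.log(y) - lam_c * y)
print(y_star, best)                        # both close to 500
```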
On the other hand, a server gets paid for the allocation of bandwidth and is interested in maximising its total revenue. The total revenue is the sum of the revenues for each client, each computed as price times quantity. The price the server earns per unit bandwidth is the price paid by the client minus the cost charged by the links which are needed for forwarding the traffic to the client. Thus, the server also behaves selfishly, since it is interested in its own revenue only. Therefore, it will allocate bandwidth preferentially to flows where clients pay a high price and the costs for using the routes to the clients are low. The links maximise their revenue as well. Since the slack variable n_l can be interpreted as the spare capacity on the link, subtracting n_l from the capacity C_l gives the total rate of traffic forwarded by this link. Multiplying it by the price per unit charged by the link yields the revenue of that link.

As already well studied for IP networks [5], we set the utility function for all clients to

U_c(y_c) = w_c \ln(y_c),    (14)

where w_c is the willingness-to-pay. This utility function ensures a weighted proportionally fair resource allocation. Furthermore, in the context of P2P networks the willingness-to-pay can be interpreted as the contribution of a peer to the network. Thus, we could set w_c to the upload bandwidth which peer c allocates to the P2P application (see [12] for details). Using (14), the optimum (y^*, \lambda^*) of (1)-(5) can be computed by differentiating (6) with respect to y_c and x_{sc}. Hence,

y^*_c = \sum_{s \in S(c)} x^*_{sc} = w_c / \lambda^*_c  if S(c) \neq \{\}    (15)
\lambda^*_c = \mu^*_s + \sum_{l \in L(s,c)} \nu^*_l  if x^*_{sc} > 0    (16)
\lambda^*_c \le \mu^*_s + \sum_{l \in L(s,c)} \nu^*_l  if x^*_{sc} = 0    (17)

Furthermore, we deduce from \partial L / \partial m_s = -\mu_s and \partial L / \partial n_l = -\nu_l that the price at a server or at a link is greater than zero only if the corresponding slack variable is zero. Hence, a price is charged only when the resource is fully used and competition for this resource is present. For the case where the bottlenecks of the network are the uplink connections of the servers and all link prices are zero, a detailed derivation of the equilibrium can be found in [10].

3 Implementation

An implementation in a decentralised architecture can only use locally available information.
Thus, the problems CLIENT, SERVER and LINK from Section 2 provide a good starting point for the development of a distributed algorithm. CLIENT is an optimisation problem with respect to y_c, but in a real implementation a client has no direct influence on its total download rate; it can only vary its price offer \lambda_c. Similarly, a link influences its load only by varying the charged price, which depends on the level of congestion. We assume that a server receives the difference of the price offered by a client minus the price charged by the links along the used route, and adapts its sending rate to the client accordingly. Thus, we propose the following distributed algorithm:

CLIENT_c:
  \lambda_c(t+1) = w_c / \max(\eta, y_c(t)) = w_c / \max(\eta, \sum_{s \in S(c)} x_{sc}(t))    (18)

SERVER_s:
  x_{sc}(t+1) = \max(\epsilon,\ x_{sc}(t) + \gamma x_{sc}(t) (p_{cs}(t) - \frac{\sum_{d \in C(s)} x_{sd}(t) p_{ds}(t)}{C_s}))    (19)

LINK_l:
  \nu_l(t+1) = \max(0,\ \nu_l(t) + \kappa (\sum_{s \in S} \sum_{c \in C(s)} \delta^l_{sc} x_{sc}(t) - C_l))    (20)

where

p_{cs}(t) = \lambda_c(t) - \sum_{l \in L(s,c)} \nu_l(t)    (21)

is the price from client c received by server s, and the parameters \epsilon and \eta are small positive constants. Hence, the service rate x_{sc} does not fall below \epsilon. This models a kind of probing, which is necessary to keep receiving information about the price p_{cs}. Similarly, \eta ensures a bounded price offer if the total download rate is zero.

(19) can be interpreted as follows. Since p_{cs} is the price per unit bandwidth which server s receives from client c, the product x_{sc} p_{cs} is the total price which c pays to s. Thus, the sum in the numerator of (19) represents the total revenue of server s. Multiplying the total revenue by x_{sc}/C_s specifies the average revenue server s generates by allocating the bandwidth x_{sc}. If the total price paid by client c is higher than this average (i.e. the left term in the inner bracket of (19) is greater than the right one), then server s increases the rate to client c; otherwise, server s decreases the rate. The distributed algorithm represents a market for bandwidth where prices control the rate allocations. Since the capacity of a server or a link is a non-storable commodity, the market is a spot market, where the current supply and demand affect the future price. A proof that the stable point of the distributed algorithm corresponds to the optimum of (1)-(5) is presented in [10] for the case when all link prices are zero (i.e. the bottlenecks in the network are the uplink capacities of the servers). Similarly, it can be proven for the extended model in this paper.
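A minimal flow-level sketch of the iteration (18)-(21) is given below. It is our own illustration: the topology (two servers, two clients, every client downloading from both servers), the weights and the step size are invented, and all link prices \nu_l are assumed zero, so the server uplinks are the only bottlenecks and p_{cs} = \lambda_c.

```python
# Flow-level sketch of the distributed algorithm (18)-(21).
# Assumptions (ours, for illustration): two servers, two clients, every
# client downloads from both servers, and no core link is congested, so
# all link prices nu_l are zero and p_cs = lambda_c as in (21).
GAMMA, EPS, ETA = 0.05, 1e-6, 1e-6   # step size gamma and floors eps, eta
C = {"s1": 10.0, "s2": 10.0}         # server uplink capacities C_s
w = {"c1": 10.0, "c2": 20.0}         # willingness-to-pay w_c
servers, clients = list(C), list(w)
x = {(s, c): 1.0 for s in servers for c in clients}   # rates x_sc

for _ in range(1000):
    # CLIENT (18): price offer from the current total download rate
    y = {c: sum(x[s, c] for s in servers) for c in clients}
    lam = {c: w[c] / max(ETA, y[c]) for c in clients}
    # SERVER (19): shift rate towards clients paying more than the
    # server's average revenue per unit of capacity
    for s in servers:
        revenue = sum(x[s, d] * lam[d] for d in clients)
        for c in clients:
            x[s, c] = max(EPS, x[s, c] + GAMMA * x[s, c] * (lam[c] - revenue / C[s]))

y = {c: sum(x[s, c] for s in servers) for c in clients}
print(y)   # roughly {'c1': 6.67, 'c2': 13.33}: y_c proportional to w_c
```

At the fixed point both clients offer the same price (w_c / y_c is equal for all c) and the full capacity of 20 units is allocated, matching the weighted proportionally fair optimum (15)-(16).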
The link price in (20) is greater than zero when the input rate exceeds the capacity of a link, i.e. when the link is overloaded. (20) corresponds to the link price rule PC1 in [13]. Thus, although we extend the congestion pricing model to P2P networks, we do not change the router implementation of previous work for IP networks. Furthermore, we show in the following that the rate adaptation of a server in (19) reduces to a differential equation very similar to the primal algorithm (Equation 5) in [6] for the case of a single client-server pair. A client-server pair is a server handling a single client which is served only by this server. Hence, C(s) = {c} and S(c) = {s} hold for a server s and a client c, and (19) reduces to

\frac{d}{dt} x_{sc}(t) = \gamma (w_c - x_{sc}(t) \sum_{l \in L(s,c)} \nu_l(t)) (1 - \frac{x_{sc}(t)}{C_s}).    (22)

When the uplink of the server is not the bottleneck, the term (1 - x_{sc}/C_s) is just a scaling of the step size \gamma and (22) corresponds to Equation 5 in [6].

Packet-level implementation: The distributed algorithm discussed above considers the problem at flow level. Now we transform the algorithm to packet level. Like in any other TCP implementation, the sending rate is specified by the congestion window cwnd, the number of packets sent per round-trip time (RTT). Hence, the sending rate can be computed as x = cwnd/RTT. The congestion window is adapted on each acknowledgement; acknowledgements are received at rate cwnd/RTT, assuming that the RTT is constant over small time intervals. Therefore, updating the congestion window on each acknowledgement with

\Delta cwnd_{sc}(t) = \gamma\ RTT_{sc}(t) (p_{cs}(t) - \frac{1}{C_s} \sum_{d \in C(s)} \frac{cwnd_{sd}(t)}{RTT_{sd}(t)} p_{ds}(t))    (23)

at packet level corresponds to (19) at flow level. The link rule in (20) computes the price based on the current utilisation only and does not take the queue size into account. To penalise large queues, (20) can be extended to

\nu_l(t+T_l) = \max(0,\ \nu_l(t) + \kappa (\alpha_l (b_l(t) - b_0) + \sum_{s \in S} \sum_{c \in C(s)} \delta^l_{sc} x_{sc}(t) - C_l)),    (24)

where b_0 is the target and b_l(t) the current queue size. \alpha_l \ge 0 weights the influence of the queue size compared to the current utilisation, and T_l is the time between two price updates. (Similarly, we assume that (18) is updated every T_c in a packet-level implementation.) (24) corresponds to PC3 in [13]. In this work we assume that the explicit prices of the links and clients are communicated to the server (e.g. in the header of the packets).
Although this does not conform to the IP and TCP standards, it demonstrates the functionality of the proposed approach in general. For IP networks a large body of research (e.g. [13,14]) studies marking strategies where a single bit (see explicit congestion notification (ECN) [15]) is used to convey the pricing information. We assume this approach is also applicable to our model, but this is part of future research (e.g. ECN could be used to transport the price information of the links, while the P2P protocol communicates the price offers from the clients).
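The two packet-level rules can be sketched as small helper functions. This is our own sketch, not the paper's implementation: the numeric values are invented, and the per-ACK bookkeeping of a real TCP sender is omitted.

```python
def cwnd_increment(c, cwnd, rtt, p, C_s, gamma=0.1):
    """Change of cwnd_sc on one ACK from client c, following (23).

    cwnd, rtt, p are dicts keyed by client id over all clients d of
    server s; C_s is the server's uplink capacity in packets per second.
    """
    # average revenue per unit capacity: sum_d x_sd * p_ds / C_s, x = cwnd/RTT
    avg = sum(cwnd[d] / rtt[d] * p[d] for d in cwnd) / C_s
    return gamma * rtt[c] * (p[c] - avg)

def link_price_step(nu, input_rate, C_l, backlog, b0=5.0, kappa=0.1, alpha=0.1):
    """One link price update following (24): utilisation plus queue penalty."""
    return max(0.0, nu + kappa * (alpha * (backlog - b0) + input_rate - C_l))

# Single client sending at exactly the uplink capacity: the received price
# equals the average revenue per unit, so the window stays constant.
cwnd, rtt, p = {"c1": 87.4}, {"c1": 0.1}, {"c1": 1.144}
print(cwnd_increment("c1", cwnd, rtt, p, C_s=874.0))   # ~0.0 (equilibrium)

# An overloaded link (input above capacity, queue above target) raises its price.
print(link_price_step(nu=0.0, input_rate=900.0, C_l=874.0, backlog=12.0))  # > 0
```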

4 Evaluation

To demonstrate the functionality of the distributed algorithm we study two simple examples in ns-2. The network of interest is depicted in Figure 1.

Fig. 1. Network

The first example consists of the two servers s1 and s2 and the two clients c1 and c2. The bottlenecks in the network are the uplink capacities of the servers, with C_{s1} = C_{s2} = 10 Mbps. The capacity of the links in the core (i.e. the links in Figure 1 connecting the routers denoted by r) and of the downlinks to the clients is much larger, e.g. C = 100 Mbps. The delay of all links is 10 ms. Both clients can download from both servers in parallel whenever they are online; this represents a small P2P overlay. At t = 0 s only server s1 and client c1 are online. c2 and s2 are switched on at t = 20 s and t = 40 s, respectively. At t = 60 s client c1 goes offline, followed by server s1 at t = 80 s. The total download rate of the two clients and the price information received by the servers (see (21)) are depicted in Figure 2. For the first 20 seconds only c1 and s1 are active and the download rate of c1 converges to the capacity of s1 (around 874 packets per second with a packet size of 1500 B). With the link rule in (20) the queue size at the uplink of s1 converges to 12 packets. The price p_{c1s1} converges to a steady-state value of around 1.15, which can be computed by dividing the willingness-to-pay of c1 by the server capacity in packets [10]. When c2 becomes active at t = 20 s, server s1 receives higher payments because the total willingness-to-pay in the network doubles. The download rates of both clients then converge to a fair share of the bottleneck capacity. Hereby, oscillations occur since the estimated download rate (and consequently the price offer) of the new client c2 is inaccurate at the beginning.
As depicted in Figure 2(b), the price received by the server can fall below zero when the server allocates too much bandwidth: the uplink bandwidth is exceeded, so the link price of the server's uplink increases. When a new server comes online at t = 40 s and one client leaves at t = 60 s, the spare capacity is used efficiently by the remaining online client(s). Thus, this example shows that the proposed algorithm can be used for load balancing because it realises a fair allocation of the servers' capacities.

Fig. 2. Example 1: (a) total download rate y_c [Pkts/s], (b) received price p_cs (w_c = 1000 for all c in C, \gamma = 0.1 RTT, PC1: \kappa = 0.1, T_l = T_c = 0.1 s)

For the case where the uplink capacities are the only bottlenecks in the network, further simulation results at flow level can be found in previous work of the authors [11,12].

In the following we investigate the influence of congestion in the core of the network. Therefore, we reduce the capacity of the links r1-r3, r1-r4, r2-r3 and r2-r4 in Figure 1 to C_core = 12 Mbps. To avoid large queuing delays we compute link prices with (24) in this example. We start with the two servers and the two clients from the previous example, but new client-server pairs become active during the simulation: s3-c3, s4-c4, s5-c5 and s6-c6 start at 20 s, 40 s, 60 s and 80 s, respectively. Furthermore, only c1 and c2 are served in parallel by s1 and s2; the other clients are served by a single server only (indicated by the same number). The total download rate for each client is depicted in Figure 3. At the beginning the clients c1 and c2 share the available bandwidth equally between each other, as in the previous example. Looking at the congestion windows of the four connections in Figure 4, we see that not all connections are used equally; it suffices that the total download rate of the two clients is the same. When s3 and c3 start at t = 20 s, the link r1-r3 becomes congested. This can be seen from the increasing link price in Figure 5. Thus, the congestion window of s1-c1 as well as the total download rate of client 1 decrease. However, this increases the price offer of client 1 and therefore the congestion window of s2-c1. The servers s1 and s2 shift traffic from one connection to the other (see Fig. 4) and the total download rates of the three active clients converge to a fair and efficient allocation (see Fig. 3). The link price of r1-r3 falls back to zero, which indicates that it is no longer a bottleneck in the network. When the next client-server pair starts at t = 40 s, the prices for the links r1-r3 and r1-r4 increase and stagnate. The servers s1 and s2 again shift traffic but cannot avoid congestion in the core network. However, by changing the rates they still ensure a fair rate allocation, since all four clients receive roughly the same total download rate.

Fig. 3. Example 2: Total download rate y_c per client (w_c = 1000, \gamma = 0.1 RTT, PC3: \kappa = 0.1, \alpha = 0.1, b_0 = 5, T_l = T_c = 0.1 s)

Fig. 4. Example 2: Congestion windows cwnd of the connections s1-c1, s1-c2, s2-c1 and s2-c2

Fig. 5. Example 2: Link prices of r1-r3, r1-r4, r2-r3 and r2-r4

At t = 60 s the connection s5-c5 starts. Thereby, the link r2-r3 becomes overloaded and its link price increases. Now the network is in a congested state, where a trade-off between fairness and efficiency has to be made. Only connection s2-c2 uses the uncongested core link r2-r4; thus, c2 has the largest download rate. Since most of the server capacity is allocated to the connection s2-c2, client 5 and client 4 benefit from small link prices at r2-r3 and r1-r4. The link price on r1-r3 is the highest one; hence, client 1 and client 3 get smaller download rates than the other clients. Finally, with s6-c6 starting at t = 80 s, all core links are congested and the servers s1 and s2 control the rates to c1 and c2 such that bandwidth is allocated in a fair manner. The price oscillates around 1.43, which is the equilibrium price. This example demonstrates that the proposed algorithm is able to shift traffic from congested to uncongested routes. Thus, the resources of the network are used efficiently while preserving fairness between clients.

5 Conclusion and Future Work

This paper models the allocation of resources in an IP network when it is used by a P2P overlay. It extends the well-known congestion pricing approach for IP networks to multi-source download P2P protocols. Furthermore, a distributed algorithm is proposed in which sending peers control their rates depending on the price offers of the remote peers and the path prices charged by the routers in the IP network. Since sending rates depend on the price offers of the remote peers, the algorithm ensures fairness between peers. Furthermore, through the path price a peer is aware of the level of congestion in the underlying IP network. Thus, it can shift upload bandwidth from congested to uncongested flows. Because of the resulting change in the price offers of the receiving peers, other senders will adapt their sending rates accordingly. The rate allocation of all peers converges to an efficient and fair allocation of bandwidth in the network. Thereby, P2P traffic avoids congested links whenever uncongested routes between some peers exist. We validate the functionality of the new approach analytically and by simulations for a small network, assuming explicit price information. Part of future work is to show the functionality of the algorithm for limited price information using ECN, to be compliant with the IP protocol. Since the deployment of ECN is slow, another possible direction is to study the interaction between P2P and the predominantly used TCP versions, which can be modelled by different utility functions as described in [16].

References

1. Qiu, D., Srikant, R.: Modeling and performance analysis of BitTorrent-like peer-to-peer networks. Computer Communication Review 34(4) (2004) 367-378
2. Bharambe, A., Herley, C., Padmanabhan, V.: Analyzing and improving BitTorrent performance. Technical Report MSR-TR-2005-03, Microsoft Research (2005)
3. Cohen, B.: Incentives build robustness in BitTorrent. In: Proc. 1st Workshop on Economics of Peer-to-Peer Systems, Berkeley (2003)
4. Turrini, E., Panzieri, F.: Using P2P techniques for content distribution internetworking: A research proposal. In: Proc. IEEE P2P 2002 (2002)
5. Kelly, F.: Charging and rate control for elastic traffic. European Transactions on Telecommunications 8 (1997) 33-37
6. Kelly, F., Maulloo, A., Tan, D.: Rate control in communication networks: shadow prices, proportional fairness and stability. Journal of the Operational Research Society 49 (1998) 237-252
7. Low, S., Lapsley, D.: Optimization flow control, I: basic algorithm and convergence. IEEE/ACM Transactions on Networking 7(6) (1999) 861-874
8. Wang, W., Palaniswami, M., Low, S.: Optimal flow control and routing in multi-path networks. Perform. Eval. 52(2-3) (2003) 119-132
9. Han, H., Shakkottai, S., Hollot, C.: Overlay TCP for multi-path routing and congestion control (2003)
10. Eger, K., Killat, U.: Resource pricing in peer-to-peer networks. IEEE Communications Letters 11(1) (2007) 82-84
11. Eger, K., Killat, U.: Fair resource allocation in peer-to-peer networks. In: Proc. SPECTS 06, Calgary, Canada (2006) 39-45
12. Eger, K., Killat, U.: Bandwidth trading in unstructured P2P content distribution networks. In: Proc. IEEE P2P 2006, Cambridge, UK (2006) 39-46
13. Athuraliya, S., Low, S.: Optimization flow control, II: Implementation. Technical report, Melbourne University (2000)
14. Zimmermann, S., Killat, U.: Resource marking and fair rate allocation. In: Proc. ICC 2002, New York. Volume 2 (2002) 1310-1314
15. Ramakrishnan, K., Floyd, S., Black, D.: The addition of explicit congestion notification (ECN) to IP. RFC 3168 (2001)
16. Low, S., Srikant, R.: A mathematical framework for designing a low-loss, low-delay internet. Networks and Spatial Economics 4 (2004) 75-101