Modelling the Optimum Chunk Size for Efficient Communication over Named Data Networking


Xiaoke Jiang 1,2,3, Jun Bi 1,2,3, and Lifeng Lin 4

1 Institute for Network Sciences and Cyberspace, Tsinghua University
2 Department of Computer Science, Tsinghua University
3 Tsinghua National Laboratory for Information Science and Technology (TNList)
4 College of Computer and Information Sciences, Fujian Agriculture and Forestry University

Abstract

Named Data Networking (NDN) shifts the focus of communication from where to what by operating on named data at layer-3 directly. To this end, NDN data chunks bundle additional bits, e.g., a signature for security, a freshness period for caching, etc., with application data. Unfortunately, those bits amount to up to hundreds of bytes, and become significant overhead during data transmission. Large overhead makes network data transmission inefficient, while enlarging the chunk size blindly to increase the payload ratio cannot solve this issue, since giant chunks are more likely to be lost. This inspires us to model the efficiency of network data retrieval in terms of the goodput-to-throughput (G2T) ratio, which is able to determine the optimum chunk size that maximizes G2T. We model the G2T of data retrieval via a single hop, multiple hops or multiple paths in NDN, and adopt a two-state continuous-time Markov chain to analyze the influence of the frame loss rate of links. Our model is validated by simulations; the average relative error between experimental results and those derived from our mathematical model is 0.97%. Furthermore, we design and implement an Adaptive Chunk Mechanism (ACM) to enable applications to leverage the optimum chunk size, which improves G2T by 12% compared to the default chunk size in our experiments.
Keywords: Named Data Networking; Information Centric Networking; Modelling

Emails: shock.jiang@gmail.com, junbi@tsinghua.edu.cn, linlf.2007@163.com

1 Introduction

Named Data Networking (NDN) [1, 2, 3], a promising future Internet architecture, provides information-centric primitives operating on data at layer-3 directly. Data chunks become the basic units to which transmission, caching and security are applied. To this end, NDN data chunks bundle additional bits, e.g., a signature for data verification, a freshness period for caching, etc., with application data. From the perspective of applications, the payload in the chunk, i.e., the application data, is what ultimately matters. Unfortunately, those additional bits usually amount to hundreds of bytes in state-of-the-art implementations, which becomes significant overhead for communication over NDN. Compared to the header of TCP/IP packets, the overhead¹ of an NDN data chunk is too large to be neglected during network communication. Enlarging the chunk size blindly to increase the payload ratio cannot solve this issue, since giant chunks are more likely to be lost. Therefore, a proper chunk size is of great value for NDN communication, given that frame/packet loss occurs on links.

Here we use an example to show how the overhead and the chunk size affect the efficiency of network data retrieval when frame loss leads to packet loss. But first we need an indicator to measure the efficiency. Payload ratio is usually used to indicate the percentage of useful bits in a chunk:

$$PayloadRatio = \frac{payload}{payload + overhead}.$$

However, payload ratio is insufficient to measure network transmission efficiency when frame/packet² loss occurs in the network. For a chunk transmitted over NDN, its payload is valid only if the whole packet, including the additional bits, is transmitted successfully. If we extend this scenario to a generalized one wherein many chunks, instead of ONE, are transmitted, it is the percentage of goodput (the sum of the payload in successfully transmitted chunks per time slot, excluding duplicate copies) in the throughput (all the network traffic per time slot) that indicates the portion of useful bits. We define the goodput-to-throughput ratio as follows:

$$G2T = \frac{Goodput}{Throughput} = \frac{payload \cdot (1 - ChunkLossRate)}{payload + overhead} = PayloadRatio \cdot (1 - ChunkLossRate). \quad (1)$$

Compared to payload ratio, G2T indicates the percentage of useful bits in the overall transmitted traffic, rather than that in an isolated still chunk.

¹ "Header" is not used here because the additional bits lie at the front as well as the end of NDN data chunks.

² NDN runs over different services, including Ethernet, WiFi, IP, UDP, TCP and so on, wherein those technologies provide link service between adjacent NDN nodes. If a packet outsizes the MTU defined by the link service, the packet will be fragmented into multiple frames. Loss of any frame, if not recovered by the link service, leads to the loss of the whole packet on the NDN layer (Section 2.3). To avoid misleading terminology, in this paper the transmission data unit of the link service is called a frame, and that of NDN is called a packet; in particular, an NDN Data packet is called a chunk.

Assume there are 21K bytes of application data to be fetched, and the overhead of each chunk is 0.5K bytes. For the link between sender and receiver, the MTU is 1.5K bytes and the frame loss rate is 2%, without retransmission. Figure 1 illustrates two possible solutions with different chunk sizes to transmit the application data. Solution A leverages 21 small chunks, which introduces significant overhead (0.5K x 21) and requires 21 frames for data transmission. Solution B constructs a 21.5K giant chunk, which introduces only 0.5K overhead, and the chunk will be fragmented into 15 frames.

Figure 1: Two solutions to transmit 21K data from data producer (P) to consumer (C). Solution A: 1.5K-size chunks x 21; Solution B: 21.5K-size chunk x 1. MTU: 1.5K; frame loss rate: 2%; overhead per chunk: 0.5K; app data: 21K.

As we can see, Solution A minimizes its chunk loss rate with small chunks (1.5K = MTU), while Solution B maximizes its payload ratio with the largest chunk. However, neither achieves the best performance, as shown in Table 1, since each facilitates one factor but ignores the other.
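The G2T values in Table 1 can be reproduced numerically; the following is a sketch of Equation (1) combined with fragmentation, using the parameters of the example (sizes in bytes):

```python
import math

# Reproduce Table 1: G2T of the three segmentation solutions.
OVERHEAD = 500        # overhead per chunk
MTU = 1500            # frame size of the link
FRAME_LOSS = 0.02     # frame loss rate, no link-layer retransmission
APP_DATA = 21000      # application data to fetch

def g2t(payload):
    """Goodput-to-throughput ratio for one chunk size (Eq. 1)."""
    chunk = payload + OVERHEAD
    frames = math.ceil(chunk / MTU)                  # fragmentation
    chunk_loss = 1 - (1 - FRAME_LOSS) ** frames      # any lost frame kills the chunk
    payload_ratio = payload / chunk
    return payload_ratio * (1 - chunk_loss)

for name, payload in [("A: 1.5K x 21", 1000),
                      ("B: 21.5K x 1", 21000),
                      ("Best: 7.5K x 3", 7000)]:
    r = g2t(payload)
    print(f"{name}: G2T = {r:.1%}, expected traffic = {APP_DATA / r / 1000:.1f}K")
```

Running this yields the G2T and expected-traffic figures of Table 1 (65.3%, 72.1% and 84.4%).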
In fact, the best solution is to segment the data into 7.5K chunks, whose G2T is 84.4%, larger than that of Solution A (65.3%) and Solution B (72.1%), indicating that it requires 24.9K of network-layer traffic to retrieve the 21K application data, while Solutions A and B require 32.1K and 29.1K respectively.

Table 1: Performance Comparison

                    A: 1.5K x 21   B: 21.5K x 1   Best: 7.5K x 3
  Payload Ratio         66.7%          97.7%          93.3%
  Chunk Loss Rate       2%             26.1%          9.6%
  G2T                   65.3%          72.1%          84.4%
  Expected Traffic      32.1K          29.1K          24.9K

Two conclusions are drawn from the example: 1) compared to payload ratio, G2T does better in describing the efficiency of network communication; 2) neither small nor large chunks maximize the efficiency, but there is a chunk size in between that achieves the best performance.

The information-centric design of NDN supports multipath forwarding, multiple data sources and in-network caching in data retrieval, which is beyond the scope of traditional IP's end-to-end service model, and calls for a completely different and sophisticated mathematical model. In this paper we model the G2T of data retrieval via a single hop, multiple hops or multiple paths in NDN, and then adopt a two-state continuous-time Markov chain to analyze the influence of the frame loss rate of links. Furthermore, we design and implement an Adaptive Chunk Mechanism (ACM) to enable applications to leverage the optimum chunk size. We also conduct extensive evaluations to validate our model, and apply ACM to improve an online video player. The contributions of this paper are three-fold.

- We point out the non-negligible overhead in the NDN chunk as a cost of the information-centric design, which is not a "problem" in the TCP/IP architecture and deserves more attention.

- We model the efficiency of data retrieval over NDN, from which the optimum chunk size can be derived. To the best of our knowledge, this is the first NDN model to study the overhead and the optimum chunk size. The model is validated by our extensive simulations; the average relative error between experimental results and those derived from our mathematical model is 0.97%.

- We design and implement an Adaptive Chunk Mechanism (ACM), which leverages our model to adjust the chunk size for better performance in practice. ACM improves G2T by 12% in our experiments.

The rest of this paper is organized as follows. Section 2 introduces the basic design of NDN (Section 2.1), link services (Section 2.2), and application data units, segmentation and fragmentation (Section 2.3). We present the model in Section 3, introduce ACM in Section 4, and then discuss some remaining issues in Section 5. The extensive evaluation of the model and of the ACM prototype is described in Section 6. Finally, we conclude this paper in Section 7.

2 Background of Named Data Networking

2.1 Basic Design of Named Data Networking

As its name implies, every piece of data, rather than every host, is named in the context of NDN [1, 2, 3], which makes data the first-class citizen of the network. NDN defines two kinds of packets, i.e., Interest and Data (or chunk in this paper), both carrying a name as the basic identifier, as shown in Figure 2.

Figure 2: TLV wire format of NDN packets. An Interest packet carries a Name, Selectors (MinSuffixComponents, MaxSuffixComponents, Exclude), a Nonce and an InterestLifetime; a Data packet carries a Name, MetaInfo (ContentType, FreshnessPeriod, FinalBlockId), the Content, and SignatureInfo (SignatureType, KeyLocator) with the SignatureValue.

Communication over NDN is pull-based. Data consumers first express an Interest containing the name of the desired chunk.
The Interest is forwarded based on its name, and leaves soft state that keeps a record of the Interest and its incoming and outgoing interfaces on intermediate routers. Once an Interest meets its name-matched chunk, either at an intermediate router or at the data producer, the chunk is returned following the symmetric path marked by the soft state. We call the node at which an Interest and its chunk meet each other the meeting point. The soft state is erased when the requested chunk is returned or when the state expires. Moreover, the soft state eliminates Interest forwarding loops by dropping duplicate Interests identified by the Nonce field (Figure 2), and aggregates Interests for identical data, effectively enabling data multicast. Another remarkable usage of soft state is adaptive forwarding [4]. By observing the Interest/chunk exchange, intermediate routers are able to select the best available link for Interest forwarding based on realtime traffic. Moreover, if a retrieval failure associated with a pending Interest is detected, NDN routers can re-forward the pending Interest to alternative link(s) for fast recovery, instead of leaving it to the end hosts. It is worth noting that packet loss plays an important role in adaptive forwarding: it is usually used as the signal that triggers link selection and fast failure recovery. As we can see from the above design, the chunk is the basic data unit to which transmission, caching and security are applied. Data is named, signed, requested, transmitted, cached and verified at the granularity of a chunk. To this end, additional bits are contained in the chunk besides the Content field (Figure 2): the Name field contains a persistent unique name, decoupling requests and responses in both the space and time dimensions; ContentType defines the type of the chunk (e.g., NACK [4], Certificate, Data, etc.); FreshnessPeriod defines the maximal time

that routers can take the data as fresh; FinalBlockId indicates the identifier of the final block in a sequence of segments; SignatureInfo contains the signature bits and the information of the keys used to generate the signature. As to engineering implementations, there are two branches of NDN stack prototypes, i.e., NFD [5] and NDNx [6]. Roughly speaking, the typical overhead per chunk in state-of-the-art implementations is about 650 bytes.

2.2 NDN Runs over Everything

NDN only requires a simple best-effort transport as the link service between adjacent NDN nodes. It can run on top of any layer-2 technology or above [1], including Ethernet, WiFi, IP, UDP, TCP and so on. There is even a link protocol, the NDN link protocol (NDNLP) [7], designed for NDN. Those technologies can be categorized into two types based on the services they provide: 1) reliable links, including TCP, WiFi and NDNLP, provide a reliable data transmission service wherein frame loss is recovered by the link itself and is transparent to NDN; 2) best-effort links, including Ethernet, IP and UDP, provide a best-effort service wherein frame loss leads to the loss of the NDN packet. The maximum transmission unit (MTU) is defined by the link service as the largest size of a frame. The primary advantage of a reliable link service is that frame loss will not lead to packet loss. However, a reliable service usually relies on timers and forwarding state for frame retransmission, which introduces implicit expense; what is worse, in the context of NDN, if frame loss is covered by the link, fast failure recovery cannot be triggered, nor can routers select the best links for future Interest forwarding. It is still unclear which link service fits NDN best. We will distinguish different services in our model. At the present stage, the typical NDN running scenario is NDN over UDP/TCP over IP over Ethernet/WiFi, in order to leverage the current IP-based Internet infrastructure.
In the current Internet infrastructure, the de facto MTU of IP is 1500 bytes, since larger IP packets are likely to be blocked by middleboxes (firewalls and NATs). In this case, the typical frame can carry 1472/1460 bytes, since typical IP and UDP headers are 20 and 8 bytes, respectively. Furthermore, the typical end-to-end IP packet loss rate in the current Internet infrastructure is between 1% and 2% [8, 9].

2.3 Application Data Unit, Segmentation and Fragmentation

NDN-based applications produce and/or consume data at the granularity of application data units (ADUs), e.g., an HTTP response. ADUs are encapsulated in data chunks. NDN is able to split an ADU into multiple segments with different segment numbers in their names [10], and each segment is an individual chunk that requires an individual Interest to pull it back. A packet that outsizes the MTU of a link needs to be fragmented when transmitted on that link, which is exactly the same issue as for IP packet transmission. IPv4 fragments packets on routers and reassembles them on the end host [11], while IPv6 performs both fragmentation and reassembly on end hosts [12]. But neither approach can be ported to NDN, for the following reasons: 1) an Interest packet must be delivered to NDN routers completely, otherwise routers do not know what data is requested; 2) data matching, caching and verification on routers are performed at the granularity of a chunk. Therefore, NDN packets must be fragmented, if needed, at the sender side of each link and reassembled at the receiver side. This is the so-called hop-by-hop fragmentation and reassembly (HBH-FR) advocated by the NDN team [13]. It is feasible to avoid fragmentation by segmenting each ADU into chunks smaller than all possible MTUs (at least the de facto MTU), as is done by Solution A in Figure 1.
But this is very likely to be inefficient because: 1) the overhead would be significant, which consumes more bandwidth; and 2) the number of Interests expressed to retrieve the same set of application data may increase severalfold, which greatly aggravates the pressure on routers. Naturally, the question of what the optimum chunk size is for ADU segmentation is raised. This question is answered by our model. If the ADU is too large, our model is able to determine how big each segment should be to maximize G2T; if the ADU is very small, the model can determine how to compress multiple ADUs into one chunk, if possible. Our model also has practical significance, since more and more applications are built over NDN. For now, 4096 bytes is widely adopted by applications as the default payload size of data chunks.

3 Modelling

In this section, we model the optimum chunk size towards the goal of maximizing G2T. First we define some basic notations, as shown in Table 2. For a given application and its running network, the overhead o and the MTU M are more or less fixed. We assume that 1) all links provide best-effort services, and 2) Ω is fixed, or at least has a relatively fixed average value; we will relax these two assumptions later. m and ρ are intermediate variables deduced from existing parameters. p is the variable,

Table 2: Notations in the Model

  o           overhead of a chunk
  M, M_ij     MTU of the i-th link of the j-th path
  Ω, Ω_ij     frame loss rate of the i-th link of the j-th path
  p           payload size of a chunk
  m           the number of frames of a chunk
  ρ           chunk loss rate
  P           the optimum payload size

and (P + o) is the optimum chunk size. Since o is fixed, we let the payload size p represent the chunk size (p + o) in our analysis.

Figure 3: Topology used during the modelling. The consumer (C) hopes to retrieve data produced by P1 and P2 via intermediate routers R1 and R2. Due to the support of multipath, multiple data sources and in-network caching, the possible paths include <P1 C>, <P1 R1 C>, <P2 R2 C>, <R1 C> and <R2 C>, indicating that the consumer does not have a predefined correspondent.

In the rest of this section, we build our model step by step, guided by the topology shown in Figure 3. First we model the G2T of data retrieval via a single hop (<P1 C>, <R1 C>, <R2 C> in Figure 3); then we extend the path to multiple hops (<P1 R1 C>, <P2 R2 C>); and thirdly we take all possible paths into consideration, including retrieving data from multiple data sources, from in-network caches, or by Interest aggregation. This model reflects the fundamental differences between NDN and TCP/IP: for TCP/IP, data is sent from one certain end to another certain end, and routing usually decides a sole certain path between the two ends; for NDN, data can be returned from any node via any path. Furthermore, we study data retrieval via heterogeneous links, including reliable and best-effort links (Section 3.4); finally, we analyze the G2T when the frame loss rate is not fixed but follows the Gilbert loss model [14], which we study with a Markov chain (Section 3.5).

3.1 Data Retrieval via Single Hop

The most basic scenario is retrieving data within one hop, as in the paths <P1 C>, <R1 C> and <R2 C> shown in Figure 3.
For a chunk with overhead o and payload p, the number of frames needed to carry the chunk is

$$m = \frac{o + p}{M}, \quad (2)$$

or, more precisely³, m = ⌈(o + p)/M⌉. Thus, the chunk loss rate is

$$\rho = 1 - (1 - \Omega)^m. \quad (3)$$

³ For simplicity's sake, we use Equation (2) during the modelling, and correct p to make m an integer. The following equations also follow this rule.
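Equations (2)-(3) are easy to check numerically. In the sketch below, o = 650 bytes and M = 1472 bytes follow the typical scenario in the text, Ω = 1% is an assumed loss rate, and the more precise (ceiling) form of Equation (2) is used:

```python
import math

# Quick check of Eqs. (2)-(3): frames per chunk and chunk loss rate.
o, M, omega = 650, 1472, 0.01

def frames(p):
    return math.ceil((o + p) / M)        # precise form of Eq. (2)

def chunk_loss(p):
    return 1 - (1 - omega) ** frames(p)  # Eq. (3)

# The widely used 4096-byte payload needs 4 frames and loses ~3.9% of chunks.
print(frames(4096), chunk_loss(4096))
```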

And the goodput-to-throughput ratio is

$$G2T = \frac{p}{o + p}\,(1 - \Omega)^{\frac{p + o}{M}}. \quad (4)$$

In Equation (4), Ω, o and M are treated as constants, and p is the only variable. We can infer the optimum p which maximizes G2T by taking the derivative of G2T with respect to p, and then letting the derivative be zero:

$$P = \frac{-o\ln(1-\Omega) - \sqrt{o^2\ln^2(1-\Omega) - 4Mo\ln(1-\Omega)}}{2\ln(1-\Omega)}. \quad (5)$$

The value of P derived from Equation (5) needs correction. From the perspective of engineering, the correction means fully filling up each frame; mathematically, it means making m an integer, i.e., P = MTU × i, i ∈ ℕ. Thus, we obtain

$$P = \arg\max_{p\,\in\,\left\{\lfloor \frac{P}{MTU} \rfloor MTU,\ \lceil \frac{P}{MTU} \rceil MTU\right\}} G2T(p). \quad (6)$$

Now we present an insightful analysis of the above result with a typical scenario wherein NDN runs over UDP over IP over Ethernet/WiFi. Assume the frame loss rate can be 0, 1% or 3%.

Figure 4: G2T under the typical scenario, i.e., NDN over UDP over IP over Ethernet/WiFi, for frame loss rates of 0 (error-free), 1% and 3%; o = 650 bytes.

Figure 4 shows the G2T of different chunk sizes. When the network is error-free (frame loss rate 0), G2T is a monotonically increasing function: the more payload contained in a single chunk, the greater the G2T achieved. When frame loss occurs on the link, however, there is a peak point; the peak of the Ω = 1% curve is (8832, 86.8%). Moreover, from Figure 4 we can conclude that the G2T of different chunk sizes varies over a wide range (e.g., 47%-88% when Ω = 1% and the payload ranges from 500 to 20,000 bytes), indicating remarkable optimization space by adjusting the chunk to a proper size. Figure 5 presents the optimum payload size and the maximal G2T when the frame loss rate ranges from 1% to 10%. As we can see, the optimum payload size decreases rapidly when the frame loss rate increases from 1% to 2%, which covers the range where the typical end-to-end IP loss rate lies, indicating a remarkable necessity of adjusting the chunk size.
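The single-hop optimum can be sketched in a few lines of Python. Here o = 650 follows the text, while M = 1472 bytes (the UDP payload of a 1500-byte Ethernet frame) is our assumption for the typical scenario:

```python
import math

# Optimum payload from Eq. (5), then corrected per Eq. (6).
o, M, omega = 650, 1472, 0.01

def g2t(p):
    frames = math.ceil((p + o) / M)
    return p / (p + o) * (1 - omega) ** frames

L = math.log(1 - omega)   # ln(1 - Omega) < 0
p_star = (-o * L - math.sqrt((o * L) ** 2 - 4 * M * o * L)) / (2 * L)

# Correction: restrict the payload to integer multiples of the MTU and keep
# the candidate with the larger G2T.
candidates = [math.floor(p_star / M) * M, math.ceil(p_star / M) * M]
P = max(candidates, key=g2t)
print(p_star, P, g2t(P))   # P = 8832, matching the peak of the 1% curve
```

The corrected optimum is P = 8832 with G2T ≈ 86.8%, i.e., the peak point of the Ω = 1% curve in Figure 4.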
Another essential conclusion is that the possible values of the optimum chunk size are discrete, which is of great significance when we apply the model in practice, given the side-effect of adjusting the chunk size on caching and Interest aggregation, as discussed in Section 5.1.

Figure 5: Optimum payload size and maximal G2T when the frame loss rate ranges from 1% to 11%, together with the G2T of the default 4096-byte payload for comparison; o = 650 bytes. The typical end-to-end loss rate range is marked.

3.2 Data Retrieval via Multiple Hops

Generally, a chunk is transmitted to data consumers through multiple hops from where it is located. Assume there are N hops along the path, and Ω_i and M_i are the frame loss rate and MTU of the i-th hop. The chunk loss rate perceived on the consumer side is

$$\rho = 1 - \prod_{i=1}^{N}(1 - \Omega_i)^{\frac{p + o}{M_i}}; \quad (7)$$

thus, we obtain

$$G2T = \frac{p}{p + o}\prod_{i=1}^{N}(1 - \Omega_i)^{\frac{p + o}{M_i}}.$$

We let the derivative of G2T (with respect to p) be zero, and obtain

$$P = \frac{-Eo - \sqrt{E^2 o^2 - 4Eo}}{2E}, \quad (8)$$

where E is

$$E = \sum_{i=1}^{N}\frac{\ln(1 - \Omega_i)}{M_i}. \quad (9)$$

Comparing Equations (8) and (9) with Equation (5), P's expression shares the same form. In fact, Equation (5) is already represented by Equation (8), since a single-hop path is a special case of a multi-hop path. E's expression in Equation (9) indicates the cumulative effect of frame loss on all links, since any frame loss on any hop results in the failure of data retrieval.

3.3 Data Retrieval via Multiple Paths, including Multiple Data Sources, In-network Caching and Interest Aggregation

From the perspective of data consumers, the chunks they retrieve may come from different paths or even different data sources (including routers). Assume there are K data retrieval paths in total, including multiple paths from

the consumer to all the producers, and paths from intermediate routers to consumers.⁴ Assume the j-th path carries a fraction w_j of the total traffic; we obtain

$$G2T = \frac{p}{p + o}\sum_{j=1}^{K} w_j \prod_{i=1}^{N_j}(1 - \Omega_{ij})^{\frac{p + o}{M_{ij}}}, \quad (10)$$

subject to

$$\sum_{j=1}^{K} w_j = 1.$$

We then take the derivative of G2T with respect to p and let it be zero, which gives

$$0 = \sum_{j=1}^{K}\left\{ w_j\, D_j^{\,p+o}\left[\ln D_j\, p^2 + \ln D_j\, o\, p + o\right]\right\}, \quad (11)$$

where D_j is

$$D_j = \prod_{i=1}^{N_j}(1 - \Omega_{ij})^{\frac{1}{M_{ij}}}.$$

We cannot get an analytic solution of P from Equation (11), but we can obtain a numerical solution by numerical methods such as Newton's method. If all MTUs are almost the same (e.g., about 1500 bytes), the chunk size derived from our model should be corrected in order to fully fill up each frame. Thus, the possible values of chunk size after correction are discrete, and fall into the same set as the possible values derived with the single-hop model. It is worth noting that consumers do not differentiate the meeting point of returned data, be it a data source or an intermediate router. As to Interest aggregation, a path from the aggregation point (the node at which an Interest is aggregated and no longer forwarded) to the consumer can be applied in our model.

3.4 Relaxing an Assumption: Paths Consisting of Heterogeneous Links

In a real network, the data forwarding path may contain heterogeneous links, which might include both reliable and best-effort services. Links with different types of services and different bandwidths construct the path from where an Interest meets its desired data to its consumer. So far we have assumed that all links provide best-effort service only; now we relax this assumption. From the perspective of data consumers, packet loss recovered by a reliable link service itself does not trigger fast failure recovery, and is transparent to data consumers. Therefore, the perceived G2T on the consumer side when there are heterogeneous links is still described by Equation (10).
But we can also model the accurate G2T for heterogeneous links theoretically. For one reliable link, frame loss is recovered by the link itself, and its theoretical G2T is quite straightforward:

$$G2T = \frac{p\,(1 - \Omega)}{p + o}. \quad (12)$$

Figure 6 presents the G2T gap between UDP, TCP and error-free Ethernet. As we can see, the G2T of TCP (a reliable link service) and of the error-free link service are monotonically increasing with the payload. Assume that there are T_j links providing reliable service on the j-th path (numbering the reliable links first). We obtain

$$G2T = \frac{p}{p + o}\sum_{j=1}^{K} w_j \prod_{i=T_j+1}^{N_j}(1 - \Omega_{ij})^{\frac{p + o}{M_{ij}}} \prod_{i=1}^{T_j}(1 - \Omega_{ij}). \quad (13)$$

As we can see, Equation (13) is very similar to Equation (10). If we use the product of w_j and ∏_{i=1}^{T_j}(1 - Ω_ij) as a new variable in Equation (13), the two equations have exactly the same form:

$$G2T = \frac{p}{p + o}\sum_{j=1}^{K} w'_j \prod_{i=T_j+1}^{N_j}(1 - \Omega_{ij})^{\frac{p + o}{M_{ij}}}, \quad (14)$$

where

$$w'_j = w_j \prod_{i=1}^{T_j}(1 - \Omega_{ij}).$$

⁴ Interests aggregated by intermediate routers can be treated as being satisfied by caching.
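The multi-hop closed form of Equations (8)-(9) and the numerical solution of Equation (11) that the text mentions can be sketched as follows. The overhead o = 650 bytes follows the text; all per-link (Ω, M) values and path weights are illustrative assumptions:

```python
import math

o = 650  # overhead per chunk (bytes)

def multihop_optimum(links):
    """Closed form of Eqs. (8)-(9) for one path of best-effort links."""
    E = sum(math.log(1 - om) / M for om, M in links)   # Eq. (9), E < 0
    return (-E * o - math.sqrt((E * o) ** 2 - 4 * E * o)) / (2 * E)

def multipath_optimum(paths, p0=5000.0):
    """Root of Eq. (11) by Newton's method with a numeric derivative."""
    def h(p):
        total = 0.0
        for w, links in paths:
            lnD = sum(math.log(1 - om) / M for om, M in links)  # ln D_j
            total += w * math.exp(lnD * (p + o)) * (lnD * p * p + lnD * o * p + o)
        return total
    p = p0
    for _ in range(50):
        p -= h(p) / ((h(p + 1) - h(p - 1)) / 2)
    return p

# A 3-hop best-effort path, then two weighted paths sharing the traffic.
print(multihop_optimum([(0.01, 1472), (0.005, 1472), (0.02, 1472)]))
print(multipath_optimum([(0.6, [(0.01, 1472)]),
                         (0.4, [(0.01, 1472), (0.02, 1472)])]))
```

As expected, the multipath optimum lands between the optima of the individual paths, and heterogeneous paths are covered by the same routine once their reliable links are folded into the weights w'_j of Equation (14).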

Figure 6: G2T of different link services: error-free Ethernet (Ω = 0), TCP (Ω = 3%) and UDP (Ω = 3%). The MTUs of Ethernet, TCP and UDP are 1500, 1460 and 1472 bytes, respectively.

Therefore, the optimum payload size can be calculated with Equation (11). Note again that Equation (14) describes the accurate G2T, while Equation (10) describes the perceived one on data consumers. So far, our model can handle heterogeneous links.

3.5 Relaxing an Assumption: Ω Follows the Gilbert Loss Model

In the above sections, Ω is assumed to be a fixed parameter; now we relax this assumption by letting it follow the Gilbert loss model [14], which is widely used in existing studies. The Gilbert loss model can be expressed as a two-state continuous-time Markov chain. Let χ_ij(t) ∈ {G, B} (G for Good, B for Bad) denote the state of the i-th link of the j-th path (link ij for short) at time t. If χ_ij(t) = G, then the frame is successfully delivered, and vice versa. π^G_ij and π^B_ij denote the stationary probabilities that the link is good or bad. Let ξ^G_ij and ξ^B_ij represent the transition rates from B to G and from G to B, respectively. Under this assumption, two system-dependent parameters are used to specify the continuous-time Markov chain frame loss model: (1) the channel loss rate π^B_ij, and (2) the average loss burst length 1/ξ^G_ij. Then we have

$$\pi^G_{ij} = \frac{\xi^G_{ij}}{\xi^G_{ij} + \xi^B_{ij}}, \qquad \pi^B_{ij} = \frac{\xi^B_{ij}}{\xi^G_{ij} + \xi^B_{ij}}.$$

Let c_ij denote an m-tuple that represents the state of link ij when delivering a chunk containing m frames, and c^n_ij the n-th state in the m-tuple. By considering all possible combinations of c_ij, we obtain the frame loss rate

$$\Omega_{ij} = \frac{1}{m}\sum_{c_{ij}}\left[L(c_{ij})\,P(c_{ij})\right], \quad (15)$$

where L(c_ij) (0 ≤ L(c_ij) ≤ m) denotes the number of frame losses on the link, and P(c_ij) indicates the probability of the combination c_ij. Further, L(c_ij) can be expressed as

$$L(c_{ij}) = \sum_{n=1}^{m} 1_{\{c^n_{ij} = B\}}. \quad (16)$$

Let f^{s,t}_ij(τ) (s, t ∈ {G, B}) denote the transition probability of link ij from state s to state t in time τ.
Therefore, we can obtain

$$f^{s,t}_{ij}(\tau) = P\{\chi_{ij}(\tau) = t \mid \chi_{ij}(0) = s\}. \quad (17)$$
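The frame loss rate of Equation (15) can be evaluated by direct enumeration of the 2^m state tuples, using the stationary distribution above together with the standard transient transition probabilities of the two-state continuous-time Markov chain (derived next in the text). A sketch for a single link, where the rates ξ^G, ξ^B, the frame interval τ and m are all illustrative values:

```python
import math
from itertools import product

# Frame loss rate of one link under the Gilbert model, by enumerating
# all 2^m state tuples of the two-state Markov chain (Eq. (15)).
xi_g, xi_b = 0.5, 0.02    # B->G (recovery) and G->B (failure) rates
tau = 0.001               # interval between consecutive frames (s)
m = 5                     # frames per chunk

pi = {"G": xi_g / (xi_g + xi_b), "B": xi_b / (xi_g + xi_b)}  # stationary probs
kappa = math.exp(-(xi_g + xi_b) * tau)

# Transient transition probabilities f^{s,t}(tau) of the two-state CTMC.
f = {("G", "G"): pi["G"] + pi["B"] * kappa,
     ("G", "B"): pi["B"] - pi["B"] * kappa,
     ("B", "G"): pi["G"] - pi["G"] * kappa,
     ("B", "B"): pi["B"] + pi["G"] * kappa}

omega = 0.0
for c in product("GB", repeat=m):      # all m-tuples c_ij
    prob = pi[c[0]]                    # chain of transitions, as in Eq. (18)
    for n in range(m - 1):
        prob *= f[(c[n], c[n + 1])]
    omega += c.count("B") * prob       # L(c) * P(c)
omega /= m                             # Eq. (15)
print(omega)                           # equals the channel loss rate pi["B"]
```

Started from the steady state, the computed Ω equals the channel loss rate π^B; the burst structure instead shows up in how the B states cluster within a tuple, which is what drives the chunk loss rate.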

According to the classical transient behavior of continuous-time Markov chains, we obtain the state transition matrix in time τ:

$$\begin{bmatrix} f^{G,G}_{ij}(\tau) & f^{G,B}_{ij}(\tau) \\ f^{B,G}_{ij}(\tau) & f^{B,B}_{ij}(\tau) \end{bmatrix} = \begin{bmatrix} \pi^G_{ij} + \pi^B_{ij}\kappa_{ij} & \pi^B_{ij} - \pi^B_{ij}\kappa_{ij} \\ \pi^G_{ij} - \pi^G_{ij}\kappa_{ij} & \pi^B_{ij} + \pi^G_{ij}\kappa_{ij} \end{bmatrix},$$

where κ_ij = exp[−(ξ^B_ij + ξ^G_ij)τ]. Let τ^{n,n+1}_ij denote the transmission interval between the n-th and (n+1)-th frames on link ij, and we obtain

$$P(c_{ij}) = \pi^{c^1_{ij}}_{ij} \prod_{n=1}^{m-1} f^{c^n_{ij},\,c^{n+1}_{ij}}_{ij}\!\left(\tau^{n,n+1}_{ij}\right). \quad (18)$$

Finally, we obtain

$$\Omega_{ij} = \frac{1}{m}\sum_{c_{ij}}\left\{ L(c_{ij})\, \pi^{c^1_{ij}}_{ij} \prod_{n=1}^{m-1} f^{c^n_{ij},\,c^{n+1}_{ij}}_{ij}\!\left(\tau^{n,n+1}_{ij}\right) \right\}. \quad (19)$$

The accurate frame loss rate of link ij is expressed by Equation (19). When the stochastic process approaches the steady state, the result derived from Equation (19) is determined by the channel loss rate and the average loss burst length [15]. The accurate G2T of each chunk can then be expressed with Equation (14) by replacing Ω_ij with Equation (19). However, one should not overlook the simplified model, which covers different data retrieval situations when the fixed Ω is treated as the average frame loss rate of the links. In fact, ACM leverages the simplified model as an engineering approach to determine the optimum chunk size.

4 Adaptive Chunk Mechanism (ACM)

So far, we have built a model that can derive the optimum chunk size towards the goal of maximizing G2T. In this section we introduce the Adaptive Chunk Mechanism (ACM), which enables applications to leverage the optimum chunk size for efficient data retrieval. The basic idea of ACM is to let applications adjust the chunk size dynamically according to the frame loss rate. We use Equation (5) and Equation (8), rather than the sophisticated model, to determine the optimum chunk size, for two reasons: 1) as an engineering approach, the simplified model is enough to derive a chunk size close to the optimum one; and 2) it is usually impossible to obtain enough parameters for the accurate model in practice.
ACM can be used in different scenarios, e.g., point-to-point communication and large-scale data distribution for content service providers (CSPs). CSPs serve millions or even billions of consumers who are widely scattered around the world and access the contents over different links and bandwidths. In this case, it is hardly possible to optimize the chunk size for each consumer individually. But it is feasible to segment the data into several chunk sizes pre-stored on servers, and let consumers request the best available one; since the optimum chunk sizes after correction are discrete, the number of possible sizes is limited. For point-to-point applications, such as video conferencing, the data can be produced with the optimum chunk size in realtime. Since the communication paradigm of NDN is pull-based, it is the data consumers, instead of the producers, who actively express what they want. Therefore, ACM enables data consumers to express the size of the requested chunk explicitly by embedding the corresponding information in the names of Interests. Furthermore, we propose two approaches to determine the optimum chunk size, i.e., active detection and frame loss rate estimation.

4.1 ACM Naming and Data Retrieval

Naming is the first and foremost design element for NDN-based applications. ACM leverages two name components in the data name, the chunk size and the segment number, while the rest of the data name is left to be decided by the applications themselves. The two kinds of segment number markers suggested by [10] can both be supported, as the example in Figure 7 shows. When a sequence number is used as the segment marker, it implies that the byte offset of the encapsulated data is ChunkSize × SequenceNumber. We create 0xAC as the adaptive chunk size marker, and ACM does not restrict the other components of the name, allowing applications to express application-level meaning via names. ACM follows the general rules of NDN: consumers have to express the right name in order to retrieve the corresponding data.
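The name layout can be sketched as follows. The resource name `/video/frame`, the helper `acm_name` and the human-readable 0x-prefixed rendering of the decimal values (mimicking the printed form in Figure 7, not ACM's actual wire encoding) are our illustrative assumptions:

```python
# Hypothetical sketch of ACM-style names: the 0xAC marker prefixes the
# chunk-size component; the segment component uses either a sequence number
# or a byte offset, the two marker kinds suggested by [10].
def acm_name(resource, chunk_size, seg, by_offset=False):
    if by_offset:
        segment = f"0xFB{chunk_size * seg}"   # byte offset = ChunkSize * SequenceNumber
    else:
        segment = f"0x00{seg:02d}"            # plain sequence number
    return f"{resource}/0xAC{chunk_size}/{segment}"

print(acm_name("/video/frame", 8832, 10))                  # sequence-number form
print(acm_name("/video/frame", 8832, 10, by_offset=True))  # byte-offset form
```

With a chunk size of 8832 and segment 10, this reproduces the two component forms of Figure 7: `.../0xAC8832/0x0010` and `.../0xAC8832/0xFB88320`.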
At the beginning of an ACM session, Interests with a default chunk size decided by the application are sent to explore the optimum chunk size. After several rounds of Interest/chunk exchange, the optimum chunk size can be derived via our model. ACM naming uses different names for the same content (at different chunk sizes), which may reduce the efficiency of in-network caching and Interest aggregation; but we think this side-effect is limited, as analyzed in Section 5.1.

Figure 7: ACM naming example: a chunk contains 8832 bytes of application data whose byte offset is 88320. 1) Sequence number as segment number: /Resource-Name/0xAC8832/0x0010. 2) Explicit byte offset as segment number: /Resource-Name/0xAC8832/0xFB88320. In both forms the Resource Name is determined by the application, the 0xAC component carries the chunk size, and the last component is the segment number.

4.2 Determination of Optimum Chunk Size

In the context of ACM, consumers can select the optimum chunk size with two different approaches: active detection and frame loss rate estimation. The two approaches can cooperate with each other. Since the possible values of the optimum chunk size are discrete and limited, the optimum chunk size can be detected by sending Interests with different chunk sizes and measuring the resulting G2T. This approach is simple, and can be applied when sophisticated NDN forwarding makes the estimated frame loss rate inaccurate. The active detection approach should be performed periodically, or be triggered by jitter of the G2T, in order to adapt to dynamic network changes. The frame loss rate is an important parameter for determining the optimum chunk size, and the estimation approach can adapt to situations where the network status changes frequently. For data retrieval via a single hop, it is feasible to infer the frame loss rate from the chunk loss rate with Equation (3). For data retrieval via multiple hops, E can be inferred from the chunk loss rate with Equation (7). Since we cannot tell the data retrieval path from a received chunk, we approximate the frame loss rate with Equation (7), with correction, in generalized NDN communication. However, the estimated frame loss rate is smaller than the real one, due to fast failure recovery on intermediate routers.
Next we present a way to correct the estimated frame loss rate with the following two inequalities:

$$\Omega_{lower} = \frac{\#ChunkLoss}{\#RequestedPacket} = \frac{1-(1-\Omega)^m}{m} \le \Omega \qquad (20)$$

$$\Omega_{upper} = \frac{\#ChunkLoss}{\#RequestedChunk} = 1-(1-\Omega)^m \ge \Omega \qquad (21)$$

By measuring Ω_lower and Ω_upper, we obtain a range for Ω, and the estimated value can be constructed from the two bounds:

$$\hat{\Omega} = \alpha\,\Omega_{lower} + (1-\alpha)\,\Omega_{upper}, \qquad (22)$$

wherein α ∈ [0, 1]. The frame loss rate can then be corrected based on Equation (22). For now, we pick 1/8 as the value of α based on our experimental results. We are aware that this frame loss estimation approach is still inaccurate and that the value of α should be explored with more evaluations; better estimation approaches and parameter settings are left as future work.

5 Discussion

5.1 Side Effect on Caching and Interest Aggregation

In-network caching and Interest aggregation, which eliminate duplicate requests for the same data, are of great importance to large-scale data distribution, and their performance relies on the request distribution: the more identical requests there are, the better they perform. However, ACM requires consumers to adjust the Interest name according to the network status. Consequently, to retrieve the same set of application data, consumers may send one of several possible sets of requests instead of a single canonical one. This reshapes the request distribution in a way that decreases the probability of reusing cached data and aggregating requests. We argue that this side effect on Interest aggregation and in-network caching is confined to a reasonable range, because: 1) the possible values of the optimum chunk size are discrete and limited in ACM, which implies that Interests fall into a limited number of sets; and 2) end consumers located in the same or nearby networks

are more likely to share the same optimum chunk size, and therefore send the same set of Interests for the same set of application data.

5.2 Retrieval Delay

The model introduced in this paper does not take retrieval delay, another important indicator of quality of service, into consideration; here we give a simple analysis of transmission delay. Assume the bandwidth of a link is B and the size of an ADU is p; the transmission delay is

$$delay = \frac{p}{G2T \cdot B}. \qquad (23)$$

With our model we can maximize G2T and therefore, by Equation (23), minimize the transmission delay of an ADU. This conclusion is significant for non-realtime applications, e.g., ftp-like file transfer and video on-demand with buffering: it is possible to minimize delay and maximize G2T simultaneously. However, for realtime applications with critical delay requirements, e.g., realtime video conferencing, things become more complicated. According to Equation (23), decreasing the ADU size effectively reduces the per-ADU delay, although retrieving all the data may take longer (there are more ADUs since each ADU is smaller). Therefore, realtime applications may try to minimize the ADU at the application level, e.g., by encapsulating each encoded video/audio sample as a single ADU. As a consequence, the ADU may be smaller than the optimum size, so G2T cannot be maximized; moreover, the total number of ADUs for a given dataset increases and the total transmission delay grows. The trade-off between delay and G2T must be addressed by applications themselves in this situation.

5.3 Bandwidth Consumed by Interests

Interest/chunk exchange is how data retrieval is realized in NDN, but our model does not take the bandwidth consumed by Interests into consideration. We now improve the model by leveraging the symmetry of the Interest/chunk exchange path.
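This improvement amounts to folding the Interest's bytes into the per-chunk overhead. A sketch under the simplified single-hop G2T form, treating the loss of the (single-frame) Interest like the loss of one more frame; names and the exact accounting are ours, not the paper's:

```python
import math

def g2t_with_interest(payload, chunk_overhead, interest_size, frame_loss, mtu=1472):
    """G2T with the Interest counted as overhead: the Interest's bytes consume
    bandwidth without carrying payload, and its loss aborts the retrieval just
    as a lost chunk frame does."""
    total_overhead = chunk_overhead + interest_size
    m = math.ceil((payload + chunk_overhead) / mtu)  # only the chunk is fragmented
    delivered = (1.0 - frame_loss) ** (m + 1)        # m chunk frames + 1 Interest frame
    return payload / (payload + total_overhead) * delivered
```

As expected, accounting for the Interest only lowers the G2T relative to the Interest-free model, so the refined model is strictly more conservative.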
Since a chunk cannot be sent without the corresponding Interest being sent first, the bandwidth of the Interest can be treated as part of the overhead, and more accurate results can then be inferred with the current model. In this case, the loss of any bit of an Interest also leads to failure of data retrieval, exactly as the loss of any bit of a chunk does.

6 Prototype and Evaluations

We have prototyped ACM as a library named ndnflow [16], which provides APIs to facilitate the development of NDN-based applications. Based on the ACM library, we also developed an NDN-based online video player, called nplayer [17], whose user interface is shown in Figure 8(a). nplayer retrieves and plays two-way video simultaneously, and visualizes data retrieval information in realtime in the small figures at the bottom of Figure 8(a).

[Figure 8: nplayer running instances. (a) nplayer: NDN video player; (b) dynamic changes of chunk size; (c) dynamic changes of bit rate. One instance is ACM-enabled, the other adopts a fixed payload size (4096 bytes). The X-axis is the index (sequence number) of sent Interests.]

However, the status of the real network is barely under our control; therefore, in addition to ndnflow and nplayer, which are used to explore the effectiveness of ACM, we also leverage the most popular NDN simulator, ndnSIM [18], to validate our model. Since both the NDN forwarders [5, 6] and ndnSIM set an upper bound on packet size of about 9000 bytes, the chunk size in our simulations never exceeds this bound.

6.1 Effectiveness of the ACM Mechanism

To explore whether ACM does maximize G2T, we evaluate ACM on the real network.
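The frame loss rate estimation exercised in these experiments (Equations (20)–(22), with α = 1/8) can be sketched as follows; the function and counter names are illustrative, not from the ndnflow implementation:

```python
def estimate_frame_loss(lost_chunks, requested_chunks, frames_per_chunk, alpha=0.125):
    """Bound the frame loss rate from chunk-level counters and combine the
    bounds as in Equation (22); alpha = 1/8 follows the paper's experiments."""
    # Equation (20): every lost chunk implies at least one lost frame,
    # and each requested chunk accounts for frames_per_chunk requested packets
    omega_lower = lost_chunks / (requested_chunks * frames_per_chunk)
    # Equation (21): the chunk loss rate itself over-counts the frame loss rate
    omega_upper = lost_chunks / requested_chunks
    return alpha * omega_lower + (1 - alpha) * omega_upper
```

For example, 100 lost chunks out of 1000 requested, with 4 frames per chunk, bounds Ω between 2.5% and 10%; the weighted estimate sits close to the upper bound, matching the observation below that Ω_upper tracks the benchmark better.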

[Figure 9: ACM experiments. (a) G2T in a similar situation: decreasing or increasing the size decided by ACM lowers the efficiency. (b) G2T in different situations: ACM fits different network situations.]

First, we compare the G2T of five chunk size determination strategies: the optimum chunk size determined by ACM, the optimum chunk size minus 1K bytes (ACM-1K), the optimum chunk size plus 1K bytes (ACM+1K), and two fixed chunk sizes (1K and 8.5K bytes). The G2T of these five strategies is presented in Figure 9(a). As we can see, the fixed chunk sizes (1K and 8.5K bytes) are the least efficient, since they cannot adapt to network changes, while ACM achieves the highest G2T. The G2T of ACM-1K and ACM+1K is worse than that of ACM, indicating that deviating from the chunk size determined by ACM does decrease G2T. The above experiments are performed over a relatively short period, so we assume they run under the same network status; similar results were confirmed over several rounds of running. We also measure the G2T when the network status differs. As the control group, the chunk size is fixed, ranging from 1K to 8K bytes. We run the experiment over different network topologies, and thus obtain quite a few different results whose estimated frame loss rates vary between 0 and 20%, as shown in Figure 9(b). For a given frame loss rate, the G2T of ACM is in most cases higher than that of the others.

[Figure 10: Ω estimation: estimated value, Ω_upper and Ω_lower versus chunk size (x1472 bytes). Based on our experiments, we choose 1/8 as the value of α.]

Frame loss rate is the most important parameter to be estimated for ACM, and we run quite a few experiments to evaluate the approach described in Section 4 and the value of α in Equation (22). We vary the chunk size from 1xMTU (1472 bytes) to 5xMTUs.
When the chunk size is 1xMTU, the frame loss rate equals the chunk loss rate, since no fragmentation is performed; because we finish the whole process in quite a short period, this frame loss rate can be treated as the benchmark value. We measure Ω_lower and Ω_upper under different chunk

sizes, as shown in Figure 10. In our experiments, the values of Ω_upper are slightly larger than the benchmark value, while those of Ω_lower are considerably smaller. This means the estimated value should sit closer to Ω_upper than to Ω_lower, and we therefore pick 1/8 as the value of α; with this choice, the estimated value is relatively stable, as shown by the red curve in Figure 10. Although the current α value is based on quite a few running instances (see the error bars in Figure 10), we are aware that more experiments are needed to determine α conclusively.

6.2 Validation

Here we present a series of simulation results to validate the model, both the simplified version and the sophisticated one, following the steps in Section 3 with the topology in Figure 3. The overhead of each chunk, including the corresponding Interest, is 650 bytes. In the sub-figures of Figures 11-15, the Y-axis stands for G2T and the X-axis for payload size. The green bars represent the G2T values measured on the consumer side in the simulation, and the red curves are the theoretical G2T under different payload sizes derived from our model. Since cache hits are hard to monitor in the simulation, we disable caching to avoid its interference. Figures 11(a)-11(c) validate the model of data retrieval via a single hop under different frame loss rates (Ω ∈ {1%, 3%, 5%}) (Section 3.1). Figures 12(a)-12(c) validate the model of data retrieval via multiple hops (Section 3.2). Figures 13(a)-13(i) validate the model of data retrieval via multiple paths (Section 3.3): Figures 13(a)-13(f) evaluate multiple paths from the same data source; Figures 13(g)-13(i) evaluate multiple paths from different data sources; Figures 13(d)-13(f) evaluate different load percentages. Figures 14(a)-14(c) validate the model of data retrieval via heterogeneous links (Section 3.4); the G2T shown in these figures is the value perceived on the consumer side, since the real G2T cannot be measured by consumers.
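The Gilbert loss process used in the next set of validations (Section 3.5) can be approximated by a two-state Markov chain over frames, parameterized by the stationary loss rate and the mean burst length. The sketch below uses hypothetical helper names and a discrete-time chain rather than the paper's continuous-time formulation:

```python
import random

def gilbert_frame_losses(n_frames, loss_rate, mean_burst, seed=1):
    """Simulate a two-state Gilbert loss process: frames are lost while the
    chain is in the Bad state. Parameterized by the stationary loss rate and
    the mean Bad-state sojourn (burst) length."""
    rng = random.Random(seed)
    p_bg = 1.0 / mean_burst                      # Bad -> Good transition probability
    p_gb = loss_rate * p_bg / (1.0 - loss_rate)  # Good -> Bad, so stationary P(Bad) = loss_rate
    bad, losses = False, []
    for _ in range(n_frames):
        losses.append(bad)
        bad = (rng.random() < p_gb) if not bad else (rng.random() >= p_bg)
    return losses

def chunk_loss_rate(losses, frames_per_chunk):
    """A chunk is lost iff any of its frames is lost."""
    chunks = [losses[i:i + frames_per_chunk]
              for i in range(0, len(losses) - frames_per_chunk + 1, frames_per_chunk)]
    return sum(any(c) for c in chunks) / len(chunks)
```

Grouping frames into chunks always amplifies the loss rate, but bursty losses cluster within chunks, so the chunk loss rate stays below the independent-loss prediction 1 - (1 - Ω)^m.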
Figures 15(a)-15(c) validate the G2T when frame loss follows the Gilbert loss model (Section 3.5), with two key parameters: channel loss rates of 1%, 3%, and 5%, respectively, and loss lengths following a uniform distribution between 1 and 4, whose average is 2.5. Although our figures mainly present the G2T under different payload sizes, the optimum chunk size (= payload + overhead) can easily be identified by selecting the highest bar in each sub-figure; the optimum payload sizes in the sub-figures also agree with our theoretical results. Each scenario runs 20 times, and the mean/max/min values are presented via bar charts with standard errors. In addition, we extend the evaluation to broader scenarios, e.g., a frame loss rate of 10 percent, different load distributions on multiple paths, purely reliable links, a fixed-bytes packet loss model, and so on. All the simulation results fit our model very well: according to our detailed analysis, the average relative error between the experimental results and those derived from our mathematical model is 0.9%, and the standard variance of relative errors is

[Figure 11: Single hop (paths <P1 C>, <R1 C>, <R2 C>); frame loss rate is 1%, 3% and 5%, respectively.]

[Figure 12: Multiple hops (paths <P1 R1 C>, <P2 R2 C>); frame loss rate is 1%, 3% and 5%, respectively.]

7 Conclusions

In this paper, we model the G2T of network data retrieval via a single hop, multiple hops and multiple paths in NDN, and we extensively study the influence of the frame loss rate with a two-state continuous time Markov chain. We

[Figure 13: Multiple paths. w1, w2, w3 represent the load on paths <P1 R1 C>, <P1 C> and <P2 R2 C>, respectively. (a)-(c): Ω = 1%, 3%, 5% with w1 = 0.5, w2 = 0.5, w3 = 0; (d)-(f): Ω = 5% with varying loads (e.g., w1 = 0.1, w2 = 0.9 and w1 = 0.3, w2 = 0.7); (g)-(i): Ω = 1%, 3%, 5% with w1 = 0.3, w3 = 0.3. (a)-(f) evaluate multiple paths from the same data source; (g)-(i) evaluate multiple paths from different data sources; (d)-(f) evaluate different load percentages.]

[Figure 14: Heterogeneous links (path <P1 R1 C> or <P2 R2 C>); (a) Ω_CR = 1%, (b) Ω_CR = 3%, (c) Ω_PR = 5%.]

[Figure 15: Ω follows the Gilbert loss model; Ω = 1%, 3%, 5%.]

can derive the optimum chunk size with the model, based on which we design and implement the ACM mechanism to enable applications to leverage the optimum chunk size. We conclude our work as follows:

- The overhead of NDN chunks must be considered for communication over NDN; this is a new issue that requires more attention. We advocate the goodput to throughput (G2T) ratio to measure the efficiency of network data retrieval when frame/packet loss occurs.

- With our model, we can derive the optimum chunk size that maximizes G2T, effectively improving the efficiency of data retrieval. Our simulations and experiments validate this conclusion.

- Beyond the optimum chunk size, our discussion shows that the model implies the feasibility of maximizing G2T and minimizing transmission delay simultaneously.

- Our model also applies to other information-centric networking architectures, whose data chunks usually carry large overhead. The overhead of each chunk differs across information-centric designs and implementations, but we believe our model is capable of fitting those situations.

References

[1] L. Zhang, D. Estrin, J. Burke, V. Jacobson, J. D. Thornton, D. K. Smetters, B. Zhang, G. Tsudik, D. Massey, C. Papadopoulos, et al. Named Data Networking (NDN) project. Technical Report NDN-0001, PARC, 2010.

[2] V. Jacobson, D. K. Smetters, J. D. Thornton, M. F. Plass, N. H. Briggs, and R. L. Braynard. Networking named content. In Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies (CoNEXT). ACM, 2009.

[3] Lixia Zhang, Alexander Afanasyev, Jeffrey Burke, Van Jacobson, Patrick Crowley, Christos Papadopoulos, Lan Wang, Beichuan Zhang, et al. Named Data Networking. ACM SIGCOMM Computer Communication Review, 44(3):66-73, 2014.

[4] Cheng Yi, Alexander Afanasyev, Lan Wang, Beichuan Zhang, and Lixia Zhang. Adaptive forwarding in named data networking.
ACM SIGCOMM Computer Communication Review, 42(3):62-67, 2012.

[5] NFD: NDN Forwarding Daemon.

[6] NDNx: Software Router of NDN.

[7] Junxiao Shi and Beichuan Zhang. NDNLP: A link protocol for NDN. NDN Technical Report NDN-0006, 2012.

[8] Amogh Dhamdhere, Hao Jiang, and Constantinos Dovrolis. Buffer sizing for congested internet links. In INFOCOM 2005: 24th Annual Joint Conference of the IEEE Computer and Communications Societies, volume 2. IEEE, 2005.

[9] Y. Angela Wang, Cheng Huang, Jin Li, and Keith W. Ross. Queen: Estimating packet loss rate between arbitrary internet hosts. In Passive and Active Network Measurement. Springer, 2009.

[10] NDN Project Team. NDN technical memo: Naming conventions. NDN Technical Report NDN-0022, 2014.

[11] Jon Postel et al. RFC 791: Internet Protocol, 1981.

[12] S. Deering and R. Hinden. RFC 2460: Internet Protocol, Version 6 (IPv6) Specification, 1998.

[13] Alexander Afanasyev, Junxiao Shi, Lan Wang, Beichuan Zhang, and Lixia Zhang. Packet fragmentation in NDN: Why NDN uses hop-by-hop fragmentation. NDN Technical Report NDN-0032, 2015.

[14] Edgar N. Gilbert. Capacity of a burst-noise channel. Bell System Technical Journal, 39(5):1253-1265, 1960.

[15] Paul G. Hoel, Sidney C. Port, and Charles J. Stone. Introduction to Stochastic Processes. Waveland Press, 1986.

[16] NDN Flow: Adaptive Chunk Mode prototype.

[17] nplayer: NDN-based Video Player.

[18] Spyridon Mastorakis, Alexander Afanasyev, Ilya Moiseenko, and Lixia Zhang. ndnSIM 2.0: A new version of the NDN simulator for NS-3. Technical Report NDN-0028, NDN, January 2015.


===================================================================== Exercises ===================================================================== ===================================================================== Exercises ===================================================================== 1 Chapter 1 1) Design and describe an application-level

More information

Achieve Significant Throughput Gains in Wireless Networks with Large Delay-Bandwidth Product

Achieve Significant Throughput Gains in Wireless Networks with Large Delay-Bandwidth Product Available online at www.sciencedirect.com ScienceDirect IERI Procedia 10 (2014 ) 153 159 2014 International Conference on Future Information Engineering Achieve Significant Throughput Gains in Wireless

More information

Part 1: Introduction. Goal: Review of how the Internet works Overview

Part 1: Introduction. Goal: Review of how the Internet works Overview Part 1: Introduction Goal: Review of how the Internet works Overview Get context Get overview, feel of the Internet Application layer protocols and addressing Network layer / Routing Link layer / Example

More information

Review. Some slides are in courtesy of J. Kurose and K. Ross

Review. Some slides are in courtesy of J. Kurose and K. Ross Review The Internet (IP) Protocol Datagram format IP fragmentation ICMP: Internet Control Message Protocol NAT: Network Address Translation Routing in the Internet Intra-AS routing: RIP and OSPF Inter-AS

More information

Real-Time Protocol (RTP)

Real-Time Protocol (RTP) Real-Time Protocol (RTP) Provides standard packet format for real-time application Typically runs over UDP Specifies header fields below Payload Type: 7 bits, providing 128 possible different types of

More information

Reliable Transport II: TCP and Congestion Control

Reliable Transport II: TCP and Congestion Control Reliable Transport II: TCP and Congestion Control Stefano Vissicchio UCL Computer Science COMP0023 Recap: Last Lecture Transport Concepts Layering context Transport goals Transport mechanisms and design

More information

PLEASE READ CAREFULLY BEFORE YOU START

PLEASE READ CAREFULLY BEFORE YOU START MIDTERM EXAMINATION #2 NETWORKING CONCEPTS 03-60-367-01 U N I V E R S I T Y O F W I N D S O R - S c h o o l o f C o m p u t e r S c i e n c e Fall 2011 Question Paper NOTE: Students may take this question

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2014 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

Optical Packet Switching

Optical Packet Switching Optical Packet Switching DEISNet Gruppo Reti di Telecomunicazioni http://deisnet.deis.unibo.it WDM Optical Network Legacy Networks Edge Systems WDM Links λ 1 λ 2 λ 3 λ 4 Core Nodes 2 1 Wavelength Routing

More information

CSE 4215/5431: Mobile Communications Winter Suprakash Datta

CSE 4215/5431: Mobile Communications Winter Suprakash Datta CSE 4215/5431: Mobile Communications Winter 2013 Suprakash Datta datta@cse.yorku.ca Office: CSEB 3043 Phone: 416-736-2100 ext 77875 Course page: http://www.cse.yorku.ca/course/4215 Some slides are adapted

More information

An Evaluation of Adaptive Multimedia Communication from a QoS Perspective

An Evaluation of Adaptive Multimedia Communication from a QoS Perspective U Linz Telekooperation - 1 An Evaluation of Adaptive Multimedia Communication from a QoS Perspective Michael Welzl Johannes Kepler University Linz / Austria Max Mühlhäuser TU Darmstadt Germany U Linz Telekooperation

More information

TCP /IP Fundamentals Mr. Cantu

TCP /IP Fundamentals Mr. Cantu TCP /IP Fundamentals Mr. Cantu OSI Model and TCP/IP Model Comparison TCP / IP Protocols (Application Layer) The TCP/IP subprotocols listed in this layer are services that support a number of network functions:

More information

Mobile Communications Chapter 9: Mobile Transport Layer

Mobile Communications Chapter 9: Mobile Transport Layer Prof. Dr.-Ing Jochen H. Schiller Inst. of Computer Science Freie Universität Berlin Germany Mobile Communications Chapter 9: Mobile Transport Layer Motivation, TCP-mechanisms Classical approaches (Indirect

More information

ENHANCING ENERGY EFFICIENT TCP BY PARTIAL RELIABILITY

ENHANCING ENERGY EFFICIENT TCP BY PARTIAL RELIABILITY ENHANCING ENERGY EFFICIENT TCP BY PARTIAL RELIABILITY L. Donckers, P.J.M. Havinga, G.J.M. Smit, L.T. Smit University of Twente, department of Computer Science, PO Box 217, 7 AE Enschede, the Netherlands

More information

Lecture 3. The Network Layer (cont d) Network Layer 1-1

Lecture 3. The Network Layer (cont d) Network Layer 1-1 Lecture 3 The Network Layer (cont d) Network Layer 1-1 Agenda The Network Layer (cont d) What is inside a router? Internet Protocol (IP) IPv4 fragmentation and addressing IP Address Classes and Subnets

More information

Networking interview questions

Networking interview questions Networking interview questions What is LAN? LAN is a computer network that spans a relatively small area. Most LANs are confined to a single building or group of buildings. However, one LAN can be connected

More information

Fundamental Questions to Answer About Computer Networking, Jan 2009 Prof. Ying-Dar Lin,

Fundamental Questions to Answer About Computer Networking, Jan 2009 Prof. Ying-Dar Lin, Fundamental Questions to Answer About Computer Networking, Jan 2009 Prof. Ying-Dar Lin, ydlin@cs.nctu.edu.tw Chapter 1: Introduction 1. How does Internet scale to billions of hosts? (Describe what structure

More information

Enhancing TCP Throughput over Lossy Links Using ECN-Capable Capable RED Gateways

Enhancing TCP Throughput over Lossy Links Using ECN-Capable Capable RED Gateways Enhancing TCP Throughput over Lossy Links Using ECN-Capable Capable RED Gateways Haowei Bai Honeywell Aerospace Mohammed Atiquzzaman School of Computer Science University of Oklahoma 1 Outline Introduction

More information

Data Communication & Networks G Session 7 - Main Theme Networks: Part I Circuit Switching, Packet Switching, The Network Layer

Data Communication & Networks G Session 7 - Main Theme Networks: Part I Circuit Switching, Packet Switching, The Network Layer Data Communication & Networks G22.2262-001 Session 7 - Main Theme Networks: Part I Circuit Switching, Packet Switching, The Network Layer Dr. Jean-Claude Franchitti New York University Computer Science

More information

CSE 1 23: Computer Networks

CSE 1 23: Computer Networks CSE 1 23: Computer Networks Total Points: 47.5 Homework 2 Out: 10/18, Due: 10/25 1. The Sliding Window Protocol Assume that the sender s window size is 3. If we have to send 10 frames in total, and the

More information

CS519: Computer Networks. Lecture 5, Part 1: Mar 3, 2004 Transport: UDP/TCP demux and flow control / sequencing

CS519: Computer Networks. Lecture 5, Part 1: Mar 3, 2004 Transport: UDP/TCP demux and flow control / sequencing : Computer Networks Lecture 5, Part 1: Mar 3, 2004 Transport: UDP/TCP demux and flow control / sequencing Recall our protocol layers... ... and our protocol graph IP gets the packet to the host Really

More information

Sections Describing Standard Software Features

Sections Describing Standard Software Features 27 CHAPTER This chapter describes how to configure quality of service (QoS) by using automatic-qos (auto-qos) commands or by using standard QoS commands. With QoS, you can give preferential treatment to

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2015 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

ENRICHMENT OF SACK TCP PERFORMANCE BY DELAYING FAST RECOVERY Mr. R. D. Mehta 1, Dr. C. H. Vithalani 2, Dr. N. N. Jani 3

ENRICHMENT OF SACK TCP PERFORMANCE BY DELAYING FAST RECOVERY Mr. R. D. Mehta 1, Dr. C. H. Vithalani 2, Dr. N. N. Jani 3 Research Article ENRICHMENT OF SACK TCP PERFORMANCE BY DELAYING FAST RECOVERY Mr. R. D. Mehta 1, Dr. C. H. Vithalani 2, Dr. N. N. Jani 3 Address for Correspondence 1 Asst. Professor, Department of Electronics

More information

Sections Describing Standard Software Features

Sections Describing Standard Software Features 30 CHAPTER This chapter describes how to configure quality of service (QoS) by using automatic-qos (auto-qos) commands or by using standard QoS commands. With QoS, you can give preferential treatment to

More information

Data and Computer Communications. Chapter 2 Protocol Architecture, TCP/IP, and Internet-Based Applications

Data and Computer Communications. Chapter 2 Protocol Architecture, TCP/IP, and Internet-Based Applications Data and Computer Communications Chapter 2 Protocol Architecture, TCP/IP, and Internet-Based s 1 Need For Protocol Architecture data exchange can involve complex procedures better if task broken into subtasks

More information

Multimedia! 23/03/18. Part 3: Lecture 3! Content and multimedia! Internet traffic!

Multimedia! 23/03/18. Part 3: Lecture 3! Content and multimedia! Internet traffic! Part 3: Lecture 3 Content and multimedia Internet traffic Multimedia How can multimedia be transmitted? Interactive/real-time Streaming 1 Voice over IP Interactive multimedia Voice and multimedia sessions

More information

Part 3: Lecture 3! Content and multimedia!

Part 3: Lecture 3! Content and multimedia! Part 3: Lecture 3! Content and multimedia! Internet traffic! Multimedia! How can multimedia be transmitted?! Interactive/real-time! Streaming! Interactive multimedia! Voice over IP! Voice and multimedia

More information

Lecture 8. Network Layer (cont d) Network Layer 1-1

Lecture 8. Network Layer (cont d) Network Layer 1-1 Lecture 8 Network Layer (cont d) Network Layer 1-1 Agenda The Network Layer (cont d) What is inside a router Internet Protocol (IP) IPv4 fragmentation and addressing IP Address Classes and Subnets Network

More information

An Efficient Bandwidth Estimation Schemes used in Wireless Mesh Networks

An Efficient Bandwidth Estimation Schemes used in Wireless Mesh Networks An Efficient Bandwidth Estimation Schemes used in Wireless Mesh Networks First Author A.Sandeep Kumar Narasaraopeta Engineering College, Andhra Pradesh, India. Second Author Dr S.N.Tirumala Rao (Ph.d)

More information

Table of Contents. Cisco How NAT Works

Table of Contents. Cisco How NAT Works Table of Contents How NAT Works...1 This document contains Flash animation...1 Introduction...1 Behind the Mask...2 Dynamic NAT and Overloading Examples...5 Security and Administration...7 Multi Homing...9

More information

What is the difference between unicast and multicast? (P# 114)

What is the difference between unicast and multicast? (P# 114) 1 FINAL TERM FALL2011 (eagle_eye) CS610 current final term subjective all solved data by eagle_eye MY paper of CS610 COPUTER NETWORKS There were 30 MCQs Question no. 31 (Marks2) Find the class in 00000001.001011.1001.111

More information

Growth. Individual departments in a university buy LANs for their own machines and eventually want to interconnect with other campus LANs.

Growth. Individual departments in a university buy LANs for their own machines and eventually want to interconnect with other campus LANs. Internetworking Multiple networks are a fact of life: Growth. Individual departments in a university buy LANs for their own machines and eventually want to interconnect with other campus LANs. Fault isolation,

More information

AN exam March

AN exam March AN exam March 29 2018 Dear student This exam consists of 7 questions. The total number of points is 100. Read the questions carefully. Be precise and concise. Write in a readable way. Q1. UDP and TCP (25

More information

Synthesizing Adaptive Protocols by Selective Enumeration (SYNAPSE)

Synthesizing Adaptive Protocols by Selective Enumeration (SYNAPSE) Synthesizing Adaptive Protocols by Selective Enumeration (SYNAPSE) Problem Definition Solution Approach Benefits to End User Talk Overview Metrics Summary of Results to Date Lessons Learned & Future Work

More information

The Internetworking Problem. Internetworking. A Translation-based Solution

The Internetworking Problem. Internetworking. A Translation-based Solution Cloud Cloud Cloud 1 The Internetworking Problem Internetworking Two nodes communicating across a network of networks How to transport packets through this heterogeneous mass? A B The Internetworking Problem

More information

THE TCP specification that specifies the first original

THE TCP specification that specifies the first original 1 Median Filtering Simulation of Bursty Traffic Auc Fai Chan, John Leis Faculty of Engineering and Surveying University of Southern Queensland Toowoomba Queensland 4350 Abstract The estimation of Retransmission

More information

CS 5520/ECE 5590NA: Network Architecture I Spring Lecture 13: UDP and TCP

CS 5520/ECE 5590NA: Network Architecture I Spring Lecture 13: UDP and TCP CS 5520/ECE 5590NA: Network Architecture I Spring 2008 Lecture 13: UDP and TCP Most recent lectures discussed mechanisms to make better use of the IP address space, Internet control messages, and layering

More information

Request for Comments: University of Twente/Ericsson J. Loughney Nokia S. Van den Bosch Alcatel June 2005

Request for Comments: University of Twente/Ericsson J. Loughney Nokia S. Van den Bosch Alcatel June 2005 Network Working Group Request for Comments: 4080 Category: Informational R. Hancock Siemens/RMR G. Karagiannis University of Twente/Ericsson J. Loughney Nokia S. Van den Bosch Alcatel June 2005 Status

More information

Internet Control Message Protocol

Internet Control Message Protocol Internet Control Message Protocol The Internet Control Message Protocol is used by routers and hosts to exchange control information, and to inquire about the state and configuration of routers and hosts.

More information

image 3.8 KB Figure 1.6: Example Web Page

image 3.8 KB Figure 1.6: Example Web Page image. KB image 1 KB Figure 1.: Example Web Page and is buffered at a router, it must wait for all previously queued packets to be transmitted first. The longer the queue (i.e., the more packets in the

More information

Performance Study of CCNx

Performance Study of CCNx Performance Study of CCNx Haowei Yuan Networking Research Seminar 3/18/2013 My Topic for Today Industry participation in content centric networking Emerging networks consortium Our performance study of

More information

NAT, IPv6, & UDP CS640, Announcements Assignment #3 released

NAT, IPv6, & UDP CS640, Announcements Assignment #3 released NAT, IPv6, & UDP CS640, 2015-03-03 Announcements Assignment #3 released Overview Network Address Translation (NAT) IPv6 Transport layer User Datagram Protocol (UDP) Network Address Translation (NAT) Hacky

More information

End-to-End Transport Layer Services in the MobilityFirst Network

End-to-End Transport Layer Services in the MobilityFirst Network End-to-End Transport Layer Services in the MobilityFirst Network Kai Su, Francesco Bronzino, Dipankar Raychaudhuri, K.K. Ramakrishnan WINLAB Research Review Winter 2014 Transport Services From TCP/IP to

More information

Reliable Transport I: Concepts and TCP Protocol

Reliable Transport I: Concepts and TCP Protocol Reliable Transport I: Concepts and TCP Protocol Brad Karp UCL Computer Science CS 3035/GZ01 29 th October 2013 Part I: Transport Concepts Layering context Transport goals Transport mechanisms 2 Context:

More information

Channel Quality Based Adaptation of TCP with Loss Discrimination

Channel Quality Based Adaptation of TCP with Loss Discrimination Channel Quality Based Adaptation of TCP with Loss Discrimination Yaling Yang, Honghai Zhang, Robin Kravets University of Illinois-Urbana Champaign Abstract TCP responds to all losses by invoking congestion

More information

Network Layer (1) Networked Systems 3 Lecture 8

Network Layer (1) Networked Systems 3 Lecture 8 Network Layer (1) Networked Systems 3 Lecture 8 Role of the Network Layer Application Application The network layer is the first end-to-end layer in the OSI reference model Presentation Session Transport

More information

LARGE SCALE IP ROUTING LECTURE BY SEBASTIAN GRAF

LARGE SCALE IP ROUTING LECTURE BY SEBASTIAN GRAF LARGE SCALE IP ROUTING LECTURE BY SEBASTIAN GRAF MODULE 05 MULTIPROTOCOL LABEL SWITCHING (MPLS) AND LABEL DISTRIBUTION PROTOCOL (LDP) 1 by Xantaro IP Routing In IP networks, each router makes an independent

More information