Experimental Study of Parallel Downloading Schemes for Internet Mirror Sites

Spiro Philopoulos, Dept. of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Canada
Muthucumaru Maheswaran, Dept. of Computer Science, University of Manitoba, Winnipeg, Canada

ABSTRACT

A common method used to reduce document retrieval times is content replication, i.e., mirror servers. Mirror servers provide several alternate sites from which to download a specific document and were traditionally used to increase the availability of content. Recently, several studies have focused on concurrently downloading portions of a document from a set of mirror sites. The following are some of the issues involved in using multiple mirror sites concurrently: (a) selecting the best mirror servers from the client, (b) coping with dynamic overloading of the network and servers, and (c) coping with faults. This paper briefly examines two existing schemes for concurrent downloading (also called parallel-access downloading, or paraloading) and proposes a third paraloading scheme called Dynamic Parallel Access. The performance of this scheme is experimentally evaluated, and recommendations for further improvements are discussed.

1. Introduction

As various applications become Internet-enabled, the number of repositories on the Internet that hold valuable content is increasing. For example, Internet-based repositories are beginning to hold content such as multi-gigabyte movie files, operating-system and other large software distributions, and large multimedia documents. This creates a need for clients to find faster downloading schemes for both online and offline usage. The conventional way of downloading a file from an Internet-based server is to open one or more connections between the server and the client.
While opening multiple connections might reduce download times compared to a single connection, the performance gain can be limited by the following issues: (a) server load and capacity, (b) bottleneck link bandwidth, (c) instantaneous bandwidth, (d) multi-connection client overhead, and (e) interconnection resource allocation. Replicating content so that it can be accessed from multiple locations is one way of decreasing download times. Caching, content delivery, and mirroring are techniques that replicate content with different policies and primary purposes. Mirror servers were traditionally used to improve the availability of content. Recently, however, several projects have examined the concept of concurrently using multiple mirror servers to download content for a single client; this concurrent or parallel downloading is performed to reduce the download time. One of the key problems with paraloading schemes is mirror site selection. The problem is complicated because accurate performance information for selecting the best set of mirror servers is not available to the clients. Additionally, the problems mentioned above for the single-server case still exist, and network conditions can change during the download, leading to decreased performance. This observation motivated us to examine adaptive parallel download techniques that use a set of mirror servers whose membership may change dynamically. In paraloading, segments of the file are downloaded from each server in the set and then reassembled at the client. The parallel-access scheme was first proposed in [4]. Paraloading offers the following advantages: depending on the topology, in the ideal case the aggregate bandwidth of the individual connections may increase the overall throughput to the client; and because multiple connections are used, paraloading is more resilient to link and general route failures.
A further advantage is the inherent load balancing of paraloading: because connections are spread over many servers rather than one, the scheme can be immune to individual server load fluctuations, bottleneck link bandwidth, and traffic fluctuations. Section 2 examines three different paraloading schemes (two existing and one new), all of which use application-level negotiation to schedule the transmission of different segments of a file. Section 3 describes the experiments performed to evaluate the different schemes and examines the results obtained.

2. Parallel Downloading Schemes

2.1 History-Based Parallel Access

History-based parallel access [4] is a relatively simple scheme in which the previous transmission rates between the client and every mirror server are recorded in a database and used to determine how large a file segment is downloaded from each server. More specifically, for N mirror servers, the file is divided into N unequally-sized disjoint segments, each assigned to a mirror server based on the expected data rate derived from the historical data between the client and that mirror; i.e., the faster a server has been (based on the history), the larger the file block assigned to be downloaded from it. File blocks are downloaded from each individual server using the HTTP 1.1 byte-range header feature, resulting in a server-side-transparent solution that requires modifications only on the client side. The problem with history-based parallel access is, of course, the validity of the recorded transmission rates, i.e., how close those recorded rates are to the actual transmission rates in future downloads. The larger the divergence between historical and actual rates, the worse the performance: servers slower than the history suggests will be assigned file blocks larger than they should have been, and vice versa for servers faster than the historical data indicates. Another issue that must be addressed in such a scheme is exactly how server transmission-rate data is obtained and updated.

2.2 Semi-Dynamic Parallel Access

The semi-dynamic parallel access downloading method was also first proposed in [4], where it is referred to as dynamic parallel access downloading; in this paper it is called semi-dynamic to avoid confusion with our own parallel downloading method, examined next.
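The proportional segment sizing at the core of the history-based scheme can be sketched in a few lines of Python; the function name, the rate units, and the rounding policy are our own illustrative assumptions, not part of the original implementation:

```python
def assign_segments(file_size, historical_rates):
    """Split a file into one contiguous byte range per mirror, sized in
    proportion to each mirror's historically recorded transmission rate."""
    total_rate = sum(historical_rates.values())
    mirrors = list(historical_rates)
    ranges = {}
    offset = 0
    for i, mirror in enumerate(mirrors):
        if i == len(mirrors) - 1:
            end = file_size  # last mirror absorbs any rounding remainder
        else:
            end = offset + round(file_size * historical_rates[mirror] / total_rate)
        ranges[mirror] = (offset, end - 1)  # inclusive, as in an HTTP Range header
        offset = end
    return ranges

# A mirror with 3x the historical rate receives a 3x larger segment:
print(assign_segments(1000, {"fast": 300, "slow": 100}))
# -> {'fast': (0, 749), 'slow': (750, 999)}
```

Each (start, end) pair maps directly onto an HTTP 1.1 request header such as `Range: bytes=0-749`, which is what makes the scheme transparent to the servers.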
The semi-dynamic scheme is conceptually simple: the receiver initially obtains the size of the file it wishes to download (e.g., by polling one of the mirror servers) and segments the file into equal-sized blocks. At the start, the receiver requests one block from every mirror server. Once a server has completed sending the requested file block, the client requests a new (undelivered) block from that server. This continues until all file blocks have been downloaded, at which point the receiver reassembles the file from the individual blocks. As in history-based parallel access, the HTTP 1.1 byte-range feature is used to download individual blocks of the file from each mirror server, and persistent TCP connections are used between the client and each server to reduce overhead. One enhancement is that if fewer file blocks are left than there are servers, a block that has already been requested but not yet completely downloaded can be downloaded simultaneously by another server, potentially completing that block faster. In this scheme, faster servers provide larger portions (more blocks) of the total transmitted data. Based on the paraloading scheme proposed in [4], [3] proposed a modified scheme that is essentially identical but adds the following three enhancements: minimizing the startup delay by piggybacking onto a data block a request for the file size and a request for the list of mirror servers that possess the requested file; minimizing the idle time between block downloads by pipelining the block requests; and minimizing the idle time in downloading the last block in one of three ways: (a) use small block sizes, (b) dynamically adjust the block size for the last blocks, or (c) send requests to the idle servers to download the remaining portions of the last block.
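The request loop of the semi-dynamic scheme can be illustrated with a small event-driven simulation (our own sketch; the server rates and time units are made up for illustration). Faster servers finish their blocks sooner, are handed new blocks more often, and so naturally deliver a larger share of the file:

```python
import heapq

def semi_dynamic_download(num_blocks, server_rates):
    """Simulate semi-dynamic parallel access: every server starts on one
    block; whenever a server finishes, it is handed the next undelivered
    block. Returns how many blocks each server ended up delivering.
    server_rates gives each server's speed in blocks per second."""
    delivered = {s: 0 for s in server_rates}
    next_block = 0
    events = []  # priority queue of (finish_time, server)
    for s, rate in server_rates.items():
        if next_block < num_blocks:
            heapq.heappush(events, (1.0 / rate, s))
            next_block += 1
    while events:
        t, s = heapq.heappop(events)
        delivered[s] += 1
        if next_block < num_blocks:  # hand the server another block
            heapq.heappush(events, (t + 1.0 / server_rates[s], s))
            next_block += 1
    return delivered

# With 40 blocks, a server four times faster delivers four times as many:
print(semi_dynamic_download(40, {"fast": 4.0, "slow": 1.0}))
# -> {'fast': 32, 'slow': 8}
```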
2.3 Dynamic Parallel Access

This section examines a new type of paraloading scheme developed by the authors of this paper, called dynamic parallel access downloading. The new method is based to a certain extent on the semi-dynamic paraloading scheme first proposed in [4] and examined above, but was designed with large file transfers in mind. As in the earlier schemes, the client segments the file into fixed-size blocks, which are requested and downloaded from the individual mirror servers. More specifically, operation is as follows. Initially, connections are opened to all the mirror servers. In the current implementation the list of available mirror servers is obtained from a file containing the server list; for practical use, however, there must be a way to obtain the list dynamically. This could be done in various ways, such as retrieving the server list from some type of directory service or by extending the DNS system to provide such information [2]. To reduce overhead, persistent TCP connections are used between the client and each server, avoiding the TCP three-way connection-setup handshake as well as several slow-start phases. Unlike the history-based and semi-dynamic methods examined above, this paraloading scheme is not based on the HTTP protocol (using the HTTP byte-range header to download a block of a file), but uses a proprietary paraloading server and client running directly on top of the TCP transport protocol. The file size is obtained from the first server the client contacts, so no time is lost probing a server separately for the file size. The file, as mentioned above, is divided into fixed-size blocks, and a request is made to each server to download a distinct block.

Once a given server has completed its assigned file block, a new block is assigned to it for downloading. Currently the block size is set to 1 Mbyte. The selection of the block size is an important issue, in which three considerations should be taken into account: (i) the block size should be such that the number of blocks is larger than the number of servers, otherwise the faster servers will exhibit large idle times; (ii) each block should be small enough to provide sufficiently fine granularity, for the same reason as in (i); (iii) on the other hand, each block should be large enough to reduce the number of requests that must be made to servers, thus reducing the ratio of idle time to download time. The major difference between this paraloading algorithm and semi-dynamic paraloading is that the number of mirror servers used in downloading does not remain static. After a connection is established to each server and a block download request is made to every server, server downscaling testing commences, provided 4 or more servers are currently in use. In downscaling testing, the transmission rate of every server is monitored during the parallel download. At given time intervals, the slowest of the servers (based on the recorded transmission rates) is selected to remain idle for a period of time by not being given any new block download requests. After the testing time has elapsed, the aggregate download rate (i.e., the sum of the individual transmission rates of the servers) is compared to the aggregate rate measured before the selected server was made idle.
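The decision made at the end of each downscaling test reduces to a simple predicate; the restatement below is our own, with the 15% figure quoted in the text as the default threshold:

```python
def should_drop_server(rate_before, rate_during_idle, threshold=0.15):
    """Server downscaling test: the slowest server is idled for a test
    interval. If the aggregate download rate fell by less than `threshold`
    (or did not fall at all), the server adds no real bandwidth and is
    dropped from the active set; otherwise it is reinstated and
    downscaling stops."""
    drop_fraction = (rate_before - rate_during_idle) / rate_before
    return drop_fraction < threshold

# Idling the server barely changed the aggregate rate -> drop it:
print(should_drop_server(10.0, 9.5))   # True (5% drop is below 15%)
# Idling it cost a third of the bandwidth -> keep it:
print(should_drop_server(10.0, 6.5))   # False (35% drop)
```

The rate arguments are in whatever unit the client measures (e.g., Mbit/s); only the relative drop matters.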
If the new aggregate rate is lower than the old aggregate rate by less than a certain threshold percentage (currently 15%, although this can be varied if desired), or the new rate happens to be equal or even higher, then the server is deemed unnecessary, since it offers no substantial increase in bandwidth. It is deleted from the list of active mirror servers for the current download and used no further, freeing up unnecessarily tied-up server and network resources. In this case server downscaling testing continues with the next server (again the slowest among the currently active servers). Otherwise, if the drop in the aggregate download rate is above the threshold percentage, the server is taken out of the idle state and used again in downloading. In addition, server downscaling ceases in this case, since it is considered that the ideal number of servers has been reached, i.e., the least number of servers required to give the maximum possible download rate. Server downscaling also terminates at any time if fewer than four servers are actively participating in the download. After server downscaling has terminated (i.e., no more servers will be deactivated in the particular download), server upscaling testing commences: if a significant decrease in the aggregate download rate persists for a sustained period of time, an additional server is added for use in paraloading. More specifically, at periodic time intervals (currently every 30 seconds, although this value can be adjusted) a test starts by sampling the aggregate download rate twice at given intervals (10 seconds after test commencement and 10 seconds after that; again, these values can be varied).
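The sustained-drop check that drives upscaling can likewise be sketched as a predicate (our own restatement; we assume the same 15% threshold used by the downscaling test):

```python
def should_add_server(baseline_rate, sample1, sample2, threshold=0.15):
    """Server upscaling test: two aggregate-rate samples are taken at
    fixed offsets after the test starts. Only if *both* samples show a
    significant drop from the baseline is the slowdown considered
    sustained, and a spare mirror server (if any) is activated."""
    def dropped(sample):
        return (baseline_rate - sample) / baseline_rate >= threshold
    return dropped(sample1) and dropped(sample2)

# A transient dip in only one sample does not trigger upscaling:
print(should_add_server(10.0, 8.0, 9.5))  # False
# A drop sustained across both samples does:
print(should_add_server(10.0, 8.0, 7.5))  # True
```

Requiring both samples to show the drop is what filters out momentary rate fluctuations.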
If at both of those sample points there is a significant drop in the aggregate download rate (15% or more, although as with the previous testing parameters this can be varied), it is concluded that there is a sustained drop in the download rate; to remedy this, a mirror server (if an unused one exists) is added to the download to try to increase the rate. The fact that paraloading starts by using all available mirror servers and then downscales, as opposed to, for example, starting from one server and adding mirror servers until there is no substantial increase in bandwidth, is deliberate. Since the purpose is to decrease download times, bandwidth underutilization (using too few servers) is a much more significant factor than bandwidth overutilization (using too many servers): underutilization results in lost time, while overutilization simply results in the temporary use of unneeded resources that will eventually be released. One observation that should briefly be made here concerns the duration of the testing intervals used in server downscaling testing. On the one hand, it is desirable that the testing periods be as short as possible, so that mirror server deletion completes as quickly as possible. On the other hand, the intervals must be long enough to obtain valid transmission-rate readings on which to base a valid decision: when a server is made idle in order to measure the effect on the aggregate download rate, sufficient time must elapse for the network to reach a steady state, so that the real effect on the aggregate rate is measured. The value currently used, believed to satisfy both requirements, is 10 seconds.

3. Experimental Results and Analysis

Experiments were conducted by measuring download times using various downloading methods.
More specifically, download times using dynamic paraloading were compared to those of single-server FTP downloading and of single-server multiple-parallel-connection downloading using the dynamic paraloading client and server. In all tests, clients from the same domain were used along with a set of geographically dispersed servers in Canada and the United States. In total, eight servers were used: three at the University of Victoria in British Columbia, two TRLabs servers in Winnipeg and Regina, two servers at Purdue University, and an additional server at the University of Illinois.

Figure 1. Download times of multi-server dynamic paraloading and single-server FTP (download time in seconds versus hour of day).

Figure 2. Download times of multi-server dynamic paraloading and single-server multiple-connection paraloading (download time in seconds versus hour of day).

The tests (dynamic paraloading, single-server multiple-connection paraloading, and simple single-server FTP downloading) were conducted over a 24-hour period, with results obtained every hour, in order to measure the performance of each of the three downloading schemes under relatively heavy and varying traffic conditions (during the day) and under lighter, less varying traffic conditions (during the early and late hours). In testing dynamic parallel access downloading, all eight remote hosts were used as mirror servers to download a 45-Mbyte file. The same 45-Mbyte file was also used for testing the two single-server downloading schemes, testing with many different servers. For the single-server multiple-connection paraloading scheme, a single mirror server was used each time with eight connections between client and server, i.e., as many connections as the number of mirror servers used in multi-server dynamic paraloading. Figure 1 compares the results obtained for multi-server dynamic paraloading with those obtained for single-server FTP downloading (fastest and slowest cases). The results show that while paraloading significantly outperformed the slowest FTP downloading case, being approximately 10 times faster, the difference is considerably smaller against the fastest FTP download; paraloading is nevertheless still much faster, demonstrating its benefit.
The theoretical maximum performance of multi-server dynamic paraloading would be achieved if the aggregate download rate equaled the sum of the individual server download rates, which is not the case here. The reason is that, given a sufficiently large number of servers, a saturation point is reached beyond which the aggregate rate cannot increase no matter how many servers are added, due to a bottleneck at the receiving client and/or along the network path between client and mirror servers. This is also the reason server downscaling, explained earlier, was added to the dynamic paraloading scheme. Figure 2 compares single-server multiple-connection paraloading results to those obtained for multi-server dynamic paraloading. Here we notice that multi-server paraloading is somewhat slower than the fastest case of single-server multi-connection paraloading. This is not unexpected: in multi-server paraloading the slower servers degrade performance, so it will very likely be somewhat slower than using the same number of connections to the fastest server(s). The advantage of multi-server paraloading even in this case is the better load balancing it achieves by spreading the load across multiple servers rather than one; with multiple connections to a single server, that server will very soon become congested. The opposite holds for the slowest single-server multi-connection paraloading case, which multi-server paraloading significantly outperforms, mainly because of one of the advantages of using multiple servers: higher-performance mirror servers compensate for the lower-performance ones. Figure 3 summarizes the results displayed in the previous two figures, comparing multi-server dynamic paraloading performance to the best FTP and single-server paraloading performances.
It is worth mentioning that download performance, particularly that of FTP, degrades during the morning and afternoon hours, and the variation in download times also increases during those hours. This applies mostly to FTP, with the two paraloading schemes being affected to a lesser extent; we conclude that the use of multiple connections also has a smoothing effect on download performance, isolating the overall download performance to a certain extent from the performance of any individual server and from other traffic/network condition variations.

Figure 3. Download times of multi-server dynamic paraloading and the best single-server multi-connection paraloading and FTP performances.

3.1 Comparison to Semi-Dynamic Paraloading

From the experimental results presented and analyzed in the previous section, it is apparent that the dynamic paraloading scheme performs very well, increasing download performance significantly compared to slower servers. One aspect where dynamic paraloading has an advantage over semi-dynamic paraloading is the adjustable number of mirror servers used at any given time. Instead of using all the mirror servers that are available, dynamic paraloading can reduce the number of active servers, releasing server and network resources that are unnecessarily utilized. Another advantage of dynamic paraloading is the better server load balancing it achieves compared to the other downloading schemes. While all the downloading schemes achieve server load balancing to a certain extent by distributing connections among all mirror servers, dynamic paraloading, with its server downscaling feature, releases servers that are unnecessarily utilized (i.e., that contribute very little to the aggregate download rate), which includes heavily loaded servers.

4. Conclusions and Future Work

In this paper a new parallel access downloading scheme, referred to as the Dynamic Parallel Access scheme and developed by the authors of this paper, was presented and examined. Based on the experiments performed, dynamic parallel access downloading performed very well in terms of reducing download time, even under varying traffic/network conditions. Additionally, dynamic parallel access possesses some advantages over the other two downloading methods briefly examined, such as the ability to adjust the number of active mirror servers, releasing server and network resources that are unnecessarily utilized, and improved server load balancing. It is strongly believed that performance can be further improved with additional enhancements. Some possible enhancements to dynamic parallel access paraloading that should be examined as future work are:

- The ability to add additional connections between the client and a given server; more specifically, the ability to add a second, third, etc. connection between the client and the fastest servers, rather than only being able to add an additional mirror server.
- Pipelining of block download requests, in order to minimize the number of idle periods between block downloads.
- The development of a method to dynamically retrieve the list of mirror servers, such as a directory service.
- Determining the effects of paraloading in terms of network congestion.

References

[1] J. Byers, M. Luby, and M. Mitzenmacher, "Accessing multiple mirror sites in parallel: Using Tornado codes to speed up downloads," IEEE INFOCOM, 1999.
[2] J. Kangasharju, K. W. Ross, and J. W. Roberts, "Locating copies of objects using the domain name system," 4th International Caching Workshop, Mar. 2000.
[3] A. Miu and E. Shih, "Performance Analysis of a Dynamic Parallel Downloading Scheme from Mirror Sites Throughout the Internet," Technical Report, Laboratory of Computer Science, MIT, 2000.
[4] P. Rodriguez, A. Kirpal, and E. Biersack, "Parallel-access for mirror sites in the Internet," IEEE INFOCOM, 2000.


Chapter 13 TRANSPORT. Mobile Computing Winter 2005 / Overview. TCP Overview. TCP slow-start. Motivation Simple analysis Various TCP mechanisms Overview Chapter 13 TRANSPORT Motivation Simple analysis Various TCP mechanisms Distributed Computing Group Mobile Computing Winter 2005 / 2006 Distributed Computing Group MOBILE COMPUTING R. Wattenhofer

More information

Outline 9.2. TCP for 2.5G/3G wireless

Outline 9.2. TCP for 2.5G/3G wireless Transport layer 9.1 Outline Motivation, TCP-mechanisms Classical approaches (Indirect TCP, Snooping TCP, Mobile TCP) PEPs in general Additional optimizations (Fast retransmit/recovery, Transmission freezing,

More information

B.H.GARDI COLLEGE OF ENGINEERING & TECHNOLOGY (MCA Dept.) Parallel Database Database Management System - 2

B.H.GARDI COLLEGE OF ENGINEERING & TECHNOLOGY (MCA Dept.) Parallel Database Database Management System - 2 Introduction :- Today single CPU based architecture is not capable enough for the modern database that are required to handle more demanding and complex requirements of the users, for example, high performance,

More information

Performance Monitoring User s Manual

Performance Monitoring User s Manual NEC Storage Software Performance Monitoring User s Manual IS025-32E NEC Corporation 2003-2017 No part of the contents of this book may be reproduced or transmitted in any form without permission of NEC

More information

Mobile Communications Chapter 9: Mobile Transport Layer

Mobile Communications Chapter 9: Mobile Transport Layer Prof. Dr.-Ing Jochen H. Schiller Inst. of Computer Science Freie Universität Berlin Germany Mobile Communications Chapter 9: Mobile Transport Layer Motivation, TCP-mechanisms Classical approaches (Indirect

More information

CS 5520/ECE 5590NA: Network Architecture I Spring Lecture 13: UDP and TCP

CS 5520/ECE 5590NA: Network Architecture I Spring Lecture 13: UDP and TCP CS 5520/ECE 5590NA: Network Architecture I Spring 2008 Lecture 13: UDP and TCP Most recent lectures discussed mechanisms to make better use of the IP address space, Internet control messages, and layering

More information

Transport layer issues

Transport layer issues Transport layer issues Dmitrij Lagutin, dlagutin@cc.hut.fi T-79.5401 Special Course in Mobility Management: Ad hoc networks, 28.3.2007 Contents Issues in designing a transport layer protocol for ad hoc

More information

UNIT IV -- TRANSPORT LAYER

UNIT IV -- TRANSPORT LAYER UNIT IV -- TRANSPORT LAYER TABLE OF CONTENTS 4.1. Transport layer. 02 4.2. Reliable delivery service. 03 4.3. Congestion control. 05 4.4. Connection establishment.. 07 4.5. Flow control 09 4.6. Transmission

More information

Multimedia Systems 2011/2012

Multimedia Systems 2011/2012 Multimedia Systems 2011/2012 System Architecture Prof. Dr. Paul Müller University of Kaiserslautern Department of Computer Science Integrated Communication Systems ICSY http://www.icsy.de Sitemap 2 Hardware

More information

Analyzing the Receiver Window Modification Scheme of TCP Queues

Analyzing the Receiver Window Modification Scheme of TCP Queues Analyzing the Receiver Window Modification Scheme of TCP Queues Visvasuresh Victor Govindaswamy University of Texas at Arlington Texas, USA victor@uta.edu Gergely Záruba University of Texas at Arlington

More information

Chapter 17: Distributed Systems (DS)

Chapter 17: Distributed Systems (DS) Chapter 17: Distributed Systems (DS) Silberschatz, Galvin and Gagne 2013 Chapter 17: Distributed Systems Advantages of Distributed Systems Types of Network-Based Operating Systems Network Structure Communication

More information

CSE 4215/5431: Mobile Communications Winter Suprakash Datta

CSE 4215/5431: Mobile Communications Winter Suprakash Datta CSE 4215/5431: Mobile Communications Winter 2013 Suprakash Datta datta@cse.yorku.ca Office: CSEB 3043 Phone: 416-736-2100 ext 77875 Course page: http://www.cse.yorku.ca/course/4215 Some slides are adapted

More information

Reliable Transport I: Concepts and TCP Protocol

Reliable Transport I: Concepts and TCP Protocol Reliable Transport I: Concepts and TCP Protocol Stefano Vissicchio UCL Computer Science COMP0023 Today Transport Concepts Layering context Transport goals Transport mechanisms and design choices TCP Protocol

More information

CS 356 Lab #1: Basic LAN Setup & Packet capture/analysis using Ethereal

CS 356 Lab #1: Basic LAN Setup & Packet capture/analysis using Ethereal CS 356 Lab #1: Basic LAN Setup & Packet capture/analysis using Ethereal Tasks: Time: 2:00 hrs (Task 1-6 should take 45 min; the rest of the time is for Ethereal) 1 - Verify that TCP/IP is installed on

More information

Switching and Forwarding Reading: Chapter 3 1/30/14 1

Switching and Forwarding Reading: Chapter 3 1/30/14 1 Switching and Forwarding Reading: Chapter 3 1/30/14 1 Switching and Forwarding Next Problem: Enable communication between hosts that are not directly connected Fundamental Problem of the Internet or any

More information

Continuous Real Time Data Transfer with UDP/IP

Continuous Real Time Data Transfer with UDP/IP Continuous Real Time Data Transfer with UDP/IP 1 Emil Farkas and 2 Iuliu Szekely 1 Wiener Strasse 27 Leopoldsdorf I. M., A-2285, Austria, farkas_emil@yahoo.com 2 Transilvania University of Brasov, Eroilor

More information

Coding for the Network: Scalable and Multiple description coding Marco Cagnazzo

Coding for the Network: Scalable and Multiple description coding Marco Cagnazzo Coding for the Network: Scalable and Multiple description coding Marco Cagnazzo Overview Examples and motivations Scalable coding for network transmission Techniques for multiple description coding 2 27/05/2013

More information

Application. Transport. Network. Link. Physical

Application. Transport. Network. Link. Physical Transport Layer ELEC1200 Principles behind transport layer services Multiplexing and demultiplexing UDP TCP Reliable Data Transfer TCP Congestion Control TCP Fairness *The slides are adapted from ppt slides

More information

BEng. (Hons) Telecommunications. Examinations for / Semester 2

BEng. (Hons) Telecommunications. Examinations for / Semester 2 BEng. (Hons) Telecommunications Cohort: BTEL/16B/FT Examinations for 2016 2017 / Semester 2 Resit Examinations for BTEL/15B/FT MODULE: NETWORKS MODULE CODE: CAN 1102C Duration: 2 ½ hours Instructions to

More information

Protocol Overview. TCP/IP Performance. Connection Types in TCP/IP. Resource Management. Router Queues. Control Mechanisms ITL

Protocol Overview. TCP/IP Performance. Connection Types in TCP/IP. Resource Management. Router Queues. Control Mechanisms ITL Protocol Overview TCP/IP Performance E-Mail HTTP (WWW) Remote Login File Transfer TCP UDP ITL IP ICMP ARP RARP (Auxiliary Services) ATM Ethernet, X.25, HDLC etc. 2/13/06 Hans Kruse & Shawn Ostermann, Ohio

More information

Equation-Based Congestion Control for Unicast Applications. Outline. Introduction. But don t we need TCP? TFRC Goals

Equation-Based Congestion Control for Unicast Applications. Outline. Introduction. But don t we need TCP? TFRC Goals Equation-Based Congestion Control for Unicast Applications Sally Floyd, Mark Handley AT&T Center for Internet Research (ACIRI) Jitendra Padhye Umass Amherst Jorg Widmer International Computer Science Institute

More information

Monitor Qlik Sense sites. Qlik Sense Copyright QlikTech International AB. All rights reserved.

Monitor Qlik Sense sites. Qlik Sense Copyright QlikTech International AB. All rights reserved. Monitor Qlik Sense sites Qlik Sense 2.1.2 Copyright 1993-2015 QlikTech International AB. All rights reserved. Copyright 1993-2015 QlikTech International AB. All rights reserved. Qlik, QlikTech, Qlik Sense,

More information

Packet Switching - Asynchronous Transfer Mode. Introduction. Areas for Discussion. 3.3 Cell Switching (ATM) ATM - Introduction

Packet Switching - Asynchronous Transfer Mode. Introduction. Areas for Discussion. 3.3 Cell Switching (ATM) ATM - Introduction Areas for Discussion Packet Switching - Asynchronous Transfer Mode 3.3 Cell Switching (ATM) Introduction Cells Joseph Spring School of Computer Science BSc - Computer Network Protocols & Arch s Based on

More information

Virtualizing Agilent OpenLAB CDS EZChrom Edition with VMware

Virtualizing Agilent OpenLAB CDS EZChrom Edition with VMware Virtualizing Agilent OpenLAB CDS EZChrom Edition with VMware Technical Overview Abstract This technical overview describes the considerations, recommended configurations, and host server requirements when

More information

precise rules that govern communication between two parties TCP/IP: the basic Internet protocols IP: Internet protocol (bottom level)

precise rules that govern communication between two parties TCP/IP: the basic Internet protocols IP: Internet protocol (bottom level) Protocols precise rules that govern communication between two parties TCP/IP: the basic Internet protocols IP: Internet protocol (bottom level) all packets shipped from network to network as IP packets

More information

WHITE PAPER Application Performance Management. The Case for Adaptive Instrumentation in J2EE Environments

WHITE PAPER Application Performance Management. The Case for Adaptive Instrumentation in J2EE Environments WHITE PAPER Application Performance Management The Case for Adaptive Instrumentation in J2EE Environments Why Adaptive Instrumentation?... 3 Discovering Performance Problems... 3 The adaptive approach...

More information

Managing Caching Performance and Differentiated Services

Managing Caching Performance and Differentiated Services CHAPTER 10 Managing Caching Performance and Differentiated Services This chapter explains how to configure TCP stack parameters for increased performance ant throughput and how to configure Type of Service

More information

CS555: Distributed Systems [Fall 2017] Dept. Of Computer Science, Colorado State University

CS555: Distributed Systems [Fall 2017] Dept. Of Computer Science, Colorado State University CS 555: DISTRIBUTED SYSTEMS [THREADS] Shrideep Pallickara Computer Science Colorado State University Frequently asked questions from the previous class survey Shuffle less/shuffle better Which actions?

More information

IBM InfoSphere Streams v4.0 Performance Best Practices

IBM InfoSphere Streams v4.0 Performance Best Practices Henry May IBM InfoSphere Streams v4.0 Performance Best Practices Abstract Streams v4.0 introduces powerful high availability features. Leveraging these requires careful consideration of performance related

More information

Experiments on TCP Re-Ordering March 27 th 2017

Experiments on TCP Re-Ordering March 27 th 2017 Experiments on TCP Re-Ordering March 27 th 2017 Introduction The Transmission Control Protocol (TCP) is very sensitive to the behavior of packets sent end-to-end. Variations in arrival time ( jitter )

More information

TCP Nicer: Support for Hierarchical Background Transfers

TCP Nicer: Support for Hierarchical Background Transfers TCP Nicer: Support for Hierarchical Background Transfers Neil Alldrin and Alvin AuYoung Department of Computer Science University of California at San Diego La Jolla, CA 9237 Email: nalldrin, alvina @cs.ucsd.edu

More information

Chapter 24 Congestion Control and Quality of Service 24.1

Chapter 24 Congestion Control and Quality of Service 24.1 Chapter 24 Congestion Control and Quality of Service 24.1 Copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display. 24-1 DATA TRAFFIC The main focus of congestion control

More information

Visualization of Internet Traffic Features

Visualization of Internet Traffic Features Visualization of Internet Traffic Features Jiraporn Pongsiri, Mital Parikh, Miroslova Raspopovic and Kavitha Chandra Center for Advanced Computation and Telecommunications University of Massachusetts Lowell,

More information

Design, Implementation and Evaluation of Resource Management System for Internet Servers

Design, Implementation and Evaluation of Resource Management System for Internet Servers Design, Implementation and Evaluation of Resource Management System for Internet Servers Paper ID: 193 Total number of pages: 14 Abstract A great deal of research has been devoted to solving the problem

More information

Expanding the use of CTS-to-Self mechanism to improving broadcasting on IEEE networks

Expanding the use of CTS-to-Self mechanism to improving broadcasting on IEEE networks Expanding the use of CTS-to-Self mechanism to improving broadcasting on IEEE 802.11 networks Christos Chousidis, Rajagopal Nilavalan School of Engineering and Design Brunel University London, UK {christos.chousidis,

More information

Achieving Distributed Buffering in Multi-path Routing using Fair Allocation

Achieving Distributed Buffering in Multi-path Routing using Fair Allocation Achieving Distributed Buffering in Multi-path Routing using Fair Allocation Ali Al-Dhaher, Tricha Anjali Department of Electrical and Computer Engineering Illinois Institute of Technology Chicago, Illinois

More information

Final Exam for ECE374 05/03/12 Solution!!

Final Exam for ECE374 05/03/12 Solution!! ECE374: Second Midterm 1 Final Exam for ECE374 05/03/12 Solution!! Instructions: Put your name and student number on each sheet of paper! The exam is closed book. You have 90 minutes to complete the exam.

More information

RAID SEMINAR REPORT /09/2004 Asha.P.M NO: 612 S7 ECE

RAID SEMINAR REPORT /09/2004 Asha.P.M NO: 612 S7 ECE RAID SEMINAR REPORT 2004 Submitted on: Submitted by: 24/09/2004 Asha.P.M NO: 612 S7 ECE CONTENTS 1. Introduction 1 2. The array and RAID controller concept 2 2.1. Mirroring 3 2.2. Parity 5 2.3. Error correcting

More information

Rapid Bottleneck Identification A Better Way to do Load Testing. An Oracle White Paper June 2008

Rapid Bottleneck Identification A Better Way to do Load Testing. An Oracle White Paper June 2008 Rapid Bottleneck Identification A Better Way to do Load Testing An Oracle White Paper June 2008 Rapid Bottleneck Identification A Better Way to do Load Testing. RBI combines a comprehensive understanding

More information

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 Question 344 Points 444 Points Score 1 10 10 2 10 10 3 20 20 4 20 10 5 20 20 6 20 10 7-20 Total: 100 100 Instructions: 1. Question

More information

A Routing Protocol for Utilizing Multiple Channels in Multi-Hop Wireless Networks with a Single Transceiver

A Routing Protocol for Utilizing Multiple Channels in Multi-Hop Wireless Networks with a Single Transceiver 1 A Routing Protocol for Utilizing Multiple Channels in Multi-Hop Wireless Networks with a Single Transceiver Jungmin So Dept. of Computer Science, and Coordinated Science Laboratory University of Illinois

More information

UNIT 2 TRANSPORT LAYER

UNIT 2 TRANSPORT LAYER Network, Transport and Application UNIT 2 TRANSPORT LAYER Structure Page No. 2.0 Introduction 34 2.1 Objective 34 2.2 Addressing 35 2.3 Reliable delivery 35 2.4 Flow control 38 2.5 Connection Management

More information

McGill University - Faculty of Engineering Department of Electrical and Computer Engineering

McGill University - Faculty of Engineering Department of Electrical and Computer Engineering McGill University - Faculty of Engineering Department of Electrical and Computer Engineering ECSE 494 Telecommunication Networks Lab Prof. M. Coates Winter 2003 Experiment 5: LAN Operation, Multiple Access

More information

Increase-Decrease Congestion Control for Real-time Streaming: Scalability

Increase-Decrease Congestion Control for Real-time Streaming: Scalability Increase-Decrease Congestion Control for Real-time Streaming: Scalability Dmitri Loguinov City University of New York Hayder Radha Michigan State University 1 Motivation Current Internet video streaming

More information

CSE 124: Networked Services Fall 2009 Lecture-19

CSE 124: Networked Services Fall 2009 Lecture-19 CSE 124: Networked Services Fall 2009 Lecture-19 Instructor: B. S. Manoj, Ph.D http://cseweb.ucsd.edu/classes/fa09/cse124 Some of these slides are adapted from various sources/individuals including but

More information

Guide To TCP/IP, Second Edition UDP Header Source Port Number (16 bits) IP HEADER Protocol Field = 17 Destination Port Number (16 bit) 15 16

Guide To TCP/IP, Second Edition UDP Header Source Port Number (16 bits) IP HEADER Protocol Field = 17 Destination Port Number (16 bit) 15 16 Guide To TCP/IP, Second Edition Chapter 5 Transport Layer TCP/IP Protocols Objectives Understand the key features and functions of the User Datagram Protocol (UDP) Explain the mechanisms that drive segmentation,

More information

Characterization of Performance of TCP/IP over PPP and ATM over Asymmetric Links

Characterization of Performance of TCP/IP over PPP and ATM over Asymmetric Links Characterization of Performance of TCP/IP over PPP and ATM over Asymmetric Links Kaustubh S. Phanse Luiz A. DaSilva Kalyan Kidambi (kphanse@vt.edu) (ldasilva@vt.edu) (Kalyan.Kidambi@go.ecitele.com) Bradley

More information

The Total Network Volume chart shows the total traffic volume for the group of elements in the report.

The Total Network Volume chart shows the total traffic volume for the group of elements in the report. Tjänst: Network Health Total Network Volume and Total Call Volume Charts Public The Total Network Volume chart shows the total traffic volume for the group of elements in the report. Chart Description

More information

Chapter 7. Results Test 1 Results

Chapter 7. Results Test 1 Results Chapter 7. Results Network performance was evaluated for each of the four test templates described in Chapter 6 based on the values for throughput, transaction rate, and response time for Tests 1 and 2,

More information

Client Level Framework for Parallel Downloading of Large File Systems

Client Level Framework for Parallel Downloading of Large File Systems Client Level Framework for Parallel Downloading of Large File Systems G.Narasinga Rao Asst.Professor Dept.of CSE GMR Institute of Technology RAJAM-532127, A.P, India. Srikakulam Dist Srinivasan Nagaraj

More information

Replication in Distributed Systems

Replication in Distributed Systems Replication in Distributed Systems Replication Basics Multiple copies of data kept in different nodes A set of replicas holding copies of a data Nodes can be physically very close or distributed all over

More information

Source Routing Algorithms for Networks with Advance Reservations

Source Routing Algorithms for Networks with Advance Reservations Source Routing Algorithms for Networks with Advance Reservations Lars-Olof Burchard Communication and Operating Systems Technische Universitaet Berlin ISSN 1436-9915 No. 2003-3 February, 2003 Abstract

More information

Reliable Transport I: Concepts and TCP Protocol

Reliable Transport I: Concepts and TCP Protocol Reliable Transport I: Concepts and TCP Protocol Brad Karp UCL Computer Science CS 3035/GZ01 29 th October 2013 Part I: Transport Concepts Layering context Transport goals Transport mechanisms 2 Context:

More information

Assignment 5. Georgia Koloniari

Assignment 5. Georgia Koloniari Assignment 5 Georgia Koloniari 2. "Peer-to-Peer Computing" 1. What is the definition of a p2p system given by the authors in sec 1? Compare it with at least one of the definitions surveyed in the last

More information

Multi-Channel MAC for Ad Hoc Networks: Handling Multi-Channel Hidden Terminals Using A Single Transceiver

Multi-Channel MAC for Ad Hoc Networks: Handling Multi-Channel Hidden Terminals Using A Single Transceiver Multi-Channel MAC for Ad Hoc Networks: Handling Multi-Channel Hidden Terminals Using A Single Transceiver Jungmin So Dept. of Computer Science, and Coordinated Science Laboratory University of Illinois

More information

Design, Implementation and Performance of Resource Management Scheme for TCP Connections at Web Proxy Servers

Design, Implementation and Performance of Resource Management Scheme for TCP Connections at Web Proxy Servers Design, Implementation and Performance of Resource Management Scheme for TCP Connections at Web Proxy Servers Takuya Okamoto Tatsuhiko Terai Go Hasegawa Masayuki Murata Graduate School of Engineering Science,

More information

Oracle Rdb Hot Standby Performance Test Results

Oracle Rdb Hot Standby Performance Test Results Oracle Rdb Hot Performance Test Results Bill Gettys (bill.gettys@oracle.com), Principal Engineer, Oracle Corporation August 15, 1999 Introduction With the release of Rdb version 7.0, Oracle offered a powerful

More information

z/os Heuristic Conversion of CF Operations from Synchronous to Asynchronous Execution (for z/os 1.2 and higher) V2

z/os Heuristic Conversion of CF Operations from Synchronous to Asynchronous Execution (for z/os 1.2 and higher) V2 z/os Heuristic Conversion of CF Operations from Synchronous to Asynchronous Execution (for z/os 1.2 and higher) V2 z/os 1.2 introduced a new heuristic for determining whether it is more efficient in terms

More information

Recap. TCP connection setup/teardown Sliding window, flow control Retransmission timeouts Fairness, max-min fairness AIMD achieves max-min fairness

Recap. TCP connection setup/teardown Sliding window, flow control Retransmission timeouts Fairness, max-min fairness AIMD achieves max-min fairness Recap TCP connection setup/teardown Sliding window, flow control Retransmission timeouts Fairness, max-min fairness AIMD achieves max-min fairness 81 Feedback Signals Several possible signals, with different

More information

Question Score 1 / 19 2 / 19 3 / 16 4 / 29 5 / 17 Total / 100

Question Score 1 / 19 2 / 19 3 / 16 4 / 29 5 / 17 Total / 100 NAME: Login name: Computer Science 461 Midterm Exam March 10, 2010 3:00-4:20pm This test has five (5) questions. Put your name on every page, and write out and sign the Honor Code pledge before turning

More information

WHITE PAPER: ENTERPRISE AVAILABILITY. Introduction to Adaptive Instrumentation with Symantec Indepth for J2EE Application Performance Management

WHITE PAPER: ENTERPRISE AVAILABILITY. Introduction to Adaptive Instrumentation with Symantec Indepth for J2EE Application Performance Management WHITE PAPER: ENTERPRISE AVAILABILITY Introduction to Adaptive Instrumentation with Symantec Indepth for J2EE Application Performance Management White Paper: Enterprise Availability Introduction to Adaptive

More information

MULTIMEDIA I CSC 249 APRIL 26, Multimedia Classes of Applications Services Evolution of protocols

MULTIMEDIA I CSC 249 APRIL 26, Multimedia Classes of Applications Services Evolution of protocols MULTIMEDIA I CSC 249 APRIL 26, 2018 Multimedia Classes of Applications Services Evolution of protocols Streaming from web server Content distribution networks VoIP Real time streaming protocol 1 video

More information

CS268: Beyond TCP Congestion Control

CS268: Beyond TCP Congestion Control TCP Problems CS68: Beyond TCP Congestion Control Ion Stoica February 9, 004 When TCP congestion control was originally designed in 1988: - Key applications: FTP, E-mail - Maximum link bandwidth: 10Mb/s

More information

Frame Relay. Frame Relay: characteristics

Frame Relay. Frame Relay: characteristics Frame Relay Andrea Bianco Telecommunication Network Group firstname.lastname@polito.it http://www.telematica.polito.it/ Network management and QoS provisioning - 1 Frame Relay: characteristics Packet switching

More information

Is BranchCache right for remote, serverless software distribution?

Is BranchCache right for remote, serverless software distribution? Is BranchCache right for remote, serverless software distribution? 1E Technical Whitepaper Microsoft BranchCache and System Center Configuration Manager 2007 Abstract BranchCache is a new feature available

More information

Quality of Service Mechanism for MANET using Linux Semra Gulder, Mathieu Déziel

Quality of Service Mechanism for MANET using Linux Semra Gulder, Mathieu Déziel Quality of Service Mechanism for MANET using Linux Semra Gulder, Mathieu Déziel Semra.gulder@crc.ca, mathieu.deziel@crc.ca Abstract: This paper describes a QoS mechanism suitable for Mobile Ad Hoc Networks

More information

UNIVERSITY OF TORONTO FACULTY OF APPLIED SCIENCE AND ENGINEERING

UNIVERSITY OF TORONTO FACULTY OF APPLIED SCIENCE AND ENGINEERING UNIVERSITY OF TORONTO FACULTY OF APPLIED SCIENCE AND ENGINEERING ECE361 Computer Networks Midterm March 09, 2016, 6:15PM DURATION: 75 minutes Calculator Type: 2 (non-programmable calculators) Examiner:

More information

What Is Congestion? Computer Networks. Ideal Network Utilization. Interaction of Queues

What Is Congestion? Computer Networks. Ideal Network Utilization. Interaction of Queues 168 430 Computer Networks Chapter 13 Congestion in Data Networks What Is Congestion? Congestion occurs when the number of packets being transmitted through the network approaches the packet handling capacity

More information

Impact of TCP Window Size on a File Transfer

Impact of TCP Window Size on a File Transfer Impact of TCP Window Size on a File Transfer Introduction This example shows how ACE diagnoses and visualizes application and network problems; it is not a step-by-step tutorial. If you have experience

More information

06/02/ Local & Metropolitan Area Networks 0. INTRODUCTION. 1. History and Future of TCP/IP ACOE322

06/02/ Local & Metropolitan Area Networks 0. INTRODUCTION. 1. History and Future of TCP/IP ACOE322 1 Local & Metropolitan Area Networks ACOE322 Lecture 5 TCP/IP Protocol suite and IP addressing 1 0. INTRODUCTION We shall cover in this topic: 1. The relation of TCP/IP with internet and OSI model 2. Internet

More information

Good Ideas So Far Computer Networking. Outline. Sequence Numbers (reminder) TCP flow control. Congestion sources and collapse

Good Ideas So Far Computer Networking. Outline. Sequence Numbers (reminder) TCP flow control. Congestion sources and collapse Good Ideas So Far 15-441 Computer Networking Lecture 17 TCP & Congestion Control Flow control Stop & wait Parallel stop & wait Sliding window Loss recovery Timeouts Acknowledgement-driven recovery (selective

More information

Congestion in Data Networks. Congestion in Data Networks

Congestion in Data Networks. Congestion in Data Networks Congestion in Data Networks CS420/520 Axel Krings 1 Congestion in Data Networks What is Congestion? Congestion occurs when the number of packets being transmitted through the network approaches the packet

More information